id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2310.06260
|
An experiment on an automated literature survey of data-driven speech
enhancement methods
|
The increasing number of scientific publications in acoustics, in general,
presents difficulties in conducting traditional literature surveys. This work
explores the use of a generative pre-trained transformer (GPT) model to
automate a literature survey of 116 articles on data-driven speech enhancement
methods. The main objective is to evaluate the capabilities and limitations of
the model in providing accurate responses to specific queries about the papers
selected from a reference human-based survey. While we see great potential to
automate literature surveys in acoustics, improvements are needed to address
technical questions more clearly and accurately.
|
Arthur dos Santos, Jayr Pereira, Rodrigo Nogueira, Bruno Masiero, Shiva Sander-Tavallaey, Elias Zea
|
2023-10-10T02:07:24Z
|
http://arxiv.org/abs/2310.06260v1
|
# An experiment on an automated literature survey of data-driven speech enhancement methods
###### Abstract
The increasing number of scientific publications in acoustics, in general, presents difficulties in conducting traditional literature surveys. This work explores the use of a generative pre-trained transformer (GPT) model to automate a literature survey of 116 articles on data-driven speech enhancement methods. The main objective is to evaluate the capabilities and limitations of the model in providing accurate responses to specific queries about the papers selected from a reference human-based survey. While we see great potential to automate literature surveys in acoustics, improvements are needed to address technical questions more clearly and accurately.
speech enhancement methods data-driven acoustics literature survey natural language processing large language models
## 1 Introduction
A recent study has shown an increasing publication rate after analyzing 45 million scientific articles produced in the past six decades [14]. In the context of applications of data-driven methods in acoustics alone, as shown in the Scopus1 search in Fig. 1, the number of articles in the first half of 2023 had already exceeded the total number of articles in the entire year of 2019. Given this growth in the literature, the acoustics community faces the limitations of traditional survey methods. At the same time, the remarkable advancements in the field of natural language processing (NLP) and large language models (LLMs) in recent years, which led to the "boom" of the generative pre-trained transformer (GPT) [12], offer a unique opportunity to guide and advance knowledge in acoustics through automated large-scale text processing. This can provide more accessible information for researchers, practitioners, and engineers interested in data-driven methods for acoustics and vibration in the broader sense.
Footnote 1: [https://www.scopus.com/](https://www.scopus.com/)
Recent literature surveys in acoustics have reviewed the theory and applications of machine learning (ML) in acoustics (Bianco et al., 2019), sound source localization (SSL) using deep learning methods (Grumiaux et al., 2022), as well as noise-induced hearing loss in several contexts (Neitzel and Fligor, 2019; Radziwon et al., 2019; Malowski et al., 2022). The survey by Gannot et al. analyzed \(393\) papers on speech enhancement and source separation through four queries (Gannot et al., 2017): what is the acoustic impulse response model, what is the spatial filter design, what is the parameter estimation algorithm, and what is the post-filtering technique? Other related review papers have covered more specific applications of acoustics, such as source-range estimation for underwater acoustics (Song and Byun, 2022), SSL for wireless acoustic sensor networks (Cobos et al., 2017), the LOCATA challenge for source localization and tracking (Evers et al., 2020), and 15 years of SSL in robotics applications (Argentieri et al., 2015).
Writing a literature survey can be viewed as the art of _making a long story short_, which can be pretty laborious. Typically, it starts by selecting a topic of interest and elaborating a list of questions. Then, a search for relevant literature items must be fulfilled, which, nowadays, can be facilitated by search engines and databases that assess the credibility and reliability of sources (e.g., Scopus, Google Scholar,2 etc.). This is followed by processing the selected literature, organizing items into categories based on their similarities and differences, analyzing them, and noting essential trends, patterns, knowledge gaps, etc. To do so, several tools exist to provide researchers with ways to document the whole process, with mechanisms to build quality assessment checklists, data extraction forms, among others (e.g., Covidence,3 Parsif.al,4 Rayyan,5 etc.). However, until now, one has to _read through_ all the literature.
Footnote 2: [https://scholar.google.com/](https://scholar.google.com/)
Footnote 3: [https://www.covidence.org/](https://www.covidence.org/)
Footnote 4: [https://parsif.al/](https://parsif.al/)
Footnote 5: [https://www.rayyan.ai/](https://www.rayyan.ai/)
Reading a scientific paper typically involves scanning the text for the research problem, assumptions, methods, evaluations, and main findings; interpreting relevant mathematical terminology; understanding the structure and organization of the text; and synthesizing information to form a coherent understanding of it as a whole (Pain, 2016). Thus, the time taken to read an academic paper varies depending on various factors, such as its length, the complexity of the topic, and the reader's familiarity with the subject matter. Assuming that a familiar reader has a typical reading speed of approximately \(200\)-\(300\) words per minute (Frank, 1990), it would take roughly \(1\)-\(2\) hours to read a \(10\)-page academic paper. Math-intensive documents might take even longer. Therefore, scanning 100 articles would take approximately one month of uninterrupted work to read through the literature.
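To make the arithmetic behind this estimate explicit, a minimal sketch is given below; the per-paper reading time comes from the 1-2 hour figure above, while the eight-hour working day is our assumption for illustration only.

```python
# Rough total-effort estimate for reading a survey corpus by hand.
# Per-paper hours follow the 1-2 h estimate above; the 8 h working day is an assumption.
N_PAPERS = 100
HOURS_PER_WORKDAY = 8

for hours_per_paper in (1.0, 2.0):
    total_hours = hours_per_paper * N_PAPERS
    workdays = total_hours / HOURS_PER_WORKDAY
    print(f"{hours_per_paper:.0f} h/paper -> {total_hours:.0f} h total (~{workdays:.0f} working days)")
```

The upper end (200 hours, roughly 25 working days) is what motivates the "approximately one month of uninterrupted work" figure quoted above.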
The usage of LLMs for automated text summarization and generation is relatively new and has had applications in medicine and news enterprises. A relevant study to this work was published recently by Tang et al. (Tang et al., 2023), who performed zero-shot medical evidence summarization generated with GPT-3.5 and ChatGPT and compared them to human-generated summarization. Similar methodologies have been applied to, for example, compare abstracts generated by ChatGPT to real abstracts from medical journals (Gao et al., 2023), identify and assess key research questions in gastroenterology (Lahat et al., 2023), and answer multiple-choice questions about human genetics (Duong and Solomon, 2023). LLMs have also been used for automatic news summarization (Syed et al., 2020; Goyal et al., 2023). A common element in these studies is that LLM-based methodologies have substantial potential in medical and news applications, but more work is needed to increase the accuracy and fidelity.
Figure 1: Four decades of articles on applications of data-driven methods in acoustics. The results have been obtained from a Scopus search on August 2, 2023.
In this paper, we employ a GPT model to query a literature corpus comprising \(116\) texts on data-driven speech enhancement methods. The main goal is to speed up literature surveys. The structure of this paper is as follows: Section 2 describes the methodology, including the literature corpus, a short description of the GPT model, and the queries posed to the model. Section 3 presents the results of the GPT model and a comparison with a reference (human-based) survey [11]. Lastly, conclusions are drawn in Sec. 4.
## 2 Methodology
### Text corpus
In this study, the corpus consists of 116 articles published in the English language between January and December \(2021\), matching the search strings "_audio enhancement_" OR "_dereverberation_" AND in the context of "_machine learning_" OR "_deep learning_," from various databases, including the AES E-Library,6 ACM Digital Library,7 Google Scholar, IEEE Digital Library,8 JASA,9 MDPI,10 ResearchGate,11 Research Square,12 ScienceDirect,13 Springer,14 arXiv,15 and some repositories of higher education institutions and subsidiary research departments of corporations.
Footnote 6: [https://www.aes.org/e-lib/](https://www.aes.org/e-lib/)
Footnote 7: [https://dl.acm.org/](https://dl.acm.org/)
Footnote 8: [https://ieeexplore.ieee.org/](https://ieeexplore.ieee.org/)
Footnote 9: [https://asa.scitation.org/journal/jas](https://asa.scitation.org/journal/jas)
Footnote 10: [https://www.mdpi.com/](https://www.mdpi.com/)
Footnote 11: [https://www.researchgate.net/](https://www.researchgate.net/)
Footnote 12: [https://www.researchsquare.com/](https://www.researchsquare.com/)
Footnote 13: [https://www.sciencedirect.com/](https://www.sciencedirect.com/)
Footnote 14: [https://link.springer.com/](https://link.springer.com/)
Footnote 15: [https://arxiv.org/](https://arxiv.org/)
Footnote 16: [https://drive.google.com/file/d/1rpRiSyNpHIF9GzNzy8qTQEmKHLmzbKN/](https://drive.google.com/file/d/1rpRiSyNpHIF9GzNzy8qTQEmKHLmzbKN/)
Footnote 17: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)
Footnote 18: [https://pypi.org/project/PyPDF2/](https://pypi.org/project/PyPDF2/)
Conference, journal, and challenge papers, book series and chapters, extended abstracts, technical notes, M.Sc. theses, and Ph.D. dissertations were included in the search. The average number of pages per article was \(8\), varying from \(2\) to \(30\) (except for the M.Sc. and Ph.D. monographs, which varied from \(31\) to \(118\)). For the complete list of texts reviewed, please refer to this external link16.
### Generative pre-trained transformer model
First released in 2018 [10] and then continuously updated, the generative pre-trained transformer (GPT) is a large autoregressive language model designed to generate human-like responses to natural language input. It can be used for various tasks, including chatbots, language translation, and text summarization. Its ability to generate coherent text has made it a valuable tool for researchers and developers in NLP and ML applications. For example, it is possible today to ask ChatGPT or Bard to summarize a scientific paper or generate a list of sources for a literature survey on a specific topic. However, it has been seen that generated responses are often partially (sometimes entirely) fake [12], and the answering accuracy can deteriorate when the answer to the query lies in the middle of the context [13]. Therefore, we have focused on applying the underlying GPT model, not on the direct usage of chatbots.
In this study, we used OpenAI's GPT-3.5-turbo-16k model17 to process the research papers and extract relevant information. This allows us to explore the model's ability to handle long contexts (i.e., 16k tokens, or roughly 50 pages of pure text, assuming an average of 300 tokens/page), enabling a comprehensive analysis of an entire scientific paper. This is in contrast to previous studies on automatic literature summarization [14], which examined scientific abstracts. It should be stressed that acoustics articles in PDF format most often translate into fewer pages of pure text due to figures, tables, etc. We utilize the GPT model's question-answering ability to address specific inquiries about the papers. First, we convert the PDF files into text using the PyPDF2 library18. Next, we prompt the GPT model with each paper's full text and specific questions to obtain comprehensive answers. This iterative process is performed for every paper to address the four queries presented in the following section. Compared to the human pace, this methodology requires far less time to analyze academic papers and answer the questions posed.
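A minimal sketch of this per-paper loop is shown below. This is our illustration, not the authors' code: it assumes the PyPDF2 and openai Python packages, an OpenAI API key in the environment, and hypothetical file paths and function names; the actual queries used in the study are listed in Sec. 2.3.

```python
# Minimal sketch of the per-paper question-answering loop (assumed client usage,
# not the authors' actual implementation). Requires the PyPDF2 and openai packages.
from PyPDF2 import PdfReader
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def pdf_to_text(path: str) -> str:
    """Extract plain text from every page of a PDF file."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)


def ask_about_paper(paper_text: str, question: str) -> str:
    """Prompt the GPT model with the full paper text plus one survey query."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-16k",  # 16k-token context window
        messages=[
            {"role": "system", "content": "You answer questions about the scientific paper provided."},
            {"role": "user", "content": f"{paper_text}\n\nQuestion: {question}"},
        ],
        temperature=0,  # assumption: deterministic answers are preferred for a survey
    )
    return response.choices[0].message.content


# Example usage with one of the four survey queries (Q2) on a hypothetical file.
text = pdf_to_text("papers/example_paper.pdf")
print(ask_about_paper(text, "Was it single-channel or multi-channel scenario?"))
```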
### Queries
Four questions are considered in this study, two relatively "simple" and two relatively "hard" (collected as plain prompt strings in the sketch after this list):
* **Query 1 (Q1)**: What country were the authors based in? The output of this question is a list of the authors' countries of affiliation.
* **Query 2 (Q2)**: Was it single-channel or multi-channel scenario? The output of this question is either one of the two classes (single or multi), and we are interested in determining the probability of the GPT obtaining the class right.
* **Query 3 (Q3)**: What type of architecture was used? This question is relatively more difficult than the previous one, requiring domain knowledge for proper comprehension. This question relates to determining the data-driven model used in the studies. Thus, the output of this question is a string, and we are interested in knowing the probability that the GPT will produce the string as accurately as possible.
* **Query 4 (Q4)**: In what context were these applications used (e.g., hearing aids, communication, speech enhancement)? This is the most challenging question posed to the GPT in this study, which involves determining the application area of speech enhancement considered in previous studies. Thus, the output of this question is a string, and we are interested in determining the probability that the GPT will produce the string as accurately as possible.
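For reference, the four queries can be kept as plain strings and fed to a prompting loop such as the one sketched in Sec. 2.2; the dictionary layout below is ours, while the query texts are verbatim from the list above.

```python
# The four survey queries as prompt strings (query texts verbatim from this section;
# the dictionary structure is ours, for use with a per-paper prompting loop).
QUERIES = {
    "Q1": "What country were the authors based in?",
    "Q2": "Was it single-channel or multi-channel scenario?",
    "Q3": "What type of architecture was used?",
    "Q4": "In what context were these applications used (e.g., hearing aids, communication, speech enhancement)?",
}

for key, question in QUERIES.items():
    print(key, "->", question)
```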
## 3 Results
### Outputs from questions
These four questions were selected from our reference literature survey (dos Santos et al., 2022), whose answers are taken as ground truth. Section 3.1.1 summarizes the answers presented in (dos Santos et al., 2022), whereas Sec. 3.1.2 compares the answers produced by the GPT model with the answers in the reference survey.
#### 3.1.1 Human-based survey
Authors' affiliations include higher education institutions, subsidiary research departments of corporations (e.g., Adobe, Facebook, Google, Microsoft), and semi-private and fully financed government research institutions. The main contributors were the United States of America (USA), China, and Japan, as illustrated in Figure 2 (Q1), with 28 countries represented. Other contributing countries include South Korea, Germany, the United Kingdom (UK), India, Switzerland, France, Denmark, the Netherlands, Canada, Ireland, Italy, Norway, Spain, Taiwan, Vietnam, Austria, Brazil, Chile, Greece, Hong Kong, Israel, Malaysia, Pakistan, Poland, and Singapore.
Not all articles in the corpus account for multi-channel scenarios. Among the reviewed articles, only \(23\%\) explicitly addressed multi-channel scenarios, whereas \(24\%\) focused on single-channel scenarios, as illustrated in Figure 2 (Q2.1). Other scenarios include binaural, Ambisonics, and stereo signals. However, most articles did not specify this information. For articles with a complete system format or configuration details, most are single-input-single-output (SISO) systems, followed by multiple-input-multiple-output (MIMO) and multiple-input-single-output (MISO) systems, as shown in Figure 2 (Q2.2). Other formats include multiple-input systems without a specified output format (MIXX), single-input systems without a specified output format (SIXX), and systems with completely unspecified input-output formats (XXXX).
The most commonly used model architectures are \(1\)-D and \(2\)-D Convolutional Neural Networks (CNN), uni- or bi-directional Long Short-Term Memory (LSTM) blocks, U-net, Fully Connected (FC) architectures, attention networks, recurrent neural networks (RNN), and temporal convolutional networks (TCN), as illustrated in Figure 2 (Q3). Other architectures include adversarial, convolutional, encoder/decoder, feedforward, geometrical, neural beamformer, recurrent, reinforcement learning, Seq2Seq, and statistical/probabilistic models.
Applications are often joint, including speech enhancement, dereverberation, noise suppression, speech recognition, and source separation. These applications focus mainly on communication, hearing aids, and audio-visual speech enhancement (AVSE), as illustrated in Figure 2 (Q4). Additional applications include suppressing nonlinear distortions, enhancing heavily compressed signals in speech and musical domains, audio inpainting applied to both speech and music signals, law enforcement and forensic scenarios, acoustic-to-articulatory inversion, input-to-output mapping of auditory models, studio recordings, and selective noise suppression.
#### 3.1.2 Machine-based survey
Because the GPT model is designed to generate human-like responses to natural language input, even if prompted with the same questions posed by humans, its answers are expected to vary from those of humans. To quantify the extent to which these variations differ from the desired responses, a tier list was elaborated as follows to compare the machine-based results with the human ground truth:
* **Tier 1**: No answer / Completely wrong / Not a pertinent answer: the model fails to provide any response or provides a completely incorrect or irrelevant answer (e.g., the authors did not mention the information, yet GPT produces a specific answer);
* **Tier 2**: Marginally correct: the model provides a response that contains at least some correct information;
* **Tier 3**: Mostly correct with minor errors or omissions: the model produces the majority of the information correctly but might miss a few details or make minor mistakes;
* **Tier 4**: Perfectly correct: the model produces completely accurate and correct responses.
The first author, who also conducted the reference human-based survey, performed the tier-based assessment of responses to the survey questions (Q1-Q4 in Sec. 2.3). This choice was made to avoid the need for an analysis of the subjective interpretation of the machine-generated responses, which is beyond the scope of this paper. It is worth noting that evaluating the more technical questions, Q3 and Q4, requires domain knowledge of speech enhancement and data-driven methods, whereas assessing responses to the more straightforward questions Q1 and Q2 requires little to no domain knowledge.
Figure 3 illustrates the stacked bar charts containing the tier distribution for each question after comparing the machine-based responses with the human-based responses. For full results of the raw human-based survey in comparison with machine outputs for Q1-Q4, please refer to this external link19. In what follows, the results are analyzed in more detail.
Footnote 19: [https://drive.google.com/drive/folders/1jfd4LUkwBQd8KhhKWhxXUkaxZX09ign?usp=sharing](https://drive.google.com/drive/folders/1jfd4LUkwBQd8KhhKWhxXUkaxZX09ign?usp=sharing)
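As an illustration of how the per-paper tier labels translate into the stacked bars of Fig. 3, the sketch below aggregates labels into per-query shares; the labels shown are made up for illustration, the real assessments are in the linked spreadsheet.

```python
# Sketch: aggregate per-paper tier labels (1-4) into per-query distributions
# as plotted in Fig. 3. The labels below are illustrative, not the survey data.
from collections import Counter

tier_labels = {
    "Q1": [4, 4, 3, 4, 2, 4, 1, 4],  # one entry per reviewed paper (example values)
    "Q2": [4, 1, 4, 4, 2, 4, 3, 1],
}

for query, labels in tier_labels.items():
    counts = Counter(labels)
    total = len(labels)
    shares = {tier: counts.get(tier, 0) / total for tier in (1, 2, 3, 4)}
    print(query, {tier: f"{share:.0%}" for tier, share in shares.items()})
```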
### Analysis of results
From Fig. 3, most answers are either perfectly correct or have minor errors for Q1 ("_What country were the authors based in?_"), which is a simple question that can be answered based on authors' affiliations. In this case, the errors could be related to the fact that the country of affiliation was not included or correctly linked to their names in the provided metadata.
Table 1 illustrates examples of human-based ground truth versus machine-based responses for Q1. It can be seen that some machine-based answers are more concise than others, specifically, stating only the country versus starting a sentence with "The authors were based in..." This reflects the coherent and diverse capacity of the GPT model to respond to such a question. We strongly suspect that the model's accuracy can be improved by providing only the article's metadata as context instead of the complete text, thus minimizing potential issues due to the length of the context (Liu et al., 2023).
Figure 2: Simplified pie-charts for the human-based survey (dos Santos et al., 2022).
Regarding Q2 ("_Was it single-channel or multi-channel scenario?_"), most predictions are perfectly correct; however, there is an increase in completely inaccurate answers compared to Q1. This is partially due to cases with no specified response (the authors failed to mention it) and the GPT model hallucinating a response (Alkaissi and McFarlane, 2023) instead of stating that it could not find that information in the text provided. In addition, there are cases where the speech enhancement method could be used for multi-channel purposes, yet the authors chose not to do so. Interestingly, the GPT model may assign these cases to either the single- or multi-channel class.
Table 2 illustrates examples of human-based ground truth versus machine-based responses for Q2. The examples show that the GPT model provides nuanced answers for the different tiers, including "The proposed system includes..." or "The approach described in the paper is..." This reflects the complexity of the text generation capabilities of the GPT used in this experiment. As pointed out in Sec. 2.3, the prompt is a classification question: whether the approach is of the single-channel or multi-channel class. However, when looking at the Tier 2 example in Table 2, the authors discussed a single-channel approach (see Ref. 107 in (dos Santos et al., 2022)) that can also be applied in multi-channel scenarios, which is the machine response. Similarly, for Tier 3, the GPT model does not explicitly produce the result "Binaural" or "Monaural"; however, it responds that the approach belongs to the multi-channel class. Making the question more precise may help to improve the model's accuracy further.
| | **Human-based ground truth** | **Machine-based responses** |
|---|---|---|
| **Tier 1** | Germany; UK | The authors were based in the USA and China. |
| **Tier 2** | China, USA, Denmark | China |
| **Tier 3** | Germany, Canada | Germany |
| **Tier 4** | UK, Switzerland | The authors were based in the UK (specifically, the University of Edinburgh) and Switzerland (specifically, ETH Zurich). |

Table 1: Examples of human-based ground truth versus machine-based responses for Q1.
| | **Human-based ground truth** | **Machine-based responses** |
|---|---|---|
| **Tier 1** | Not specified | The approach described in the paper was a single-channel approach. |
| **Tier 2** | Single-channel | The proposed system includes both single-channel and multi-channel approaches. |
| **Tier 3** | Binaural, Monaural | The proposed method is a multichannel approach. |
| **Tier 4** | Multi-channel | The approach described in the paper is a multi-channel approach. |

Table 2: Examples of human-based ground truth versus machine-based responses for Q2.
Figure 3: Stacked bar charts for the machine-based answers to the four questions using the corpus of 116 papers.
For question Q3 ("_What type of architecture was used?_"), there is an observable balance between all tiers. One of the most common reasons for completely wrong answers is that the GPT model identifies the "trade name" of the architecture as the type of architecture (e.g., "VGGNet" instead of "fully connected, CNN"). We suspect this can be improved by fine-tuning the GPT model to determine the underlying architecture instead of its variant name. Another common error is simply outputting the answer "DNN" (e.g., "the network architecture is a DNN") instead of detailing its type. Once again, we strongly suspect that providing the GPT model with the necessary context would prevent these mistakes. At any rate, most answers are partially correct, i.e., the model got either something or almost everything right, which, together with the wrong answers, reduces the number of perfectly correct answers.
Table 3 presents examples of human-based ground-truth predictions versus machine-based predictions for Q3. Interestingly, for Tier 3, it can be seen that the GPT model not only (nearly) produces the right architecture type of CNN, but it also adds "variable dilation factors." Based on our observations with other papers on the survey, this extracted additional information, if accurate, holds significant value and analysis depth in the context of large-scale surveys.
Finally, for question Q4 ("_In what context were these applications used?_"), most answers were perfectly correct or entirely incoherent. This is because, in most cases, the authors do not mention the context of their applications in the full texts. This makes it more complex for the GPT model to infer the application from partially incomplete information, something a human with domain knowledge might infer more accurately at this point. Still, it is interesting that the GPT model considers the broader field of study (e.g., dereverberation and speech enhancement) as an application context and attempts to answer the query nonetheless. Table 4 illustrates examples of human-based ground truth versus machine-based predictions for Q4. As can be seen, the Tier 4 response example is remarkably similar to the human response. However, for Tier 1, the GPT model produces an answer even though the human reviewer found that the context is not specified in the text. Further work is needed for the GPT model to answer these queries more accurately.
## 4 Conclusions
In this experiment, we demonstrate the capabilities and limitations of a generative pre-trained transformer (GPT) model to survey a collection of scientific papers on data-driven speech enhancement methods. To the best of the authors' knowledge, this is the first study that examines the use of large language models to automate a literature survey in acoustics. In essence, four queries are posed to the GPT model about each of the \(116\) articles in the corpus, and the machine-generated answers are compared to a human-based ground-truth survey. Our findings indicate that simple questions can be answered with
| | **Human-based ground truth** | **Machine-based responses** |
|---|---|---|
| **Tier 1** | Not specified | The applications of the Neural Beamforming Front-End described in the paper include real-time audio signal enhancement for human-computer interaction, telecommunication, web conferencing, and geo-audio applications. |
| **Tier 2** | Voice-Over-IP applications | The applications of high fidelity speech regeneration with application to speech enhancement include speech enhancement for better communication and collaboration, particularly in the context of physical distancing during the COVID-19 pandemic. |
| **Tier 3** | Podcasts, video voice-overs, and audiobooks that require studio-quality audio with full bandwidth and balanced equalization (EQ) | The applications of signal processing to audio and acoustics discussed in the paper are focused on speech enhancement, specifically improving the quality of real-world consumer-grade recordings to sound like audio recordings. This can be beneficial for various content creation tasks such as podcasts, video voice-overs, and audiobooks. |
| **Tier 4** | Law-enforcement related audio recordings where it is difficult to understand the speech of one specific speaker amongst all the others | The applications mentioned in the text are related to law enforcement and investigations involving audio recordings. |

Table 4: Examples of human-based ground truth versus machine-based responses for Q4.
| | **Human-based ground truth** | **Machine-based responses** |
|---|---|---|
| **Tier 1** | Fully connected, LSTM | The network architecture used in the proposed method is a deep neural network (DNN). |
| **Tier 2** | Attention Network, Autoencoder, Conformer | The network architecture used in the study is a conformer-based architecture. |
| **Tier 3** | CNN, RNN | The network architecture used in the study is a convolutional neural network (CNN) with variable dilation factors. |
| **Tier 4** | CNN | The network architecture used in the study was Convolutional Neural Networks (CNNs). |

Table 3: Examples of human-based ground truth versus machine-based responses for Q3.
significant accuracy. In contrast, more nuanced technical questions require improving the accuracy and clarity of the questions or careful contextualization and fine-tuning of the model. In the future, we hope this paper stimulates the adoption of artificially intelligent systems to aid humans in surveying larger corpora (e.g., thousands of articles) in acoustics.
## 5 Acknowledgments
This study was partially sponsored by the São Paulo Research Foundation (FAPESP) under grants #2017/08120-6, #2019/22795-1, and #2022/16168-7. We also thank Prof. Roberto Lotufo and Prof. Renato Lopes for their valuable discussions and suggestions.
|
2301.06521
|
Possible realization of hyperbolic plasmons in a few-layered rhenium
disulfide
|
The in-plane structural anisotropy in low-symmetric layered compound rhenium
disulfide ($\text{ReS}_2$) makes it a candidate to host and tune
electromagnetic phenomena specific for anisotropic media. In particular,
optical anisotropy may lead to the appearance of hyperbolic plasmons, a highly
desired property in optoelectronics. The necessary condition is a strong
anisotropy of the principal components of the dielectric function, such that at
some frequency range, one component is negative and the other is positive,
i.e., one component is metallic, and the other one is dielectric. Here, we
study the effect of anisotropy in $\text{ReS}_2$ and show that it can be a
natural material to host hyperbolic plasmons in the ultraviolet frequency
range. The operating frequency range of the hyperbolic plasmons can be tuned
with the number of $\text{ReS}_2$ layers.
|
Ravi Kiran, Dimitar Pashov, Mark van Schilfgaarde, Mikhail I. Katsnelson, A. Taraphder, Swagata Acharya
|
2023-01-16T17:10:01Z
|
http://arxiv.org/abs/2301.06521v1
|
# Possible realization of hyperbolic plasmons in a few-layered rhenium disulfide
###### Abstract
The in-plane structural anisotropy in low-symmetric layered compound rhenium disulfide (\(\mathrm{ReS}_{2}\)) makes it a candidate to host and tune electromagnetic phenomena specific for anisotropic media. In particular, optical anisotropy may lead to the appearance of hyperbolic plasmons, a highly desired property in optoelectronics. The necessary condition is a strong anisotropy of the principal components of the dielectric function, such that at some frequency range, one component is negative and the other is positive, i.e., one component is metallic, and the other one is dielectric. Here, we study the effect of anisotropy in \(\mathrm{ReS}_{2}\) and show that it can be a natural material to host hyperbolic plasmons in the ultraviolet frequency range. The operating frequency range of the hyperbolic plasmons can be tuned with the number of \(\mathrm{ReS}_{2}\) layers.
## I Introduction
The rise of hyperbolic materials in recent years promises important applications in optoelectronics and nanophotonics [1; 2; 3; 4; 5]. Light can acquire hyperbolic dispersion while passing through such materials; this occurs in a frequency range where different principal components of the longitudinal dielectric function (dielectric permittivity) have opposite signs. In the case of an isotropic medium, the material behaves as a dielectric, that is, it supports propagating electromagnetic waves, when the sign of the dielectric function is positive. When the latter is negative, the incident light is reflected, with only an exponentially decaying evanescent field penetrating the material, like for metals below the plasma threshold.
Anisotropy in electronic, optical, vibrational, and transport behaviour can occur when structural anisotropy is present, and if it is sufficiently strong, the different components of the dielectric permittivity tensor may acquire opposite signs to turn the material hyperbolic. In two-dimensional crystals, in-plane anisotropy strong enough to make the material hyperbolic is a unique situation that allows one to confine short wavelengths (large wave vectors) inside a material, promising smaller sizes for optoelectronic devices. Optical anisotropy in rhenium disulfide (\(\mathrm{ReS}_{2}\)) has been established both for bulk crystals [6; 7] and for thin layers [8]. In this work we predict that \(\mathrm{ReS}_{2}\), which appears in a distorted 1T phase, can realize hyperbolic plasmons depending on the number of layers.
It has been suggested that anisotropic 2D materials can be tuned to become hyperbolic via electrostatic tuning, strain or dimensionality and can host hyperbolic plasmons [9; 10]. Several studies [11; 12; 13] have investigated hyperbolic plasmons (HP) and their existence in naturally occurring materials. The strong anisotropy of \(\mathrm{ReS}_{2}\) hints at its potential as a natural hyperbolic material, offering possibilities for studying HP. Previous works [14; 15] have studied the band structure and anisotropic optical response of \(\mathrm{ReS}_{2}\), but the study of HP remains largely unexplored. Here we compute the ladder-vertex- and local-field-corrected plasmonic response of \(\mathrm{ReS}_{2}\) within a self-consistent solution of the Bethe-Salpeter equation (BSE) as implemented in _Questaal_ [16].
The rest of the paper is organized as follows. In Sec. II.1, we describe the anisotropic atomic structure of \(\mathrm{ReS}_{2}\) and in Sec. II.2, we briefly describe theoretical methods and provide computational details. In Secs. III.1 and III.2 we present our results on the electronic structure and optical properties of \(\mathrm{ReS}_{2}\). In Sec. IV, we briefly summarize our results and conclude the paper.
## II Atomic structure and computational details
### Atomic structure
\(\mathrm{ReS}_{2}\) belongs to the family of two-dimensional (2D) layered transition metal dichalcogenides (TMDs) of the form \(\mathrm{MX}_{2}\), where M is a transition metal atom (Mo, W, Re, ...) and X is a group-16 atom (S, Se, Te). The atomic structure of \(\mathrm{ReS}_{2}\) layers has neither H nor T character. Unlike other TMDs, which usually have a 1H or 1T structure in their ground state, \(\mathrm{ReS}_{2}\) crystallizes in a distorted-1T structure with clustering of Re units forming parallel metal chains along the van der Waals plane (see Fig. 1).
The compound \(\mathrm{ReS}_{2}\) belongs to the triclinic symmetry group \(\mathrm{P}\bar{1}\), resembling a distorted \(\mathrm{CdCl}_{2}\) structure. It comprises three atomic layers, S-Re-S, where covalent bonds join Re and S. The adjacent layers of \(\mathrm{ReS}_{2}\) are coupled by weak van der Waals (vdW) forces to form bulk crystals. The unit cell is derived from hexagonal
symmetry towards a distorted 1T structure, in which Re atoms group into parallelograms of four Re atoms. The formation of Re chains breaks the hexagonal symmetry and doubles the unit cell size. Hence the unit cell of single-layer \(\text{ReS}_{2}\) in the distorted-1T phase is composed of four Re and eight S atoms. In pristine \(\text{ReS}_{2}\), the valence band maximum is composed of 5d orbitals of Re atoms and 3p orbitals of S atoms, and the conduction band minimum is derived from 5d orbitals of Re atoms. The Brillouin zone of \(\text{ReS}_{2}\) is hexagonal but with unequal sides as a result of the distorted atomic structure. Energy band structures were generated along the symmetry lines shown in Fig. 1.
### Computational Details
#### II.2.1 LDA, QS_GW_, and QSG\(\widehat{W}\) self-consistency
Single-particle calculations (LDA, and the quasi-particle self-consistent _GW_ [18] (QS_GW_) self-energy \(\Sigma^{0}(k)\)) were performed on a \(12\times 12\times 12\) (Monkhorst-Pack) k-mesh for bulk and \(12\times 12\times 1\) for ML and BL. An energy cutoff of 400 eV was used, with Gaussian smearing of width \(0.05\) eV. Tolerances of \(10^{-5}\) eV and \(2\times 10^{-6}\) were used for the convergence of the energy and the RMS density, respectively. The charge density was made self-consistent for each iteration in the QS_GW_ self-consistency cycle. The QS_GW_ cycle was iterated until the RMS change in \(\Sigma^{0}\) reached \(10^{-5}\) Ry; thus the calculation was self-consistent in both \(\Sigma^{0}(k)\) and the density. Numerous checks were made to verify that the self-consistent \(\Sigma^{0}(k)\) was independent of the starting point. For ML-\(\text{ReS}_{2}\), we performed a rigorous check of the vacuum correction to the band gap and dielectric screening by increasing the vacuum size from \(10\,\text{\AA}\) to \(45\,\text{\AA}\). Since along the \(z\)-direction there is vacuum, the dielectric constant, which is the real part of the macroscopic dielectric response at \(\omega=0\), should be close to unity.
In the present work, the electron-hole two-particle correlations are incorporated within a self-consistent ladder BSE implementation [19] with Tamm-Dancoff approximation [20]. Ladder diagrams are included in the polarizability \(P\) that makes \(W\), via the solution of a Bethe-Salpeter equation (BSE); thus this form of \(GW\) goes beyond the RPA in constructing the self-energy \(\Sigma=iGW\). The electron-hole attraction from the ladders enhances \(P\), thus reducing \(W\), which in turn reduces the bandgap. A static vertex is used to construct \(P\). \(G\) and \(W\) are calculated self-consistently, in quasiparticlized form [18]: \(G\) and \(W\) are updated iteratively until all of them converge (QS_GW_). When ladders are incorporated into \(W\), we denote the process as QS\(G\widehat{W}\) to signify \(W\) was computed from the BSE. The macroscopic dielectric function we present here, \([\epsilon_{\mathbf{G}=0,\mathbf{G}^{\prime}=0}^{-1}(q{\rightarrow}0,\omega)]^{ -1}\), is also computed with the BSE.
The tetrahedron method is employed for integration over the Brillouin zone to calculate the optical spectrum. When calculating the dielectric response within BSE, the valence and conduction states that form the two-particle Hamiltonian are increased until the two-particle eigenvalues converge within an accuracy of 10 meV. For \(\text{ReS}_{2}\) the excitons are essentially Wannier-Mott [21] in nature and only the states at the valence band top and conduction band bottom contribute to their formation, so the convergence in the two-particle Hamiltonian size is much faster compared to the cases of CrX\({}_{3}\) [22], where the excitons have Frenkel character and many valence and conduction bands over several electron volts form them. However, for the present work, our focus is the plasmonic response, while we will note the excitonic binding energies in different layered variants later.
Table (1) contains the lattice parameters for \(\text{ReS}_{2}\) used throughout the calculations.
Figure 1: The ball-stick model of the bulk distorted 1T diamond-chain \(\text{ReS}_{2}\) obtained using the VESTA software [17] is presented. Top view (Fig. 1(a)) and side view (Fig. 1(b)) of the distorted 1T-\(\text{ReS}_{2}\); Re atoms are in red and S atoms are in green. The black outline shows the unit cell used for the calculation. The Re chain is along the b direction. Brillouin zone (bottom) of the corresponding hexagonal lattice with lines connecting high-symmetry points \(\Gamma\)-K1-K2-M2-\(\Gamma\)-K4-K3-M2-\(\Gamma\). \(a^{*}\) and \(b^{*}\) denote reciprocal lattice vectors.
## III Results and Discussions
### Quasiparticle Energies and Band Structure
The nature of the bandgap of \(\mathrm{ReS_{2}}\) has been widely debated in the literature. In the typical TMD family, the bandgap becomes direct when the thickness is reduced to a monolayer, ensuring that coupling with light is strong. One study from 2014 [23] reported a direct bandgap for bulk \(\mathrm{ReS_{2}}\), thus generating considerable interest in the system. However, both older studies such as [24; 25] and more recent studies such as [26; 27] report that bulk \(\mathrm{ReS_{2}}\) is an indirect-bandgap semiconductor.
The electronic band structures (with spin-orbit coupling included) for bulk, BL and ML are shown in Fig. 2 and the band gaps at different levels of theory are summarized in Table 2. The free-standing ML of \(\mathrm{ReS_{2}}\) has been simulated for these calculations with the parameters shown in Table 1. Similar to prior work on monolayers of chromium trihalides [28], we check for convergence and scaling of the band gap and the dielectric constant \(\epsilon_{\infty}\) with vacuum size. We obtain an LDA band gap of \(\sim\)1.29 eV, which is significantly lower than the QS_GW_ band gap of \(\sim\)2.75 eV. LDA is known to underestimate the band gaps in semiconductors, and the enhancement in the QS_GW_ band gap relative to LDA is standard [18]. QS_GW_ usually overcorrects the gap because \(W\) is universally too large within the random phase approximation (RPA), and for the same reason it underestimates the dielectric constant \(\epsilon_{\infty}\) [19]. Adding ladders largely eliminates both tendencies. In the present case, extending QS_GW_\(\rightarrow\)QSG\(\widehat{W}\) causes only a modest reduction in the gap, to \(\sim\)2.66 eV, suggesting insignificant corrections to the self-energy originating from the BSE. This result is consistent with previous theoretical work [29] on ML-\(\mathrm{ReS_{2}}\). The self-energy and the reduced screening increase the band gap and modify the band topology, which is observed in the ML-\(\mathrm{ReS_{2}}\). The band gap at the level of LDA is direct, but the two-particle interactions lower the valence band maximum (VBM), which was at the \(\Gamma\) point, by about 150 meV. A similar kind of change in band topology has been observed for \(\mathrm{ReSe_{2}}\) in [29] but is absent in some other theoretical work [14].
We obtain a similar band topology in the bulk and BL variants of \(\mathrm{ReS_{2}}\); however, the band gap values are \(\sim\)1.7 eV and \(\sim\)2.3 eV, respectively. The nature of the band gap in \(\mathrm{ReS_{2}}\) is different from more commonly studied semiconducting TMDs (e.g., \(\mathrm{MoS_{2}}\), \(\mathrm{WS_{2}}\), etc.), where the bulk and few-layer variants show an indirect band gap and the ML is direct. In this work, we observe a direct band gap at the LDA level and an indirect band gap for all the variants at the QSG\(\widehat{W}\) level. This nature of the quasiparticle band gap does not conflict with experimental measurement because, unlike our calculated free-standing cases, the measured samples are on substrates and are inevitably doped. The resulting self-energy corrections will be reduced under these conditions, resulting in a slightly indirect band gap for the measured samples. We note that the direct-to-indirect transition may not be sharp because the energy difference between the direct and indirect gap is small and external perturbations can affect the conclusion. However, our observations on the indirect nature of the band gap are important for many reasons. In most monolayer TMDs the gap is direct, which makes them sufficiently bright, and the electron-hole radiative lifetimes are extremely short (often picoseconds). In systems with indirect band gaps, by contrast, radiative and non-radiative processes compete, since the indirect states have longer lifetimes, making such systems candidates for optoelectronics and photovoltaics [30].
### Optical Absorption Spectra : Hyperbolic Plasmons
The anisotropic optical absorption has been previously studied using DFT [31]. Experimentally [26] it is demonstrated that the reduced crystal symmetry of \(\mathrm{ReS_{2}}\) leads to anisotropic optical properties that persist from the
| **Theory** | **LDA** | **QS_GW_** | **QSG\(\widehat{W}\)** |
|---|---|---|---|
| **Bulk** | 1.15 | 1.75 | 1.7 |
| **ML** | 1.29 | 2.75 | 2.66 |
| **BL** | 1.23 | 2.35 | 2.3 |

Table 2: Band gap (eV) of the bulk, BL and ML variants of \(\mathrm{ReS_{2}}\) at different levels of theory (with spin-orbit coupling). The gap increases from the LDA to the QS_GW_ level. The screening changes only moderately when two-particle interactions are added (via a BSE, \(W\rightarrow\widehat{W}\)), thus only weakly decreasing the QS_GW_ band gap.
| Structure | a (Å) | b (Å) | c (Å) | \(\alpha\) (°) | \(\beta\) (°) | \(\gamma\) (°) |
|---|---|---|---|---|---|---|
| Bulk | 6.41695 | 6.52047 | 7.28252 | 91.8128 | 103.5630 | 118.8390 |
| ML | 6.41910 | 6.52306 | 45 | 90.7434 | 95.7909 | 118.8366 |
| BL | 6.41910 | 6.52306 | 28.66062 | 84.0127 | 89.7412 | 61.1634 |

Table 1: Lattice parameters of bulk, monolayer (ML) and bilayer (BL) \(\mathrm{ReS}_{2}\).
bulk down to the monolayer limit. The absence of excitonic correlations and underestimated band gaps in LDA studies hide several physical consequences in ReS\({}_{2}\). Advanced theoretical studies such as [29] tackle anisotropic optical responses at the BSE level for monolayer ReS\({}_{2}\), where ladder vertex corrected optical properties are computed on top of a single shot DFT based G\({}_{0}\)W\({}_{0}\) one-particle description.
The large structural anisotropy in some 2D materials (for example, a 4:3 anisotropy of the in-plane lattice constants in black phosphorus [9] and solid nitrogen [32; 33; 34]) makes them perfect candidates for hyperbolic materials and a natural place to look for HP. This offers new possibilities, as hyperbolic materials showcase a wide variety of interesting properties, such as modes which transport heat by photon tunnelling with an efficiency close to the theoretical limit [35], and broadband absorption [36]. We first define the condition for hyperbolicity. The hyperbolic region appears when
\[\epsilon_{1}^{xx}(\omega)\cdot\epsilon_{1}^{yy}(\omega)<0 \tag{1}\]
where \(\epsilon_{1}^{xx}(\omega)\) and \(\epsilon_{1}^{yy}(\omega)\) are the real parts of the dielectric response along the x and y directions, respectively. We assume that \(\epsilon_{1}^{xy}(\omega)=0\) by symmetry and, thus, \(x\) and \(y\) are the principal directions of the dielectric permittivity. For the different variants of ReS\({}_{2}\), the real and imaginary parts of the dielectric response are plotted in Fig. 3. We observe a significant difference in the optical response for incident light polarized along different directions. While ML-ReS\({}_{2}\) (Fig. 3, left panel) hosts some strongly bound anisotropic excitons deep inside the one-particle gap, HP are absent. For BL-ReS\({}_{2}\) (Fig. 3, center panel), Re(\(\epsilon_{yy}\)) becomes negative at \(\omega=6.65\,\mathrm{eV}\), which results in a hyperbolic region starting at that frequency, with an energy window of \(0.43\,\mathrm{eV}\). For bulk ReS\({}_{2}\) this energy window increases to \(0.76\,\mathrm{eV}\). The sign change is key to the appearance of the hyperbolic region, and it becomes more apparent in Fig. 4, where we plot the product \(\epsilon_{1}^{xx}(\omega)\times\epsilon_{1}^{yy}(\omega)\), which becomes negative in the energy window. \(\epsilon_{2}\) remains large for both bulk and BL in the hyperbolic energy window, suggesting strong damping of the HP. In the BL, these plasmons are less damped compared to the bulk. Also, note that these HP in ReS\({}_{2}\) are in the ultraviolet range, in contrast to the infrared HP in CuS nanocrystals [13].
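As an aside, once the principal components \(\epsilon_{1}^{xx}(\omega)\) and \(\epsilon_{1}^{yy}(\omega)\) are tabulated on a frequency grid, locating the hyperbolic window of Eq. (1) reduces to a sign test on their product. The sketch below uses synthetic placeholder spectra, not the computed \(\mathrm{ReS}_{2}\) data from the QSG\(\widehat{W}\)/BSE calculation.

```python
# Sketch: find frequency windows where Re(eps_xx) * Re(eps_yy) < 0 (Eq. 1).
# The dielectric data here are synthetic placeholders, not the computed ReS2 spectra.
import numpy as np

omega = np.linspace(5.0, 8.0, 601)            # photon energy grid (eV)
eps1_xx = 3.0 - 0.2 * (omega - 5.0)           # stays positive over the grid (placeholder)
eps1_yy = 2.5 - 1.2 * (omega - 5.0)           # crosses zero near 7.1 eV (placeholder)

hyperbolic = (eps1_xx * eps1_yy) < 0          # Boolean mask of the hyperbolic region

# Extract contiguous windows from the mask.
edges = np.diff(hyperbolic.astype(int))
starts = omega[1:][edges == 1]
ends = omega[1:][edges == -1]
if hyperbolic[0]:
    starts = np.insert(starts, 0, omega[0])
if hyperbolic[-1]:
    ends = np.append(ends, omega[-1])

for lo, hi in zip(starts, ends):
    print(f"hyperbolic window: {lo:.2f}-{hi:.2f} eV (width {hi - lo:.2f} eV)")
```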
The inherent anisotropy in ReS\({}_{2}\) provides an opportunity to tune its magnitude by applying strain. We apply uni-directional strains (\(\gamma\)) along \(x\) and \(y\), respectively, and explore the hyperbolic region. We apply up to 4% strain and see that the ML never hosts HP. However, in bulk, on application of compressive unidirectional strain (\(\gamma_{x}\)) along \(x\), the HP window increases up to \(\sim\)1.3 eV. These enhanced HPs also have less damping compared to the
Figure 2: QSG\(\widehat{W}\) band structures (with spin-orbit coupling) with contributions from Re (red) and S (green). The nature of the band gap is indirect for all the variants of ReS\({}_{2}\), with values of \(2.66\,\mathrm{eV}\) for ML (left), \(2.3\,\mathrm{eV}\) for BL (center) and \(1.7\,\mathrm{eV}\) for bulk (right).
| | \(\epsilon_{\infty}^{xx}\) | \(\epsilon_{\infty}^{yy}\) | \(\epsilon_{\infty}^{zz}\) | **Plasmonic frequency range** (eV) | **Exciton binding energy** (eV) |
|---|---|---|---|---|---|
| **Bulk** | 9.67 | 9.37 | 6.21 | 6.02 - 6.78 (0.76) | - |
| **BL** | 6.97 | 7.09 | 2.66 | 6.65 - 7.08 (0.43) | 0.3 |
| **ML** | 2.97 | 3.19 | 1.42 | - | 0.74 |

Table 3: Dielectric constant (real part of the dielectric response at \(\omega=0\)) calculated using QSG\(\widehat{W}\). As the dimensionality of ReS\({}_{2}\) is lowered, the frequency range where Re(\(\epsilon_{xx}\)) \(\cdot\) Re(\(\epsilon_{yy}\)) is less than zero becomes smaller.
unstrained compound. On the other hand, \(\gamma_{y}\) reduces the HP window. In the BL, application of \(\gamma_{x}\) almost entirely kills the HP window, while \(\gamma_{y}\) enhances the HP window and also yields less damped plasmonic modes. In short, we observe that while strain can be used to tune the hyperbolic energy window and the lifetimes of the plasmons, we could not produce HPs in the monolayer sample under any condition. However, the HPs remain robust in both the bulk and BL variants, and their stability can be further enhanced by selective application of unidirectional strain.
## IV Conclusion
Anisotropy is key to tuning material properties. Discontinuities at surfaces, residual strains, and metamaterials have been used as platforms for realizing anisotropic optical properties. Naturally occurring, structurally anisotropic materials, however, are not always hyperbolic.
In this work we show that the structural anisotropy in \(\mathrm{Re}\mathrm{S}_{2}\), even though much weaker than in materials like black phosphorus and solid nitrogen, leads to the occurrence of hyperbolic plasmons in a narrow energy window. The plasmonic resonances can be tuned in the far-ultraviolet frequency range by controlling the number of layers. Such tunability of the
Figure 4: The product of the real parts of the dielectric response along the \(x\) (\(\epsilon_{1}^{xx}(\omega)\)) and \(y\) (\(\epsilon_{1}^{yy}(\omega)\)) directions for ML (left), BL (center) and bulk (right). The vertical dashed line marks the frequency range in which the product is negative; this is the hyperbolic plasmonic frequency range.
Figure 3: Real part (\(\epsilon_{1}\)) of the dielectric response (top row) and imaginary part (bottom row) along the \(x\) and \(y\) directions for ML (left), BL (center) and bulk (right). The vertical dashed line marks the plasmonic frequency range, where \(\epsilon_{1}^{xx}(\omega)\times\epsilon_{1}^{yy}(\omega)<0\).
plasmons opens up new opportunities for optoelectronic devices. We further show that the hyperbolic region and its stability can be enhanced by unidirectional strains.
## V Acknowledgement
MIK and SA are supported by the ERC Synergy Grant, project 854843 FASTCORR (Ultrafast dynamics of correlated electrons in solids). MvS and DP were supported by the Computational Chemical Sciences program within the Office of Basic Energy Sciences, U.S. DOE under Contract No. DE-AC36-08GO28308. This research used resources of the National Energy Research Scientific Computing Center (NERSC), award BES-ERCAP0021783, under DOE Contract No. DE-AC02-05CH11231. We acknowledge PRACE for awarding us access to Irene-Rome hosted by TGCC, France and Juwels Booster and Cluster, Germany.
|
2310.10402
|
Real-Fake: Effective Training Data Synthesis Through Distribution
Matching
|
Synthetic training data has gained prominence in numerous learning tasks and
scenarios, offering advantages such as dataset augmentation, generalization
evaluation, and privacy preservation. Despite these benefits, the efficiency of
synthetic data generated by current methodologies remains inferior when
training advanced deep models exclusively, limiting its practical utility. To
address this challenge, we analyze the principles underlying training data
synthesis for supervised learning and elucidate a principled theoretical
framework from the distribution-matching perspective that explicates the
mechanisms governing synthesis efficacy. Through extensive experiments, we
demonstrate the effectiveness of our synthetic data across diverse image
classification tasks, both as a replacement for and augmentation to real
datasets, while also providing benefits such as out-of-distribution generalization,
privacy preservation, and scalability. Specifically, we achieve 70.9% top1
classification accuracy on ImageNet1K when training solely with synthetic data
equivalent to 1 X the original real data size, which increases to 76.0% when
scaling up to 10 X synthetic data.
|
Jianhao Yuan, Jie Zhang, Shuyang Sun, Philip Torr, Bo Zhao
|
2023-10-16T13:45:26Z
|
http://arxiv.org/abs/2310.10402v2
|
# Real-Fake: Effective Training Data Synthesis Through Distribution Matching
###### Abstract
Synthetic training data has gained prominence in numerous learning tasks and scenarios, offering advantages such as dataset augmentation, generalization evaluation, and privacy preservation. Despite these benefits, the efficiency of synthetic data generated by current methodologies remains inferior when training advanced deep models exclusively, limiting its practical utility. To address this challenge, we analyze the principles underlying training data synthesis for supervised learning and elucidate a principled theoretical framework from the distribution-matching perspective that explicates the mechanisms governing synthesis efficacy. Through extensive experiments, we demonstrate the effectiveness of our synthetic data across diverse image classification tasks, both as a replacement for and augmentation to real datasets, while also benefiting challenging tasks such as out-of-distribution generalization and privacy preservation1.
Footnote 1: Code is released at [https://github.com/BAAI-DCAI/Training-Data-Synthesis](https://github.com/BAAI-DCAI/Training-Data-Synthesis). Corresponding to Bo Zhao \(\langle\) [email protected]\(\rangle\).
## 1 Introduction
Large-scale annotated datasets (Deng et al., 2009; Lin et al., 2014) are essential in deep learning for image classification, providing the comprehensive information that models need to effectively discover patterns, learn representations, and generate accurate predictions. However, manually collecting such datasets is time-consuming and labor-intensive, and may cause privacy concerns. Given these challenges, training data synthesis offers a promising alternative to traditional data collection for augmenting or replacing original real datasets.
Among numerous image synthesis strategies, deep generative models have gained significant attention, primarily due to their capacity to produce high-fidelity images. Early studies (Besnier et al., 2020; Li et al., 2022; Zhang et al., 2021; Zhao and Bilen, 2022) utilize Generative Adversarial Networks (GANs) to synthesize annotated training data for image classification and segmentation. Recently, more works have focused on synthesizing training data with the powerful diffusion models for self-supervised pre-training (Tian et al., 2023), transfer learning (He et al., 2022), domain generalization (Yuan et al., 2022; Bansal and Grover, 2023), and supervised image classification (Azizi et al., 2023; Sarryildiz et al., 2023; Lei et al., 2023). However, despite the extensive research, a notable performance gap persists when comparing the performances of models trained on synthetic data to those trained on real data. A primary reason for this discrepancy is the misalignment between the synthetic and real data distributions, even though the diffusion model is trained on web-scale datasets, as illustrated in Fig. 1(a). While previous works have attempted to address this issue through heuristic-driven approaches such as prompt engineering (Sarryildiz et al., 2023; Lei et al., 2023) and the expensive inversion approaches (Zhao and Bilen, 2022; Zhou et al., 2023), these solutions are neither sufficient nor efficient. Furthermore, there is an absence of theoretical frameworks that can adequately explain and analyze the efficacy of synthetic training data in a principled way.
To further enhance the quality and utility of synthetic training data produced by deep generative models, we present a theoretical framework for training data synthesis from a distribution-matching perspective. Specifically, starting from the first principle of supervised learning (Sutskever, 2023; Cunningham et al., 2008), we recast the training data synthesis as a distribution matching problem. This emphasizes two primary principles of synthetic data: **(1) The distribution discrepancy between target and synthetic data**, and **(2) The cardinality of the training set**.
Building on this foundation, we employ the state-of-the-art text-to-image diffusion model, Stable Diffusion (Rombach et al., 2022), to undertake a careful analysis and refinement of the training objectives, condition generation, and prior initialization to achieve a better alignment between synthetic and target data distributions. We empirically validate our theoretical framework and synthesis method across diverse benchmarks, covering various scenarios: (1) training exclusively with synthetic data, (2) augmenting the real training data, and (3) evaluating the scaling law between synthetic data and performance. In particular, for ImageNet1k classification using ResNet50 (He et al., 2016), training solely with synthetic data equivalent to 1 \(\times\) the original real data size yielded a 70.9% Top1 classification accuracy, which increases to 76.0% when using 10 \(\times\) synthetic data. Additionally, we explore (1) out-of-distribution (OOD) generalization and (2) privacy preservation when learning with synthetic data, and present promising results. Beyond advancing the state of the art, our findings offer insights into potential strategies for refining the training data synthesis pipeline. Our primary contributions are as follows:
* Introducing a principled distribution matching framework for training data synthesis, emphasizing two foundational aspects that drive the effectiveness of synthetic data.
* Employing the state-of-the-art text-to-image diffusion model, with a comprehensive analysis and refinement of its components, to better align synthetic and target data distributions.
* Advancing the state-of-the-art in training data synthesis for image classification tasks, while demonstrating the advantages in OOD generalization and privacy preservation.
## 2 Background
### Training Data Synthesis
Synthesizing informative training samples remains a compelling yet challenging area of research. One line of work focuses on synthesis through pre-trained deep generative models. Early attempts (Zhang et al., 2021; Li et al., 2022; Zhao and Bilen, 2022) explore Generative Adversarial Networks (GANs) (Goodfellow et al., 2014; Brock et al., 2018) for informative and annotated training sample synthesis. Recently, diffusion models have gained more attention. He et al. (2022); Tian et al. (2023) employ synthetic data from text-conditioned diffusion models for self-supervised pre-training and few/zero-shot learning, highlighting the transfer learning capacity of synthetic training data. Also, Yuan et al. (2022); Bansal and Grover (2023); Vendrow et al. (2023) demonstrate the utility of synthetic data in out-of-distribution settings by augmenting and diversifying the training dataset. Additionally, Sarryildiz et al. (2023); Lei et al. (2023); Azizi et al. (2023) utilize Stable Diffusion (Rombach et al., 2022) and Imagen (Saharia et al., 2022) with prompt engineering to synthesize class-conditioned samples, showcasing the potential of using synthetic data solely for training classification models.
Figure 1: **Left:** Visualization of the synthetic and real ImageNet data distribution using the first two principal components of features extracted by the CLIP image encoder. Our synthetic data better aligns with real data distribution than the baseline (vanilla Stable Diffusion). **Middle:** Our synthetic data achieves better performance compared with the baseline, and can effectively augment real data across all datasets. **Right:** Scaling up synthetic training data can improve the image classification performances in both in-distribution and out-of-distribution (OOD) tasks, even outperform training with real data in OOD tasks.
Despite the promising results in various tasks, a noticeable performance gap remains between models trained on real and synthetic datasets. This difference can be attributed to the misalignment between the distributions of the synthetic training data and the target downstream data. Existing research (Sariyildiz et al., 2023; Lei et al., 2023) employs prompt engineering to bridge the domain gap, which is insufficient. Zhou et al. (2023) implement diffusion inversion to obtain synthetic images close to real ones, which is expensive and unscalable to large datasets. To understand the mechanisms underlying the efficacy of synthetic data, we aim to find the theoretical principle and further tackle the challenges associated with suboptimal training data synthesis.
Another line of work focuses on synthesizing informative training samples by distilling the large real dataset into smaller synthetic one, i.e., dataset distillation. These methods optimize synthetic images (pixels) through the minimization of meta-loss (Wang et al., 2018), gradient matching loss (Zhao et al., 2021), trajectory matching loss (Cazenavette et al., 2022) and distribution matching loss (Zhao and Bilen, 2023; Zhao et al., 2023). In particular, distribution matching approaches have gained prominence due to their model-agnostic nature. However, the intricate and expensive optimization processes of dataset distillation often pose challenges in scalability. Despite these challenges, the principled approaches of dataset distillation have proven effective and inspired our work.
### Diffusion Probabilistic Models
Diffusion Probabilistic Models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Nichol and Dhariwal, 2021) are latent variable models that learn a data distribution \(q(\mathbf{x}_{0})\) by reversing a gradual noising process. They define a forward diffusion Markov process \(q\left(\mathbf{x}_{1:T}\mid\mathbf{x}_{0}\right)=\prod_{t=1}^{T}q\left(\mathbf{x}_{t} \mid\mathbf{x}_{t-1}\right)\) that uses a handcrafted Gaussian transition kernel, \(q\left(\mathbf{x}_{t}\mid\mathbf{x}_{t-1}\right)=\mathcal{N}\left(\mathbf{x}_{t};\sqrt{1- \beta_{t}}\mathbf{x}_{t-1},\beta_{t}\mathbf{I}\right)\), with a noise schedule \(\beta_{t}\in(0,1)\), to transform the data distribution into a known prior Gaussian distribution. Then, a reverse Markovian process \(p_{\mathbf{\theta}}\left(\mathbf{x}_{0:T}\right)=p\left(\mathbf{x}_{T}\right)\prod_{t=1}^{T }p_{\mathbf{\theta}}\left(\mathbf{x}_{t-1}\mid\mathbf{x}_{t}\right)\) is learned to gradually remove noise added in the forward process, with Gaussian transitions parameterized by a neural network: \(p_{\mathbf{\theta}}\left(\mathbf{x}_{t-1}\mid\mathbf{x}_{t}\right)=\mathcal{N}\left(\mathbf{x }_{t-1};\mathbf{\mu}_{\mathbf{\theta}}\left(\mathbf{x}_{t},t\right),\mathbf{\Sigma}_{\mathbf{ \theta}}\left(\mathbf{x}_{t},t\right)\right)\). The training objective of the diffusion model is to minimize the KL-divergence between the joint distributions of the forward and backward processes. This can be further simplified to optimizing the standard variational bound on the negative log-likelihood of the model (Yang et al., 2022) (detailed proof can be found in Apx. A).
\[\mathrm{KL}\left[q\left(\mathbf{x}_{0},\mathbf{x}_{1},\cdots,\mathbf{x}_{T}\right)\|p_{ \mathbf{\theta}}\left(\mathbf{x}_{0},\mathbf{x}_{1},\cdots,\mathbf{x}_{T}\right)\right]\geq \mathbb{E}\left[-\log p_{\mathbf{\theta}}\left(\mathbf{x}_{0}\right)\right] \tag{1}\]
In practice, by marginalizing over the intermediate sampling steps, an analytic form of the forward process can be obtained: \(q\left(\mathbf{x}_{t}\mid\mathbf{x}_{0}\right)=\mathcal{N}\left(\mathbf{x}_{t};\sqrt{ \bar{\alpha}_{t}}\mathbf{x}_{0},(1-\bar{\alpha}_{t})\mathbf{I}\right)\), where \(\alpha_{t}:=1-\beta_{t}\) and \(\bar{\alpha}_{t}:=\prod_{s=0}^{t}\alpha_{s}\). By further setting the variance in the reverse process to a non-learnable constant \(\sigma_{t}\) and choosing a specific parameterization of \(\epsilon_{\mathbf{\theta}}\) to predict the added noise \(\epsilon\), the overall training objective becomes:
\[\mathbb{E}_{\mathbf{x},\mathbf{\epsilon}\sim\mathcal{N}(0,1),t}\left[\frac{\beta_{t}^ {2}}{2\sigma_{t}^{2}\alpha_{t}(1-\bar{\alpha}_{t})}\left\|\mathbf{\epsilon}-\mathbf{ \epsilon}_{\mathbf{\theta}}\left(\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{ \alpha}_{t}}\mathbf{\epsilon},t\right)\right\|^{2}\right], \tag{2}\]
where \(\mathbf{x}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\mathbf{ \epsilon}\) is derived from the reparameterization of the marginal distribution \(q\left(\mathbf{x}_{t}\mid\mathbf{x}_{0}\right)\). This objective (Ho et al., 2020) aligns with the denoising score matching loss of Score-based Generative Models (SGM) (Song and Ermon, 2019) when appropriately reweighted.
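For concreteness, a minimal, framework-agnostic sketch of the simplified noise-prediction objective in Eq. (2) (with the common unit weighting per time step) is given below; `model` is a placeholder denoiser and `alphas_cumprod` a precomputed noise schedule, not the specific Stable Diffusion components used later in this paper.

```python
import torch
import torch.nn.functional as F

def ddpm_loss(model, x0, alphas_cumprod):
    """Simplified denoising objective: sample t, noise x0 to x_t via
    q(x_t | x_0), and regress the added noise eps with an MSE loss."""
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    a_bar = alphas_cumprod[t].view(b, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps  # reparameterized q(x_t | x_0)
    eps_pred = model(x_t, t)
    return F.mse_loss(eps_pred, eps)
```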
Leveraging the theoretical foundation, in this study, we focus on the state-of-the-art Latent Diffusion Model (LDM), Stable Diffusion (Rombach et al., 2022), to provide an alternative interpretation of its training objectives and generative sampling process from a distribution matching perspective for training data synthesis.
## 3 Training Data Synthesis: A Distribution Matching Perspective
The goal of training data synthesis is to generate synthetic data from the target distribution \(D:=q(x,y)\) of data \(x\) and annotation \(y\). In supervised training (Sutskever, 2023; Cunningham et al., 2008), the difference between training and testing error over the whole target distribution \(D\) is bounded by the inverse of the square root of the sampled training set cardinality \(|S|\). Please refer to Apx. B for
the detailed proof.
\[\Pr_{S\sim D^{|S|}}\left[\mathrm{Test}_{D}(f)-\mathrm{Train}_{S}(f)\leq\sqrt{ \frac{\log|\mathcal{F}|+\log 1/\delta}{|S|}}\forall f\in\mathcal{F}\right]>1-\delta \tag{3}\]
For training data synthesis given a fixed model space \(\mathcal{F}\) (i.e. fixed model structure with finite parameters), a salient takeaway from Eq. (3) is the identification of two pivotal factors: **(1)**_Training and testing data distribution discrepancy_ **(2)**_Cardinality of training set_. This formalizes the first principle of training data synthesis: _With infinite training samples from target distribution, the testing error will converge to the minimized training error_.
However, since direct sampling from the data distribution can be intractable, we instead learn a generative model \(p_{\mathbf{\theta}}\), parameterized by \(\mathbf{\theta}\), that can synthesize data following the same distribution, i.e., \(p_{\mathbf{\theta}}(\mathbf{x},y)=q(\mathbf{x},y)\) (Bishop, 2006). This effectively converts the problem of informative training data synthesis into a distribution matching problem. We can further reframe this distribution matching problem as \(q(y|\mathbf{x})q(\mathbf{x})=p_{\mathbf{\theta}}(y|\mathbf{x})p_{\mathbf{\theta}}(\mathbf{x})\) using Bayes' rule. This allows us to separate the problem of matching a joint data-annotation distribution into two sub-problems: **(1)** data distribution matching \(q(\mathbf{x})=p_{\mathbf{\theta}}(\mathbf{x})\); **(2)** conditioned class likelihood matching \(q(y|\mathbf{x})=p_{\mathbf{\theta}}(y|\mathbf{x})\). Under the classification protocol, the former ensures in-distribution data synthesis, and the latter ensures a robust decision boundary between classes. Overall, the objective of training data synthesis for supervised learning can be framed as the following optimization:
\[\boxed{S^{*}=\operatorname*{arg\,min}_{S\sim p_{\mathbf{\theta}}(\mathbf{x},y)}\left( D(q(\mathbf{x}),p_{\mathbf{\theta}}(\mathbf{x}))+D(q(y|\mathbf{x}),p_{\mathbf{\theta}}(y|\mathbf{x}))- \lambda|S|)\right)} \tag{4}\]
where \(S^{*}\) denotes the optimal synthetic data sampled from the learned distribution \(S\sim p_{\mathbf{\theta}}(\mathbf{x},y)\), \(D(\cdot,\cdot)\) is a distance measure between two distributions. The regularization term \(\lambda|S|\) with \(\lambda\in\mathbb{R}^{+}\) encourages a larger training set. Fortunately, both distribution matching problems can be solved with deep generative models. In particular, with the diffusion model, the data distribution is learned through a denoising process, and the class likelihood can be modeled through classifier or classifier-free guidance (Dariwal and Nichol, 2021; Ho and Salimans, 2022).
Based on this theoretical framework, we perform an analysis of each component of the diffusion model in the context of distribution matching and propose potential improvements. Specifically, we introduce a distribution matching-centric synthesis framework tailored for training data synthesis, including three aspects of (1) feature distribution alignment (2) conditioned visual guidance (3) latent prior initialization.
### Distribution Matching with Maximum Mean Discrepancy
Firstly, we quantify the distribution discrepancy between target and synthetic data. Although the KL-divergence minimization objective of the diffusion model implicitly provides an upper bound for data distribution matching (Yang et al., 2022), this bound is observed to be loose (Kingma et al., 2021) due to the gap between the bound and the negative log-likelihood in Eq. (1). More importantly, the discrete nature of the empirical distribution (i.e., data samples) in the context of training data synthesis further makes it a suboptimal measure. Instead, we measure this discrepancy with the Maximum Mean Discrepancy (MMD) (Gretton et al., 2008), which alleviates the problem of potentially biased discrete distributions as a sample test and is widely and successfully applied in dataset distillation (Zhao and Bilen, 2023) because of its non-parametric nature and simplicity. In the end, we find a mathematical equivalence between the training objective of the diffusion model and the minimization of an MMD upper bound under certain assumptions, which allows us to further relax the variational bound and better align the data distributions in the feature space.
Consider a (real) target dataset \(\mathcal{R}=\left\{\left(k_{1},y_{1}\right),\ldots,\left(k_{|\mathcal{R}|},y_ {|\mathcal{R}|}\right)\right\}\) and a synthetic dataset \(\mathcal{S}=\left\{\left(s_{1},y_{1}\right),\ldots,\left(s_{|\mathcal{S}|},y_ {|\mathcal{S}|}\right)\right\}\). Our objective is to minimize the MMD between target data distribution \(q(x)\) and the synthetic data distribution \(p_{\mathbf{\theta}}(x)\) represented by some feature extractor \(\psi\in\mathcal{F}\).
\[\mathrm{MMD}[\mathcal{F},p,q]=\sup_{\left\|\psi\right\|_{\mathcal{H}}\leq 1}\left(\mathbb{E}_{q}\left[\psi(\mathcal{R})\right]-\mathbb{E}_{p}\left[\psi(\mathcal{S})\right]\right), \tag{5}\]
where \(\psi\) represents a function residing within a unit ball in the universal Reproducing Kernel Hilbert Space \(\mathcal{H}\) (RKHS) (Hilbert, 1904). By following the reproductive properties of RKHS and empirically
estimating the expectations for all distributions, we can simplify Eq. (5) to Eq. (6):
\[\mathrm{MMD}^{2}[\mathcal{F},p,q]=\left\|\frac{1}{|\mathcal{R}|}\sum_{i=1}^{|\mathcal{R}|}\psi\left(k_{i}\right)-\frac{1}{|\mathcal{S}|}\sum_{j=1}^{|\mathcal{S}|}\psi\left(\mathbf{s}_{j}\right)\right\|_{\mathcal{H}}^{2}. \tag{6}\]
Please refer to Apx. C for the detailed proof.
In Latent Diffusion Model, \(\psi\) can be conceptualized as the Variational Autoencoder (VAE) with the latent embedding serving as the feature map for MMD computation. The distributions of target and synthetic data can be effectively approximated by the initial noise-free latent embedding \(\mathbf{x}_{0}\) and the predicted denoised latent embedding \(\mathbf{x}_{\mathbf{\theta}}\left(\mathbf{x}_{t},t\right)\). Assuming that MMD is computed within the same batch, where \(|\mathcal{R}|=|\mathcal{S}|=|\mathcal{N}|\), our objective can be further refined as \(||\frac{1}{|\mathcal{N}|}\sum_{i=1}^{|\mathcal{N}|}(\mathbf{x}-\mathbf{x}_{\mathbf{\theta }}\left(\mathbf{x}_{t},t\right))||_{\mathcal{H}}^{2}\). Following Ho et al. (2020), choosing the parameterization with \(\mathbf{\epsilon_{\mathbf{\theta}}}\) as a predictor of added noise \(\mathbf{\epsilon}\) from \(x_{t}\) in denoising, allows us to frame our distribution matching objective (DM) as Eq. (7):
\[L_{DM}:=||\frac{1}{|\mathcal{N}|}\sum_{i=1}^{|\mathcal{N}|}\left(\mathbf{\epsilon} _{0}-\mathbf{\epsilon_{\mathbf{\theta}}}\left(\mathbf{x}_{t},t\right)\right)||_{\mathcal{ H}}^{2}\leq\frac{1}{|\mathcal{N}|}\sum_{i=1}^{|\mathcal{N}|}||\left(\mathbf{ \epsilon}_{0}-\mathbf{\epsilon_{\mathbf{\theta}}}\left(\mathbf{x}_{t},t\right)\right)||_{ \mathcal{H}}^{2}. \tag{7}\]
Since the mean and norm operations are both convex functions, applying Jensen's inequality yields an upper bound on the original distribution matching objective under equal weighting of each time step. Note that this expression resembles the diffusion model training loss in Eq. (2). This indicates that, when training the diffusion model, we implicitly optimize an upper bound of the MMD between the synthetic and real data distributions. Moreover, this allows us to directly optimize the objective in Eq. (7) to mitigate the looseness of the variational bound. We use this objective to augment the original diffusion model loss during finetuning, which helps ensure a more aligned feature distribution under the MMD measure.
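A minimal sketch of the batch-level objective in Eq. (7) and one way it might be combined with the standard per-sample diffusion loss during finetuning; the weight `lambda_dm` is a hypothetical hyperparameter, not a value taken from this paper.

```python
import torch

def distribution_matching_loss(eps, eps_pred):
    """Eq. (7): squared norm of the batch-averaged noise residual. Averaging over
    the batch before taking the norm matches first moments of the two
    distributions (an MMD-style statistic in the latent/noise space), instead of
    penalizing every sample independently."""
    residual_mean = (eps - eps_pred).flatten(1).mean(dim=0)   # (D,)
    return residual_mean.pow(2).sum()

def finetune_loss(eps, eps_pred, lambda_dm=0.1):
    """Standard per-sample diffusion loss augmented with the DM objective."""
    per_sample = (eps - eps_pred).pow(2).flatten(1).sum(dim=1).mean()
    return per_sample + lambda_dm * distribution_matching_loss(eps, eps_pred)
```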
### Conditioned Generation via Text-Vision Guidance
Beyond the general feature-level distribution matching, i.e., \(q(\mathbf{x})=p_{\mathbf{\theta}}(\mathbf{x})\), a pivotal aspect of training data synthesis is to ensure a congruent conditional class distribution with well-defined decision boundaries, i.e., \(q(y|\mathbf{x})=p_{\mathbf{\theta}}(y|\mathbf{x})\). Classifier(-free) guidance (Dhariwal and Nichol, 2021; Ho and Salimans, 2022) plays a crucial role in the conditioned sampling process in the diffusion model. Under SGM framework, to match the conditioned class likelihood, we can equivalently match the score function of each distribution, i.e. \(\nabla_{\mathbf{x}_{t}}\log q(y|\mathbf{x})=\nabla_{\mathbf{x}_{t}}\log p_{\mathbf{\theta}}(y |\mathbf{x})\), which is estimated through the noise prediction \(\epsilon_{\mathbf{\theta}}\left(\mathbf{x}_{t}\right)\) by reformulating the conditional score function as Eq. (8):
\[\nabla_{\mathbf{x}_{t}}\log p_{\mathbf{\theta}}\left(y\mid\mathbf{x}_{t}\right)=\nabla_{ \mathbf{x}_{t}}\log p_{\mathbf{\theta}}\left(\mathbf{x}_{t}\mid y\right)-\nabla_{\mathbf{x}_{t }}\log p_{\mathbf{\theta}}\left(\mathbf{x}_{t}\right)=\frac{1}{\sqrt{1-\alpha_{t}}} \left(\mathbf{\epsilon_{\mathbf{\theta}}}\left(\mathbf{x}_{t},y\right)-\mathbf{\epsilon_{\mathbf{ \theta}}}\left(\mathbf{x}_{t}\right)\right) \tag{8}\]
Most works on training data synthesis align the conditioned distribution via text-prompt engineering with class-level descriptions (He et al., 2022), instance-level descriptions (Lei et al., 2023), and lexical definitions (Sariyildiz et al., 2023). Following (Lei et al., 2023), we incorporate the class name with the BLIP2 (Li et al., 2023) caption of each instance as the text prompt. Moreover, while the text condition offers certain adaptability, it ignores intrinsic visual information, including both low-level attributes, e.g., exposure and saturation, and high-level ones, e.g., the co-occurrence of objects and scenes. To address this, we adopt a more direct prompting strategy by conditioning on image features. In particular, we extract image features encoded with the CLIP (Radford et al., 2021) image encoder, compute the mean of the image embeddings for randomly sampled images of a class, and use it to estimate the intra-class feature distribution, i.e., the mean feature. This is then concatenated with the text embeddings for jointly finetuning the diffusion model using LoRA (Hu et al., 2021), thus injecting the extra conditional control into the cross-attention layers of the denoising UNet (Ronneberger et al., 2015). The resulting multi-modal condition (embedding) takes the form "photo of [class name], [Image Caption], [Intra-class Visual Guidance]".
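A hedged sketch of how the intra-class visual guidance could be assembled; `encode_image` and `encode_text` stand in for the CLIP image encoder and the diffusion model's text encoder, and the number of reference images `n_ref` is a hypothetical choice, so this illustrates the construction described above rather than a specific released implementation.

```python
import random
import torch

def build_condition(class_name, caption, class_images, encode_image, encode_text, n_ref=16):
    """Assemble the multi-modal condition
    "photo of [class name], [Image Caption], [Intra-class Visual Guidance]"."""
    # 1) Text part: class name plus an instance-level caption (e.g., from BLIP2).
    text_emb = encode_text(f"photo of {class_name}, {caption}")          # (T, D)

    # 2) Visual part: mean CLIP image embedding over randomly sampled class
    #    images, approximating the intra-class feature distribution by its mean.
    refs = random.sample(class_images, min(n_ref, len(class_images)))
    img_embs = torch.stack([encode_image(im) for im in refs])            # (n_ref, D)
    visual_guidance = img_embs.mean(dim=0, keepdim=True)                 # (1, D)

    # 3) Concatenate along the token axis; the joint embedding is injected into
    #    the UNet cross-attention and plays the role of the condition in Eq. (8).
    return torch.cat([text_emb, visual_guidance], dim=0)                 # (T + 1, D)
```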
### Latent Prior Initialization
Another key aspect in the sampling process of Latent Diffusion Model is the latent prior \(p_{\mathbf{\theta}}\left(\mathbf{x}_{T}\right)\) which acts as the informative guide for the reverse diffusion process, governed by Langevin dynamics (Song and Ermon, 2019; Parisi, 1981), as shown in Eq. (9):
\[\mathbf{x}_{t-1}=\mathbf{x}_{t}+\sqrt{2\alpha_{t}}\mathbf{z}_{t}-\alpha_{t}\nabla_{\mathbf{x} _{t}}\log p_{\mathbf{\theta}}(\mathbf{x}_{t}|\mathbf{x}_{T}),\mathbf{z}_{t}\sim\mathcal{N} \left(\mathbf{0},\mathbf{I}\right), \tag{9}\]
where \(p_{\mathbf{\theta}}(\mathbf{x}_{t}|\mathbf{x}_{T})\) represents the conditional distribution of \(\mathbf{x}_{t}\) given the latent prior \(\mathbf{x}_{T}\), and the corresponding score function \(\nabla_{\mathbf{x}_{t}}\log p_{\mathbf{\theta}}(\mathbf{x}_{t}|\mathbf{x}_{T})\) guides the Langevin dynamics towards regions of higher probability in the data distribution, ensuring the synthetic samples align closely with the target distribution. While diffusion models often employ a Gaussian distribution as the initial prior \(\mathbf{x}_{T}\sim q\left(\mathbf{x}_{T}\right):=\mathcal{N}\left(\mathbf{x}_{T};\mathbf{0}, \mathbf{I}\right)\), recent studies using latent inversion (Zhou et al., 2023; Lian et al., 2023) and learning-based prior rendering (Liao et al., 2023) have highlighted the advantages of informative non-Gaussian latent priors with better sampling speed and synthesis quality. However, obtaining such an informative prior can be expensive due to intensive computation or the need for an external architecture. Instead, following (Meng et al., 2021), we leverage the VAE encoder to obtain the latent code of specific real samples, which is computationally very cheap. This also provides an informative latent prior initialization closely aligned with the target distribution, resulting in better synthetic samples.
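Following (Meng et al., 2021) as cited above, a minimal sketch of this latent prior initialization: a real sample is encoded with the VAE and partially noised, and the reverse diffusion is started from that point rather than from pure Gaussian noise. `vae_encode` and the `strength` value are placeholders, not components or settings taken from this paper.

```python
import torch

def init_latent_prior(vae_encode, real_image, alphas_cumprod, strength=0.6):
    """Encode a real image into the VAE latent space and noise it up to an
    intermediate step t_start, yielding an informative non-Gaussian prior."""
    x0 = vae_encode(real_image)                            # latent code of a real sample
    t_start = int(strength * (len(alphas_cumprod) - 1))    # fraction of the chain to rerun
    a_bar = alphas_cumprod[t_start]
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise # forward-noise the latent
    return x_t, t_start                                    # start reverse diffusion here
```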
## 4 Experiments
Settings. We empirically evaluate the proposed distribution-matching framework and assess the utility of synthetic training data. We explore the application of synthetic data in various supervised image classification scenarios: **(1)** Replacing the real training set (Sec. 4.1), **(2)** Augmenting the real training set (Sec. 4.2), and **(3)** Evaluating the scaling law of synthetic training data (Sec. 4.3). We aim to validate two main factors identified in Eq. (4): _better alignment between target and synthetic data_ (in scenarios 1 and 2) and the advantages of _larger training set cardinality_ (in scenario 3). Then, we further explore the **(4)** Out-of-distribution generalization (Sec. 4.4) and **(5)** Privacy-preservation (Sec. 4.5) of synthetic training data. For all experiments, we finetune Stable Diffusion 1.5 (SDV1.5) (Rombach et al., 2022) with LoRA.
Datasets. We conduct benchmark experiments with ResNet50 (He et al., 2016) across three ImageNet datasets: ImageNette (IN-10) (Howard, 2019), ImageNet100 (IN-100) (Tian et al., 2020), and ImageNet1K (IN-1K) (Deng et al., 2009). Beyond these, we also experiment with several fine-grained image classification datasets, CUB (Wah et al., 2011), Cars (Krause et al., 2013), PET (Parkhi et al., 2012), and satellite images, EuroSAT (Helber et al., 2018).
More details of dataset specifications (Apx. D), data synthesis (Apx. E), model training (Apx. F), and experiment setting (Apx. G) are provided in the Appendix.
### Image Classification With Synthetic Data Only
We begin by assessing the informativeness of synthetic training data as a replacement for real training data on image classification tasks. To replace the real training set, we synthesize training samples with the same number as the real training set in each dataset.
As shown in Tab. 1, in comparison to real data, our synthetic data reduces the performance gap to less than 3% for small-scale dataset IN-10. In the context of the more challenging large-scale dataset, IN-1K, the performance differential remains under 10%. Regarding training data synthesis methods, our method outperforms all state-of-the-art techniques across all benchmarks. It is crucial to highlight that our synthetic data exhibits improvements of 16.8% and 28.0% in IN-1K top-1 accuracy, compared to CiP (Lei et al., 2023) and FakeIt (Sanyildiz et al., 2023) respectively, given the same generative model backbone.
In fine-grained classification tasks, our method demonstrates a more significant improvement compared to the state-of-the-art method (FakeIt), which can be attributed to the need for a more aligned decision boundary.
Ablation Study. We next perform a more comprehensive ablation study to evaluate the efficacy of our proposed enhancements: distribution matching objective (Sec. 3.1), conditioned visual guidance (Sec. 3.2), and latent prior initialization (Sec. 3.3). Due to computational cost, we conducted the ablation study on the IN-10 and IN-100 datasets. As illustrated in Tab. 2, every proposed module contributes a remarkable performance improvement. The combination of the three modules achieves the best results, which outperform the baseline by 10.7% and 14.0% on IN-10 and IN-100, respectively.
### Augmenting Real Data with Synthetic Data
We study whether the synthetic data can serve as dataset augmentation. We compare the models trained with only real data and those trained with both real and synthetic data. As shown in lower part of Tab. 1, we observe improvements across all benchmarks when combining the synthetic and real data. Especially, our synthetic data boosts the performances by 2.1% and 1.9% on IN-10 and IN-100 datasets respectively. This validates that our synthetic data align well with the real data distribution.
### Scaling Up Synthetic Training Data
Besides data distribution alignment, training set cardinality is another key factor influencing the utility of synthetic data identified in Eq. (4). Fortunately, it is easy to synthesize more data with a deep generative model. In this experiment, we scale up the synthetic dataset and train the image classifiers using increasing amounts of synthetic data, ranging from 1\(\times\) to 10\(\times\) the size of the original real dataset. As shown in Fig. 2, solely scaling up the synthetic dataset enables the image classification performance to surpass that of models trained on real data, even **without** training on it. For the small-scale dataset IN-10, the threshold scale of synthetic data needed to outperform the real data is only 2.5\(\times\) the size of the real data. The challenge is greater for the larger-scale dataset IN-100, where synthetic data of 6.5\(\times\) size is needed to surpass the real-data efficacy. As shown in Tab. 3, we also scale up synthetic training images for the large dataset IN-1K, which has 1.3M real samples. The results show that the trend of increasing image classification accuracy with increasing synthetic data still holds, where we achieve \(72.82\%\) and \(76.00\%\) with 2\(\times\) and 10\(\times\) the ImageNet-1K size, respectively. By scaling up
\begin{table}
\begin{tabular}{l l c c c c c c c}
\hline \hline
 & Model & IN-10 & IN-100 & IN-1K & CUB & Cars & PET & EuroSAT \\
\hline
\multicolumn{9}{l}{_Training without real data_} \\
BigGAN (Brock et al., 2018) & BigGAN & - & - & 42.7 & & & & \\
VQ-VAE-2 (Razavi et al., 2019) & VQ-VAE & - & - & 54.8 & & & & \\
CDM (Ho et al., 2022) & CDM & - & - & 63.0 & & & & \\
FakeIt (Sariyildiz et al., 2023) & SDv1.5 & - & - & 42.9 & 33.7 & 47.1 & 75.9 & 94.0 \\
Imagen (Azizi et al., 2023) & Imagen & - & - & 69.2 & & & & \\
CiP (Lei et al., 2023) & SDv1.5 & 79.4 & 62.4 & 54.1 & & & & \\
OURS & SDv1.5 & 90.5 & 80.0 & 70.9 & 64.3 & 81.8 & 89.2 & 94.6 \\
\(\Delta\) _with the previous state-of-the-art_ & & +10.1 & +17.6 & +17.7 & +30.6 & +34.7 & +13.3 & +0.6 \\
\hline
\multicolumn{9}{l}{_Training with real data_} \\
Baseline _real data only_ & & & & & & & & \\
OURS + _real data_ & & & & & & & & \\
\(\Delta\) _with the real data_ & & & & & & & & \\
\hline \hline
\end{tabular}
\end{table}
Table 1: **Synthetic Image Classification Performance.** Top-1 accuracies of ResNet50 reported on seven datasets. The upper part indicates training with synthetic data only, and the lower part indicates joint training with combined real and synthetic data.
\begin{table}
\begin{tabular}{c c c|c|c c} \hline \hline Latent Prior & Visual Guidance & Distribution Matching & Finetune & ImageNet & ImageNet100 \\ Sec. 3.3 & Sec. 3.2 & Sec. 3.1 & & & & 79.8 & 66.0 \\ \hline & & & & & & 80.8 & 73.3 \\ & ✓ & & & ✓ & 82.3 & 74.0 \\ & ✓ & ✓ & ✓ & 81.2 & 73.8 \\ & ✓ & ✓ & ✓ & 82.9 & 75.1 \\ \hline ✓ & & & & & 88.0 & 76.3 \\ ✓ & ✓ & & ✓ & 89.5 & 79.3 \\ ✓ & & ✓ & ✓ & 88.9 & 79.8 \\ ✓ & ✓ & ✓ & ✓ & 90.5 & 80.0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Ablation study on proposed improvements with IN-10 and IN-100 Top-1 accuracy.**
\begin{table}
\begin{tabular}{l c c c}
\hline \hline
Synthetic _vs._ real data size & Real & Top-1 & Top-5 \\
\hline
\(\times 1\) (1.3M) & & 70.9 & 89.9 \\
\(\times 2\) (2.6M) & & 72.9 & 91.1 \\
\(\times 5\) (6.4M) & & 74.5 & 92.1 \\
\(\times 10\) (13M) & & 76.0 & 93.1 \\
\hline
\(\times 1\) (1.3M) & ✓ & 79.6 & 94.6 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: **Scaling-up synthetic ImageNet-1K.**
synthetic data, we reduce the gap between real and synthetic data to \(3.6\%\) in top-1 accuracy and a more encouraging \(1.5\%\) in top-5 accuracy.
### Generalization to Out-of-Distribution Data
We also investigate the Out-of-distribution (OOD) generalization performance on four OOD variants of ImageNet: (1) ImageNet-v2 (IN-v2) (Recht et al., 2019) (2) ImageNet-Sketch (IN-Sketch) (Wang et al., 2019) (3) ImageNet-R (IN-R) (Hendrycks et al., 2021) (4) ImageNet-A (IN-A) (Hendrycks et al., 2021). We test the ResNet50 (He et al., 2016) trained with in-distribution real data (i.e. ImageNet-1K training set) or synthetic data (i.e. the generative model is only tuned on ImageNet-1K training set like above experiments) on the OOD test sets.
As illustrated in Tab. 4, we observe that when training with \(1\times\) synthetic data only, our method achieves the best generalization performance across three out of four benchmarks, outperforming previous synthesis strategies. When jointly training with real data, our synthetic data further boosts the OOD generalization performance of real data. More importantly, when we scale up the synthetic data, its OOD generalization performance exceeds that of real data, e.g. on ImageNet-R and ImageNet-A, even before achieving comparable performance on the in-distribution test set. This further highlights the promising utility of synthetic training data in enhancing OOD generalization.
### Privacy Analysis
Synthetic training data is a promising solution to privacy-preserving learning. In this subsection, we examine the privacy-preserving nature of our synthetic data from the perspectives of both privacy attacks and visual similarity. More experiment details are provided in Apx. G.4.
Membership Inference Attack. We implement the membership inference attack, which enables an adversary to query a trained model and determine whether a specific example is part of the model's training set. Specifically, we follow the state-of-the-art Likelihood Ratio Attack (LiRA) of Carlini
\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline \hline
 & \multicolumn{2}{c}{ImageNet-v2} & \multicolumn{2}{c}{ImageNet-Sketch} & \multicolumn{2}{c}{ImageNet-R} & \multicolumn{2}{c}{ImageNet-A} \\
 & \multicolumn{2}{c}{(Recht et al., 2019)} & \multicolumn{2}{c}{(Wang et al., 2019)} & \multicolumn{2}{c}{(Hendrycks et al., 2021)} & \multicolumn{2}{c}{(Hendrycks et al., 2021)} \\
 & Top-1 & Top-5 & Top-1 & Top-5 & Top-1 & Top-5 & Top-1 & Top-5 \\
\hline
\multicolumn{9}{l}{_Training without real data_} \\
FakeIt (Sariyildiz et al., 2023) & 43.0 & 70.3 & 16.6 & 35.2 & 26.3 & 45.3 & 3.6 & 15.1 \\
CiP (Lei et al., 2023) & 53.8 & 80.5 & 18.5 & 35.5 & 33.6 & 51.1 & 5.2 & 21.7 \\
OURS & 67.2 & 88.0 & 21.9 & 38.2 & 35.4 & 51.9 & 4.5 & 23.5 \\
OURS \(\times\)2 & 69.5 & 89.2 & 25.2 & 43.0 & 36.1 & 52.5 & 6.8 & 29.2 \\
OURS \(\times\)5 & 71.1 & 90.6 & 27.8 & 46.2 & 39.7 & 56.3 & 9.4 & 34.9 \\
\hline
\multicolumn{9}{l}{_Training with real data_} \\
Baseline _real data only_ & 74.7 & 92.2 & 28.1 & 45.8 & 39.4 & 54.1 & 8.1 & 34.7 \\
OURS + _real data_ & 75.7 & 92.7 & 29.0 & 46.8 & 40.5 & 55.9 & 9.0 & 36.4 \\
\hline \hline
\end{tabular}
\end{table}
Table 4: **OOD Generalization Performance.** \(\times k\) indicates training solely with a synthetic dataset of scale \(k\) times that of the real dataset. Top-1 and Top-5 classification accuracies are reported.
Figure 2: **Effect of Scaling Up Synthetic Dataset Top-1 image classification performance with synthetic data only (indicated by blue solid curve) increases with synthetic dataset size, eventually outperforming real data (indicated by red dash line). The horizontal axis represents the amount of synthetic data used as multiples of the original real dataset size.**
et al. (2022) and report the MIA results for two training approaches: the privacy training dataset and synthetic data, in the low false-positive rate regime. To account for training duration, we conduct 10 sampling iterations. Concretely, we initially divide the privacy data in IN-10 into two halves: member data and non-member data. We only employ LoRA to finetune the diffusion model on the member data, then generate synthetic data of equal size. Subsequently, we train a ResNet50 on synthetic data. As depicted in Fig. 3, regardless of online attack, offline attack, or fixed variance scenarios, the model trained on synthetic data demonstrates superior defense against MIA.
Visual Similarity. We follow previous works (Zhang et al., 2023; Somepalli et al., 2023) and employ the Self-Supervised Content Duplication (SSCD) (Pizzi et al., 2022) method for content plagiarism detection. SSCD is a self-supervised approach based on SimCLR (Chen et al., 2020), utilizing the InfoNCE loss (Oord et al., 2018), entropy regularization of the latent-space representation, and various data augmentation techniques. We use the ResNeXt101 model trained on the DISC dataset (Douze et al., 2021). For a given query image and all reference images, we run the detection model to obtain a 1024-dimensional feature vector representation for each image and then normalize each vector using L2 normalization.
By computing the inner product between the query feature and each reference feature, we obtain the similarity between them. Subsequently, we select the top five reference features with the highest similarity to the query feature in order to retrieve the corresponding images. Then, we rank the similarities and present the top 3 images with their corresponding real data in Fig. 4. From the figure, it can be observed that there are no apparent issues of copying or memorization (Carlini et al., 2023). At least visually, these synthesized images should provide sufficient privacy protection.
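A small sketch of the retrieval step just described: L2-normalize the SSCD descriptors, score query-reference pairs by inner product, and keep the most similar references. The tensors `query_feats` and `ref_feats` are assumed to be the 1024-dimensional features produced by the detection model.

```python
import torch
import torch.nn.functional as F

def top_k_matches(query_feats, ref_feats, k=5):
    """query_feats: (Q, 1024), ref_feats: (R, 1024) SSCD descriptors."""
    q = F.normalize(query_feats, dim=1)   # L2 normalization
    r = F.normalize(ref_feats, dim=1)
    sim = q @ r.t()                       # inner product = cosine similarity, shape (Q, R)
    scores, indices = sim.topk(k, dim=1)  # top-k most similar references per query
    return scores, indices
```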
## 5 Conclusion
In this work, we propose a principled theoretical framework for training data synthesis from a distribution-matching perspective. Based on this, we empirically push the limit of synthetic training data by advancing the state-of-the-art performances over diverse image classification benchmarks. We also demonstrate promising benefits of improving the OOD generalization and privacy preservation performances by training models with our synthetic data.
Figure 4: **Visualization on synthetic and retrieved real data with SSCD.** The synthesized data does not exhibit evident copying or memorization.
Figure 3: **Membership Inference Attack (MIA) Performance with LiRA.** LiRA achieves a TPR of 0.001% at a low FPR of 0.1% when applied to synthetic data, while the result for private data is 0.01%, which indicates that training with synthetic data is much more privacy-preserving.
|
2303.08375
|
Auxiliary Splines Space Preconditioning for B-Splines Finite Elements:
The case of $\bm{H}(\bm{curl},Ω)$ and $\bm{H}(div,Ω)$ elliptic
problems
|
This paper presents a study of large linear systems resulting from the
regular $B$-splines finite element discretization of the $\bm{curl}-\bm{curl}$
and $\bm{grad}-div$ elliptic problems on unit square/cube domains. We consider
systems subject to both homogeneous essential and natural boundary conditions.
Our objective is to develop a preconditioning strategy that is optimal and
robust, based on the Auxiliary Space Preconditioning method proposed by
Hiptmair et al. \cite{hiptmair2007nodal}. Our approach is demonstrated to be
robust with respect to mesh size, and we also show how it can be combined with
the Generalized Locally Toeplitz (GLT) sequences analysis presented in
\cite{mazza2019isogeometric} to derive an algorithm that is optimal and stable
with respect to spline degree. Numerical tests are conducted to illustrate the
effectiveness of our approach.
|
Abdeladim El Akri, Khalide Jbilou, Ahmed Ratnani
|
2023-03-15T05:24:17Z
|
http://arxiv.org/abs/2303.08375v1
|
Auxiliary Splines Space Preconditioning for B-Splines Finite Elements: The case of \(\boldsymbol{H(curl,\Omega)}\) and \(\boldsymbol{H(div,\Omega)}\) elliptic problems
###### Abstract
This paper presents a study of large linear systems resulting from the regular \(B\)-splines finite element discretization of the \(\boldsymbol{curl-curl}\) and \(\boldsymbol{grad-div}\) elliptic problems on unit square/cube domains. We consider systems subject to both homogeneous essential and natural boundary conditions. Our objective is to develop a preconditioning strategy that is optimal and robust, based on the Auxiliary Space Preconditioning method proposed by Hiptmair et al. [40]. Our approach is demonstrated to be robust with respect to mesh size, and we also show how it can be combined with the Generalized Locally Toeplitz (GLT) sequences analysis presented in [49] to derive an algorithm that is optimal and stable with respect to spline degree. Numerical tests are conducted to illustrate the effectiveness of our approach.
## 1 Introduction
The _Isogeometric Analysis_ (IgA) is a mathematical approach that combines _Finite Element Methods_ (FEMs) with _Computer-Aided Design_ (CAD) to design and analyze the numerical approximation of _Partial Differential Equations_ (PDEs). Like FEM, IgA formulates problems through variational methods and specifies a finite-dimensional subspace for the solution. However, IgA employs the same functions used to describe the underlying domain, typically \(B\)-spline or Non-Uniform Rational \(B\)-spline (NURBS) functions commonly used in CAD. This approach offers several advantages over FEM, including exact geometry, which eliminates geometric approximation errors, and the use of \(B\)-spline functions, which makes higher \(C^{p}\)-continuous interpolation more practical than standard Lagrange and Hermite polynomials used in FEM.
The field of IgA has rapidly developed in recent years, with significant contributions since the pioneering work by Hughes in 2005 [43]. This approach has been applied in various areas, including electromagnetism [13, 15, 57], incompressible fluid dynamics [11], fluid-structure interaction [6, 42], structural and contact mechanics [47, 62], plasmas physics problems [58], and kinetic systems [1, 4, 22], among others. For a comprehensive overview, readers can refer to the review paper by Da Veiga et al. [23] and Cottrell et al. [21].
Despite the large success of the method, it is important to note that dealing with higher-order IgA finite elements can be challenging. Specifically, using higher-order \(B\)-spline functions can generate huge, sparse, ill-conditioned matrices. Although the discrete systems produced by IgA methods are typically better conditioned than those produced by standard finite elements, their condition numbers cannot be uniformly bounded with respect to the discretization parameter \(h\), and can even grow rapidly as \(h\) approaches zero. For example, this is the case for the Full-Wave problem with high wave numbers [50]. (See also [35] for explicit bounds of the spectral condition number in the case of the Poisson equation). As a result, direct solvers are not suitable for IgA discrete systems, and even standard iterative methods may fail. Preconditioning is therefore necessary to obtain convergence in a reasonable amount of time.
The literature offers several techniques to address the problem of preconditioning IgA discrete systems. These include overlapping Schwarz preconditioners [24, 25], non-overlapping decomposition methods [7, 12, 26, 27], FETI-type preconditioners [9, 44, 54], multilevel algorithms [18, 28, 33], multigrid methods [30, 34], and preconditioning based on the solution of Sylvester equations [60].
A review of the current state of the art indicates a growing interest in developing efficient and rapid IgA preconditioning techniques in recent years. However, most research has focused on scalar elliptic problems, with limited technical generalizations to linear elasticity systems. To the best of our knowledge, only a few papers [49, 50] have studied \(\boldsymbol{H}(\boldsymbol{curl})\) and \(\boldsymbol{H}(div)\) problems. In these works, the construction of solvers exploits a detailed spectral analysis of the involved matrices based on the theory of the Generalized Locally Toeplitz (GLT) sequences. However, results of practical interest can be precisely developed only if addressed to specific models. In contrast, this paper presents a more general and systematic approach, providing abstract techniques that can be applied to a broader range of problems.
We shall consider two model problems: the \(\boldsymbol{curl}-\boldsymbol{curl}\) problem, find a vector field \(\boldsymbol{u}\,:\,\overline{\Omega}\to\mathbb{R}^{3}\) such that
\[\boldsymbol{curl}\,\boldsymbol{curl}\,\boldsymbol{u}+\tau\boldsymbol{u}= \boldsymbol{f},\quad\text{in}\ \Omega, \tag{1}\]
and the \(\boldsymbol{grad}-div\) problem, find \(\boldsymbol{u}\,:\,\overline{\Omega}\to\mathbb{R}^{3}\) such that
\[-\boldsymbol{grad}\,div\,\boldsymbol{u}+\tau\boldsymbol{u}=\boldsymbol{f}, \quad\text{in}\ \Omega, \tag{2}\]
both subject to homogeneous natural or essential boundary conditions, where \(\boldsymbol{f}\,:\,\Omega\to\mathbb{R}^{3}\) is a vector field and \(\tau\) is a small positive parameter. The variational forms of these problems can be written in unified form as follows: find \(\boldsymbol{u}\in\mathcal{H}(\mathcal{D},\Omega)\) such that
\[(D\boldsymbol{u},D\boldsymbol{v})_{(L^{2}(\Omega))^{3}}+\tau(\boldsymbol{u}, \boldsymbol{v})_{(L^{2}(\Omega))^{3}}=(\boldsymbol{f},\boldsymbol{v})_{(L^{2} (\Omega))^{3}},\quad\forall v\in\mathcal{H}(\boldsymbol{D},\Omega), \tag{3}\]
where \(\mathcal{D}\in\{\boldsymbol{curl},div\}\) and the space \(\mathcal{H}(\mathcal{D},\Omega)\) fulfills the boundary conditions in the case of essential boundary conditions (see the next section for a precise definition).
Preconditioning for these types of problems is particularly challenging. This is because, as pointed out in [40], the operator \(\mathcal{D}\) has a large null space. Unlike the scalar Laplacian operator, which has a null space of dimension one, the kernel of \(\mathcal{D}\) has infinite dimension. Another challenge when discretizing (3) is the loss of coercivity as \(\tau\to 0\). While the continuous problem is well-posed, discrete stability can only be achieved with very fine meshes, which leads to a rapidly growing spectral condition number as \(\tau\) approaches \(0\). As a result, the preconditioning approach must not only consider the structure of the space \(\mathcal{H}(\mathcal{D},\Omega)\) but also be robust with respect to the parameter \(\tau\).
Over the last decades, a promising technique, the Auxiliary Space Preconditioning (ASP) method [17, 39, 40, 45, 46, 52, 63], has led to a general abstract framework for the derivation of stable preconditioners in the case of conforming finite element discretizations. The basic idea of ASP is to transfer the original problem to an auxiliary space where it is easier to solve, then transfer the solution back to the original space and correct the error between the auxiliary space and the full space by applying a smoothing scheme. The choice of the auxiliary space is typically based on a stable decomposition of the space \(\mathcal{H}(\mathcal{D},\Omega)\), known as a regular decomposition [8, 10, 20, 40, 53, 64]. However, the main challenge in developing the method lies in adapting these regular space decompositions to the discrete level.
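As a schematic illustration of the additive form of such a preconditioner, \(B=S+\Pi A_{V}^{-1}\Pi^{T}\), with a smoother \(S\) on the original space, a transfer operator \(\Pi\) from the auxiliary space, and an auxiliary operator \(A_{V}\), the sketch below wraps these ingredients for use with a Krylov solver; all inputs are placeholders, since the concrete operators for the \(B\)-splines discretization are constructed in Section 3.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def asp_preconditioner(smooth, Pi, solve_aux, n):
    """Additive auxiliary space preconditioner B r = S r + Pi A_V^{-1} Pi^T r.

    smooth    : action of the smoother S on a residual (e.g., a Jacobi sweep)
    Pi        : transfer matrix from the auxiliary space to the original space
    solve_aux : (approximate) solver for the auxiliary operator A_V
    """
    def apply(r):
        return smooth(r) + Pi @ solve_aux(Pi.T @ r)
    return LinearOperator((n, n), matvec=apply)

# Usage with a preconditioned conjugate gradient solve of A u = f
# (A, f, smooth, Pi, solve_aux assembled/defined elsewhere):
# M = asp_preconditioner(smooth, Pi, solve_aux, A.shape[0])
# u, info = cg(A, f, M=M)
```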
The ASP method has already been successfully applied to various preconditioning problems for large-scale finite element systems. In this paper, we extend the method to the isogeometric context, building upon the work presented in [40]. As a first step, we assume that \(\Omega=\left(0,1\right)^{d}\), where \(d=2\) or \(3\).
The paper is organized as follows. Section 2 presents the notations, definitions, and preliminary results relevant to our analysis. We introduce the abstract theory of ASP method and briefly recall the notations for \(B\)-splines spaces and related de Rham sequence. In Section 3, we present the main theoretical result of the paper, which is a uniform discrete regular decomposition. We then use this regular decomposition to design robust and efficient ASP preconditioners. Section 4 provides several numerical examples for both \(2\)-\(d\) and \(3\)-\(d\) cases to illustrate the performance of our preconditioners. Finally, Section 5 concludes the paper.
_Remark 1.1_.: Although the results presented in the paper are applicable to both \(2\)-\(d\) and \(3\)-\(d\) problems, the focus of the analysis is on the \(3\)-\(d\) setting. However, the results for the \(2\)-\(d\) case can be easily derived from those of the \(3\)-\(d\) problems.
## 2 Preliminaries
In this section, we establish the notation and recall some preliminary results which will be used later in the paper. Firstly, we provide the basic definitions and properties of Sobolev spaces, and we introduce a regular decomposition of space \(\mathcal{H}(\mathcal{D},\Omega)\). This decomposition is critical for our analysis of the discrete \(\boldsymbol{curl-curl}\) and \(\boldsymbol{grad-div}\) problems. Additionally, we summarize the key aspects of the abstract theory of the Auxiliary Space Preconditioning method. Finally, we introduce the Isogeometric discrete spaces and their relevant properties.
### Functional Spaces: Notation and Results
In this paper, we will work with Sobolev spaces. We will provide standard notations, but for a more detailed presentation, we refer the reader to [2, 36, 51]. For the unit cube (or square) domain \(\Omega\), we denote by \(L^{2}(\Omega)\) the Hilbert space of Lebesgue square-integrable functions on \(\Omega\), equipped with the standard \(L^{2}(\Omega)\) norm. Given a positive integer \(r\), we denote by \(H^{r}(\Omega)\) the Sobolev space of order \(r\) on \(\Omega\), which is the space of functions in \(L^{2}(\Omega)\) with \(r\)th-order derivatives, in the sense of distributions, also in \(L^{2}(\Omega)\), endowed with the standard norm \(\|\cdot\|_{H^{r}(\Omega)}\). By definition, we let \(H^{0}(\Omega)=L^{2}(\Omega)\). We denote by \(H^{r}_{0}(\Omega)\) the subspaces of functions with Dirichlet boundary conditions. Note that by definition, we have
\[L^{2}_{0}(\Omega)=\left\{u\in L^{2}(\Omega):\;\int_{\Omega}u=0\right\}.\]
We use boldface letter notation for vectorial spaces, i.e., \(\boldsymbol{L}^{2}(\Omega)=\left(L^{2}(\Omega)\right)^{3}\), \(\boldsymbol{H}^{r}(\Omega)=\left(H^{r}(\Omega)\right)^{3}\) and \(\boldsymbol{H}^{r}_{0}(\Omega)=\left(H^{r}_{0}(\Omega)\right)^{3}\).
We also consider the following spaces
\[\mathbf{H}(\mathbf{curl},\,\Omega)=\left\{\mathbf{u}\in\mathbf{L}^{2}(\Omega),\,\, \mathbf{curl}\,\mathbf{u}\in\mathbf{L}^{2}(\Omega)\right\},\] \[\mathbf{H}(div,\,\Omega)=\left\{\mathbf{u}\in\mathbf{L}^{2}(\Omega),\,\,div\, \mathbf{u}\in L^{2}(\Omega)\right\},\]
equipped with their default inner products
\[(\mathbf{u},\mathbf{v})_{\mathbf{H}(\mathbf{curl},\,\Omega)}=(\mathbf{u},\mathbf{v})_{\mathbf{ L}^{2}(\Omega)}+(\mathbf{curl}\,\mathbf{u},\mathbf{curl}\,\mathbf{v})_{\mathbf{L}^{2}(\Omega)},\] \[(\mathbf{u},\mathbf{v})_{\mathbf{H}(div,\,\Omega)}=(\mathbf{u},\mathbf{v})_{\mathbf{L}^{2 }(\Omega)}+(div\,\mathbf{u},div\,\mathbf{v})_{L^{2}(\Omega)}.\]
The corresponding norms are denoted by \(\|\mathbf{u}\|_{\mathbf{H}(\mathbf{curl},\,\Omega)}\) and \(\|\mathbf{u}\|_{\mathbf{H}(div,\,\Omega)}\), respectively.
To deal with the essential boundary conditions, we introduce the spaces
\[\mathbf{H}_{0}(\mathbf{curl},\,\Omega)=\left\{\mathbf{u}\in\mathbf{H}(\mathbf{curl}, \,\Omega),\,\,\mathbf{u}|_{\partial\Omega}\times\mathbf{n}=0\right\},\] \[\mathbf{H}_{0}(div,\,\Omega)=\left\{\mathbf{u}\in\mathbf{H}(div,\,\Omega),\, \mathbf{u}|_{\partial\Omega}\cdot\mathbf{n}=0\right\},\]
where \(\mathbf{n}\) is the unit outward normal of \(\partial\Omega\). As subspaces of \(\mathbf{H}(\mathbf{curl},\,\Omega)\) and \(\mathbf{H}(div,\,\Omega)\), spaces \(\mathbf{H}_{0}(\mathbf{curl},\,\Omega)\) and \(\mathbf{H}_{0}(div,\,\Omega)\) are endowed with \((\cdot,\cdot)_{\mathbf{H}(\mathbf{curl},\,\Omega)}\) and \((\cdot,\cdot)_{\mathbf{H}(div,\,\Omega)}\), respectively, as their default inner products. We write \(\|\cdot\|_{\mathbf{H}_{0}(\mathbf{curl},\,\Omega)}\) and \(\|\cdot\|_{\mathbf{H}_{0}(div,\,\Omega)}\) for the corresponding norms. It is worth mentioning however that the semi-norms \(\|\mathbf{curl}(\cdot)\|_{\mathbf{L}^{2}(\Omega)}\) and \(\|div(\cdot)\|_{L^{2}(\Omega)}\) are norms which are equivalent to \(\|\cdot\|_{\mathbf{H}_{0}(\mathbf{curl},\,\Omega)}\) and \(\|\cdot\|_{\mathbf{H}_{0}(div,\,\Omega)}\) in spaces \(\mathbf{H}_{0}(\mathbf{curl},\,\Omega)\) and \(\mathbf{H}_{0}(div,\,\Omega)\), respectively.
Next, we provide a regular decomposition for the spaces \(\mathbf{H}(\mathbf{curl},\,\Omega)\), \(\mathbf{H}(div,\,\Omega)\), \(\mathbf{H}_{0}(\mathbf{curl},\,\Omega)\) and \(\mathbf{H}_{0}(div,\,\Omega)\). For this purpose, following the ideas of [40], we introduce the generic notation \(\mathcal{H}(\mathcal{D},\Omega)\) to indicate any of the four spaces listed above. Here, \(\mathcal{D}\) denotes either \(\mathbf{curl}\) or \(div\). We also use \(\mathcal{D}^{-}\) and \(\mathcal{D}^{+}\) to represent the differential operators characterizing the null space and the range space of \(\mathcal{D}\), respectively. The corresponding Sobolev spaces are denoted by \(\mathcal{H}(\mathcal{D}^{-},\Omega)\) and \(\mathcal{H}(\mathcal{D}^{+},\Omega)\). Table 1 summarizes these notations.
With these notations, we have the following result:
**Proposition 2.1**.: _The de Rham complex_
\[\mathcal{H}(\mathcal{D}^{-},\Omega)\;\xrightarrow{\;\mathcal{D}^{-}\;}\;\mathcal{H}(\mathcal{D},\Omega)\;\xrightarrow{\;\mathcal{D}\;}\;\mathcal{H}(\mathcal{D}^{+},\Omega)\]
_is exact._

In the sequel, \(\boldsymbol{X}(\Omega)\) stands for the regular space \(\boldsymbol{H}^{1}(\Omega)\), or \(\boldsymbol{H}^{1}_{0}(\Omega)\) in the presence of essential boundary conditions, endowed with the norm \(\|\cdot\|_{\boldsymbol{X}(\Omega)}\). The following regular decomposition holds.

**Theorem 2.2**.: _Every \(\boldsymbol{u}\in\mathcal{H}(\mathcal{D},\Omega)\) admits a decomposition_
\[\boldsymbol{u}=\boldsymbol{\varphi}+\mathcal{D}^{-}p,\qquad\boldsymbol{\varphi}\in\boldsymbol{X}(\Omega),\quad p\in\mathcal{H}(\mathcal{D}^{-},\Omega),\]
_with estimates_
\[\|\boldsymbol{\varphi}\|_{\boldsymbol{L}^{2}(\Omega)}\leq\|\boldsymbol{u}\|_{ \boldsymbol{L}^{2}(\Omega)}, \tag{4}\]
_and_
\[\|\boldsymbol{\varphi}\|_{\boldsymbol{X}(\Omega)}\leq C\|D\boldsymbol{u}\|_{ \boldsymbol{L}^{2}(\Omega)}, \tag{5}\]
_for some positive constant \(C\)._
In Section 3 (see Theorem 3.7), we shall show a discrete version of Theorem 2.2.
### Auxiliary Space Preconditioning (ASP) Method
This subsection provides a brief overview of the Auxiliary Space Preconditioning (ASP) method. For a more detailed discussion, the reader is referred to [17, 39, 40, 45, 46, 52, 63] and the references therein.
Let \(V\) be a Hilbert space with an inner product \(a:V\times V\to\mathbb{R}\). The ASP method involves three main components: auxiliary spaces, transfer operators, and a smoother. The auxiliary spaces, denoted as \(W_{i}\) for \(i=1,\cdots,I\), are equipped with inner products \(a_{i}:W_{i}\times W_{i}\to\mathbb{R}\). The transfer operators are linear operators \(\pi_{i}:W_{i}\to V\) that map the auxiliary spaces to \(V\). The smoother is an inner product \(s:V\times V\to\mathbb{R}\) that is distinct from \(a\) and is often provided by a relaxation method such as the Jacobi or symmetric Gauss-Seidel schemes.
Given these components, the ASP preconditioner is constructed as
\[B=S^{-1}+\sum_{i=1}^{I}\pi_{i}\circ A_{i}^{-1}\circ\pi_{i}^{*},\]
where \(S\) and \(A_{i}\) are linear operators corresponding to the inner products \(s\) and \(a_{i}\), respectively, and \(\circ\) denotes composition of linear operators. The adjoint operator of \(\pi_{i}\) is denoted as \(\pi_{i}^{*}\).
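In matrix terms, one application of \(B\) amounts to one smoothing sweep plus one (approximate) solve per auxiliary space. The following Python sketch is purely illustrative: it assumes that the smoother matrix \(S\), the auxiliary operators \(A_{i}\) and the transfer matrices have already been assembled as SciPy sparse matrices, and it uses sparse direct solves where, in practice, a cheap approximate solver (e.g., one multigrid cycle) would be employed.

```python
import scipy.sparse.linalg as spla

def apply_asp_preconditioner(r, S, transfers):
    """Apply B = S^{-1} + sum_i P_i A_i^{-1} P_i^T to a residual vector r.

    S         : sparse smoother matrix (e.g., the diagonal of A for Jacobi)
    transfers : list of pairs (P_i, A_i), with P_i the matrix of pi_i and
                A_i the auxiliary-space operator (illustrative placeholders)
    """
    z = spla.spsolve(S.tocsc(), r)                      # smoother contribution
    for P, A in transfers:
        z = z + P @ spla.spsolve(A.tocsc(), P.T @ r)    # auxiliary-space corrections
    return z
```

Wrapped in a `scipy.sparse.linalg.LinearOperator`, such a routine can be passed directly as the preconditioner of a Krylov solver.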
Under appropriate assumptions, one can show that \(B\) is a valid preconditioner for \(A\). Specifically, we have the following result (see [40, Theorem 2.2]):
**Theorem 2.3**.: _Assume that there are some nonnegative constants \(\beta_{j}\) and \(\gamma\) such that_
1. _The continuity of_ \(\pi_{j}\) _with respect to the graph norms:_ \[a\left(\pi_{j}(w_{j}),\pi_{j}(w_{j})\right)\leq\beta_{j}\,a_{j}(w_{j},w_{j}),\quad\forall w_{j}\in W_{j},\quad j=1,\ldots,I.\]
2. _The continuity of_ \(s^{-1}\)_:_ \[a(v,v)\leq\gamma\,s(v,v),\quad\forall v\in V.\]
3. _Existence of a stable decomposition of_ \(V\)_: for each_ \(v\in V\)_, there exist_ \(\widetilde{v}\in V\) _and_ \(w_{i}\in W_{i}\) _such that_ \[v=\widetilde{v}+\sum_{i=1}^{I}\pi_{i}w_{i},\] _with estimate_ \[s(\widetilde{v},\widetilde{v})+\sum_{i=1}^{I}a_{i}(w_{i},w_{i})\leq\eta\,a(v, v),\] _for some nonnegative (small) constant_ \(\eta\)_._
_Then we have the following estimate for the spectral condition number of the preconditioned operator_
\[\kappa(BA)\leq\eta\left(\gamma+\sum_{i=1}^{I}\beta_{i}\right).\]
The above result highlights the central importance of stable regular decompositions in constructing an efficient auxiliary space preconditioner. In this work, we focus on the discrete case, which requires adapting the regular decomposition of Theorem 2.2 to the discrete level. As a first step, we introduce the discrete spaces in the next subsection.
### IsoGeometric Spaces
In this subsection, we introduce a discrete counterpart of the functional space \(\mathcal{H}(\mathcal{D},\Omega)\) in the context of Isogeometric Analysis [5, 14, 21, 23, 43]. We begin by recalling some basic properties of \(B\)-spline functions and then proceed to construct the IgA discretization of the \(\boldsymbol{curl}\) and \(div\) operators. For an introduction to the subject, we refer the reader to standard textbooks on the topic [19, 31, 32, 37, 55, 56, 59, 61].
Let \(T=(t_{1},t_{2},\ldots,t_{m})\) be a knot vector, which is a non-decreasing sequence of real numbers. The \(i\)-th \(B\)-spline of order \(p\in\mathbb{N}\) is defined recursively using the _Cox-de Boor formula_ as follows:
\[B_{i,0}(t)=\begin{cases}1&\text{if }t_{i}\leq t<t_{i+1},\\ 0&\text{otherwise}\end{cases}\]
\[B_{i,p}(t)=\frac{t-t_{i}}{t_{i+p}-t_{i}}B_{i,p-1}(t)+\frac{t_{i+p+1}-t}{t_{i+ p+1}-t_{i+1}}B_{i+1,p-1}(t),\]
for \(i=1,\ldots,n=m-p-1\), where a fraction with zero denominator is taken to be zero. Following [14], we also introduce the vector \(U=(u_{1},\ldots,u_{N})\) of breakpoints, where \(N\) is the number of knots without repetition, and the regularity vector \(\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{N})\in\mathbb{N}^{N}\) such that, for each \(i\in\{1,\ldots,N\}\), the \(B\)-spline function \(B_{i,p}\) is \(\alpha_{i}\) times continuously differentiable at the breakpoint \(u_{i}\). Note that \(\alpha_{i}=p-r_{i}\), where \(r_{i}\) is the multiplicity of the breakpoint \(u_{i}\). Throughout the paper, we will only consider _non-periodic_ knot vectors
\[T=(\underbrace{0,\ldots,0}_{p+1},t_{p+2},\ldots,t_{m-p-1},\underbrace{1, \ldots,1}_{p+1}),\]
and we suppose that \(1\leq r_{i}\leq p\). In this way we guarantee that \(0\leq\alpha_{i}\leq p-1\), where the minimal regularity \(\alpha_{i}=0\) corresponds to \(C^{0}\) continuity at the breakpoint \(u_{i}\). This allows us to introduce the univariate spline spaces
\[\mathcal{S}^{p}_{\boldsymbol{\alpha}}=span\left\{B_{i,p}\,:\,i=1,\ldots,n \right\},\quad\mathcal{S}^{p}_{\boldsymbol{\alpha},0}=span\left\{B_{i,p}\,:\, i=2,\ldots,n-1\right\}.\]
Note that all the elements of space \(\mathcal{S}^{p}_{\boldsymbol{\alpha},0}\) vanish at the boundary of \((0,1)\) (by definition). Hence, the space is suited for dealing with homogeneous Dirichlet boundary conditions.
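As an illustration, the Cox-de Boor recursion translates directly into a short routine. The Python sketch below is purely illustrative (the helper name `bspline_basis`, the zero-based indexing and the example knot vector are our own choices, not part of any library); it evaluates a single basis function and checks the partition of unity at one point of \((0,1)\).

```python
import numpy as np

def bspline_basis(i, p, T, t):
    """Cox-de Boor recursion for the i-th B-spline of degree p on the
    knot vector T, evaluated at t (zero-based indices, illustrative)."""
    if p == 0:
        return 1.0 if T[i] <= t < T[i + 1] else 0.0
    left = right = 0.0
    if T[i + p] > T[i]:          # fractions with zero denominator are taken to be zero
        left = (t - T[i]) / (T[i + p] - T[i]) * bspline_basis(i, p - 1, T, t)
    if T[i + p + 1] > T[i + 1]:
        right = (T[i + p + 1] - t) / (T[i + p + 1] - T[i + 1]) * bspline_basis(i + 1, p - 1, T, t)
    return left + right

# example: open (non-periodic) knot vector on (0, 1) with p = 2
p = 2
T = np.concatenate([np.zeros(p + 1), [0.25, 0.5, 0.75], np.ones(p + 1)])
n = len(T) - p - 1                                           # number of basis functions
print(sum(bspline_basis(i, p, T, 0.4) for i in range(n)))    # partition of unity: 1.0
```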
These definitions can be generalized to the multivariate case \(\Omega=(0,1)^{3}\) by _tensorization_: With a tridirectional knot vector \(\boldsymbol{T}=T_{1}\times T_{2}\times T_{3}\) at hand, where
\[T_{i}=(\underbrace{0,\ldots,0}_{p_{i}+1},t_{i,p_{i}+2},\ldots,t_{i,m_{i}-p_{i }-1},\underbrace{1,\ldots,1}_{p_{i}+1}),\quad m_{i},p_{i}\in\mathbb{N},\;i=1,2,3,\]
is an open univariate knot vector, we define the _tensor-product spline_ space by
\[\boldsymbol{\mathcal{S}}^{p_{1},p_{2},p_{3}}_{\boldsymbol{\alpha}_{1}, \boldsymbol{\alpha}_{2},\boldsymbol{\alpha}_{3}}=\mathcal{S}^{p_{1}}_{ \boldsymbol{\alpha}_{1}}\otimes\mathcal{S}^{p_{2}}_{\boldsymbol{\alpha}_{2}} \otimes\mathcal{S}^{p_{3}}_{\boldsymbol{\alpha}_{3}},\]
where \(\boldsymbol{\alpha}_{i}\) is the regularity vector related to the knot vector \(T_{i}\), \(i=1,2,3\). We shall also assume the mesh to be _locally quasi-uniform_, meaning that there exists a constant \(\theta\geq 1\) such that for all \(i\in\{1,2,3\}\) we have
\[\frac{1}{\theta}\leq\frac{h_{i,j_{i}}}{h_{i,j_{i}+1}}\leq\theta,\quad j_{i}=1, \ldots,N_{i}-2,\]
where \(N_{i}\) is the number of \(T_{i}\)-knots without repetition and \(h_{i,j_{i}}=t_{i,j_{i}+1}-t_{i,k_{j_{i}}}\), with \(k_{j_{i}}=\max\{l\,:\,t_{i,l}<t_{i,j_{i}+1}\}\).
With these notations, the \(3\)-\(d\) approximations spaces are given by (see, e.g. [14, 23])
\[\left\{\begin{array}{l}V_{h}(\mathbf{grad},\Omega)=\mathbf{\mathcal{S}}_{\mathbf{\alpha}_{1},\mathbf{\alpha}_{2},\mathbf{\alpha}_{3}}^{p_{1},p_{2},p_{3}},\\ \mathbf{V}_{h}(\mathbf{curl},\Omega)=\mathbf{\mathcal{S}}_{\mathbf{\alpha}_{1}-1,\mathbf{\alpha}_{2},\mathbf{\alpha}_{3}}^{p_{1}-1,p_{2},p_{3}}\times\mathbf{\mathcal{S}}_{\mathbf{\alpha}_{1},\mathbf{\alpha}_{2}-1,\mathbf{\alpha}_{3}}^{p_{1},p_{2}-1,p_{3}}\times\mathbf{\mathcal{S}}_{\mathbf{\alpha}_{1},\mathbf{\alpha}_{2},\mathbf{\alpha}_{3}-1}^{p_{1},p_{2},p_{3}-1},\\ \mathbf{V}_{h}(div,\Omega)=\mathbf{\mathcal{S}}_{\mathbf{\alpha}_{1},\mathbf{\alpha}_{2}-1,\mathbf{\alpha}_{3}-1}^{p_{1},p_{2}-1,p_{3}-1}\times\mathbf{\mathcal{S}}_{\mathbf{\alpha}_{1}-1,\mathbf{\alpha}_{2},\mathbf{\alpha}_{3}-1}^{p_{1}-1,p_{2},p_{3}-1}\times\mathbf{\mathcal{S}}_{\mathbf{\alpha}_{1}-1,\mathbf{\alpha}_{2}-1,\mathbf{\alpha}_{3}}^{p_{1}-1,p_{2}-1,p_{3}},\\ V_{h}(L^{2},\Omega)=\mathbf{\mathcal{S}}_{\mathbf{\alpha}_{1}-1,\mathbf{\alpha}_{2}-1,\mathbf{\alpha}_{3}-1}^{p_{1}-1,p_{2}-1,p_{3}-1},\end{array}\right.\]
where \(h\) refers to the global mesh size, i.e \(h=\max\limits_{\begin{subarray}{c}1\leq j_{i}\leq N_{i}-1\\ i=1,2,3\end{subarray}}h_{i,j_{i}}\). Let
\[\left\{\begin{array}{l}V_{h,0}(\mathbf{grad},\Omega)=V_{h}(\mathbf{grad}, \Omega)\cap H_{0}^{1}(\Omega)\\ \mathbf{V}_{h,0}(\mathbf{curl},\Omega)=\mathbf{V}_{h}(\mathbf{curl},\Omega)\cap\mathbf{H}_{0}(\bm {curl},\,\Omega),\\ \mathbf{V}_{h,0}(div,\Omega)=\mathbf{V}_{h}(div,\Omega)\cap\mathbf{H}_{0}(div,\,\Omega),\\ V_{h,0}(L^{2},\Omega)=V_{h}(L^{2},\Omega)\cap L_{0}^{2}(\Omega),\end{array}\right.\]
for spaces with essential boundary conditions.
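The component-wise reductions of degree and regularity in the definitions above are purely combinatorial. The short Python sketch below (a bookkeeping helper written only for this exposition, not part of any library) lists, for each space of the discrete complex, the per-direction (degree, regularity) pairs of its components, consistent with the definitions above.

```python
def derham_signatures(p, alpha):
    """Per-direction (degree, regularity) pairs for the 3-d spline complex
    V_h(grad) -> V_h(curl) -> V_h(div) -> V_h(L2)."""
    def reduce_in(dirs):
        # lower degree and regularity by one in the directions listed in `dirs`
        return tuple((p[d] - 1, alpha[d] - 1) if d in dirs else (p[d], alpha[d])
                     for d in range(3))

    return {
        "V_h(grad)": [reduce_in(())],
        "V_h(curl)": [reduce_in((0,)), reduce_in((1,)), reduce_in((2,))],
        "V_h(div)":  [reduce_in((1, 2)), reduce_in((0, 2)), reduce_in((0, 1))],
        "V_h(L2)":   [reduce_in((0, 1, 2))],
    }

for name, components in derham_signatures((2, 2, 2), (1, 1, 1)).items():
    print(name, components)
```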
_Remark 2.4_.: Since we work on the parametric domain \((0,1)^{3}\), we have
\[\left\{\begin{array}{l}V_{h,0}(\mathbf{grad},\Omega)=\mathcal{S}_{\mathbf{\alpha}_{1},0}^{p_{1}}\otimes\mathcal{S}_{\mathbf{\alpha}_{2},0}^{p_{2}}\otimes\mathcal{S}_{\mathbf{\alpha}_{3},0}^{p_{3}},\\ \mathbf{V}_{h,0}(\mathbf{curl},\Omega)=\begin{pmatrix}\mathcal{S}_{\mathbf{\alpha}_{1}-1}^{p_{1}-1}\otimes\mathcal{S}_{\mathbf{\alpha}_{2},0}^{p_{2}}\otimes\mathcal{S}_{\mathbf{\alpha}_{3},0}^{p_{3}}\\ \mathcal{S}_{\mathbf{\alpha}_{1},0}^{p_{1}}\otimes\mathcal{S}_{\mathbf{\alpha}_{2}-1}^{p_{2}-1}\otimes\mathcal{S}_{\mathbf{\alpha}_{3},0}^{p_{3}}\\ \mathcal{S}_{\mathbf{\alpha}_{1},0}^{p_{1}}\otimes\mathcal{S}_{\mathbf{\alpha}_{2},0}^{p_{2}}\otimes\mathcal{S}_{\mathbf{\alpha}_{3}-1}^{p_{3}-1}\end{pmatrix},\\ \mathbf{V}_{h,0}(div,\Omega)=\begin{pmatrix}\mathcal{S}_{\mathbf{\alpha}_{1},0}^{p_{1}}\otimes\mathcal{S}_{\mathbf{\alpha}_{2}-1}^{p_{2}-1}\otimes\mathcal{S}_{\mathbf{\alpha}_{3}-1}^{p_{3}-1}\\ \mathcal{S}_{\mathbf{\alpha}_{1}-1}^{p_{1}-1}\otimes\mathcal{S}_{\mathbf{\alpha}_{2},0}^{p_{2}}\otimes\mathcal{S}_{\mathbf{\alpha}_{3}-1}^{p_{3}-1}\\ \mathcal{S}_{\mathbf{\alpha}_{1}-1}^{p_{1}-1}\otimes\mathcal{S}_{\mathbf{\alpha}_{2}-1}^{p_{2}-1}\otimes\mathcal{S}_{\mathbf{\alpha}_{3},0}^{p_{3}}\end{pmatrix}.\end{array}\right.\]
Now the _de Rham diagrams_ can be constructed. Among their important properties, one can build specific projectors, the so-called _quasi-interpolation operators_, that make these diagrams commute. We start with the univariate case and then extend it by tensor product. For this purpose, we take any locally stable projector \(\mathcal{P}_{h}\,:\,H^{1}(0,1)\longrightarrow\mathcal{S}_{\mathbf{\alpha}}^{p}\) (see, for instance, [14, 61] for theoretical studies), and we define the corresponding histopolation operator by
\[\mathcal{Q}_{h}\phi=\frac{d}{dx}\mathcal{P}_{h}\left(\int_{0}^{x}\phi(t)dt \right),\quad\phi\in L^{2}(0,1).\]
Following the notations above, the quasi interpolation operators are given by
\[\left\{\begin{array}{l}\Pi_{h}^{\mathbf{grad}}=\mathcal{P}_{h}\otimes \mathcal{P}_{h}\otimes\mathcal{P}_{h},\\ \\ \Pi_{h}^{\mathbf{curl}}=\begin{pmatrix}\mathcal{Q}_{h}\otimes\mathcal{P}_{h}\otimes \mathcal{P}_{h}\\ \mathcal{P}_{h}\otimes\mathcal{Q}_{h}\otimes\mathcal{P}_{h}\\ \mathcal{P}_{h}\otimes\mathcal{P}_{h}\otimes\mathcal{Q}_{h}\end{pmatrix},\\ \\ \Pi_{h}^{div}=\begin{pmatrix}\mathcal{P}_{h}\otimes\mathcal{Q}_{h}\otimes \mathcal{Q}_{h}\\ \mathcal{Q}_{h}\otimes\mathcal{P}_{h}\otimes\mathcal{Q}_{h}\\ \mathcal{Q}_{h}\otimes\mathcal{Q}_{h}\otimes\mathcal{P}_{h},\\ \\ \Pi_{h}^{L^{2}}=\mathcal{Q}_{h}\otimes\mathcal{Q}_{h}\otimes\mathcal{Q}_{h}.\end{array}\right.\]
(here the notation \(\otimes\) expresses the composition of the univariate operators, one in each coordinate direction).
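Since each of these operators acts coordinate by coordinate, its matrix with respect to a lexicographically ordered tensor-product basis is a Kronecker product of univariate matrices. A minimal NumPy sketch, assuming that hypothetical univariate matrices `P` and `Q` representing \(\mathcal{P}_{h}\) and \(\mathcal{Q}_{h}\) are already available, could read as follows; the assembly of the vector-valued projectors from their component blocks is left implicit.

```python
import numpy as np

def tensor3(A, B, C):
    """Matrix of the operator A (x) B (x) C on the tensor-product basis."""
    return np.kron(A, np.kron(B, C))

def quasi_interpolation_matrices(P, Q):
    """Kronecker-structured matrices of Pi_h^grad, Pi_h^curl, Pi_h^div, Pi_h^L2.
    P, Q : univariate matrices of P_h and Q_h (illustrative placeholders)."""
    Pi_grad = tensor3(P, P, P)
    Pi_curl = [tensor3(Q, P, P), tensor3(P, Q, P), tensor3(P, P, Q)]  # one block per component
    Pi_div  = [tensor3(P, Q, Q), tensor3(Q, P, Q), tensor3(Q, Q, P)]
    Pi_L2   = tensor3(Q, Q, Q)
    return Pi_grad, Pi_curl, Pi_div, Pi_L2
```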
The case with boundary conditions follows the same rationale. In fact, in this case one simply replaces \(\mathcal{P}_{h}\) by a locally stable projector preserving the boundary conditions, \(\mathcal{P}_{h,0}\,:\,H^{1}_{0}(0,1)\longrightarrow\mathcal{S}^{p}_{\mathbf{\alpha},0}\) (see [14]), and modifies the projector \(\mathcal{Q}_{h}\) as follows
\[\mathcal{Q}_{h,0}\phi=\frac{d}{dx}\mathcal{P}_{h,0}\left(\int_{0}^{x}\phi(t)dt \right),\quad\phi\in L^{2}_{0}(0,1).\]
Let then
\[\left\{\begin{array}{l}\Pi^{\mathbf{grad}}_{h,0}=\mathcal{P}_{h,0}\otimes \mathcal{P}_{h,0}\otimes\mathcal{P}_{h,0},\\ \\ \Pi^{\mathbf{curl}}_{h,0}=\begin{pmatrix}\mathcal{Q}_{h,0}\otimes\mathcal{P}_{h,0} \otimes\mathcal{P}_{h,0}\\ \mathcal{P}_{h,0}\otimes\mathcal{Q}_{h,0}\otimes\mathcal{P}_{h,0}\\ \mathcal{P}_{h,0}\otimes\mathcal{P}_{h,0}\otimes\mathcal{Q}_{h,0}\end{pmatrix},\\ \\ \Pi^{div}_{h,0}=\begin{pmatrix}\mathcal{P}_{h,0}\otimes\mathcal{Q}_{h,0} \otimes\mathcal{Q}_{h,0}\\ \mathcal{Q}_{h,0}\otimes\mathcal{P}_{h,0}\otimes\mathcal{Q}_{h,0}\\ \mathcal{Q}_{h,0}\otimes\mathcal{Q}_{h,0}\otimes\mathcal{P}_{h,0},\end{pmatrix}.\\ \\ \Pi^{L^{2}}_{h,0}=\mathcal{Q}_{h,0}\otimes\mathcal{Q}_{h,0}\otimes\mathcal{Q}_{h,0}.\end{array}\right.\]
Next, we provide some approximation error results. For this purpose, it is more convenient to use a unified presentation. Thus, as in Subsection 2.1, we write \(\mathcal{D}\) for either \(\mathbf{curl}\) or \(div\), and in the case with essential boundary conditions we drop the index \(0\) (see Table 2). We then have (see [14, Proposition 4.5])
**Proposition 2.5**.: _The diagram shown below is exact and commutes:_
\[\begin{array}{ccccc}\mathcal{H}(\mathcal{D}^{-},\Omega)&\xrightarrow{ \mathcal{D}^{-}}&\mathcal{H}(\mathcal{D},\Omega)&\xrightarrow{\mathcal{D}}& \mathcal{H}(\mathcal{D}^{+},\Omega)\\ \Pi^{\mathcal{D}^{-}}_{h}\bigg{\downarrow}&&\Pi^{\mathcal{D}}_{h}\bigg{\downarrow} &&\Pi^{\mathcal{D}^{+}}_{h}\bigg{\downarrow}\\ V_{h}(\mathcal{D}^{-},\Omega)&\xrightarrow{\mathcal{D}^{-}}&V_{h}( \mathcal{D},\Omega)&\xrightarrow{\mathcal{D}}&V_{h}(\mathcal{D}^{+},\Omega) \end{array} \tag{6}\]
_Finally, we shall need the following approximation result (see [14, Theorem 5.3])_
**Theorem 2.6**.: _Suppose \(l\) and \(r\) are integers satisfying \(0\leq l\leq r\leq\underline{p}\) and \(l\leq\underline{\alpha}\), where \(\underline{p}\) is the minimum of \(p_{1}\), \(p_{2}\), and \(p_{3}\), and \(\underline{\alpha}\) is the minimum of \(\mathbf{\alpha}_{1}\), \(\mathbf{\alpha}_{2}\), and \(\mathbf{\alpha}_{3}\). Then, the following inequalities hold true_
\[\begin{array}{ll}\left\|\varphi-\Pi^{\mathcal{D}}_{h}\varphi\right\|_{H^{l}(\Omega)}\leq Ch^{r-l}\|\varphi\|_{H^{r}(\Omega)},&\forall\varphi\in H^{r}(\Omega),\\ \left\|\mathbf{u}-\Pi^{\mathbf{grad}}_{h}\mathbf{u}\right\|_{\mathbf{H}^{l}(\Omega)}\leq Ch^{r-l}\|\mathbf{u}\|_{\mathbf{H}^{r}(\Omega)},&\forall\mathbf{u}\in\mathbf{H}^{r}(\Omega),\\ \left\|\varphi-\Pi^{\mathcal{D}}_{h,0}\varphi\right\|_{H^{l}(\Omega)}\leq Ch^{r-l}\|\varphi\|_{H^{r}(\Omega)},&\forall\varphi\in\mathcal{H}_{0}(\mathcal{D},\Omega)\cap H^{r}(\Omega),\\ \left\|\mathbf{u}-\Pi^{\mathbf{grad}}_{h,0}\mathbf{u}\right\|_{\mathbf{H}^{l}(\Omega)}\leq Ch^{r-l}\|\mathbf{u}\|_{\mathbf{H}^{r}(\Omega)},&\forall\mathbf{u}\in\mathbf{H}^{1}_{0}(\Omega)\cap\mathbf{H}^{r}(\Omega).\end{array}\]
_Here, \(C\) is a positive constant that does not depend on \(h\)._
## 3 Auxiliary Space Preconditioners
The aim of this section is to develop suitable auxiliary space preconditioners for the \(\mathbf{curl-curl}\) and \(\mathbf{grad}-div\) problems. As mentioned earlier, the main challenge is to derive a discrete version of the regular decomposition presented in Theorem 2.2, known as the Hiptmair-Xu decomposition. The section is divided into two subsections. In Subsection 3.1, we focus on the discrete Hiptmair-Xu decomposition. The outcome of this subsection is later employed in Subsection 3.2 to construct the ASP preconditioners.
Throughout this section, we use the notation \(A\lesssim B\) to indicate the existence of a constant \(C>0\), independent of \(h\) and \(\tau\), such that \(A\leq CB\). If \(A\lesssim B\) and \(B\lesssim A\), we write \(A\approx B\).
### Discrete Decompositions
We need the following preliminary results in order to prove the Hiptmair-Xu decomposition stated in Proposition 3.4.
**Lemma 3.1**.: _For every \(\mathbf{\varphi}\in\mathbf{X}(\Omega)\) such that \(\mathcal{D}\mathbf{\varphi}\in V_{h}(\mathcal{D}^{+},\Omega)\), we have_
1. \(\Pi_{h}^{\mathcal{D}}\mathbf{\varphi}\) _is well-defined._
2. \(\mathcal{D}\mathbf{\varphi}=\mathcal{D}\left(\Pi_{h}^{\mathcal{D}}\mathbf{\varphi}\right)\)_._
3. \(\left\|\mathbf{\varphi}-\Pi_{h}^{\mathcal{D}}\mathbf{\varphi}\right\|_{L^{2}(\Omega) }\lesssim h\|\mathbf{\varphi}\|_{\mathbf{X}(\Omega)}\)_._
Proof.: The first assertion is a consequence of the fact that \(\mathbf{X}(\Omega)\subset\mathcal{H}(\mathcal{D},\Omega)\). Concerning (ii), using the commutativity of Diagram (6), we obtain
\[\mathcal{D}\left(\Pi_{h}^{\mathcal{D}}\mathbf{\varphi}\right)=\Pi_{h}^{\mathcal{ D}^{+}}\left(\mathcal{D}\mathbf{\varphi}\right).\]
We now use \(\mathcal{D}\mathbf{\varphi}\in V_{h}(\mathcal{D}^{+},\Omega)\) to obtain (ii). Estimate (iii) follows from Theorem 2.6.
**Lemma 3.2**.: _For each \(\mathbf{u}_{h}\in\mathbf{V}_{h}(\mathcal{D},\Omega)\), there exist \(\mathbf{\varphi}\in\mathbf{X}(\Omega)\) and \(\mathbf{\phi}_{h}\in V_{h}(\mathcal{D}^{-},\Omega)\) such that_
\[\mathbf{u}_{h}=\Pi_{h}^{\mathcal{D}}\mathbf{\varphi}+\mathcal{D}^{-}\mathbf{\phi}_{h}, \tag{7}\]
_with estimates_
\[\|\mathbf{\varphi}\|_{\mathbf{X}(\Omega)}\lesssim\|\mathcal{D}\mathbf{u}_{h}\|_{\mathbf{L}^{ 2}(\Omega)},\quad\|\mathbf{\varphi}\|_{\mathbf{L}^{2}(\Omega)}\leq\|\mathbf{u}_{h}\|_{\bm {L}^{2}(\Omega)}, \tag{8}\]
\[\text{and}\quad\|\Pi_{h}^{\mathcal{D}}\mathbf{\varphi}\|_{\mathbf{L}^{2}(\Omega)}+\| \mathcal{D}^{-}\mathbf{\phi}_{h}\|_{\mathbf{L}^{2}(\Omega)}\lesssim\|\mathbf{u}_{h}\|_{\bm {L}^{2}(\Omega)}. \tag{9}\]
Proof.: Let \(\mathbf{u}_{h}\in\mathbf{V}_{h}(\mathcal{D},\Omega)\). According to Theorem 2.2, there exists \(\mathbf{\varphi}\in\mathbf{X}(\Omega)\) such that
\[\left\{\begin{array}{l}\mathcal{D}\mathbf{\varphi}=\mathcal{D}\mathbf{u}_{h}\in\mathbf{ V}_{h}(\mathcal{D}^{+},\Omega),\\ \|\mathbf{\varphi}\|_{\mathbf{X}(\Omega)}\lesssim\|\mathcal{D}\mathbf{u}_{h}\|_{\mathbf{L}^{2}( \Omega)},\\ \|\mathbf{\varphi}\|_{\mathbf{L}^{2}(\Omega)}\leq\|\mathbf{u}_{h}\|_{\mathbf{L}^{2}(\Omega)}. \end{array}\right. \tag{10}\]
We now apply Lemma 3.1 and obtain
\[\mathcal{D}\mathbf{u}_{h}=\mathcal{D}\mathbf{\varphi}=\mathcal{D}\left(\Pi_{h}^{ \mathcal{D}}\mathbf{\varphi}\right),\]
hence,
\[\mathbf{u}_{h}-\Pi_{h}^{\mathcal{D}}\mathbf{\varphi}\in\text{\bf ker}\left(\mathcal{ D}\mid_{\mathbf{V}_{h}(\mathcal{D},\Omega)}\right)=\mathcal{D}^{-}\left(V_{h}( \mathcal{D}^{-},\Omega)\right).\]
Therefore, there exists \(\mathbf{\phi}_{h}\in V_{h}(\mathcal{D}^{-},\Omega)\) such that \(\mathbf{u}_{h}-\Pi_{h}^{\mathcal{D}}\mathbf{\varphi}=\mathcal{D}^{-}\mathbf{\phi}_{h}\), which yields (7). Property (8) follows from estimates in (10).
We now show (9). We write
\[\|\Pi_{h}^{\mathcal{D}}\mathbf{\varphi}\|_{\mathbf{L}^{2}(\Omega)} \leq \|\Pi_{h}^{\mathcal{D}}\mathbf{\varphi}-\mathbf{\varphi}\|_{\mathbf{L}^{2}( \Omega)}+\|\mathbf{\varphi}\|_{\mathbf{L}^{2}(\Omega)}\] \[\lesssim h\|\mathbf{\varphi}\|_{\mathbf{X}(\Omega)}+\|\mathbf{\varphi}\|_{\mathbf{L}^{2} (\Omega)}\] \[\lesssim h\|\mathcal{D}\mathbf{u}_{h}\|_{\mathbf{L}^{2}(\Omega)}+\|\mathbf{u}_{h}\|_{ \mathbf{L}^{2}(\Omega)},\]
where in the last estimate we have used the first and second estimates in (10). Moreover, using the inverse inequality
\[\|\mathcal{D}\mathbf{u}_{h}\|_{\mathbf{L}^{2}(\Omega)}\lesssim h^{-1}\|\mathbf{u}_{h}\|_{ \mathbf{L}^{2}(\Omega)}, \tag{11}\]
we get
\[\|\Pi_{h}^{\mathcal{D}}\mathbf{\varphi}\|_{\mathbf{L}^{2}(\Omega)}\lesssim\|\mathbf{u}_{h }\|_{\mathbf{L}^{2}(\Omega)}.\]
On the other hand, we have
\[\|\mathcal{D}^{-}\mathbf{\phi}_{h}\|_{\mathbf{L}^{2}(\Omega)} = \|\mathbf{u}_{h}-\Pi_{h}^{\mathcal{D}}\mathbf{\varphi}\|_{\mathbf{L}^{2}( \Omega)}\] \[\lesssim \|\mathbf{u}_{h}\|_{\mathbf{L}^{2}(\Omega)}+\|\Pi_{h}^{\mathcal{D}}\mathbf{ \varphi}\|_{\mathbf{L}^{2}(\Omega)}\lesssim\|\mathbf{u}_{h}\|_{\mathbf{L}^{2}(\Omega)},\]
and inequality (9) is proved.
Let \(\mathbf{X}_{h}(\Omega)\) denote one of the two discrete spaces \(\left(V_{h}(\mathbf{grad},\Omega)\right)^{3}\) or \(\left(V_{h,0}(\mathbf{grad},\Omega)\right)^{3}\), depending on whether we work with Dirichlet or Neumann type boundary conditions (see Table 2).
**Lemma 3.3**.: _Every \(\mathbf{\varphi}\in\mathbf{X}(\Omega)\) admits a stable approximation \(\mathbf{\varphi}_{h}\in\mathbf{X}_{h}(\Omega)\) satisfying_
\[\|\mathbf{\varphi}_{h}\|_{\mathbf{L}^{2}(\Omega)}\lesssim\|\mathbf{\varphi}\|_{\mathbf{L}^{2 }(\Omega)},\quad\text{and}\quad h^{-1}\|\mathbf{\varphi}-\mathbf{\varphi}_{h}\|_{\mathbf{L }^{2}(\Omega)}+\|\mathbf{\varphi}_{h}\|_{\mathbf{X}(\Omega)}\lesssim\|\mathbf{\varphi}\|_ {\mathbf{X}(\Omega)}.\]
Proof.: Let \(\mathbf{\varphi}:=\left(\varphi^{1},\varphi^{2},\varphi^{3}\right)\in\mathbf{X}(\Omega)\) and define
\[\mathbf{\varphi}_{h}=\left(\Pi_{h}^{\mathbf{grad}}\varphi^{1},\Pi_{h}^{\mathbf{grad}} \varphi^{2},\Pi_{h}^{\mathbf{grad}}\varphi^{3}\right)\in\mathbf{X}_{h}(\Omega).\]
According to Theorem 2.6, we have:
\[\|\varphi^{k}-\Pi_{h}^{\mathbf{grad}}\varphi^{k}\|_{L^{2}(\Omega)} \lesssim\|\varphi^{k}\|_{L^{2}(\Omega)},\quad k=1,2,3, \tag{12}\] \[\|\varphi^{k}-\Pi_{h}^{\mathbf{grad}}\varphi^{k}\|_{L^{2}(\Omega)} \lesssim h\|\varphi^{k}\|_{H^{1}(\Omega)},\quad k=1,2,3, \tag{13}\]
and
\[\|\varphi^{k}-\Pi_{h}^{\mathbf{grad}}\varphi^{k}\|_{H^{1}(\Omega)} \lesssim\|\varphi^{k}\|_{H^{1}(\Omega)},\quad k=1,2,3. \tag{14}\]
Using (12), we get
\[\|\mathbf{\varphi}-\mathbf{\varphi}_{h}\|_{\mathbf{L}^{2}(\Omega)}^{2}\] \[=\|\varphi^{1}-\Pi_{h}^{\mathbf{grad}}\varphi^{1}\|_{L^{2}(\Omega)} ^{2}+\|\varphi^{2}-\Pi_{h}^{\mathbf{grad}}\varphi^{2}\|_{L^{2}(\Omega)}^{2}+\| \varphi^{3}-\Pi_{h}^{\mathbf{grad}}\varphi^{3}\|_{L^{2}(\Omega)}^{2}\] \[\lesssim\|\varphi^{1}\|_{L^{2}(\Omega)}^{2}+\|\varphi^{2}\|_{L^{2} (\Omega)}^{2}+\|\varphi^{3}\|_{L^{2}(\Omega)}^{2}=\|\mathbf{\varphi}\|_{\mathbf{L}^{2} (\Omega)}^{2}.\]
From which we deduce that
\[\|\mathbf{\varphi}_{h}\|_{\mathbf{L}^{2}(\Omega)}\leq\|\mathbf{\varphi}-\mathbf{\varphi}_{h}\|_ {\mathbf{L}^{2}(\Omega)}+\|\mathbf{\varphi}\|_{\mathbf{L}^{2}(\Omega)}\lesssim\|\mathbf{\varphi} \|_{\mathbf{L}^{2}(\Omega)},\]
which proves the first inequality.
Similarly, using (13) and (14) we derive
\[\|\mathbf{\varphi}-\mathbf{\varphi}_{h}\|_{\mathbf{L}^{2}(\Omega)}\lesssim h\left(\|\varphi^ {1}\|_{H^{1}(\Omega)}+\|\varphi^{2}\|_{H^{1}(\Omega)}+\|\varphi^{3}\|_{H^{1}( \Omega)}\right)\lesssim\|\mathbf{\varphi}\|_{\mathbf{X}(\Omega)},\]
and
\[\|\mathbf{\varphi}_{h}\|_{\mathbf{X}(\Omega)}\leq\|\mathbf{\varphi}-\mathbf{\varphi}_{h}\|_{ \mathbf{X}(\Omega)}+\|\mathbf{\varphi}\|_{\mathbf{X}(\Omega)}\lesssim\|\mathbf{\varphi}\|_{\mathbf{X} (\Omega)},\]
which concludes the proof of the lemma.
We have the following regular discrete decomposition.
**Proposition 3.4**.: _Let \(\tau>0\). Every \(\mathbf{u}_{h}\in\mathbf{V}_{h}(\mathcal{D},\Omega)\) has a decomposition_
\[\mathbf{u}_{h}=\mathbf{w}_{h}+\Pi_{h}^{\mathcal{D}}\mathbf{\varphi}_{h}+\mathcal{D}^{-}\mathbf{ \phi}_{h}, \tag{15}\]
_where \(\mathbf{w}_{h}\in\mathbf{V}_{h}(\mathcal{D},\Omega)\), \(\mathbf{\varphi}_{h}\in\mathbf{X}_{h}(\Omega)\) and \(\mathbf{\phi}_{h}\in V_{h}(\mathcal{D}^{-},\Omega)\) with estimate_
\[(h^{-2}+\tau)\left\|\mathbf{w}_{h}\right\|_{\mathbf{L}^{2}(\Omega)}^{2}+\left\|\mathbf{ \varphi}_{h}\right\|_{\mathbf{X}(\Omega)}^{2}+\tau\|\mathbf{\varphi}_{h}\|_{\mathbf{L}^{2 }(\Omega)}^{2}+\tau\|\mathcal{D}^{-}\mathbf{\phi}_{h}\|_{\mathbf{L}^{2}(\Omega)}^{2} \lesssim\|\mathbf{u}_{h}\|_{A_{\mathcal{D}}}^{2}, \tag{16}\]
_with notation_
\[\|\mathbf{u}_{h}\|_{A_{\mathcal{D}}}^{2}=\|\mathcal{D}\mathbf{u}_{h}\|_{\mathbf{L}^{2}( \Omega)}^{2}+\tau\|\mathbf{u}_{h}\|_{\mathbf{L}^{2}(\Omega)}^{2}.\]
Proof.: Let \(\mathbf{u}_{h}\in\mathbf{V}_{h}(\mathcal{D},\Omega)\). Using Lemma 3.2, we can find \(\mathbf{\varphi}\in\mathbf{X}(\Omega)\) and \(\mathbf{\phi}_{h}\in V_{h}(\mathcal{D}^{-},\Omega)\) with the properties
\[\left\{\begin{array}{l}\mathbf{u}_{h}=\Pi_{h}^{\mathcal{D}}\mathbf{\varphi}+ \mathcal{D}^{-}\phi_{h}\\ \|\mathbf{\varphi}\|_{\mathbf{X}(\Omega)}\lesssim\|\mathcal{D}\mathbf{u}_{h}\|_{\mathbf{L}^{2} (\Omega)}\\ \|\mathbf{\varphi}\|_{\mathbf{L}^{2}(\Omega)}\leq\|\mathbf{u}_{h}\|_{\mathbf{L}^{2}(\Omega)} \\ \|\Pi_{h}^{\mathcal{D}}\mathbf{\varphi}\|_{\mathbf{L}^{2}(\Omega)}+\|\mathcal{D}^{-} \mathbf{\phi}_{h}\|_{\mathbf{L}^{2}(\Omega)}\lesssim\|\mathbf{u}_{h}\|_{\mathbf{L}^{2}(\Omega )},\end{array}\right. \tag{17}\]
and let \(\mathbf{\varphi}_{h}\in\mathbf{X}_{h}(\Omega)\) be the stable approximation of \(\mathbf{\varphi}\), given by Lemma 3.3. We define
\[\mathbf{w}_{h}=\Pi_{h}^{\mathcal{D}}(\mathbf{\varphi}-\mathbf{\varphi}_{h}).\]
In this way, using the decomposition in (17), we obtain
\[\mathbf{u}_{h}=\Pi_{h}^{\mathcal{D}}\mathbf{\varphi}+\mathcal{D}^{-}\mathbf{ \phi}_{h} = \Pi_{h}^{\mathcal{D}}(\mathbf{\varphi}-\mathbf{\varphi}_{h})+\Pi_{h}^{ \mathcal{D}}\mathbf{\varphi}_{h}+\mathcal{D}^{-}\mathbf{\phi}_{h}\] \[= \mathbf{w}_{h}+\Pi_{h}^{\mathcal{D}}\mathbf{\varphi}_{h}+\mathcal{D}^{-} \mathbf{\phi}_{h},\]
and decomposition (15) is proved. In order to show (16), we need to perform careful estimates. Indeed, we have
\[h^{-1}\|\mathbf{w}_{h}\|_{\mathbf{L}^{2}(\Omega)} = h^{-1}\|\Pi_{h}^{\mathcal{D}}(\mathbf{\varphi}-\mathbf{\varphi}_{h})\|_{\mathbf{L}^{2}(\Omega)} \tag{18}\] \[\lesssim h^{-1}\|\mathbf{\varphi}-\mathbf{\varphi}_{h}\|_{\mathbf{L}^{2}(\Omega)} \lesssim\|\mathbf{\varphi}\|_{\mathbf{X}(\Omega)}\lesssim\|\mathcal{D}\mathbf{u}_{h}\|_{ \mathbf{L}^{2}(\Omega)},\]
where in the last estimate we have used the first inequality in (17). Moreover, using the inverse inequality (11) we get
\[\|\mathbf{w}_{h}\|_{\mathbf{L}^{2}(\Omega)}\lesssim h\|\mathcal{D}\mathbf{u}_{h}\|_{\mathbf{L}^ {2}(\Omega)}\lesssim\|\mathbf{u}_{h}\|_{\mathbf{L}^{2}(\Omega)}. \tag{19}\]
Concerning the component \(\mathbf{\varphi}_{h}\), we use the first and the second inequalities in (17) to obtain
\[\|\mathbf{\varphi}_{h}\|_{\mathbf{X}(\Omega)}\lesssim\|\mathbf{\varphi}\|_{\mathbf{X}(\Omega)} \lesssim\|\mathcal{D}\mathbf{u}_{h}\|_{\mathbf{L}^{2}(\Omega)}, \tag{20}\]
and
\[\|\mathbf{\varphi}_{h}\|_{\mathbf{L}^{2}(\Omega)}\lesssim\|\mathbf{\varphi}\|_{\mathbf{L}^{2} (\Omega)}\lesssim\|\mathbf{u}_{h}\|_{\mathbf{L}^{2}(\Omega)}. \tag{21}\]
Combining (18)-(21) together with the third estimate in (17), we obtain the desired estimate (16). This completes the proof of the proposition.
Proposition 3.4 forms the basis for applying the auxiliary space theory described in Section 2.2. It offers a strategy for selecting suitable auxiliary spaces and projections, as discussed in the next subsection, and provides clear instructions for choosing a smoothing operator, which must satisfy the following condition:
\[s(\mathbf{w}_{h},\mathbf{w}_{h})\approx(h^{-2}+\tau)\left\|\mathbf{w}_{h}\right\|_{\mathbf{L}^ {2}(\Omega)}^{2}.\]
Typically, a smoother such as Jacobi or Gauss-Seidel is used, which depends on the choice of the bases of the discrete spaces, similar to the multigrid method.
Next, we will improve estimate (16) to apply a Jacobi smoothing method, by first constructing suitable bases for the discrete spaces. We adopt the following set of basis functions, as proposed in [16, 23, 57]:
\[\mathcal{B}(\boldsymbol{grad})=\Big{\{}B_{i_{1},p_{1}}\otimes B_{i_{2},p_{2}} \otimes B_{i_{3},p_{3}}:\quad 1\leq i_{l}\leq n_{l},\quad l=1,2,3\Big{\}},\]
\[\mathcal{B}(\boldsymbol{curl})=\{(D_{i_{1},p_{1}-1}\otimes B_{i_{2},p_{2}} \otimes B_{i_{3},p_{3}})\boldsymbol{e}_{1}:\;1\leq i_{1}\leq n_{1}-1,\,1\leq i _{l}\leq n_{l},\,l=2,3\}\]
\[\cup\{(B_{j_{1},p_{1}}\otimes D_{j_{2},p_{2}-1}\otimes B_{j_{3},p_{3}}) \boldsymbol{e}_{2}:\;1\leq j_{2}\leq n_{2}-1,\,1\leq j_{l}\leq n_{l},\,l=1,3\}\]
\[\cup\{(B_{k_{1},p_{1}}\otimes B_{k_{2},p_{2}}\otimes D_{k_{3},p_{3}-1}) \boldsymbol{e}_{3}\;1\leq k_{3}\leq n_{3}-1,\,1\leq k_{l}\leq n_{l},\,l=1,2\},\]
\[\mathcal{B}(div)=\{(B_{i_{1},p_{1}}\otimes D_{i_{2},p_{2}-1}\otimes D_{i_{3},p_ {3}-1})\boldsymbol{e}_{1}:\;1\leq i_{1}\leq n_{1},\,1\leq i_{l}\leq n_{l}-1,\, l=2,3\}\]
\[\cup\{(D_{j_{1},p_{1}-1}\otimes B_{j_{2},p_{2}}\otimes D_{j_{3},p_{3}-1}) \boldsymbol{e}_{2}:\;1\leq j_{2}\leq n_{2},\,1\leq j_{l}\leq n_{l}-1,\,l=1,3\}\]
\[\cup\{(D_{k_{1},p_{1}-1}\otimes D_{k_{2},p_{2}-1}\otimes B_{k_{3},p_{3}}) \boldsymbol{e}_{3}:\;1\leq k_{3}\leq n_{3},\,1\leq k_{l}\leq n_{l}-1,\,l=1,2\},\]
where \(\{\boldsymbol{e}_{l}\}_{l=1,2,3}\) is the canonical basis of \(\mathbb{R}^{3}\) and \(D_{i,p-1}\) stands for the Curry-Schoenberg spline basis (see, e.g., [57])
\[D_{i,p-1}(t)=\frac{p}{t_{i+p+1}-t_{i+1}}B_{i+1,p-1}(t),\quad t\in[0,1].\]
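In code, the Curry-Schoenberg functions are obtained from B-splines of one degree lower by a simple rescaling. The sketch below reuses the illustrative `bspline_basis` routine introduced in Subsection 2.3 and mirrors the formula above with zero-based indices.

```python
def curry_schoenberg(i, p, T, t):
    """Evaluate D_{i,p-1}(t) = p / (t_{i+p+1} - t_{i+1}) * B_{i+1,p-1}(t)
    (zero-based translation of the formula above; illustrative only)."""
    denom = T[i + p + 1] - T[i + 1]
    if denom == 0.0:
        return 0.0
    return p / denom * bspline_basis(i + 1, p - 1, T, t)
```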
In the case with boundary conditions, we introduce
\[\mathcal{B}_{0}(\boldsymbol{grad})=\Big{\{}B_{i_{1},p_{1}}\otimes B_{i_{2},p_ {2}}\otimes B_{i_{3},p_{3}}:\quad 2\leq i_{l}\leq n_{l}-1,\quad l=1,2,3\Big{\}},\]
\[\mathcal{B}_{0}(\boldsymbol{curl})=\{(D_{i_{1},p_{1}-1}\otimes B_{i_{2},p_{2}} \otimes B_{i_{3},p_{3}})\boldsymbol{e}_{1}:\;1\leq i_{1}\leq n_{1}-1,\,2\leq i _{l}\leq n_{l}-1,\,l=2,3\}\]
\[\cup\{(B_{j_{1},p_{1}}\otimes D_{j_{2},p_{2}-1}\otimes B_{j_{3},p_{3}}) \boldsymbol{e}_{2}:\;1\leq j_{2}\leq n_{2}-1,\,2\leq j_{l}\leq n_{l}-1,\,l=1,3\}\]
\[\cup\{(B_{k_{1},p_{1}}\otimes B_{k_{2},p_{2}}\otimes D_{k_{3},p_{3}-1}) \boldsymbol{e}_{3}\;1\leq k_{3}\leq n_{3}-1,\,2\leq k_{l}\leq n_{l}-1,\,l=1,2\},\]
\[\mathcal{B}_{0}(div)=\{(B_{i_{1},p_{1}}\otimes D_{i_{2},p_{2}-1}\otimes D_{i_{3},p_{3}-1})\boldsymbol{e}_{1}:\;2\leq i_{1}\leq n_{1}-1,\,1\leq i_{l}\leq n_{l} -1,\,l=2,3\}\]
\[\cup\{(D_{j_{1},p_{1}-1}\otimes B_{j_{2},p_{2}}\otimes D_{j_{3},p_{3}-1}) \boldsymbol{e}_{2}:\;2\leq j_{2}\leq n_{2}-1,\,1\leq j_{l}\leq n_{l}-1,\,l=1,3\}\]
\[\cup\{(D_{k_{1},p_{1}-1}\otimes D_{k_{2},p_{2}-1}\otimes B_{k_{3},p_{3}}) \boldsymbol{e}_{3}:\;2\leq k_{3}\leq n_{3}-1,\,1\leq k_{l}\leq n_{l}-1,\,l=1,2\},\]
We clearly have
\[\left\{\begin{array}{l}V_{h}(\boldsymbol{grad},\Omega)=span\left(\mathcal{B} (\boldsymbol{grad})\right),\\ V_{h,0}(\boldsymbol{grad},\Omega)=span\left(\mathcal{B}_{0}(\boldsymbol{grad}) \right),\\ \boldsymbol{V}_{h}(\mathcal{D},\Omega)=span\left(\mathcal{B}(\mathcal{D}) \right),\end{array}\right.\]
where in the unified notation, as usual, we dropped the index \(0\) in the case with boundary conditions.
We will make use of the following \(L^{2}\)-stability of spline basis functions (see [48]).
**Theorem 3.5**.: _Let \((B_{i,p})_{1\leq i\leq n}\) and \((D_{j,p-1})_{1\leq j\leq n-1}\) denote, respectively, the \(B\)-spline and the Curry-Schoenberg spline bases associated with the knot vector \(T\). Then, we have_
\[h\sum_{i=1}^{n}b_{i}^{2}\approx\left\|\sum_{i=1}^{n}b_{i}B_{i,p}\right\|_{L^{2}( 0,1)}^{2},\qquad\sum_{j=1}^{n-1}d_{j}^{2}\approx\left\|\sum_{j=1}^{n-1}d_{j}D_{ j,p-1}\right\|_{L^{2}(0,1)}^{2},\]
_for all vectors \((b_{1},\cdots,b_{n})\) and \((d_{1},\cdots,d_{n-1})\)._
_In particular,_
\[\left\|B_{i,p}\right\|_{L^{2}(0,1)}^{2}\approx h,\quad\left\|D_{j,p-1}\right\|_ {L^{2}(0,1)}^{2}\approx 1,\quad\forall 1\leq i\leq n,\quad\forall 1\leq j\leq n-1.\]
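These scalings are easy to verify numerically. The rough check below reuses the knot vector and the `bspline_basis` helper from the illustrative sketch in Subsection 2.3 and approximates \(\|B_{i,p}\|_{L^{2}(0,1)}^{2}\) by a simple Riemann sum (a composite quadrature rule would be used in practice); each value is comparable to the mesh size \(h=0.25\) of that example.

```python
import numpy as np

ts = np.linspace(0.0, 1.0, 2001)[:-1]     # avoid the half-open endpoint of the recursion
w = ts[1] - ts[0]
norms = [w * sum(bspline_basis(i, p, T, t) ** 2 for t in ts) for i in range(n)]
print([round(v, 4) for v in norms])       # each entry is of the order of h = 0.25
```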
A direct consequence of this theorem is the following stability result:
**Corollary 3.6**.: _The bases \(\mathcal{B}(\mathcal{D})\) are \(L^{2}\)-stable, i.e.,_
\[\left\|\sum_{\begin{subarray}{c}r\\ \boldsymbol{v}_{r}\in\mathcal{B}(\mathcal{D})\end{subarray}}c_{r}\, \boldsymbol{v}_{r}\right\|_{\boldsymbol{L}^{2}(\Omega)}^{2}\approx\sum_{ \begin{subarray}{c}r\\ \boldsymbol{v}_{r}\in\mathcal{B}(\mathcal{D})\end{subarray}}c_{r}^{2}\,\| \boldsymbol{v}_{r}\|_{\boldsymbol{L}^{2}(\Omega)}^{2}\,, \tag{22}\]
_for any vector \((c_{r})\in\mathbb{R}^{\#\mathcal{B}(\mathcal{D})}\)._
Proof.: From Theorem 3.5 we deduce that for all vectors \((b_{1},\cdots,b_{n})\) and \((d_{1},\cdots,d_{n-1})\) we have
\[\sum_{i=1}^{n}b_{i}^{2}\,\|B_{i,p}\|_{L^{2}(0,1)}^{2}\approx h\sum_{i=1}^{n}b_ {i}^{2}\approx\left\|\sum_{i=1}^{n}b_{i}B_{i,p}\right\|_{L^{2}(0,1)}^{2},\]
and
\[\sum_{j=1}^{n-1}d_{j}^{2}\,\|D_{j,p-1}\|_{L^{2}(0,1)}^{2}\approx\sum_{j=1}^{n- 1}d_{j}^{2}\approx\left\|\sum_{j=1}^{n-1}d_{j}D_{j,p-1}\right\|_{L^{2}(0,1)}^{2}.\]
The bound (22) follows by tensorization, applying the last estimates in each coordinate direction.
We prove the following stable decomposition result.
**Theorem 3.7** (Hiptmair-Xu decomposition).: _Let \(\tau>0\). For each \(\boldsymbol{u}_{h}\in\boldsymbol{V}_{h}(\mathcal{D},\Omega)\), there exist \(\boldsymbol{w}_{h}\in\boldsymbol{V}_{h}(\mathcal{D},\Omega)\), \(\boldsymbol{\varphi}_{h}\in\boldsymbol{X}_{h}(\Omega)\) and \(\boldsymbol{\phi}_{h}\in V_{h}(\mathcal{D}^{-},\Omega)\) such that_
\[\boldsymbol{u}_{h}=\boldsymbol{w}_{h}+\Pi_{h}^{\mathcal{D}}\boldsymbol{\varphi }_{h}+\mathcal{D}^{-}\boldsymbol{\phi}_{h}. \tag{23}\]
_In addition, expanding the component \(\boldsymbol{w}_{h}\) on the basis \(\mathcal{B}(\mathcal{D})\)_
\[\boldsymbol{w}_{h}=\sum_{\begin{subarray}{c}r\\ \boldsymbol{v}_{r}\in\mathcal{B}(\mathcal{D})\end{subarray}}c_{r}\,\boldsymbol {v}_{r},\]
_we have_
\[\sum_{\begin{subarray}{c}r\\ \boldsymbol{v}_{r}\in\mathcal{B}(\mathcal{D})\end{subarray}}c_{r}^{2}\,\| \boldsymbol{v}_{r}\|_{A_{\mathcal{D}}}^{2}+\|\boldsymbol{\varphi}_{h}\|_{ \boldsymbol{X}(\Omega)}^{2}+\tau\|\boldsymbol{\varphi}_{h}\|_{\boldsymbol{L} ^{2}(\Omega)}^{2}+\|\mathcal{D}^{-}\phi_{h}\|_{A_{\mathcal{D}}}^{2}\lesssim\| \boldsymbol{u}_{h}\|_{A_{\mathcal{D}}}^{2}. \tag{24}\]
Proof.: Let \(\boldsymbol{w}_{h}\in\boldsymbol{V}_{h}(\mathcal{D},\Omega)\), \(\boldsymbol{\varphi}_{h}\in\boldsymbol{X}_{h}(\Omega)\), and \(\boldsymbol{\phi}_{h}\in V_{h}(\mathcal{D}^{-},\Omega)\) be the components given by Proposition 3.4. By remarking that
\[\|\mathcal{D}^{-}\boldsymbol{\phi}_{h}\|_{A_{\mathcal{D}}}^{2}=\|\mathcal{D} \,\left(\mathcal{D}^{-}\boldsymbol{\phi}_{h}\right)\|_{\boldsymbol{L}^{2}( \Omega)}^{2}+\tau\,\|\mathcal{D}^{-}\boldsymbol{\phi}_{h}\|_{\boldsymbol{L}^{ 2}(\Omega)}^{2}=\tau\|\mathcal{D}^{-}\boldsymbol{\phi}_{h}\|_{\boldsymbol{L}^{ 2}(\Omega)}^{2},\]
we need only to estimate the first term in (24). Using (22) we obtain
\[\sum_{\begin{subarray}{c}r\\ \boldsymbol{v}_{r}\in\mathcal{B}(\mathcal{D})\end{subarray}}c_{r}^{2}\,\|\boldsymbol{v}_{r}\|_{A_{\mathcal{D}}}^{2} = \sum_{\begin{subarray}{c}r\\ \boldsymbol{v}_{r}\in\mathcal{B}(\mathcal{D})\end{subarray}}c_{r}^{2}\,\|\mathcal{D}\boldsymbol{v}_{r}\|_{\boldsymbol{L}^{2}(\Omega)}^{2}+\tau\sum_{\begin{subarray}{c}r\\ \boldsymbol{v}_{r}\in\mathcal{B}(\mathcal{D})\end{subarray}}c_{r}^{2}\,\|\boldsymbol{v}_{r}\|_{\boldsymbol{L}^{2}(\Omega)}^{2}\] \[\lesssim (h^{-2}+\tau)\sum_{\begin{subarray}{c}r\\ \boldsymbol{v}_{r}\in\mathcal{B}(\mathcal{D})\end{subarray}}c_{r}^{2}\,\|\boldsymbol{v}_{r}\|_{\boldsymbol{L}^{2}(\Omega)}^{2}\] \[\approx (h^{-2}+\tau)\left\|\sum_{\begin{subarray}{c}r\\ \boldsymbol{v}_{r}\in\mathcal{B}(\mathcal{D})\end{subarray}}c_{r}\,\boldsymbol{v}_{r}\right\|_{\boldsymbol{L}^{2}(\Omega)}^{2}\] \[= (h^{-2}+\tau)\|\boldsymbol{w}_{h}\|_{\boldsymbol{L}^{2}(\Omega)}^{2},\]
and we conclude the proof using (16).
As a final step, we provide a slightly different decomposition for the case \(\mathcal{D}=div\). In fact, using the splitting of Theorem 3.7 for the \(div\) problem requires solving an \(\mathbf{H}(\mathbf{curl},\Omega)\) elliptic problem, which itself has a large null space. To avoid this difficulty, we adopt the approach of [40] and use both the decomposition presented in Theorem 3.7 and the one outlined in Proposition 3.4. Specifically, we prove the following result:
**Corollary 3.8** (Hiptmair-Xu decomposition for \(\mathbf{H}(div,\Omega)\)).: _Let \(\tau>0\). For each \(\mathbf{u}_{h}\in\mathbf{V}_{h}(div,\Omega)\), there exist \(\mathbf{w}_{h}\in\mathbf{V}_{h}(div,\Omega)\), \(\mathbf{z}_{h}\in\mathbf{V}_{h}(\mathbf{curl},\Omega)\) and \((\mathbf{\varphi}_{h},\mathbf{\psi}_{h})\in\mathbf{X}_{h}(\Omega)^{2}\) such that_
\[\mathbf{u}_{h}=\mathbf{w}_{h}+\Pi_{h}^{div}\mathbf{\varphi}_{h}+\mathbf{curl}\,\mathbf{z}_{h}+\bm {curl}\,\mathbf{\psi}_{h}. \tag{25}\]
_In addition, expanding the components \(\mathbf{w}_{h}\) and \(\mathbf{z}_{h}\) on the bases \(\mathcal{B}(div)\) and \(\mathcal{B}(\mathbf{curl})\), respectively,_
\[\mathbf{w}_{h}=\sum_{\begin{subarray}{c}\mathbf{v}_{r}\in\mathcal{B}(div)\end{subarray}}c _{r}\,\mathbf{v}_{r},\quad\mathbf{z}_{h}=\sum_{\begin{subarray}{c}\mathbf{v}_{r}\in \mathcal{B}(\mathbf{curl})\end{subarray}}d_{r}\,\mathbf{v}_{r},\]
_we have_
\[\sum_{\begin{subarray}{c}\mathbf{v}_{r}\in\mathcal{B}(div)\end{subarray}}c _{r}^{2}\,\|\mathbf{v}_{r}\|_{A_{div}}^{2}+\|\mathbf{\varphi}_{h}\|_{\mathbf{X}(\Omega)}^ {2}+\tau\|\mathbf{\varphi}_{h}\|_{\mathbf{L}^{2}(\Omega)}^{2} \tag{26}\] \[+\tau\sum_{\begin{subarray}{c}\mathbf{v}_{r}\in\mathcal{B}(\mathbf{curl })\end{subarray}}d_{r}^{2}\,\|\mathbf{curl}\,\mathbf{v}_{r}\|_{\mathbf{L}^{2}(\Omega)}^{2} +\tau\,\|\mathbf{\psi}_{h}\|_{\mathbf{X}(\Omega)}^{2}\lesssim\|\mathbf{u}_{h}\|_{A_{div}} ^{2},\]
Proof.: Using Theorem 3.7, we can find \(\mathbf{w}_{h}\in\mathbf{V}_{h}(div,\Omega)\), \(\mathbf{\varphi}_{h}\in\mathbf{X}_{h}(\Omega)\) and \(\mathbf{\phi}_{h}\in\mathbf{V}_{h}(\mathbf{curl},\Omega)\) such that
\[\mathbf{u}_{h}=\mathbf{w}_{h}+\Pi_{h}^{div}\mathbf{\varphi}_{h}+\mathbf{curl}\,\mathbf{\phi}_{h},\]
and
\[\sum_{\begin{subarray}{c}\mathbf{v}_{r}\in\mathcal{B}(div)\end{subarray}}c_{r}^{ 2}\,\|\mathbf{v}_{r}\|_{A_{div}}^{2}+\|\mathbf{\varphi}_{h}\|_{\mathbf{X}(\Omega)}^{2}+ \tau\|\mathbf{\varphi}_{h}\|_{\mathbf{L}^{2}(\Omega)}^{2}+\|\mathbf{curl}\,\phi_{h}\|_{A_{ div}}^{2}\lesssim\|\mathbf{u}_{h}\|_{A_{div}}^{2}. \tag{27}\]
On the other hand, using Proposition 3.4, there exist \(\mathbf{z}_{h}\in\mathbf{V}_{h}(\mathbf{curl},\Omega)\), \(\mathbf{\psi}_{h}\in\mathbf{X}_{h}(\Omega)\) and \(\Psi_{h}\in V_{h}(\mathbf{grad},\Omega)\) such that
\[\mathbf{\phi}_{h}=\mathbf{z}_{h}+\Pi_{h}^{\mathbf{curl}}\mathbf{\psi}_{h}+\mathbf{grad}\,\Psi_{h},\]
with estimate
\[h^{-2}\|\mathbf{z}_{h}\|_{\mathbf{L}^{2}(\Omega)}^{2}+\|\mathbf{\psi}_{h}\|_{\mathbf{X}(\Omega)}^{2}\lesssim\|\mathbf{curl}\,\mathbf{\phi}_{h}\|_{\mathbf{L}^{2}(\Omega)}^{2}. \tag{28}\]
It follows that
\[\mathbf{u}_{h} = \mathbf{w}_{h}+\Pi_{h}^{div}\mathbf{\varphi}_{h}+\mathbf{curl}\,\mathbf{z}_{h}+ \mathbf{curl}\left(\Pi_{h}^{\mathbf{curl}}\mathbf{\psi}_{h}\right)+\mathbf{curl}\,\mathbf{grad}\, \Psi_{h}\] \[= \mathbf{w}_{h}+\Pi_{h}^{div}\mathbf{\varphi}_{h}+\mathbf{curl}\,\mathbf{z}_{h}+ \mathbf{curl}\,\mathbf{\psi}_{h},\]
where in the last equality we have used the facts that \(\mathbf{curl}\left(\Pi_{h}^{\mathbf{curl}}\mathbf{\psi}_{h}\right)=\mathbf{curl}\,\mathbf{\psi}_{h}\) and \(\mathbf{curl}\,\mathbf{grad}\,\Psi_{h}=0\); this proves (25).
We now prove bound (26). Using estimate (27), we obtain
\[\sum_{\begin{subarray}{c}\mathbf{v}_{r}\in\mathcal{B}(div)\end{subarray}}c_{r}^{2} \,\|\mathbf{v}_{r}\|_{A_{div}}^{2}+\|\mathbf{\varphi}_{h}\|_{\mathbf{X}(\Omega)}^{2}+\tau \|\mathbf{\varphi}_{h}\|_{\mathbf{L}^{2}(\Omega)}^{2}+\tau\|\mathbf{curl}\,\phi_{h}\|_{\mathbf{L }^{2}(\Omega)}^{2}\lesssim\|\mathbf{u}_{h}\|_{A_{div}}^{2}. \tag{29}\]
But, using (28), we have
\[\sum_{\begin{subarray}{c}r\\ \boldsymbol{v}_{r}\in\mathcal{B}(\boldsymbol{curl})\end{subarray}}d_{r}^{2}\left\| \boldsymbol{curl}\,\boldsymbol{v}_{r}\right\|_{\boldsymbol{L}^{2}(\Omega)}^{2} +\left\|\boldsymbol{\psi}_{h}\right\|_{\boldsymbol{X}(\Omega)}^{2} \lesssim h^{-2}\sum_{\begin{subarray}{c}r\\ \boldsymbol{v}_{r}\in\mathcal{B}(\boldsymbol{curl})\end{subarray}}d_{r}^{2} \left\|\boldsymbol{v}_{r}\right\|_{\boldsymbol{L}^{2}(\Omega)}^{2}+\left\| \boldsymbol{\psi}_{h}\right\|_{\boldsymbol{X}(\Omega)}^{2}\] \[\approx h^{-2}\left\|\sum_{\begin{subarray}{c}r\\ \boldsymbol{v}_{r}\in\mathcal{B}(\boldsymbol{curl})\end{subarray}}d_{r} \boldsymbol{v}_{r}\right\|_{\boldsymbol{L}^{2}(\Omega)}^{2}+\left\| \boldsymbol{\psi}_{h}\right\|_{\boldsymbol{X}(\Omega)}^{2}\] \[= h^{-2}\|\boldsymbol{z}_{h}\|_{\boldsymbol{L}^{2}(\Omega)}^{2}+ \left\|\boldsymbol{\psi}_{h}\right\|_{\boldsymbol{X}(\Omega)}^{2}\] \[\lesssim \left\|\boldsymbol{curl}\,\boldsymbol{\phi}_{h}\right\|_{ \boldsymbol{L}^{2}(\Omega)}^{2},\]
and we conclude the proof by combining this last estimate with (29).
### Auxiliary Space Preconditioners
We are now ready to apply the abstract ASP theory of Section 2.2.
#### 3.2.1 ASP-preconditioner in the case \(\mathcal{D}=\boldsymbol{curl}\)
Following the same notations of Section 2.2, let us consider \(V=\boldsymbol{V}_{h}(\boldsymbol{curl},\Omega)\) equipped with the bilinear form \(a\) related to equation (3), namely \(a(\boldsymbol{w}_{h},\widetilde{\boldsymbol{w}_{h}})=(\boldsymbol{curl} \,\boldsymbol{w}_{h},\boldsymbol{curl}\,\widetilde{\boldsymbol{w}_{h}})_{ \boldsymbol{L}^{2}(\Omega)}+\tau(\boldsymbol{w}_{h},\widetilde{\boldsymbol{w }_{h}})_{\boldsymbol{L}^{2}(\Omega)}\) for \(\boldsymbol{w}_{h},\widetilde{\boldsymbol{w}_{h}}\in V\), and auxiliary spaces \(W_{1}=\boldsymbol{X}_{h}(\Omega)\) and \(W_{2}=V_{h}(\boldsymbol{grad},\Omega)\) equipped with the following inner products
\[a_{1}(\boldsymbol{\varphi}_{h},\widetilde{\boldsymbol{\varphi}_{h}})=( \boldsymbol{\varphi}_{h},\widetilde{\boldsymbol{\varphi}_{h}})_{\boldsymbol {X}(\Omega)}+\tau(\boldsymbol{\varphi}_{h},\widetilde{\boldsymbol{\varphi}_{ h}})_{\boldsymbol{L}^{2}(\Omega)},\quad\boldsymbol{\varphi}_{h},\widetilde{ \boldsymbol{\varphi}_{h}}\in W_{1},\]
and
\[a_{2}(\phi_{h},\widetilde{\phi_{h}})=\tau\,(\boldsymbol{grad}\,\phi_{h}, \boldsymbol{grad}\,\widetilde{\phi_{h}})_{\boldsymbol{L}^{2}(\Omega)},\quad \phi_{h},\widetilde{\phi_{h}}\in W_{2},\]
respectively. The corresponding transfer operators are \(\pi_{1}=\Pi_{h}^{\boldsymbol{curl}}\big{|}_{W_{1}}\) and \(\pi_{2}=\boldsymbol{grad}\big{|}_{W_{2}}\).
Before verifying the assumptions of Theorem 2.3, we switch to matrix notation. We write \(\boldsymbol{H}\) for the matrix related to the restriction of the \(\boldsymbol{X}(\Omega)\) inner product to \(\boldsymbol{X}_{h}(\Omega)\), that is, the matrix representation of the bilinear form
\[(\boldsymbol{\varphi}_{h},\widetilde{\boldsymbol{\varphi}_{h}})\in \boldsymbol{X}_{h}(\Omega)\times\boldsymbol{X}_{h}(\Omega)\mapsto( \boldsymbol{\varphi}_{h},\widetilde{\boldsymbol{\varphi}_{h}})_{\boldsymbol {X}(\Omega)}.\]
Similarly, we write \(\boldsymbol{M}\) for the matrix related to the restriction of the \(\boldsymbol{L}^{2}(\Omega)\) inner product to \(\boldsymbol{X}_{h}(\Omega)\). Let \(\boldsymbol{L}\) be the matrix related to the mapping
\[(\boldsymbol{\phi}_{h},\widetilde{\boldsymbol{\phi}_{h}})\in V_{h}( \boldsymbol{grad},\Omega)\times V_{h}(\boldsymbol{grad},\Omega)\longmapsto \left(\boldsymbol{grad}\,\boldsymbol{\phi}_{h},\boldsymbol{grad}\, \widetilde{\boldsymbol{\phi}_{h}}\right)_{\boldsymbol{L}^{2}(\Omega)}.\]
We also write \(\boldsymbol{P}_{\boldsymbol{curl}}\) and \(\boldsymbol{G}\) for the matrices related to the transfer operators \(\pi_{1}\) and \(\pi_{2}\), respectively, while \(\boldsymbol{S}_{\boldsymbol{curl}}\) stands for the matrix related to the smoother. In the case of Jacobi smoothing, \(\boldsymbol{S}_{\boldsymbol{curl}}\) is the matrix representation of the smoothing operator
\[\boldsymbol{w}_{h}\in V\mapsto s(\boldsymbol{w}_{h},\boldsymbol{w}_{h})=\sum_{ \begin{subarray}{c}r\\ \boldsymbol{v}_{r}\in\mathcal{B}(\boldsymbol{curl})\end{subarray}}c_{r}^{2}\,a( \boldsymbol{v}_{r},\boldsymbol{v}_{r}),\quad\boldsymbol{w}_{h}=\sum_{ \begin{subarray}{c}r\\ \boldsymbol{v}_{r}\in\mathcal{B}(\boldsymbol{curl})\end{subarray}}c_{r}\, \boldsymbol{v}_{r}, \tag{30}\]
and it coincides with the diagonal \(\boldsymbol{D}_{\boldsymbol{A}_{\boldsymbol{curl}}}\) of \(\boldsymbol{A}_{\boldsymbol{curl}}\) (where \(\boldsymbol{A}_{\boldsymbol{curl}}\) stands for the matrix representation of the bilinear form \(a\)).
With these notations, a simple computation shows that the ASP preconditioner for problem (1) reads
\[\boldsymbol{B}_{\boldsymbol{curl}}=\boldsymbol{S}_{\boldsymbol{curl}}^{-1}+ \boldsymbol{P}_{\boldsymbol{curl}}\left(\boldsymbol{H}+\tau\boldsymbol{M} \right)^{-1}\boldsymbol{P}_{\boldsymbol{curl}}^{T}+\tau^{-1}\boldsymbol{GL}^{ -1}\boldsymbol{G}^{T}. \tag{31}\]
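At the linear-algebra level, (31) can be realized matrix-free as the action of a `LinearOperator`. The Python sketch below is illustrative only: the matrices \(\boldsymbol{A}_{\boldsymbol{curl}}\), \(\boldsymbol{H}\), \(\boldsymbol{M}\), \(\boldsymbol{L}\), \(\boldsymbol{P}_{\boldsymbol{curl}}\) and \(\boldsymbol{G}\) are assumed to have been assembled by the IgA code as SciPy sparse matrices, and sparse direct factorizations stand in for the auxiliary solves, which in practice would be replaced by a cheaper approximate solver.

```python
import scipy.sparse.linalg as spla

def build_B_curl(A_curl, H, M, L, P_curl, G, tau):
    """B_curl = S^{-1} + P_curl (H + tau M)^{-1} P_curl^T + tau^{-1} G L^{-1} G^T."""
    d = A_curl.diagonal()                             # Jacobi smoother S = diag(A_curl)
    solve_HM = spla.factorized((H + tau * M).tocsc()) # auxiliary vector-Laplacian solve
    solve_L = spla.factorized(L.tocsc())              # auxiliary scalar-Laplacian solve

    def matvec(r):
        z = r / d
        z = z + P_curl @ solve_HM(P_curl.T @ r)
        z = z + (1.0 / tau) * (G @ solve_L(G.T @ r))
        return z

    n = A_curl.shape[0]
    return spla.LinearOperator((n, n), matvec=matvec)
```

The resulting operator can be passed as the preconditioner argument of `scipy.sparse.linalg.cg`.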
_Remark 3.9_.: In two dimensions, we have two distinct curl operators: the scalar curl operator, defined as \(curl\,\mathbf{u}=\dfrac{\partial u_{1}}{\partial x_{2}}-\dfrac{\partial u_{2}}{ \partial x_{1}}\), and the vector \(curl\) operator, defined as \(\mathbf{curl}\,u=\left(\dfrac{\partial u}{\partial x_{2}},-\dfrac{\partial u}{ \partial x_{1}}\right)\). The present analysis refers to the scalar \(curl\) operator. However, the vector \(curl\) operator is used in the preconditioning of the two-dimensional \(\mathbf{H}(div,\Omega)\) problem addressed in the following subsection.
The following fundamental result demonstrates the mesh-independence of the preconditioner \(\mathbf{B_{curl}}\), at least when a Jacobi smoothing scheme is used. This result is a direct consequence of Theorem 3.7 and Theorem 2.3.
**Theorem 3.10**.: _Let \(\tau>0\) and suppose that the smoothing operator \(s\) is given by (30). Then, the spectral condition number \(\kappa\left(\mathbf{B_{curl}}\mathbf{A_{curl}}\right)\) is bounded uniformly with respect to \(h\) and \(\tau\)._
Proof.: We will verify the assumptions of Theorem 2.3. To do so, we use Theorem 2.6 and the following estimate (found, for instance, in [14]):
\[\|\Pi_{h}^{\mathbf{curl}}\mathbf{w}_{h}\|_{\mathbf{H(curl,\Omega)}}\lesssim\|\mathbf{w}_{h}\|_ {\mathbf{H(curl,\Omega)}},\quad w_{h}\in W_{1}.\]
From this estimate, we can see that the first inequality in (i) holds with a constant \(\beta_{1}\) that is independent of \(h\). The second inequality in (i) holds with a constant \(\beta_{2}=1\), which is a consequence of the relation \(\mathbf{curl}\circ\mathbf{grad}=0\).
To prove the inequality in (ii), we express any \(\mathbf{w}_{h}\in\mathbf{V}_{h}(\mathbf{curl},\Omega)\) as
\[\mathbf{w}_{h}=\sum_{\mathbf{v}_{\mathbf{r}}\in\mathcal{B}(\mathbf{curl})}c_{\mathbf{r}}\,\mathbf{v}_{ \mathbf{r}}.\]
We have
\[a(\mathbf{w}_{h},\mathbf{w}_{h}) = \|\mathbf{curl}\,\mathbf{w}_{h}\|_{\mathbf{L}^{2}(\Omega)}^{2}+\tau\|\mathbf{w}_{h}\|_{\mathbf{L}^{2}(\Omega)}^{2}\] \[\lesssim \left\|\sum_{\mathbf{v}_{\mathbf{r}}\in\mathcal{B}(\mathbf{curl})}c_{\mathbf{r}}\,\mathbf{curl}\,\mathbf{v}_{\mathbf{r}}\right\|_{\mathbf{L}^{2}(\Omega)}^{2}+\tau\left\|\sum_{\mathbf{v}_{\mathbf{r}}\in\mathcal{B}(\mathbf{curl})}c_{\mathbf{r}}\,\mathbf{v}_{\mathbf{r}}\right\|_{\mathbf{L}^{2}(\Omega)}^{2}\] \[= \sum_{Q\in\mathcal{Q}_{h}}\left\|\sum_{k=1}^{K}c_{k}\,\mathbf{curl}\,\mathbf{v}_{k}\right\|_{\mathbf{L}^{2}(Q)}^{2}+\tau\sum_{Q\in\mathcal{Q}_{h}}\left\|\sum_{k=1}^{K}c_{k}\,\mathbf{v}_{k}\right\|_{\mathbf{L}^{2}(Q)}^{2}\] \[\leq K\sum_{Q\in\mathcal{Q}_{h}}\sum_{k=1}^{K}c_{k}^{2}\|\mathbf{curl}\,\mathbf{v}_{k}\|_{\mathbf{L}^{2}(Q)}^{2}+\tau K\sum_{Q\in\mathcal{Q}_{h}}\sum_{k=1}^{K}c_{k}^{2}\|\mathbf{v}_{k}\|_{\mathbf{L}^{2}(Q)}^{2}\] \[= K\,s(\mathbf{w}_{h},\mathbf{w}_{h}),\]
where \(\mathcal{Q}_{h}\) denotes the parametric Bezier mesh, \(Q\) a generic Bezier element, and the constant \(K\) is the number of basis functions whose support intersects \(Q\). The constant \(K\) depends only on the degrees of the spline bases.
Finally, the last assumption (iii) follows from Theorem 3.7.
#### 3.2.2 ASP-preconditioner in the case \(\mathcal{D}=div\)
To simplify the discussion, we consider the two-dimensional and three-dimensional cases separately. In the two-dimensional setting, the de Rham diagram reduces to the following:
\[\begin{array}{ccccc}H^{1}(\Omega)&\xrightarrow{\mathbf{curl}}&\mathbf{H}(div,\Omega )&\xrightarrow{div}&L^{2}(\Omega)\\ \Pi_{h}^{\mathbf{grad}}\Bigg{\downarrow}&&\Pi_{h}^{div}\Bigg{\downarrow}&&\Pi_{h} ^{L^{2}}\Bigg{\downarrow}\\ V_{h}(\mathbf{grad},\Omega)&\xrightarrow{\mathbf{curl}}&\mathbf{V}_{h}(div,\Omega)& \xrightarrow{div}&V_{h}(L^{2},\Omega)\end{array}\]
Here, \(\boldsymbol{curl}\) refers to the vector \(\boldsymbol{curl}\) operator. Theorem 3.7, with \(\mathcal{D}=div\), provides us with the starting point. We therefore choose \(V=\boldsymbol{V}_{h}(div,\Omega)\) equipped with the bilinear form
\[(\boldsymbol{w}_{h},\widetilde{\boldsymbol{w}_{h}})\in V\times V\mapsto a( \boldsymbol{w}_{h},\widetilde{\boldsymbol{w}_{h}})=(div\,\boldsymbol{w}_{h}, div\,\widetilde{\boldsymbol{w}_{h}})_{L^{2}(\Omega)}+\tau(\boldsymbol{w}_{h}, \widetilde{\boldsymbol{w}_{h}})_{\boldsymbol{L}^{2}(\Omega)}. \tag{32}\]
We choose the auxiliary spaces \(W_{1}=\boldsymbol{X}_{h}(\Omega)\) and \(W_{2}=V_{h}(\boldsymbol{grad},\Omega)\). \(W_{1}\) and \(W_{2}\) are equipped with the following inner products:
\[a_{1}(\boldsymbol{\varphi}_{h},\widetilde{\boldsymbol{\varphi}_{h}})=( \boldsymbol{\varphi}_{h},\widetilde{\boldsymbol{\varphi}_{h}})_{\boldsymbol {X}(\Omega)}+\tau(\boldsymbol{\varphi}_{h},\widetilde{\boldsymbol{\varphi}_{h }})_{\boldsymbol{L}^{2}(\Omega)},\quad\boldsymbol{\varphi}_{h},\widetilde{ \boldsymbol{\varphi}_{h}}\in W_{1},\]
and
\[a_{2}(\phi_{h},\widetilde{\phi_{h}})=\tau\left(\boldsymbol{curl}\,\phi_{h}, \boldsymbol{curl}\,\widetilde{\phi_{h}}\right)_{\boldsymbol{L}^{2}(\Omega)},\quad\phi_{h},\widetilde{\phi_{h}}\in W_{2},\]
respectively. We define the transfer operators as \(\pi_{1}=\Pi_{h}^{div}\big{|}_{W_{1}}\) and \(\pi_{2}=\boldsymbol{curl}\big{|}_{W_{2}}\). Let \(\boldsymbol{P}_{div}\) and \(\boldsymbol{R}\) be the matrices related to the transfer operators \(\pi_{1}\) and \(\pi_{2}\), respectively, and let \(\boldsymbol{S}_{div}\) be the matrix representation of the smoother. The ASP preconditioner for problem (2) in the two-dimensional setting can be expressed as
\[\boldsymbol{B}_{div}=\boldsymbol{S}_{div}^{-1}+\boldsymbol{P}_{div}\left( \boldsymbol{H}+\tau\boldsymbol{M}\right)^{-1}\boldsymbol{P}_{div}^{T}+\tau^{ -1}\boldsymbol{RL}^{-1}\boldsymbol{R}^{T}, \tag{33}\]
where \(\boldsymbol{L}\) and \(\boldsymbol{M}\) are the matrices defined in the case \(\mathcal{D}=\boldsymbol{curl}\).
In the three-dimensional case, Corollary 3.8 is used as the basis for constructing the preconditioner. As in the two-dimensional case, we take \(V=\boldsymbol{V}_{h}(div,\Omega)\) equipped with the bilinear form (32). We choose the auxiliary spaces and inner products as follows:
1. \(W_{1}=\boldsymbol{X}_{h}(\Omega)\) with inner product \[a_{1}(\boldsymbol{\varphi}_{h},\widetilde{\boldsymbol{\varphi}_{h}})=( \boldsymbol{\varphi}_{h},\widetilde{\boldsymbol{\varphi}_{h}})_{\boldsymbol{X }(\Omega)}+\tau(\boldsymbol{\varphi}_{h},\widetilde{\boldsymbol{\varphi}_{h}} )_{\boldsymbol{L}^{2}(\Omega)},\quad\boldsymbol{\varphi}_{h},\widetilde{ \boldsymbol{\varphi}_{h}}\in W_{1}.\]
2. \(W_{2}=\boldsymbol{V}_{h}(\boldsymbol{curl},\Omega)\) equipped with inner product \[a_{2}(\boldsymbol{z}_{h},\widetilde{\boldsymbol{z}_{h}})=\tau\sum_{ \begin{subarray}{c}\boldsymbol{v}_{r}\in\mathcal{B}(\boldsymbol{curl}) \end{subarray}}d_{r}\,\tilde{d}_{r}\,\|\boldsymbol{curl}\,\boldsymbol{v}_{r }\|_{\boldsymbol{L}^{2}(\Omega)}^{2},\quad\boldsymbol{z}_{h},\widetilde{ \boldsymbol{z}_{h}}\in W_{2},\] with \[\boldsymbol{z}_{h}=\sum_{\begin{subarray}{c}\boldsymbol{v}_{r}\in\mathcal{B} (\boldsymbol{curl})\end{subarray}}d_{r}\,\boldsymbol{v}_{r},\quad\widetilde{ \boldsymbol{z}_{h}}=\sum_{\begin{subarray}{c}\boldsymbol{r}\\ \boldsymbol{v}_{r}\in\mathcal{B}(\boldsymbol{curl})\end{subarray}}\tilde{d}_{r }\,\boldsymbol{v}_{r}.\]
3. \(W_{3}=\boldsymbol{X}_{h}(\Omega)\) with inner product \[a_{3}(\boldsymbol{\psi}_{h},\widetilde{\boldsymbol{\psi}_{h}})=\tau( \boldsymbol{\psi}_{h},\widetilde{\boldsymbol{\psi}_{h}})_{\boldsymbol{X}( \Omega)},\quad\boldsymbol{\psi}_{h},\widetilde{\boldsymbol{\psi}_{h}}\in W_ {3}.\]
The corresponding transfer operators are \(\pi_{1}=\Pi_{h}^{div}\big{|}_{W_{1}}\), \(\pi_{2}=\boldsymbol{curl}\big{|}_{W_{2}}\) and \(\pi_{3}=\boldsymbol{curl}\big{|}_{W_{3}}\).
In matrix notation, the bilinear form \(\tau^{-1}a_{2}\) is represented by the diagonal matrix \(\boldsymbol{D}_{\boldsymbol{curl}}\), which coincides with the diagonal of the matrix \(\boldsymbol{Q}_{\boldsymbol{curl}}=(\boldsymbol{Q}_{\boldsymbol{r},\boldsymbol{q}})\) defined by
\[\boldsymbol{Q}_{\boldsymbol{r},\boldsymbol{q}}=\int_{\Omega}\boldsymbol{curl} \,\boldsymbol{v}_{\boldsymbol{q}}\cdot\boldsymbol{curl}\,\boldsymbol{v}_{ \boldsymbol{r}},\quad\boldsymbol{v}_{\boldsymbol{r}},\boldsymbol{v}_{ \boldsymbol{q}}\in\mathcal{B}(\boldsymbol{curl}). \tag{34}\]
The matrix related to the projection \(\pi_{2}\) is denoted by \(\boldsymbol{C}\). A straightforward calculation yields
\[\begin{split}\boldsymbol{B}_{div}=\boldsymbol{S}_{div}^{-1}& +\boldsymbol{P}_{div}\left(\boldsymbol{H}+\tau\boldsymbol{M}\right)^{-1} \boldsymbol{P}_{div}^{T}+\tau^{-1}\boldsymbol{CD}_{\boldsymbol{curl}}^{-1} \boldsymbol{C}^{T}\\ &+\tau^{-1}\boldsymbol{CP}_{\boldsymbol{curl}}\boldsymbol{H}^{-1} \boldsymbol{P}_{\boldsymbol{curl}}^{T}\boldsymbol{C}^{T},\end{split} \tag{35}\]
where \(\boldsymbol{S}_{div}\) represents the smoother in matrix form.
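The action of (35) has the same structure as the sketch given after (31), with one additional curl-related correction. The following lines are again purely illustrative; all matrices are assumed given, and `solve_HM` and `solve_H` denote placeholder solvers for \(\boldsymbol{H}+\tau\boldsymbol{M}\) and \(\boldsymbol{H}\).

```python
def apply_B_div(r, A_div, D_curl, P_div, P_curl, C, tau, solve_HM, solve_H):
    """Action of B_div from (35) on a residual r (illustrative sketch)."""
    z = r / A_div.diagonal()                                      # Jacobi smoother
    z = z + P_div @ solve_HM(P_div.T @ r)                         # correction on X_h
    z = z + (1.0 / tau) * (C @ ((C.T @ r) / D_curl.diagonal()))   # diagonal curl correction
    z = z + (1.0 / tau) * (C @ (P_curl @ solve_H(P_curl.T @ (C.T @ r))))
    return z
```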
**Theorem 3.11**.: _Suppose that the smoothing operator \(s\) is given by the Jacobi relaxation scheme and let \(\tau>0\). Then, the spectral condition number \(\kappa\left(\mathbf{B}_{div}\mathbf{A}_{div}\right)\) is bounded uniformly with respect to \(h\) and \(\tau\)._
Proof.: The proof follows a similar approach to that of Theorem 3.10 and is therefore omitted.
## 4 Numerical Results
The computational domain is the unit square \(\Omega=\left(0,1\right)^{2}\) subdivided into \(n\times n\) sub-domains (\(n\in\mathbb{N}^{*}\)). First, we compute the condition number, and then we track the total number of iterations required for convergence of the Conjugate Gradient (CG) method for different values of \(n\), \(\tau\), and \(p\).
\begin{table}
\begin{tabular}{c l l l l l l l} \hline \multirow{2}{*}{\(\tau\)\(n\)} & \multicolumn{3}{c}{\(p=1\)} & \multicolumn{3}{c}{\(p=2\)} \\ \cline{2-9} & \(8\) & \(16\) & \(32\) & \(64\) & \(8\) & \(16\) & \(32\) & \(64\) \\ \hline \(10^{-4}\) & \(4.40e+08\) & \(1.71e+09\) & \(6.74e+09\) & \(2.68e+10\) & \(4.98e+09\) & \(1.77e+10\) & \(6.77e+10\) & \(2.68e+11\) \\ \(10^{-3}\) & \(4.40e+07\) & \(1.71e+08\) & \(6.74e+08\) & \(2.68e+09\) & \(4.98e+08\) & \(1.77e+09\) & \(6.77e+09\) & \(2.68e+10\) \\ \(10^{-2}\) & \(4.40e+06\) & \(1.71e+07\) & \(6.74e+07\) & \(2.68e+08\) & \(4.98e+07\) & \(1.77e+08\) & \(6.77e+08\) & \(2.68e+09\) \\ \(10^{-1}\) & \(4.40e+05\) & \(1.71e+06\) & \(6.74e+06\) & \(2.68e+07\) & \(4.98e+06\) & \(1.77e+07\) & \(6.77e+07\) & \(2.68e+08\) \\ \(1\) & \(4.41e+04\) & \(1.71e+05\) & \(6.74e+05\) & \(2.68e+06\) & \(4.99e+05\) & \(1.77e+06\) & \(6.77e+06\) & \(2.68e+07\) \\ \(10^{1}\) & \(4.48e+03\) & \(1.71e+04\) & \(6.75e+04\) & \(2.68e+05\) & \(5.04e+04\) & \(1.77e+05\) & \(6.77e+05\) & \(2.68e+06\) \\ \(10^{2}\) & \(5.31e+02\) & \(1.79e+03\) & \(6.82e+03\) & \(2.69e+04\) & \(5.65e+03\) & \(1.82e+04\) & \(6.82e+04\) & \(2.69e+05\) \\ \(10^{3}\) & \(2.95e+02\) & \(2.95e+02\) & \(7.56e+02\) & \(2.76e+03\) & \(2.73e+03\) & \(2.52e+03\) & \(7.31e+03\) & \(2.73e+04\) \\ \(10^{4}\) & \(3.04e+02\) & \(2.97e+02\) & \(2.90e+02\) & \(3.62e+02\) & \(2.90e+03\) & \(2.61e+03\) & \(2.44e+03\) & \(3.26e+03\) \\ \hline \end{tabular}
\end{table}
Table 3: The \(2\)-\(d\) unpreconditioned \(\mathbf{H}_{0}(\mathbf{curl},\Omega)\): condition number \(\kappa_{2}\left(\mathbf{A}_{\mathbf{curl}}\right)\).
We use \(\mathbf{u}^{\mathcal{D}}\) (\(\mathcal{D}=\mathbf{curl}\) or \(\mathcal{D}=div\)) to denote the solution of the linear system, _i.e._, \(\mathbf{A}_{\mathcal{D}}\mathbf{u}^{\mathcal{D}}=\mathbf{b}\), where \(\mathbf{b}\) represents the IgA discretization of the right-hand side function \(\mathbf{f}\). In all experiments, we use the stopping criterion of
\[\frac{\|\mathbf{A}_{\mathcal{D}}\mathbf{u}^{\mathcal{D}}-\mathbf{b}\|_{2}}{\|\mathbf{b}\|_{2}} \leq 10^{-6}, \tag{36}\]
and the initial guess is always chosen to be the zero vector.
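For concreteness, the following is a minimal Python/NumPy sketch (not the implementation used for the reported results) of a preconditioned CG loop with the relative-residual stopping criterion (36) and a zero initial guess; here `A`, `b`, and `apply_B` stand for the system matrix \(\mathbf{A}_{\mathcal{D}}\), the discretized right-hand side, and the action of a preconditioner such as \(\mathbf{B}_{\mathcal{D}}\) (identity if omitted).

```
import numpy as np

def pcg(A, b, apply_B=None, rtol=1e-6, max_iter=3000):
    """Preconditioned CG with the stopping rule ||A x - b|| / ||b|| <= rtol."""
    if apply_B is None:
        apply_B = lambda r: r
    x = np.zeros_like(b, dtype=float)   # zero initial guess, as in the experiments
    r = b - A @ x                       # initial residual
    z = apply_B(r)
    p = z.copy()
    rz = r @ z
    b_norm = np.linalg.norm(b)
    for it in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) / b_norm <= rtol:   # criterion (36)
            return x, it
        z = apply_B(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter
```

Calling `pcg(A, b)` corresponds to the unpreconditioned runs reported below, while passing the preconditioner action as `apply_B` corresponds to the preconditioned runs.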
In this section, we present sample simulations that test the strategy proposed in this paper, with a view toward further applications. The simulations are performed in two and three spatial dimensions,
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{\(\tau\)\(n\)} & \multicolumn{2}{c}{\(p=1\)} & \multicolumn{5}{c}{\(p=2\)} \\ \cline{2-9} & \(8\) & \(16\) & \(32\) & \(64\) & \(8\) & \(16\) & \(32\) & \(64\) \\ \hline \(10^{-4}\) & \(63\) & \(110\) & \(186\) & \(253\) & \(115\) & \(287\) & \(588\) & \(1120\) \\ \(10^{-3}\) & \(51\) & \(99\) & \(158\) & \(185\) & \(86\) & \(256\) & \(510\) & \(927\) \\ \(10^{-2}\) & \(46\) & \(82\) & \(117\) & \(160\) & \(72\) & \(197\) & \(426\) & \(743\) \\ \(10^{-1}\) & \(34\) & \(44\) & \(88\) & \(149\) & \(55\) & \(167\) & \(317\) & \(586\) \\ \(1\) & \(18\) & \(41\) & \(80\) & \(134\) & \(51\) & \(123\) & \(230\) & \(435\) \\ \(10^{1}\) & \(15\) & \(36\) & \(67\) & \(115\) & \(34\) & \(67\) & \(120\) & \(240\) \\ \(10^{2}\) & \(12\) & \(20\) & \(34\) & \(58\) & \(22\) & \(37\) & \(70\) & \(125\) \\ \(10^{3}\) & \(6\) & \(5\) & \(12\) & \(28\) & \(19\) & \(19\) & \(23\) & \(37\) \\ \(10^{4}\) & \(5\) & \(7\) & \(4\) & \(6\) & \(19\) & \(21\) & \(18\) & \(16\) \\ \hline \hline \multirow{2}{*}{\(\tau\)\(n\)} & \multicolumn{2}{c}{\(p=3\)} & \multicolumn{5}{c}{\(p=4\)} \\ \cline{2-9} & \(8\) & \(16\) & \(32\) & \(64\) & \(8\) & \(16\) & \(32\) & \(64\) \\ \hline \(10^{-4}\) & \(323\) & \(666\) & \(1050\) & \(1896\) & \(616\) & \(1771\) & \(2296\) & \(-\) \\ \(10^{-3}\) & \(272\) & \(604\) & \(861\) & \(1545\) & \(437\) & \(1394\) & \(1878\) & \(2931\) \\ \(10^{-2}\) & \(216\) & \(427\) & \(697\) & \(1133\) & \(329\) & \(1032\) & \(1518\) & \(2404\) \\ \(10^{-1}\) & \(175\) & \(346\) & \(558\) & \(963\) & \(229\) & \(794\) & \(1143\) & \(1849\) \\ \(1\) & \(128\) & \(275\) & \(390\) & \(685\) & \(150\) & \(506\) & \(785\) & \(1217\) \\ \(10^{1}\) & \(72\) & \(147\) & \(236\) & \(428\) & \(82\) & \(241\) & \(400\) & \(701\) \\ \(10^{2}\) & \(35\) & \(53\) & \(97\) & \(166\) & \(43\) & \(73\) & \(135\) & \(237\) \\ \(10^{3}\) & \(30\) & \(33\) & \(34\) & \(53\) & \(32\) & \(38\) & \(47\) & \(77\) \\ \(10^{4}\) & \(32\) & \(33\) & \(26\) & \(21\) & \(46\) & \(44\) & \(38\) & \(33\) \\ \hline \hline \multirow{2}{*}{\(\tau\)\(n\)} & \multicolumn{2}{c}{\(p=5\)} & \multicolumn{5}{c}{\(p=6\)} \\ \cline{2-9} & \(8\) & \(16\) & \(32\) & \(64\) & \(8\) & \(16\) & \(32\) & \(64\) \\ \hline \(10^{-4}\) & \(1567\) & \(-\) & \(-\) & \(-\) & \(2151\) & \(-\) & \(-\) & \(-\) \\ \(10^{-3}\) & \(1258\) & \(2816\) & \(-\) & \(-\) & \(1565\) & \(-\) & \(-\) & \(-\) \\ \(10^{-2}\) & \(927\) & \(2214\) & \(-\) & \(-\) & \(1029\) & \(-\) & \(-\) & \(-\) \\ \(10^{-1}\) & \(606\) & \(1417\) & \(-\) & \(-\) & \(621\) & \(2368\) & \(-\) & \(-\) \\ \(1\) & \(307\) & \(731\) & \(1587\) & \(2795\) & \(292\) & \(988\) & \(2377\) & \(-\) \\ \(10^{1}\) & \(125\) & \(286\) & \(613\) & \(1141\) & \(129\) & \(355\) & \(839\) & \(1631\) \\ \(10^{2}\) & \(55\) & \(98\) & \(173\) & \(362\) & \(59\) & \(138\) & \(229\) & \(495\) \\ \(10^{3}\) & \(45\) & \(52\) & \(66\) & \(115\) & \(54\) & \(73\) & \(84\) & \(165\) \\ \(10^{4}\) & \(61\) & \(56\) & \(48\) & \(50\) & \(64\) & \(69\) & \(64\) & \(75\) \\ \hline \end{tabular}
\end{table}
Table 4: The \(2\)-\(d\) unpreconditioned \(\mathbf{H}_{0}(\mathbf{curl},\Omega)\): CG iterations. Exact solution is given by (37). ‘\(-\)’ means that CG reaches the maximum number of iterations (set to \(3000\)) without convergence.
which are discussed in two separate subsections. The first subsection focuses on the two-dimensional case, while the second subsection is dedicated to the three-dimensional case.
### Two dimensional tests
In Subsection 4.1.1, we investigate the unpreconditioned system, which allows us to evaluate the importance of the ASP preconditioner by comparing the obtained results with those of Subsection 4.1.2. In that subsection, we develop numerical tests related to the auxiliary space preconditioning method using both Jacobi and Gauss-Seidel smoothing schemes. Later in Subsection 4.1.3, we examine the behavior of the preconditioner with respect to \(p\)-refinement. We demonstrate that the resulting algorithm can be easily extended to a \(p\)-stable algorithm that exhibits excellent convergence behavior of the preconditioner with respect to the \(B\)-spline degree \(p\).
#### 4.1.1 Test 1: the \(2\)-\(d\) unpreconditioned system
We computed the condition number \(\kappa_{2}(\mathbf{A_{curl}})\) and the number of iterations required for convergence of the CG solver for various choices of \(p\), \(n\), and \(\tau\). The exact solutions were defined as small perturbations from the corresponding null spaces:
\[\mathbf{u_{curl}}(x_{1},x_{2})=\tau^{-1}\begin{pmatrix}x_{2}(x_{2}-1)(2x_{1}-1)\\ x_{1}(x_{1}-1)(2x_{2}-1)\end{pmatrix}+10^{-2}\mathbf{v_{curl}}(x_{1},x_{2}), \tag{37}\]
and
\[\mathbf{u}_{div}(x_{1},x_{2})=\tau^{-1}\begin{pmatrix}x_{1}(x_{1}-1)(2x_{2}-1)\\ x_{2}(x_{2}-1)(2x_{1}-1)\end{pmatrix}+10^{-2}\mathbf{v}_{div}(x_{1},x_{2}), \tag{38}\]
where \(\mathbf{v_{curl}}\) and \(\mathbf{v}_{div}\) are solutions of (1) and (2) respectively, with \(f=\begin{pmatrix}1\\ 1\end{pmatrix}\). Simple computation shows that
\[\mathbf{v_{curl}}(x_{1},x_{2})=C_{1}\begin{pmatrix}e^{-\sqrt{\tau}x_{2}+\sqrt{ \tau}/2}+e^{\sqrt{\tau}x_{2}-\sqrt{\tau}/2}\\ e^{-\sqrt{\tau}x_{1}+\sqrt{\tau}/2}+e^{\sqrt{\tau}x_{1}-\sqrt{\tau}/2}\end{pmatrix}+ \tau^{-1}\begin{pmatrix}1\\ 1\end{pmatrix}, \tag{39}\]
and
\[\mathbf{v}_{div}(x_{1},x_{2})=C_{2}\begin{pmatrix}\cos\left(\sqrt{\tau}x_{1}-\sqrt {\tau}/2\right)\\ \cos\left(\sqrt{\tau}x_{2}-\sqrt{\tau}/2\right)\end{pmatrix}+\tau^{-1} \begin{pmatrix}1\\ 1\end{pmatrix}, \tag{40}\]
where \(C_{1}\) and \(C_{2}\) are given by
\[C_{1}=\frac{-\tau^{-1}}{e^{-\sqrt{\tau}/2}+e^{\sqrt{\tau}/2}},\quad C_{2}= \frac{-\tau^{-1}}{\cos\left(\sqrt{\tau}/2\right)}.\]
Table 5: \(2\)-\(d\) unpreconditioned problem: CG iterations, residual and \(l^{2}\) approximation errors. Exact solutions are given by (39)–(40). Parameter values are set to \(n=32\), \(p=3\).
Note that the functions defined above, namely \(\mathbf{u_{curl}}\in\mathbf{H}_{0}(\mathbf{curl},\Omega)\) and \(\mathbf{u}_{div}\in\mathbf{H}_{0}(div,\Omega)\), are solutions to problems (1)-(2) with right-hand sides given by
\[\mathbf{f_{curl}}=10^{-2}\begin{pmatrix}1\\ 1\end{pmatrix}+\begin{pmatrix}x_{2}(x_{2}-1)(2x_{1}-1)\\ x_{1}(x_{1}-1)(2x_{2}-1)\end{pmatrix},\quad(x_{1},x_{2})\in(0,1)^{2},\]
and
\[\mathbf{f}_{div}=10^{-2}\begin{pmatrix}1\\ 1\end{pmatrix}+\begin{pmatrix}x_{1}(x_{1}-1)(2x_{2}-1)\\ x_{2}(x_{2}-1)(2x_{1}-1)\end{pmatrix},\quad(x_{1},x_{2})\in(0,1)^{2},\]
respectively. It is worth mentioning that both \(\mathbf{f_{curl}}\) and \(\mathbf{f}_{div}\) are independent of the parameter \(\tau\).
The results are summarized in Tables 3 and 4. As expected, we found that the spectral condition number is very large and increases with \(n\). Furthermore, it becomes extremely large as \(p\) increases and as \(\tau\) approaches \(0\). Similar observations apply to the number of CG iterations. The results for the \(\mathbf{H}_{0}(div,\Omega)\) problem are similar, and therefore, we do not report them here.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \multirow{2}{*}{\(\tau\)\(n\)} & \multicolumn{6}{c}{\(p=1\)} & \multicolumn{6}{c}{\(p=2\)} \\ \cline{2-9} & \(8\) & \(16\) & \(32\) & \(64\) & \(8\) & \(16\) & \(32\) & \(64\) \\ \hline \(10^{-4}\) & \(4.52e+00\) & \(5.94e+00\) & \(8.22e+00\) & \(1.04e+01\) & \(3.75e+00\) & \(4.77e+00\) & \(7.12e+00\) & \(9.61e+00\) \\ \(10^{-3}\) & \(4.52e+00\) & \(5.94e+00\) & \(8.21e+00\) & \(1.04e+01\) & \(3.75e+00\) & \(4.77e+00\) & \(7.12e+00\) & \(9.61e+00\) \\ \(10^{-2}\) & \(4.52e+00\) & \(5.94e+00\) & \(8.21e+00\) & \(1.04e+01\) & \(3.74e+00\) & \(4.77e+00\) & \(7.12e+00\) & \(9.61e+00\) \\ \(10^{-1}\) & \(4.52e+00\) & \(5.90e+00\) & \(8.17e+00\) & \(1.03e+01\) & \(3.73e+00\) & \(4.74e+00\) & \(7.08e+00\) & \(9.56e+00\) \\ \(1\) & \(4.49e+00\) & \(5.61e+00\) & \(7.78e+00\) & \(9.85e+00\) & \(3.63e+00\) & \(4.49e+00\) & \(6.72e+00\) & \(9.10e+00\) \\ \(10^{1}\) & \(4.23e+00\) & \(4.56e+00\) & \(6.02e+00\) & \(7.75e+00\) & \(3.21e+00\) & \(3.57e+00\) & \(5.02e+00\) & \(7.02e+00\) \\ \(10^{2}\) & \(3.80e+00\) & \(4.13e+00\) & \(4.50e+00\) & \(4.75e+00\) & \(3.07e+00\) & \(3.34e+00\) & \(3.39e+00\) & \(3.87e+00\) \\ \(10^{3}\) & \(2.97e+00\) & \(3.67e+00\) & \(4.75e+00\) & \(4.61e+00\) & \(4.29e+00\) & \(3.39e+00\) & \(3.57e+00\) & \(3.70e+00\) \\ \(10^{4}\) & \(3.12e+00\) & \(3.05e+00\) & \(3.19e+00\) & \(4.22e+00\) & \(6.45e+00\) & \(5.52e+00\) & \(3.98e+00\) & \(3.38e+00\) \\ \hline \end{tabular}
\end{table}
Table 7: \(2\)-\(d\) preconditioned \(\mathbf{H}_{0}(\mathbf{curl},\Omega)\): condition number \(\kappa_{2}\left(\mathbf{B}_{\mathbf{curl}}\mathbf{A}_{\mathbf{curl}}\right)\) in the case of Gauss-Seidel smoothing.
It is important to note that, for certain types of problems, reaching the stopping criterion and having a decrease in residual error does not necessarily mean that the solution has converged to the exact one. In fact, it could lead to a non-physical solution. To test this scenario, we modified the analytic solutions (37)-(38) by considering only the parts corresponding to right-hand sides equal to \(\begin{pmatrix}1\\ 1\end{pmatrix}\), i.e., \(\mathbf{v_{curl}}\) and \(\mathbf{v}_{div}\) defined in (39) and (40).
We evaluated the iteration counts, residual error, and relative \(l^{2}\)-error for different values of \(\tau\) using a fixed number of elements (\(n=32\)) and a B-spline degree of \(p=3\). The results are shown in Table 5. As we observed, even when the CG method reached the stopping criterion, the relative error was still very high for small values of \(\tau\). This indicates that the approximated solution did not converge to the exact one. Next, we will show that this misleading convergence can be remedied using the ASP strategy.
#### 4.1.2 Test 2: convergence study of the ASP in the \(2\)-\(d\) setting with Jacobi and Gauss-Seidel smoothing
The smoother is provided by Jacobi and symmetric Gauss-Seidel relaxation schemes. We recall that this allows for an explicit form of the matrix related to the smoother; more precisely, we have
\[\mathbf{S}_{\mathcal{D}}^{-1}=\mathbf{D}_{\mathbf{A}_{\mathcal{D}}}^{-1},\]
in the case of Jacobi smoothing, while when using Gauss-Seidel smoothing \(\mathbf{D}_{\mathbf{A}_{\mathcal{D}}}^{-1}\) is replaced by
\[\mathbf{S}_{\mathcal{D}}^{-1}=\mathbf{L}_{\mathbf{A}_{\mathcal{D}}}^{-1}-\mathbf{L}_{\mathbf{A}_{ \mathcal{D}}}^{-1}\mathbf{A}_{\mathcal{D}}\mathbf{U}_{\mathbf{A}_{\mathcal{D}}}^{-1}+\mathbf{ U}_{\mathbf{A}_{\mathcal{D}}}^{-1},\]
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \multirow{2}{*}{\(\tau\)\(n\)} & \multicolumn{6}{c}{\(p=1\)} & \multicolumn{6}{c}{\(p=2\)} \\ \cline{2-9} & \(8\) & \(16\) & \(32\) & \(64\) & \(8\) & \(16\) & \(32\) & \(64\) \\ \hline \(10^{-4}\) & \(4.44e+00\) & \(5.18e+00\) & \(7.78e+00\) & \(1.09e+01\) & \(1.19e+01\) & \(1.15e+01\) & \(1.13e+01\) & \(1.12e+01\) \\ \(10^{-3}\) & \(4.44e+00\) & \(5.18e+00\) & \(7.78e+00\) & \(1.09e+01\) & \(1.19e+01\) & \(1.15e+01\) & \(1.13e+01\) & \(1.12e+01\) \\ \(10^{-2}\) & \(4.44e+00\) & \(5.15e+00\) & \(7.73e+00\) & \(1.08e+01\) & \(1.19e+01\) & \(1.15e+01\) & \(1.13e+01\) & \(1.12e+01\) \\ \(10^{-1}\) & \(4.43e+00\) & \(4.87e+00\) & \(7.32e+00\) & \(1.03e+01\) & \(1.19e+01\) & \(1.15e+01\) & \(1.13e+01\) & \(1.12e+01\) \\ \(10^{1}\) & \(4.36e+00\) & \(4.37e+00\) & \(5.26e+00\) & \(7.85e+00\) & \(1.18e+01\) & \(1.15e+01\) & \(1.13e+01\) & \(1.12e+01\) \\ \(10^{2}\) & \(4.83e+00\) & \(4.35e+00\) & \(4.33e+00\) & \(4.37e+00\) & \(1.33e+01\) & \(1.19e+01\) & \(1.13e+01\) & \(1.12e+01\) \\ \(10^{3}\) & \(7.82e+00\) & \(6.16e+00\) & \(4.63e+00\) & \(4.32e+00\) & \(2.03e+01\) & \(1.81e+01\) & \(1.13e+01\) & \(1.15e+01\) \\ \(10^{4}\) & \(1.64e+01\) & \(1.20e+01\) & \(8.84e+00\) & \(5.78e+00\) & \(5.02e+01\) & \(3.20e+01\) & \(3.12e+01\) & \(2.12e+01\) \\ \hline \end{tabular}
\end{table}
Table 10: \(2\)-\(d\) preconditioned \(\mathbf{H}_{0}(div,\Omega)\): condition number \(\kappa_{2}\left(\mathbf{B}_{div}\mathbf{A}_{div}\right)\) in the case of Gauss-Seidel smoothing.
where \(\mathbf{D}_{\mathbf{A}_{\mathcal{D}}}\), \(\mathbf{L}_{\mathbf{A}_{\mathcal{D}}}\) and \(\mathbf{U}_{\mathbf{A}_{\mathcal{D}}}\) stand for the diagonal, the lower and the upper parts of the matrix \(\mathbf{A}_{\mathcal{D}}\), respectively.
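As an illustration, the two smoother actions can be sketched as follows (a minimal NumPy/SciPy version assuming dense matrices and that \(\mathbf{L}_{\mathbf{A}_{\mathcal{D}}}\) and \(\mathbf{U}_{\mathbf{A}_{\mathcal{D}}}\) include the diagonal; the function names are illustrative).

```
import numpy as np
from scipy.linalg import solve_triangular

def jacobi_smoother(A):
    """Action r -> D^{-1} r, where D is the diagonal of A (Jacobi relaxation)."""
    d = np.diag(A).copy()
    return lambda r: r / d

def symmetric_gauss_seidel_smoother(A):
    """Action r -> S^{-1} r with S^{-1} = L^{-1} - L^{-1} A U^{-1} + U^{-1},
    where L and U are the lower/upper triangular parts of A, diagonal included."""
    L = np.tril(A)
    U = np.triu(A)
    def apply(r):
        y_l = solve_triangular(L, r, lower=True)     # L^{-1} r
        y_u = solve_triangular(U, r, lower=False)    # U^{-1} r
        return y_l - solve_triangular(L, A @ y_u, lower=True) + y_u
    return apply
```

Either returned callable can be passed as the preconditioner-building block in the CG sketch above.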
Following the approach of the previous subsection, we compute the spectral condition number \(\kappa_{2}\left(\mathbf{B}_{\mathcal{D}}\mathbf{A}_{\mathcal{D}}\right)\) and the number of conjugate gradient iterations required for the preconditioned system to converge, while varying the \(B\)-spline degree \(p\), the number of elements \(n\), and the parameter \(\tau\). Tables 6-8 show the results for the _curl_ problem, while Tables 9-11 show the results for the \(div\) problem.
Table 13: 2-\(d\) ASP preconditioning with Gauss-Seidel smoothing: CG iterations, residual and \(l^{2}\) approximation errors. Exact solutions are given by (39)–(40). Parameter values are set to \(n=32\), \(p=3\).
Table 14: The 2-\(d\) problem: CG iterations in the case of the unpreconditioned problem (NP), ASP preconditioning with Jacobi smoothing (J), ASP preconditioning with Gauss-Seidel smoothing (GS), and the optimal ASP algorithm with GLT smoothing (ASP-GLT). The number of iterations required for ASP-GLT is indicated in parentheses. The exact solutions are defined by (37)-(38). Parameter values are set to \(\tau=10^{-4}\), \(n=64\), \(\nu_{1}=1\), \(\nu_{2}=p^{2}\), and \(\nu_{asp}=3\). (\(-\)) indicates that CG reached the maximum number of iterations (set to 3000) without achieving convergence.
Table 12: 2-\(d\) ASP preconditioning with Jacobi smoothing: CG iterations, residual and \(l^{2}\) approximation errors. Exact solutions are given by (39)–(40). Parameter values are set to \(n=32\), \(p=3\).
To test the algorithm's convergence to exact solutions, we consider analytic solutions (39)-(40), as we did for the unpreconditioned problem. We track the number of iterations, residual error, and \(l^{2}\) relative error after the CG method converges for different choices of \(\tau\) with fixed values of \(n=32\) and \(p=3\). We present the results in Tables 12-13.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \multirow{2}{*}{\(\tau\)\(n\)} & \multicolumn{3}{c}{\(p=1\)} & \multicolumn{3}{c}{\(p=2\)} \\ \cline{2-10} & \(8\) & \(16\) & \(32\) & \(64\) & \(8\) & \(16\) & \(32\) & \(64\) \\ \hline \(10^{-4}\) & \(151\) & \(328\) & \(511\) & \(879\) & \(520\) & \(975\) & \(1313\) & \(1962\) \\ \(10^{-3}\) & \(127\) & \(271\) & \(439\) & \(749\) & \(452\) & \(856\) & \(1092\) & \(1550\) \\ \(10^{-2}\) & \(114\) & \(192\) & \(350\) & \(610\) & \(366\) & \(679\) & \(892\) & \(1324\) \\ \(10^{-1}\) & \(81\) & \(174\) & \(276\) & \(486\) & \(279\) & \(533\) & \(703\) & \(1029\) \\ \(1\) & \(59\) & \(109\) & \(188\) & \(295\) & \(188\) & \(367\) & \(518\) & \(717\) \\ \(10^{1}\) & \(37\) & \(70\) & \(133\) & \(244\) & \(112\) & \(226\) & \(308\) & \(432\) \\ \(10^{2}\) & \(21\) & \(38\) & \(72\) & \(131\) & \(48\) & \(78\) & \(122\) & \(182\) \\ \(10^{3}\) & \(11\) & \(12\) & \(21\) & \(41\) & \(44\) & \(41\) & \(39\) & \(52\) \\ \(10^{4}\) & \(11\) & \(11\) & \(7\) & \(10\) & \(47\) & \(52\) & \(41\) & \(30\) \\ \hline \end{tabular} \begin{tabular}{c c c c c c c c} \hline \multirow{2}{*}{\(\tau\)\(n\)} & \multicolumn{3}{c}{\(p=3\)} & \multicolumn{3}{c}{\(p=4\)} \\ \cline{2-10} & \(8\) & \(16\) & \(32\) & \(64\) & \(8\) & \(16\) & \(32\) & \(64\) \\ \hline \(10^{-4}\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ \(10^{-3}\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ \(10^{-2}\) & \(2301\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ \(10^{-1}\) & \(1579\) & \(2763\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ \(1\) & \(991\) & \(1570\) & \(1840\) & \(2277\) & \(1471\) & \(-\) & \(-\) & \(-\) \\ \(10^{1}\) & \(303\) & \(534\) & \(764\) & \(1149\) & \(510\) & \(867\) & \(1463\) & \(2091\) \\ \(10^{2}\) & \(113\) & \(150\) & \(219\) & \(362\) & \(154\) & \(277\) & \(432\) & \(548\) \\ \(10^{3}\) & \(88\) & \(60\) & \(64\) & \(102\) & \(116\) & \(110\) & \(122\) & \(172\) \\ \(10^{4}\) & \(129\) & \(109\) & \(61\) & \(39\) & \(195\) & \(206\) & \(128\) & \(74\) \\ \hline \end{tabular}
\begin{tabular}{c c c c c c c c} \hline \multirow{2}{*}{\(\tau\)\(n\)} & \multicolumn{3}{c}{\(p=5\)} & \multicolumn{3}{c}{\(p=6\)} \\ \cline{2-10} & \(8\) & \(16\) & \(32\) & \(64\) & \(8\) & \(16\) & \(32\) & \(64\) \\ \hline \(10^{-4}\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ \(10^{-3}\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ \(10^{-2}\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ \(10^{-1}\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ \(1\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ \(10^{1}\) & \(1187\) & \(1692\) & \(2200\) & \(-\) & \(1325\) & \(2252\) & \(2838\) & \(-\) \\ \(10^{2}\) & \(359\) & \(414\) & \(625\) & \(786\) & \(384\) & \(562\) & \(771\) & \(1013\) \\ \(10^{3}\) & \(300\) & \(206\) & \(184\) & \(228\) & \(327\) & \(363\) & \(267\) & \(348\) \\ \(10^{4}\) & \(542\) & \(431\) & \(260\) & \(139\) & \(625\) & \(764\) & \(403\) & \(201\) \\ \hline \end{tabular}
\end{table}
Table 15: \(3\)-\(d\) unpreconditioned \(\mathbf{H}_{0}(\mathbf{curl},\Omega)\): CG iterations. The right-hand side function is defined by (41). ’\(-\)’ indicates that CG reached the maximum number of iterations (set to \(3000\)) without convergence.
Comparing the above results to those of the previous subsection, several observations can be made.
* the spectral condition numbers and the number of conjugate gradient iterations required for convergence are relatively small and not heavily dependent on the mesh parameter \(h\).
* both the number of CG iterations and the spectral condition numbers appear to be independent of \(\tau\), indicating that the ASP methodology can handle small values of \(\tau\) effectively.
* the relative errors are now sufficiently small and are of the same order as the corresponding residual errors, demonstrating that the solution obtained converges to the corresponding exact solution.
* it can be observed that the overall performance obtained with Gauss-Seidel smoothing is slightly better than that obtained with the Jacobi smoothing scheme.
In conclusion, Tables 6-13 present results that compare favorably with those of Subsection 4.1.1, indicating that the preconditioning method outlined in Section 3 effectively remedies the ill-conditioning of the approximate problem. However, it is important to note that the numerical results deteriorate when \(p\) is large, especially in the case of the \(\mathbf{H}_{0}(div,\Omega)\) problem. Although this \(p\)-dependency is not addressed in the theoretical results presented in this paper, in the next subsection we propose a numerical investigation.
#### 4.1.3 Test 3: ASP and p-dependency
This section introduces a modified version of our Auxiliary Space Preconditioning method that addresses the dependency on the \(B\)-spline degree. Specifically, an additional smoother is applied to control the \(p\)-dependency of the preconditioner. The construction of the smoother is based on the theory of Generalized Locally Toeplitz (GLT) sequences and utilizes the spectral information of the involved matrices, as discussed in [49]. For this purpose, we decompose the ASP preconditioner as follows:
\[\mathbf{B}_{\mathcal{D}}=\mathbf{S}_{\mathcal{D}}^{-1}+\mathbf{K}_{\mathcal{D}}.\]
The suggested algorithm is as follows:
```
Input : \(\mathbf{A}_{\mathcal{D}}\): the matrix related to the IgA discretization of (3); \(\mathbf{b}\): a given vector; \(\mathbf{x}\): a starting point; \(\nu_{1}\): the number of Jacobi (J) or Gauss-Seidel (GS) iterations; \(\nu_{2}\): the number of GLT iterations; \(\nu_{ASP}\): the number of ASP iterations.
Output : \(\mathbf{x}\): the approximate solution of \(\mathbf{A}_{\mathcal{D}}\mathbf{x}=\mathbf{b}\).
\(k\gets 0\)
while \(k\leq\nu_{ASP}\) and not converged do
    \(\mathbf{x}\leftarrow\texttt{smoother}_{1}(\mathbf{A}_{\mathcal{D}},\mathbf{b},\mathbf{x},\nu_{1})\)    // Apply J or GS smoother.
    \(\mathbf{x}\leftarrow\texttt{smoother}_{2}(\mathbf{A}_{\mathcal{D}},\mathbf{b},\mathbf{x},\nu_{2})\)    // Apply GLT smoother.
    \(\mathbf{d}\leftarrow\mathbf{b}-\mathbf{A}_{\mathcal{D}}\mathbf{x}\)    // Compute the defect.
    \(\mathbf{x}_{c}\leftarrow\mathbf{K}_{\mathcal{D}}\mathbf{d}\)    // Compute the ASP correction.
    \(\mathbf{x}\leftarrow\mathbf{x}+\mathbf{x}_{c}\)    // Update the solution.
    \(k\gets k+1\)
end while
```
**Algorithm 1** ASP-GLT: Preconditioning for \(\mathbf{V}_{h}(\mathcal{D},\Omega)\).
In the simplest case, we can select the inverse of the mass matrix as the GLT smoother, and we use this approach in the numerical tests developed in this subsection.
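A minimal Python sketch of one ASP-GLT cycle is given below. The callbacks `apply_S1` (the Jacobi or Gauss-Seidel relaxation), the mass matrix `M` (a few MINRES iterations on it play the role of the GLT smoother, as in the experiments below), and `apply_K` (the ASP correction \(\mathbf{K}_{\mathcal{D}}\)) are assumed to be supplied by the discretization code; the default iteration counts are only illustrative.

```
import numpy as np
from scipy.sparse.linalg import minres

def asp_glt_cycle(A, b, x, apply_S1, M, apply_K, nu1=1, nu2=9, nu_asp=3):
    """One ASP-GLT preconditioning cycle in the spirit of Algorithm 1 (sketch).

    Defaults mirror nu_1 = 1, nu_2 = p^2 (here p = 3), nu_asp = 3 used in the text.
    """
    for _ in range(nu_asp):
        for _ in range(nu1):
            x = x + apply_S1(b - A @ x)        # smoother 1: J or GS relaxation
        r = b - A @ x
        z, _ = minres(M, r, maxiter=nu2)       # smoother 2: GLT step on the mass matrix
        x = x + z
        d = b - A @ x                          # defect
        x = x + apply_K(d)                     # ASP correction
    return x
```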
To investigate the impact of \(p\)-refinement on the convergence of Algorithm 1, we report in Table 14 the number of CG iterations as a function of the \(B\)-spline degree, with fixed parameter values \(n=64\), \(\tau=10^{-4}\), \(\nu_{1}=1\), and \(\nu_{asp}=3\). In the GLT smoother step, we employ the MINRES solver with \(\nu_{2}=p^{2}\) iterations; the numerical value of \(\nu_{2}\) is motivated by analytical results for the Poisson equation [41]. We consider four cases: unpreconditioned problem, ASP preconditioning with Jacobi smoothing, ASP preconditioning with Gauss-Seidel smoothing, and the optimal ASP algorithm (ASP-GLT). As expected, the number of iterations in the ASP-GLT case appears to be well-behaved with respect to \(p\), meaning that it remains bounded as \(p\) increases. In contrast, the other cases show a much stronger dependence on \(p\), leading to a higher number of iterations as \(p\) increases. In the case of the unpreconditioned problem, this increase is particularly dramatic.
### Three dimensional tests.
We now consider the three-dimensional case, where the computational domain is the unit cube \(\Omega=(0,1)^{3}\), subdivided into \(n\times n\times n\) sub-domains. To test the effectiveness of our algorithm, we use the optimal Algorithm 1. As in the two-dimensional case, we use the Conjugate Gradient (CG) method to solve the IgA discrete system related to (3). Both unpreconditioned and preconditioned systems are tested, with the stopping criterion given by (36) and the initial guess set to the zero vector. We employ the MINRES solver with \(\nu_{2}=p^{3}\) iterations in the GLT smoother step.
For clarity, we will consider the \(\mathbf{curl}\) and \(div\) problems separately.
#### 4.2.1 Test 4: the \(3\)-\(d\)\(\boldsymbol{H}_{0}(\boldsymbol{curl},\Omega)\) problem
In this test, we consider problem (1) subject to Dirichlet boundary conditions with a right-hand side function given by
\[\boldsymbol{f}(x_{1},x_{2},x_{3})=(x_{1},x_{2},x_{3}),\quad(x_{1},x_{2},x_{3}) \in(0,1)^{3}. \tag{41}\]
Table 15 shows the number of CG iterations for the unpreconditioned system, which serves as a baseline for assessing the performance of the ASP-GLT algorithm on \(3\)-\(d\) \(\boldsymbol{curl}-\boldsymbol{curl}\) problems. Table 16 provides the results for the preconditioned system with both Jacobi and Gauss-Seidel smoothing, for varying values of \(\tau\), \(n\), and \(p\).
By examining the results presented in these tables, one can evaluate the efficiency and effectiveness of the ASP-GLT algorithm for solving \(3\)-\(d\)\(\boldsymbol{curl}-\boldsymbol{curl}\) problems. Specifically, the preconditioned system leads to a significantly lower number of iterations compared to the unpreconditioned system. Moreover, the number of iterations is largely independent of the system parameters, such as \(\tau\), \(n\), and \(p\), which highlights the robustness of the algorithm. These results also allow for choosing the most suitable smoother, which, as in the \(2\)-\(d\) case, is the Gauss-Seidel smoother.
#### 4.2.2 Test 5: the \(3\)-\(d\)\(\boldsymbol{H}_{0}(div,\Omega)\) problem
We now consider the \(3\)-\(d\) \(\boldsymbol{H}_{0}(div,\Omega)\) problem. Following an approach similar to that of the previous test, we study problem (2) subject to Dirichlet boundary conditions with a right-hand side function given by
\[\boldsymbol{f}(x_{1},x_{2},x_{3})=(x_{2}x_{3},x_{1}x_{3},x_{1}x_{2}),\quad(x_ {1},x_{2},x_{3})\in(0,1)^{3}. \tag{42}\]
We focus on the CG iterations of the preconditioned system, as the CG iteration count in the case of the unpreconditioned system is similar to that of the \(\boldsymbol{H}_{0}(\boldsymbol{curl},\Omega)\) problem, and hence, we omit it. The results are shown in Table 17.
As can be observed, the ASP-GLT algorithm shows different behavior compared to the \(\boldsymbol{H}_{0}(\boldsymbol{curl},\Omega)\) problem. In fact, Table 17 shows that although the ASP algorithm significantly reduces the number of iterations required, this number increases with the degree \(p\). One solution to this issue is to replace the matrix \(\boldsymbol{D}_{\boldsymbol{curl}}^{-1}\) in (35) with the symmetric Gauss-Seidel matrix associated with the matrix defined in (34). Specifically, we can replace it with:
\[\boldsymbol{L}_{\boldsymbol{Q}_{curl}}^{-1}-\boldsymbol{L}_{\boldsymbol{Q}_{ curl}}^{-1}\boldsymbol{Q}_{\boldsymbol{curl}}\boldsymbol{U}_{\boldsymbol{Q}_{ curl}}^{-1}+\boldsymbol{U}_{\boldsymbol{Q}_{curl}}^{-1}. \tag{43}\]
As usual, \(\boldsymbol{L}_{\boldsymbol{Q}_{curl}}\) and \(\boldsymbol{U}_{\boldsymbol{Q}_{curl}}\) represent the lower and upper parts of the matrix \(\boldsymbol{Q}_{\boldsymbol{curl}}\), respectively.
We evaluated the effectiveness of our new strategy by conducting experiments, the results of which are summarized in Table 18. The table presents the number of CG iterations required for various combinations of parameters \(\tau\), \(n\), and \(p\), using both Jacobi and Gauss-Seidel smoothing schemes.
The results presented in Table 18 demonstrate that our proposed approach yields significant improvements compared to the results presented in Table 17. Specifically, by replacing the diagonal matrix \(\boldsymbol{D}_{\boldsymbol{curl}}^{-1}\) with (43), we were able to improve the performance of our solver and control the \(p\)-dependency.
It is worth noting that in all of the developed tests, we used a MINRES solver in the GLT smoothing step. We tried several solvers, including GMRES and BiCGSTAB, but the MINRES solver provided the best performance. However, there is another solver that yielded even more satisfactory results and can serve as an alternative to using the smoother matrix (43): the flexible GCROT solver (see [29, 38]). In the following subsection, we will demonstrate the performance of using the flexible GCROT solver in the GLT smoothing step for both the \(\boldsymbol{H}_{0}(\boldsymbol{curl},\Omega)\) and \(\boldsymbol{H}_{0}(div,\Omega)\) problems.
#### 4.2.3 Test 6: evaluation of the \(3\)-\(d\) ASP-GLT algorithm using a GCROT solver in the GLT smoothing step
In this test, we revisit the case studies of _Test 4_ and _Test 5_, considering only the Jacobi smoother. In the case of the \(div\) problem, the ASP preconditioner is the one given by (35). The numerical results are shown in Table 19.
The results indicate that using the flexible GCROT algorithm during the GLT smoothing step significantly improves the performance of the ASP-GLT algorithm, particularly in the case of the _curl_ problem. In fact, in this case, the algorithm behaves like a direct method, converging in just one iteration step.
## 5 Conclusions
In this work, we have proposed the use of the Auxiliary Space Preconditioning method (ASP) as a preconditioning strategy for linear \(\mathbf{H}(\mathbf{curl},\Omega)\) and \(\mathbf{H}(div,\Omega)\) elliptic problems in Isogeometric Analysis (IgA). Our contributions include a uniform, stable, and regular decomposition of the IgA discrete spaces, and a proof of the mesh-independent effectiveness of the preconditioners using abstract theory by R. Hiptmair and J. Xu [40]. Our numerical simulations, conducted in two and three spatial dimensions, have confirmed the practical usefulness of our approach. Specifically, we have shown that our preconditioner significantly reduces the spectral condition number and that the number of conjugate gradient iterations required for convergence is independent of the discretization parameter. Additionally, we have demonstrated that the resulting algorithm can be easily extended to a \(p\)-stable algorithm. Our results suggest that the proposed preconditioning method is a promising candidate for future applications in isogeometric analysis.
We conclude this paper by presenting some future research perspectives selected from several
possible ones. First, our results only apply to a parametric domain; extending our method to more general physical domains is currently under development and will be the subject of future work. Another research direction involves extending our results to the case of variable coefficients, which is a wide-open subject of great interest. Finally, our numerical tests indicate that the ASP-GLT algorithm is highly effective. A theoretical study of this algorithm would therefore be beneficial, particularly with respect to the choice of solver in the GLT smoothing step, which needs further investigation.
|
2303.07046
|
Deploying Offline Reinforcement Learning with Human Feedback
|
Reinforcement learning (RL) has shown promise for decision-making tasks in
real-world applications. One practical framework involves training
parameterized policy models from an offline dataset and subsequently deploying
them in an online environment. However, this approach can be risky since the
offline training may not be perfect, leading to poor performance of the RL
models that may take dangerous actions. To address this issue, we propose an
alternative framework that involves a human supervising the RL models and
providing additional feedback in the online deployment phase. We formalize this
online deployment problem and develop two approaches. The first approach uses
model selection and the upper confidence bound algorithm to adaptively select a
model to deploy from a candidate set of trained offline RL models. The second
approach involves fine-tuning the model in the online deployment phase when a
supervision signal arrives. We demonstrate the effectiveness of these
approaches for robot locomotion control and traffic light control tasks through
empirical validation.
|
Ziniu Li, Ke Xu, Liu Liu, Lanqing Li, Deheng Ye, Peilin Zhao
|
2023-03-13T12:13:16Z
|
http://arxiv.org/abs/2303.07046v1
|
# Deploying Offline Reinforcement Learning with Human Feedback
###### Abstract
Reinforcement learning (RL) has shown promise for decision-making tasks in real-world applications. One practical framework involves training parameterized policy models from an offline dataset and subsequently deploying them in an online environment. However, this approach can be risky since the offline training may not be perfect, leading to poor performance of the RL models that may take dangerous actions. To address this issue, we propose an alternative framework that involves a human supervising the RL models and providing additional feedback in the online deployment phase. We formalize this online deployment problem and develop two approaches. The first approach uses model selection and the upper confidence bound algorithm to adaptively select a model to deploy from a candidate set of trained offline RL models. The second approach involves fine-tuning the model in the online deployment phase when a supervision signal arrives. We demonstrate the effectiveness of these approaches for robot locomotion control and traffic light control tasks through empirical validation.
## 1 Introduction
Reinforcement learning (RL) offers a systematic approach to tackle sequential decision-making tasks [41]. RL methods utilize Markov Decision Processes (MDPs) to model the tasks [35], enabling agents to interact with an environment and enhance decision-making by maximizing long-term returns. Thanks to powerful neural networks, RL methods have achieved outstanding performance, surpassing even master-level expertise in various domains [31, 50, 40, 8, 37, 6, 53, 34].
A popular RL framework for real-world applications involves two key steps. The first step is training parameterized policy models using an offline dataset that has previously been collected by specific behavior policies. The second step is deploying these trained policy models in an online environment. In recent years, significant efforts have been dedicated to training offline RL models [13, 24, 25, 46, 23, 14]. The primary challenge in this approach is the lack of further data collection in the offline setting, which requires the agent to consider the epistemic uncertainty (i.e., subjective uncertainty due to limited samples) when optimizing policies. Consequently, various methods have been proposed and evaluated, which can be found in [29, 12] and their respective references.
Despite its benefits, offline training may not be perfect due to various factors such as dataset quality and hyperparameter choices. As a result, trained models may suffer from overfitting, leading to poorer generalization performance in new scenarios. This is particularly concerning when deploying RL methods in the online phase, as models may take
dangerous actions and unexpected results may occur. It is worth noting that the issue of overfitting is widely recognized in the machine learning community, and techniques such as cross-validation and early stopping have been proposed to evaluate model performance before deployment [39]. However, these methods often fail in RL due to the distributional shift problem [36]. In other words, the training and validation distributions may differ significantly in decision-making tasks, making it challenging to assess the offline models' performance without conducting online experiments.
Apart from the evaluation issue mentioned earlier, another concern that arises in industrial applications (such as power system control, autonomous driving, and traffic light control) is the importance of safety and ethics [15]. However, incorporating these factors into the training phase can be challenging as designing proper reward/penalty functions for them requires a significant amount of engineering effort [1]. Fortunately, in many applications, an expert policy (i.e., a human operator) can supervise the deployed RL model and provide feedback. For example, in the case of autonomous driving, people may have different preferences about the control system's decisions. Some may prioritize safety and comfort, while others may prioritize driving efficiency. In such scenarios, it is challenging to consider each person's preferences in the offline training phase, as feedback is only available in the online deployment phase. Therefore, offline RL methods may not perform well in this setting as they are not adaptive during online deployment. Although studies have emerged in the offline-to-online RL setting [49, 38, 28], these works have not considered online human feedback, which is crucial in practical applications.
After taking the above-mentioned considerations into account, it becomes essential to improve the performance of trained RL models during online deployment, particularly when human feedback is available. Please see Figure 1 for illustration. In this context, our objective is not only to maximize the environment return defined by each task but also to ensure that the decisions made by RL models are in line with what human experts expect. Thus, this paper proposes to maximize the concept of _online score_, which integrates both the environment return and human feedback (further elaborated in Section 3.2).
In this manuscript, we formalize the problem of online deployment with human feedback and propose two approaches to maximize the online score. The first approach is based on model selection, where we assume there are \(N\) pre-trained offline models, but their online scores are unknown in advance. To determine which model can achieve the highest online score with minimal trials, we propose using the upper confidence bound (UCB) algorithm [26, 2]. The UCB algorithm estimates the online score of each offline model optimistically and adaptively selects the one to deploy, taking into account the stochastic and uncertain nature of the environment.
For the same online deployment problem, our second approach is based on fine-tuning. In this scenario, we assume that we only have access to one specific offline RL model, but we can improve its performance by fine-tuning it in the online deployment phase using human feedback. Unlike the first approach, the expert provides direct suggestions on action selection, allowing us to improve the model's performance. To leverage the human feedback, we develop imitation-learning-based methods [20] that penalize the discrepancy between the model's and the human's decisions to improve the online score.
We conduct experiments on two tasks: robotics locomotion control [9] and traffic light control [45]. The goal of the first task is to train a robot to perform locomotion behaviors like humans, while in the second task, the objective is to control traffic lights to avoid congestion. We evaluate the performance of our proposed methods on these tasks. Specifically, we show that in the case of model selection, our approach can identify the best model from a candidate set with only about
Figure 1: The framework of online deployment with human feedback. In our framework, a human expert supervises the RL models and provides additional feedback to improve their performance.
100 trials. In the case of fine-tuning, we demonstrate that the online performance can be improved significantly with no more than 200 trials.
This paper is structured as follows. In Section 2, we review prior research in the field. Section 3 provides the necessary background and problem formulation. We then present our proposed methods for model selection and fine-tuning in Sections 4 and 5, respectively. Finally, we present the numerical results in Section 6.
## 2 Related Work
In this section, we provide an overview of previous research related to the topic of this paper.
**Offline Reinforcement Learning.** Offline reinforcement learning algorithms aim to train an effective policy using a dataset that has been previously collected. Over the past few decades, offline RL (also known as batch RL) has been extensively studied in terms of algorithm design [11, 13, 24, 25, 46, 23] and theoretical analysis [42, 33, 4, 47]. These works demonstrate that if the dataset has wide coverage and a small concentration coefficient in relation to the optimal policy, it is possible to accurately solve the Bellman equation using finite samples. In practical applications, the relevant theory studies suggest to consider the epistemic uncertainty in policy optimization. For example, BCQ [13] limits the action range to improve the policy, BRAC [46] employs KL regularization during policy optimization, and CQL [25] penalizes Q-values for out-of-distribution actions.
It should be noted that prior studies on offline reinforcement learning algorithms have a significant limitation. These algorithms often employ online evaluation to optimize architectures or find suitable hyper-parameters, which is impractical due to the high cost of online evaluation. In real-world applications, hyperparameter-free heuristics such as selecting the action with the highest Q-value [16] are commonly used, despite the lack of theoretical guarantees. In our experiments, we will compare the performance of our proposed framework with such heuristics.
**Online Deployment.** Several recent studies have focused on the online deployment problem. For example, [49] proposed a meta-episodic algorithm that addresses the exploration uncertainty issue and ensures uniformly conservative exploration. [38] employed model-based policy and value improvement operators to compute new training targets on existing data points. Additionally, [28] proposed a balanced experience replay scheme to address the online distribution shift issue. However, none of these works considered human feedback during the online deployment phase. In our framework, we consider two types of deployment plans: model selection and fine-tuning, and we review related works in the sequel.
**Model Selection.** Model selection is well studied in the supervised learning literature [39, 32]. Since we only have access to finite samples in practice, the generalization gap must be carefully considered when training an offline model. In supervised learning, training and testing data are independently and identically drawn from the same distribution. Various techniques such as early stopping and \(n\)-fold cross-validation have been developed to address overfitting in the training phase [39]. Nevertheless, reinforcement learning poses a unique challenge to model selection as training and testing data are not from the same distribution due to distributional shift [29].
In contrast, model selection in online learning is well-studied in the literature, particularly in the context of multi-arm bandit (MAB) problems [27]. In MAB, the learner seeks to identify the optimal arm based on partial and bandit feedback. In our scenario, we face a similar problem with partial and bandit feedback, making it suitable for MAB solutions. The upper confidence bound (UCB) algorithm is a popular approach for MAB, which provides nearly minimax optimal performance for regret minimization [26, 2]. Therefore, we propose implementing the UCB algorithm for effective model selection in reinforcement learning.
**Fine-Tuning.** Fine-tuning is a widely used technique in deep learning, especially in the field of transfer learning [17]. It involves taking a pre-trained model and further refining it for downstream tasks [51, 22, 5]. In the realm of reinforcement learning (RL), researchers have explored the use of imitation learning approaches [20] to initialize a model using human demonstrations, and then improve it further with online RL methods. One example of this is the DQfD algorithm proposed by [19], which combines temporal difference updates with supervised classification of the demonstrator's actions. Another algorithm, LOKI, introduced by [7], starts with a few iterations of imitation learning before switching to a policy gradient RL method. A Bayesian formulation using prior information to fine-tune is considered in [30].
## 3 Problem Formulation
In this section, we first introduce the background of reinforcement learning (RL) in Section 3.1. Subsequently, we formalize the problem of online deployment with human feedback in Section 3.2.
### Background
**Markov Decision Processes.** A standard tool to study reinforcement learning is the Markov Decision Process (MDP) [35], which can be described by a tuple \((\mathcal{S},\mathcal{A},p,r,\rho,\gamma)\). Here \(\mathcal{S}\) and \(\mathcal{A}\) are the state and action spaces, respectively. Moreover, \(p\) is the system transition function; i.e., \(p(s^{\prime}|s,a)\) determines the probability of the next state \(s^{\prime}\) conditioned on the current state-action pair \((s,a)\). \(r:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) specifies the reward signal and \(\rho(\cdot)\) is the initial state distribution. Finally, \(\gamma\in(0,1)\) is the discount factor in computing the long-term return.
For a deterministic1 policy \(\pi:\mathcal{S}\rightarrow\mathcal{A}\), its expected long-term return is denoted by
Footnote 1: Note that there always exists a deterministic policy that can achieve the optimal return [35], so it does not lose generality to consider the deterministic policies.
\[V(\pi):=\mathbb{E}\bigg{[}\sum_{t=0}^{\infty}\gamma^{t}r(s_{t}, a_{t})\mid s_{0}\sim\rho,a_{t}=\pi(s_{t}),s_{t+1}\sim p(\cdot|s_{t},a_{t}), \forall t\geq 0\bigg{]}.\]
To further measure the quality of policy \(\pi\), the \(Q\)-value function is introduced:
\[Q^{\pi}(s,a):= \mathbb{E}\bigg{[}\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t}) \mid s_{0}=s,a_{0}=a;\,a_{t}=\pi(s_{t}),\forall t\geq 1;\,s_{t+1}\sim p(\cdot|s_{t},a_{t}),\forall t \geq 0\bigg{]},\]
i.e., the expected long-term return starting from \((s,a)\). It is well-known that the optimal \(Q\)-value function \(Q^{\star}\) satisfies the Bellman optimality equation:
\[Q^{\star}(s,a)=r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim p(\cdot| s,a)}\left[\max_{a^{\prime}}Q^{\star}(s^{\prime},a^{\prime})\right]. \tag{1}\]
The optimal policy \(\pi^{\star}\) is defined as the greedy policy with respect to \(Q^{\star}\), i.e., \(\pi^{\star}(s)=\operatorname*{argmax}_{a\in\mathcal{A}}Q^{\star}(s,a)\). In this manuscript, we mainly consider model-free approaches, so when we mention an RL model, we mean a \(Q\)-value function or the greedy policy associated with this \(Q\)-value function.
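To make equation (1) concrete, the following toy example (a made-up tabular MDP, not taken from this paper) computes \(Q^{\star}\) by iterating the Bellman optimality operator and then extracts the greedy policy.

```
import numpy as np

# A toy MDP with 3 states and 2 actions, used only to illustrate equation (1).
S, A, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = p(. | s, a)
R = rng.uniform(size=(S, A))                 # r(s, a)

Q = np.zeros((S, A))
for _ in range(500):                         # fixed-point iteration on (1)
    Q = R + gamma * np.einsum("sat,t->sa", P, Q.max(axis=1))

pi_star = Q.argmax(axis=1)                   # greedy policy with respect to Q*
```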
**Offline Reinforcement Learning.** In the framework of RL, the transition function is assumed to be unknown, so Equation (1) cannot be directly solved. Instead, RL methods typically have access to samples obtained from environments in an online or offline manner. In this manuscript, we consider the offline scenario, where a dataset \(\mathcal{D}=\{(s,a,r,s^{\prime})\}\) is provided to train RL models [11]. In a simple form where the action space is finite, offline RL methods aim to minimize the Bellman error from finite samples:
\[\theta_{\mathrm{critic}}=\operatorname*{argmin}_{\theta}\sum_{(s,a,r,s^{ \prime})\in\mathcal{D}}\bigg{(}Q_{\theta}(s,a)-r(s,a)-\gamma\max_{a^{\prime} }Q_{\theta}(s^{\prime},a^{\prime})\bigg{)}^{2}, \tag{2}\]
where \(\theta_{\mathrm{critic}}\) is the parameter of \(Q\)-value function \(Q_{\theta_{\mathrm{critic}}}\) (in the context of actor-critic methods [21], the \(Q\)-value function is also called a critic). Then, the policy can be extracted by greedily optimizing \(Q_{\theta_{\mathrm{critic}}}\), i.e., \(\pi(s)=\operatorname*{argmax}_{a\in\mathcal{A}}Q_{\theta_{\mathrm{critic}}}(s,a)\).
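A minimal PyTorch sketch of one gradient step on (2) for a finite action space is given below; the network `q_net` and the mini-batch sampling from \(\mathcal{D}\) are assumed to be defined elsewhere, and the Bellman target is detached (a semi-gradient simplification).

```
import torch
import torch.nn.functional as F

def offline_q_step(q_net, batch, optimizer, gamma=0.99):
    """One gradient step on the empirical Bellman error (2), finite action space.

    q_net maps a batch of states to a (batch, |A|) tensor of Q-values;
    batch = (s, a, r, s_next) tensors sampled from the offline dataset D.
    The pessimism/regularization terms of advanced offline RL methods are omitted.
    """
    s, a, r, s_next = batch
    q_sa = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)   # Q_theta(s, a)
    with torch.no_grad():
        target = r + gamma * q_net(s_next).max(dim=1).values      # Bellman target
    loss = F.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def greedy_policy(q_net, s):
    """Greedy policy extraction: pi(s) = argmax_a Q_theta(s, a)."""
    return q_net(s).argmax(dim=1)
```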
For applications with continuous action control, the above training method cannot be applied as the maximization over \(\mathcal{A}\) cannot be directly implemented. In this case, we need an additional actor network \(\pi_{\theta_{\mathrm{actor}}}\) to extract the greedy policy from \(Q_{\theta_{\mathrm{critic}}}\):
\[\theta_{\mathrm{actor}}=\operatorname*{argmax}_{\theta}\sum_{s \in\mathcal{D}}Q_{\theta_{\mathrm{critic}}}(s,\pi_{\theta}(s)).\]
where \(\theta_{\mathrm{actor}}\) is the parameter of the policy \(\pi_{\theta_{\mathrm{actor}}}\) (in the context of actor-critic methods, the policy is also called an actor). Accordingly, the optimization problem (2) becomes
\[\theta_{\mathrm{critic}}=\operatorname*{argmin}_{\theta}\sum_{(s,a,r,s^{ \prime})\in\mathcal{D}}\bigg{(}Q_{\theta}(s,a)-r(s,a)-\gamma Q_{\theta}(s^{ \prime},\pi_{\theta_{\mathrm{actor}}}(s^{\prime}))\bigg{)}^{2}.\]
Advanced offline RL methods additionally consider the epistemic uncertainty in the above optimization; please refer to [29] and references therein.
### Deployment of Offline RL Models
In this section, we explore the challenges involved in deploying trained RL models, which have significant implications for real-world applications. Although a well-trained offline RL model is expected to perform well in online deployment, this is often not the case. One of the primary reasons for this is the difficulty in accurately assessing the offline RL model's performance in the offline scenario. In supervised learning, we can use a validation dataset to choose a model that can generalize well. However, estimating the actual performance of an offline RL model using a validation dataset is challenging due to the distributional shift problem.
Another critical issue in deploying an offline RL model is safety [15]. To address this concern, a human operator often supervises the trained RL model in the online phase to ensure that it does not take dangerous actions. Typically, this human operator has an expert policy \(\pi^{\mathrm{E}}\). If the RL agent deviates significantly from the expert policy, it incurs a penalty. Therefore, it is desirable for the trained RL model to balance the environment's return and the expert policy's expectation.
In this manuscript, we propose the expected online score as a metric to evaluate the performance of RL models in the online deployment phase, taking into account the challenges discussed earlier. The online score is defined as follows:
\[S=\mathbb{E}\bigg{[}\alpha_{1}\cdot\sum_{t=1}^{T}r(s_{t},\pi(s_{t}))-\alpha_{2 }\cdot\sum_{t=1}^{T}\mathbf{1}\left\{\pi(s_{t})\neq\pi^{\mathrm{E}}(s_{t}) \right\}\bigg{]}\quad\text{(discrete action control)}. \tag{3}\]
Here, \(T\) is the maximum trajectory length of an episode, \(\alpha_{1}>0\) and \(\alpha_{2}>0\) are scaling factors, and \(\mathbf{1}\left\{\cdot\right\}\) is the indicator function. The expectation is taken over the randomness in environment transitions. For continuous action control tasks, the second term in (3) is too restrictive, so we introduce a relaxation that considers the squared distance between \(\pi(s_{t})\) and \(\pi^{\mathrm{E}}(s_{t})\), with a tolerance parameter \(\tau>0\). That is, the online score becomes
\[S=\mathbb{E}\bigg{[}\alpha_{1}\cdot\sum_{t=1}^{T}r(s_{t},\pi(s_{t}))-\alpha_{ 2}\cdot\sum_{t=1}^{T}\mathbf{1}\left\{\left\|\pi(s_{t})-\pi^{\mathrm{E}}(s_{t} )\right\|^{2}>\tau\right\}\bigg{]}\quad\text{(continuous action control)}. \tag{4}\]
We note that in both equations (3) and (4), the second term acts as a constraint by quantifying the degree of disagreement between the RL models and the human expert.
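For illustration, the noisy per-episode score can be computed from logged quantities as follows (a minimal sketch; the function and argument names are illustrative). Passing `tau=None` selects the discrete-action case (3), while a positive `tau` selects the relaxed case (4).

```
import numpy as np

def online_score(rewards, model_actions, expert_actions,
                 alpha1=1.0, alpha2=1.0, tau=None):
    """Noisy online score of one episode, following (3) and (4).

    rewards        : per-step rewards r(s_t, pi(s_t))
    model_actions  : actions taken by the deployed RL model
    expert_actions : actions the human expert would have taken
    """
    rewards = np.asarray(rewards, dtype=float)
    if tau is None:   # discrete actions: penalize any disagreement
        disagreements = sum(int(a != e)
                            for a, e in zip(model_actions, expert_actions))
    else:             # continuous actions: penalize large squared distances
        disagreements = sum(
            int(np.sum((np.asarray(a) - np.asarray(e)) ** 2) > tau)
            for a, e in zip(model_actions, expert_actions))
    return alpha1 * rewards.sum() - alpha2 * disagreements
```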
## 4 Online Model Selection
In this section, we propose a model-selection-based approach to determine the best offline RL model for online deployment. Suppose we have trained \(N\) different offline models using various methods such as different random seeds, training techniques, and hyperparameters. However, it is unclear which model would perform the best in online deployment. Therefore, we aim to select the optimal model by conducting multiple online deployment trials. This task can be viewed as an online model selection problem.
In this manuscript, we use the (cumulative) regret as the performance criterion for the model selection algorithm. The (cumulative) regret is defined as follows:
\[\text{regret}_{K}=\sum_{k=1}^{K}\left(S^{\star}-S_{k}\right), \tag{5}\]
where \(S_{k}\) is the online score obtained when deploying a specific model in iteration \(k\). \(S^{\star}\) represents the expected best score that can be obtained by deploying the optimal offline model in hindsight, i.e., \(S^{\star}=\max_{i=1,\dots,N}S^{i}\), where \(S^{i}\) is the online score of the \(i\)-th model. Ideally, we want the model selection algorithm to identify the optimal model quickly, leading to sublinear growth of the regret.
It is worth noting that we only deploy one specific model at a time and receive its corresponding feedback. Thus, we are faced with a bandit feedback setting where the feedback information is partial, leading to an exploration-and-exploitation dilemma [27]. This means that we must try each offline model multiple times (i.e., exploration) before identifying the optimal one for online deployment (i.e., exploitation). To balance the trade-off between exploration and exploitation adaptively, we can employ well-known UCB (upper confidence bound) strategies [26; 2], which construct an optimistic estimate of the feedback to guide exploration. Specifically, in each iteration \(k\), we can use the following rule to make decisions:
\[\operatorname*{argmax}_{i}\texttt{UCB}^{i}_{k}:=\widehat{S^{i}_{k}}+\beta \cdot\texttt{bonus}^{i}_{k}, \tag{6}\]
where \(\widehat{S_{k}^{i}}\) is the estimation of the feedback for decision \(i\) (explained later), \(\beta>0\) is the scaling factor, and \(\texttt{bonus}_{k}^{i}=\sqrt{1/n_{k}^{i}}\), where \(n_{k}^{i}\) is the number of times decision \(i\) has been attempted up to iteration \(k\). Although this UCB strategy is a greedy approach, it can achieve sublinear regret due to the optimistic estimation [27]. Intuitively, the bonus term encourages exploration, ensuring that each decision is tested sufficiently.
In our scenario, each decision refers to a specific offline model, and each iteration corresponds to a rollout of the policy, i.e., an episode. The feedback obtained in each iteration is a noisy score that combines the long-term environment return with a penalty for violating the human's intents, as follows:
\[s_{k}= \alpha_{1}\cdot\sum_{t=1}^{T}r(s_{t},a_{t})-\alpha_{2}\cdot\sum_{ t=1}^{T}\mathbf{1}\left\{\pi(s_{t})\neq\pi^{\mathrm{E}}(s_{t})\right\}.\]
Here, \(\alpha_{1}\) and \(\alpha_{2}\) are scaling factors defined previously, and \(\pi^{\mathrm{E}}\) is the expert policy. A similar formulation can be obtained for continuous action control tasks by replacing the hard constraint with the soft constraint. Note that \(s_{k}\) is a random variable due to the randomness of environment transitions, and we have \(S_{k}=\mathbb{E}[s_{k}]\).
The online score combines the environment rewards and human preferences for the agent's actions. Trained offline models can have dramatically different online scores. Reward-pursuit models, which have no particular constraints in offline policy optimization, may generate actions that humans believe are risky and dangerous. Thus, such models receive a large penalty in the online deployment phase and a low online score. Conversely, conservative models with explicit constraints in offline policy optimization may not achieve excellent environment-defined performance, and radical experts may not like them, resulting in a low online score. In both cases, note that we do not know how the models will perform in the online deployment phase.
We have outlined the procedure for applying the UCB strategy to select offline models in Algorithm 1. In each iteration, we use the UCB strategy to select the most suitable offline model. The model index is denoted by \(i_{k}\) in iteration \(k\). We compute \(\widehat{S_{k}^{i}}\) (which appears in (6)) by the empirical mean:
\[\widehat{S_{k}^{i}}=\frac{\sum_{k^{\prime}=1}^{k}s_{k^{\prime}}\mathbf{1}(i_{ k^{\prime}}=i)}{n_{k}^{i}}, \tag{7}\]
where \(n_{k}^{i}\) is the total times of deploying the \(i\)-th model. In Algorithm 1, we use \(X_{k}^{j}\) to compute the numerator in (7).
Although Algorithm 1 is straightforward, it has been shown to achieve good practical performance and provides reasonable theoretical guarantees. According to the literature [27], the cumulative regret of Algorithm 1 scales proportionally to \(\tilde{\mathcal{O}}(\sqrt{NK})\) when \(\beta\) is chosen properly. In practice, a constant value of \(\beta\) usually performs well. Note that as \(K\) goes to \(\infty\), the averaged regret \(\tilde{\mathcal{O}}(\sqrt{NK}/K)\to 0\). The theory also implies that utilizing prior knowledge to select suitable candidate policies with small \(N\) can significantly reduce regret.
```
Input : \(N\): number of offline models; \(\beta\): exploration coefficient; offline models \(M^{1},\cdots,M^{N}\).
Initialize \(X_{0}\in\mathbb{R}^{N}\gets 0\), \(n_{0}\in\mathbb{R}^{N}\gets 0\).
for iteration \(k=1,2,\cdots\) do
    if \(k\leq N\) then
        \(i_{k}=k\)
    else
        \(i_{k}=\mathrm{argmax}_{i}\frac{X_{k}^{i}}{n_{k}^{i}}+\beta\cdot\sqrt{\frac{1}{n_{k}^{i}}}\)
    end if
    Deploy the model \(M^{i_{k}}\) and receive the score \(s_{k}\).
    for each \(j=1,\cdots,N\) do
        if \(j=i_{k}\) then
            Update: \(n_{k}^{j}\gets n_{k-1}^{j}+1\) and \(X_{k}^{j}\gets X_{k-1}^{j}+s_{k}\).
        else
            Update: \(n_{k}^{j}\gets n_{k-1}^{j}\) and \(X_{k}^{j}\gets X_{k-1}^{j}\).
        end if
    end for
end for
```
**Algorithm 1** UCB for online model selection
As a meta-algorithm, Algorithm 1 can be applied to both discrete and continuous action control tasks. It is important to note that although UCB includes an exploration phase in the model selection process, it differs significantly from
online exploration in a standard RL framework. In Algorithm 1, we consider only well-trained offline models and test them in the online phase, ensuring the quality of the exploration behavior. On the other hand, in a standard online RL framework, agents may attempt dangerous or harmful actions to explore, which can lead to unexpected results in some applications.
An advantage of Algorithm 1 is its minimal computational cost in the online phase, as it only requires storing a few vectors and updating them with simple calculations. However, the online performance of this selection approach is ultimately limited by the quality of the candidate set. If the quality of the \(N\) offline RL models is poor, the final performance of Algorithm 1 may not be acceptable. In such cases, fine-tuning can be used to further improve the offline models, and we discuss this method in the following sections.
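As a minimal Python sketch (not part of the original implementation), Algorithm 1 can be written as follows. The `models` list and the `rollout_score` callback, which is assumed to deploy a model for one episode and return the noisy online score described above, are placeholders for the actual deployment environment.

```python
import numpy as np

def ucb_select(models, rollout_score, num_iterations, beta=1.0):
    """Deploy one of N offline models per episode using the UCB rule of Algorithm 1."""
    N = len(models)
    X = np.zeros(N)  # running sum of observed scores per model
    n = np.zeros(N)  # number of deployments per model
    for k in range(1, num_iterations + 1):
        if k <= N:
            i = k - 1  # deploy every candidate once before using the UCB rule
        else:
            i = int(np.argmax(X / n + beta * np.sqrt(1.0 / n)))  # optimistic selection
        s = rollout_score(models[i])  # run one episode and observe the noisy score
        X[i] += s
        n[i] += 1
    return int(np.argmax(X / np.maximum(n, 1)))  # index of the empirically best model
```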
## 5 Online Model Fine-Tuning
In this section, we explore fine-tuning approaches to improve the quality of offline models during online deployment. Our inspiration for this method comes from the fine-tuning of deep neural networks in downstream tasks [51, 22, 5], as well as fine-tuning of deep RL models trained from human demonstrations [19, 7, 3]. Specifically, we consider a scenario where a human expert policy can override the model's action when the expert's decision deviates significantly from the model's action. We log these events in the form of \((s,a,r,s^{\prime})\), where \(a\) is the expert's decision. Later, we extract samples from the log and construct a dataset \(\mathcal{D}^{o}=(s,a,r,s^{\prime})\), which we use to fine-tune the models. Depending on whether the action space is continuous or discrete, we employ two different fine-tuning methods.
**Continuous action control.** In this scenario, we have two models: an actor model \(\pi_{\theta_{\mathrm{actor}}}\) and a critic model \(Q_{\theta_{\mathrm{critic}}}\) (refer to Section 3.1). It is natural to first optimize the critic by minimizing the Bellman error:
\[\theta_{\mathrm{critic}}=\operatorname*{argmin}_{\theta}\sum_{(s,a,r,s^{ \prime})\in\mathcal{D}^{o}}\bigg{(}Q_{\theta}(s,a)-r(s,a)-\gamma Q_{\theta}(s ^{\prime},\pi_{\theta_{\mathrm{actor}}}(s^{\prime}))\bigg{)}^{2}. \tag{8}\]
Then, we can improve the actor by maximizing its \(Q\)-value. To effectively follow the expert's guidance, we additionally train the actor with a mean-squared error between the model's output and the expert's action (cf. the second term in (9)). This loss function is inspired by imitation learning theory [48]. As a result, the fine-tuned actor can both maximize the environment return and follow the expert, achieving a high score as defined in (3).
\[\theta_{\mathrm{actor}}=\operatorname*{argmax}_{\theta}\sum_{s\in\mathcal{D} ^{o}}Q_{\theta_{\mathrm{critic}}}(s,\pi_{\theta}(s))-\sum_{(s,a)\in\mathcal{D }^{o}}\left\lVert\pi_{\theta}(s)-a\right\rVert_{2}^{2}. \tag{9}\]
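For illustration, a minimal PyTorch-style sketch of one fine-tuning step with the losses (8) and (9) is given below. The `actor(s)` and `critic(s, a)` interfaces (with the critic returning a one-dimensional tensor of Q-values), the optimizers, and the `batch` of logged expert transitions are assumptions about the surrounding training code; a stop-gradient target is used for the Bellman error, as is common practice.

```python
import torch
import torch.nn.functional as F

def fine_tune_step(actor, critic, actor_opt, critic_opt, batch, gamma=0.99):
    """One gradient step on the expert-corrected log D^o for continuous control."""
    s, a, r, s_next = batch  # logged states, expert actions, rewards, next states

    # Critic update: minimize the Bellman error of Eq. (8).
    with torch.no_grad():
        target = r + gamma * critic(s_next, actor(s_next))
    critic_loss = F.mse_loss(critic(s, a), target)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor update: maximize the Q-value while imitating the expert action, Eq. (9).
    actor_loss = -critic(s, actor(s)).mean() + F.mse_loss(actor(s), a)
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
```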
**Discrete action control.** Unlike the previous case, discrete action control applications have no explicit policy or actor component; we only have a \(Q\)-value function. Hence, we cannot directly apply the above approach. Following [19], we consider a margin-based loss function to increase the gap between the expert's action and the other actions. Concretely, the margin \(\Delta>0\) is a hyper-parameter, and the margin-based loss encourages the expert action value \(Q_{\theta_{\mathrm{critic}}}(s,a)\) to exceed \(Q(s,a^{\prime})\) by at least \(\Delta\) for every \(a^{\prime}\neq a\). In this way, the greedy policy is more likely to select the expert action \(a\).
\[\theta_{\mathrm{critic}}=\operatorname*{argmin}_{\theta}\sum_{(s,a,r,s^{\prime})\in\mathcal{D}^{o}}\bigg{\{} \bigg{(}Q_{\theta}(s,a)-r(s,a)-\gamma\max_{a^{\prime\prime}}Q_{\theta}(s^{\prime},a^{\prime\prime})\bigg{)}^{2} \tag{10}\] \[+\max_{a^{\prime}}\left[Q_{\theta}(s,a^{\prime})+\ell_{\Delta}(a,a^{\prime})-Q_{\theta}(s,a)\right]\bigg{\}},\]
where \(\ell_{\Delta}(a,a^{\prime})=\Delta\) if \(a\neq a^{\prime}\) and \(0\) otherwise.
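A corresponding sketch of the margin-based loss in (10) might look as follows; the `q_net(s)` interface returning a matrix of action values, the batch layout, and the use of the greedy bootstrap target are illustrative assumptions.

```python
import torch

def margin_fine_tune_loss(q_net, batch, gamma=0.99, delta=1.0):
    """Margin-based fine-tuning loss of Eq. (10) for discrete actions."""
    s, a, r, s_next = batch                        # a holds expert action indices (int64)
    q = q_net(s)                                   # shape (batch, num_actions)
    q_sa = q.gather(1, a.unsqueeze(1)).squeeze(1)  # Q(s, a) at the expert action
    with torch.no_grad():
        td_target = r + gamma * q_net(s_next).max(dim=1).values
    td_loss = (q_sa - td_target).pow(2)
    # l_Delta(a, a') adds a margin delta to every non-expert action.
    margins = torch.full_like(q, delta)
    margins.scatter_(1, a.unsqueeze(1), 0.0)
    margin_term = (q + margins).max(dim=1).values - q_sa
    return (td_loss + margin_term).mean()
```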
In Algorithm 2, we present our proposed approach based on fine-tuning. However, it's important to note that in real-world scenarios, decisions must be made in real-time. Therefore, it's crucial to record only those state-action pairs where the expert is strongly dissatisfied with the decision made by the RL model. Otherwise, the log would become too large, leading to a significant computation load for the fine-tuning process. We address this issue in Lines 6-9 of Algorithm 2.
## 6 Experiments
In this section, we conduct experiments to verify the effectiveness of the proposed methods.
```
0: Trained offline model.
1: Initialize \(\mathcal{D}^{o}\leftarrow\emptyset\).
2:for iteration \(k=1,2,\cdots\)do
3:for time step \(t=1,\cdots,T\)do
4: Observe the current state \(s_{t}\).
5: Select the action \(a_{t}\).
6:if\(\|a_{t}-\pi^{\mathrm{E}}(s_{t})\|^{2}>\tau\) (or \(a_{t}\neq\pi^{\mathrm{E}}(s_{t})\) for discrete action control) then
7: Implement the action \(\pi^{\mathrm{E}}(s_{t})\).
8: Receive the environment reward \(r_{t}\) and observe the next state \(s_{t+1}\).
9: Update \(\mathcal{D}^{o}\leftarrow\mathcal{D}^{o}\cup\{(s_{t},\pi^{\mathrm{E}}(s_{t}),r _{t},s_{t+1})\}\).
10:else
11: Implement the action \(a_{t}\).
12: Receive the environment reward \(r_{t}\) and observe the next state \(s_{t+1}\).
13:endif
14:endfor
15:if\(\mathcal{D}^{o}\) is not empty then
16:if Action space is continuous then
17: Fine-tune the model by (8) and (9).
18:else
19: Fine-tune the model by (10).
20:endif
21:endif
22:\(\mathcal{D}^{o}\leftarrow\emptyset\).
23:endfor
```
**Algorithm 2** Online Fine-Tuning
### Robotics Locomotion Control
This section focuses on three locomotion control tasks: HalfCheetah-v2, Hopper-v2, and Walker2d-v2, as described in [10]. These tasks aim to train a robot to perform human-like locomotion behaviors, using joint angle and velocity information as states and low-level motor controls as actions. Refer to Figure 2 for a visual representation of the locomotion tasks.
For illustrative purposes, we employ an online RL algorithm, specifically the SAC algorithm [18], to obtain an expert policy. SAC is run for 1 million steps, and its replay buffer serves as our offline dataset.
Numerous approaches exist for training offline RL models, as seen in [13, 25, 46, 23]. For this study, we opt to use the CQL algorithm [25] due to its simplicity of implementation, although other methods can also be considered. CQL incorporates a penalty term on out-of-distribution actions during training; by varying the scale of this penalty term, we obtain five offline RL models. The performance of these models is presented in Table 5 in the Appendix. In computing the online score, we appropriately scaled the environment reward and penalty terms for the considered tasks; refer to (3):
\[\texttt{HalfCheetah-v2}: \alpha_{1}=1/8500,\alpha_{2}=1/1000,\] \[\texttt{Hopper-v2}: \alpha_{1}=1/3500,\alpha_{2}=1/1000,\] \[\texttt{Walker2d-v2}: \alpha_{1}=1/4000,\alpha_{2}=1/1000.\]
Figure 2: Illustration of robotics locomotion control, simulated by MuJoCo [43].
We set the tolerance parameter to \(\tau=0.09\) and the maximum trajectory length to \(T=1000\).
#### 6.1.1 Online Model Selection
Using the five trained offline models, we implement model selection during the online deployment phase with Algorithm 1. For Algorithm 1, we set the hyper-parameter \(\beta\) to \(1\). Our baselines include the following:
* Highest Q: This method selects the model with the highest Q-value function trained by offline datasets, which has been considered in prior work [16].
* Random Ensemble: This method randomly selects a model from the candidate models to deploy. This method is not adaptive during the online phase.
In addition, we also consider the oracle, which directly selects the model that can maximize the online score. Note that this method cannot be used in practice, and its performance serves as the upper limit for all methods.
We present the numerical results in terms of online scores in Figure 3. Notably, we observe that with no more than 100 iterations, the performance of Algorithm 1 is close to the optimal score. On the other hand, other methods like Highest Q and Random Ensemble do not perform well. The performance of the trained policies is reported in Table 1, where the model selected by Algorithm 1 achieves a high online score.
\begin{table}
\begin{tabular}{l|l|l l l} \hline \hline & & Environment Return (\(\uparrow\)) & Human Disagreement (\(\downarrow\)) & Online Score (\(\uparrow\)) \\ \hline \multirow{4}{*}{HalfCheetah-v2} & Highest Q & \(8682_{\pm 34}\) & \(554_{\pm 7}\) & \(0.46_{\pm 0.01}\) \\ & Random Ensemble & \(8246_{\pm 147}\) & \(162_{\pm 67}\) & \(0.78_{\pm 0.10}\) \\ & Algorithm 1 & \(8308_{\pm 136}\) & \(72_{\pm 11}\) & \(0.91_{\pm 0.00}\) \\ & Oracle & 8292 & 59 & 0.90 \\ \hline \multirow{4}{*}{Hopper-v2} & Highest Q & \(3387_{\pm 1}\) & \(194_{\pm 1}\) & \(0.77_{\pm 0.00}\) \\ & Random Ensemble & \(2756_{\pm 465}\) & \(225_{\pm 41}\) & \(0.54_{\pm 0.17}\) \\ & Algorithm 1 & \(3390_{\pm 6}\) & \(198_{\pm 6}\) & \(0.77_{\pm 0.00}\) \\ & Oracle & 3387 & 193 & 0.77 \\ \hline \multirow{4}{*}{Walker2d-v2} & Highest Q & \(3627_{\pm 325}\) & \(688_{\pm 56}\) & \(0.23_{\pm 0.02}\) \\ & Random Ensemble & \(3685_{\pm 274}\) & \(452_{\pm 96}\) & \(0.40_{\pm 0.10}\) \\ \cline{1-1} & Algorithm 1 & \(4100_{\pm 46}\) & \(329_{\pm 9}\) & \(0.70_{\pm 0.00}\) \\ \cline{1-1} & Oracle & 4106 & 328 & 0.70 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance of online model selection algorithms for three locomotion control tasks. Note that the metric of human disagreement measures the number of actions on which the human expert disagrees with the model, as defined in (3) and (4). Reported values are averages, the symbol \(\pm\) denotes the standard deviation over 5 experiments with different random seeds, and the symbol \(\uparrow\) indicates that higher values are better, while the symbol \(\downarrow\) indicates the opposite (same as other tables).
Figure 3: Online score (the higher the better) of Algorithm 1 for the robotics locomotion control tasks. Solid lines correspond to the mean and shaded regions correspond to the 95% confidence interval over \(5\) random seeds (same as other figures).
#### 6.1.2 Online Model Fine-Tuning
In this part, we consider the online model fine-tuning approaches for deploying RL models introduced in Section 5. Without loss of generality, we select the offline model with the worst performance to fine-tune (refer to Table 5 in the Appendix for the detailed performance of the offline models).
We report the numerical results in Figure 4 and Table 2. From Figure 4, we see that fine-tuning the RL model by minimizing the mean-squared error further improves its online performance compared with the model without fine-tuning. This validates the effectiveness of Algorithm 2.
### Traffic Light Control
This section considers a traffic light control task, as shown in Figure 5. The traffic light at the roadway is the agent that needs to be controlled, while traffic flows serve as the environment. The states of the system include the queue length (i.e., the number of vehicles in incoming lanes) and the current phase (i.e., the movement signal of the traffic light, such as a green light on the west-east). The goal is to minimize queue length and avoid congestion by adaptively selecting the movement signal. We use real traffic data from Hangzhou, China, which was provided by the TSCC competition2.
Figure 4: Online score of Algorithm 2 for the robotics locomotion control tasks.
\begin{table}
\begin{tabular}{l|l|l l l} \hline \hline & & Environment Return (\(\uparrow\)) & Human Disagreement (\(\downarrow\)) & Online Score (\(\uparrow\)) \\ \hline \multirow{2}{*}{HalfCheetah-v2} & Without Fine-tuning & \(7362_{\pm 46}\) & \(93_{\pm 7}\) & \(0.76_{\pm 0.00}\) \\ & Algorithm 2 & \(7353_{\pm 90}\) & \(67_{\pm 10}\) & \(0.80_{\pm 0.00}\) \\ \hline \multirow{2}{*}{Hopper-v2} & Without Fine-tuning & \(3368_{\pm 2}\) & \(264_{\pm 7}\) & \(0.70_{\pm 0.00}\) \\ & Algorithm 2 & \(3374_{\pm 2}\) & \(81_{\pm 5}\) & \(0.88_{\pm 0.00}\) \\ \hline \multirow{2}{*}{Walker2d-v2} & Without Fine-tuning & \(3787_{\pm 39}\) & \(253_{\pm 12}\) & \(0.69_{\pm 0.00}\) \\ & Algorithm 2 & \(3843_{\pm 25}\) & \(118_{\pm 14}\) & \(0.84_{\pm 0.00}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of online fine-tuning algorithms for three locomotion control tasks.
Figure 5: Illustration of the traffic light control, simulated by CityFlow [52].
Since the action space for traffic light control is discrete and finite, we consider using DQN-based approaches [31]. To obtain an expert policy, we first train a double DQN agent [44] for 100 iterations. We then collect an offline dataset of 500K samples by using \(\epsilon\)-greedy (\(\epsilon=0.2\)) to roll out the expert policy. We use CQL [25] to train offline RL models with different penalty scales, resulting in five models. Their performance is summarized in Table 6 in Appendix. We set \(\alpha_{1}=1/1000\) and \(\alpha_{2}=1/1800\) for this traffic light control task.
#### 6.2.1 Online Model Selection
In this section, we use Algorithm 1 to select offline RL models during the online deployment phase, with a chosen hyper-parameter \(\beta\) of \(1\). We consider the same baselines as in Section 6.1.1 and present the online score curve in Figure 6. Our observation is consistent with the previous section: after 100 iterations, the performance of Algorithm 1 is comparable to the optimal one. We report the detailed performance of trained policies in Table 3.
#### 6.2.2 Online Model Fine-Tuning
In this part, we try to fine-tune the trained RL model in the online deployment phase. Again, we select the model with the worst performance to fine-tune. The hyperparameter \(\Delta=1\) in (10) is used in experiments. We visualize the online score curve in Figure 7. We observe that without fine-tuning, the performance of the deployed RL model is poor. However, by fine-tuning this model via Algorithm 2 with \(100\) iterations, its performance can be significantly improved. The detailed performance of trained policies is reported in Table 4.
## 7 Conclusion
This manuscript explores effective deployment strategies for offline reinforcement learning (RL) models in the online phase, leveraging human feedback. Two approaches are proposed: model selection and fine-tuning. Experimental results demonstrate the effectiveness of these methods in achieving high online performance.
\begin{table}
\begin{tabular}{c|l l l} \hline \hline & Environment Return (\(\uparrow\)) & Human Disagreement (\(\downarrow\)) & Online Score (\(\uparrow\)) \\ \hline Without Fine-tuning & \(906_{\pm 0}\) & \(457_{\pm 0}\) & \(0.45_{\pm 0.00}\) \\ Algorithm 2 & \(928_{\pm 0}\) & \(147_{\pm 0}\) & \(0.77_{\pm 0.00}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance of online fine-tuning algorithms for the traffic light control task.
\begin{table}
\begin{tabular}{c|l l l} \hline \hline & Environment Return (\(\uparrow\)) & Human Disagreement (\(\downarrow\)) & Online Score (\(\uparrow\)) \\ \hline Highest Q & \(908_{\pm 0}\) & \(320_{\pm 0}\) & \(0.75_{\pm 0.00}\) \\ Random Ensemble & \(898_{\pm 11}\) & \(565_{\pm 190}\) & \(0.57_{\pm 0.03}\) \\ Algorithm 1 & \(916_{\pm 3}\) & \(270_{\pm 17}\) & \(0.78_{\pm 0.00}\) \\ Oracle & \(913\) & \(270\) & \(0.79\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance of online model selection algorithms for the traffic light control task.
Figure 6: Online score of Algorithm 1 for the traffic light control task.
It should be noted that this work only considers scenarios where the human expert policy and environment are static. In some applications, these factors may change over time, and more sophisticated deployment methods may be required in the future.
|
2301.09612
|
Constraining a Model of the Radio Sky Below 6 MHz Using the Parker Solar
Probe/FIELDS Instrument in Preparation for Upcoming Lunar-based Experiments
|
We present a Bayesian analysis of data from the FIELDS instrument on board
the Parker Solar Probe (PSP) spacecraft with the aim of constraining low
frequency ($\lesssim$ 6 MHz) sky in preparation for several upcoming
lunar-based experiments. We utilize data recorded during PSP's ``coning roll''
maneuvers, in which the axis of the spacecraft is pointed 45$^{\circ}$ off of
the Sun. The spacecraft then rotates about a line between the Sun and the
spacecraft with a period of 24 minutes. We reduce the data into two formats:
roll-averaged, in which the spectra are averaged over the roll, and
phase-binned, in which the spectra are binned according to the phase of the
roll. We construct a forward model of the FIELDS observations that includes
numerical simulations of the antenna beam, an analytic emissivity function of
the galaxy, and estimates of the absorption due to free electrons. Fitting 5
parameters, we find that the roll-averaged data can be fit well by this model
and we obtain posterior parameter constraints that are in general agreement
with previous estimates. The model is not, however, able to fit the
phase-binned data well, likely due to limitations such as the lack of
non-smooth emission structure at both small and large scales, enforced symmetry
between the northern and southern galactic hemispheres, and large uncertainties
in the free electron density. This suggests that significant improvement in the
low frequency sky model is needed in order to fully and accurately represent
the sky at frequencies below 6 MHz.
|
Neil Bassett, David Rapetti, Bang D. Nhan, Brent Page, Jack O. Burns, Marc Pulupa, Stuart D. Bale
|
2023-01-23T18:17:59Z
|
http://arxiv.org/abs/2301.09612v1
|
Constraining a Model of the Radio Sky Below 6 MHz Using the Parker Solar Probe/FIELDS Instrument in Preparation for Upcoming Lunar-based Experiments
###### Abstract
We present a Bayesian analysis of data from the FIELDS instrument on board the Parker Solar Probe (PSP) spacecraft with the aim of constraining low frequency (\(\lesssim 6\) MHz) sky in preparation for several upcoming lunar-based experiments. We utilize data recorded during PSP's "coning roll" maneuvers, in which the axis of the spacecraft is pointed 45\({}^{\circ}\) off of the Sun. The spacecraft then rotates about a line between the Sun and the spacecraft with a period of 24 minutes. We reduce the data into two formats: roll-averaged, in which the spectra are averaged over the roll, and phase-binned, in which the spectra are binned according to the phase of the roll. We construct a forward model of the FIELDS observations that includes numerical simulations of the antenna beam, an analytic emissivity function of the galaxy, and estimates of the absorption due to free electrons. Fitting 5 parameters, we find that the roll-averaged data can be fit well by this model and we obtain posterior parameter constraints that are in general agreement with previous estimates. The model is not, however, able to fit the phase-binned data well, likely due to limitations such as the lack of non-smooth emission structure at both small and large scales, enforced symmetry between the northern and southern galactic hemispheres, and large uncertainties in the free electron density. This suggests that significant improvement in the low frequency sky model is needed in order to fully and accurately represent the sky at frequencies below 6 MHz.
cosmology: dark ages, reionization, first stars--cosmology: observations--methods: data analysis +
Footnote †: journal: Accepted for publication in ApJ
## 1 Introduction
The rapidly growing field of 21 cm cosmology endeavors to open a new window into previously unobserved epochs of the early universe, starting with the Dark Ages, the period immediately following recombination, as well as Cosmic Dawn, when the first astrophysical objects coalesced, through the Epoch of Reionization (EoR). The cosmological 21 cm signal contains an abundance of information about the state and evolution of the universe during these formative periods. Making 21 cm radiation particularly useful as an observational probe is the fact that the expansion of the universe relates each redshifted frequency at which the signal can be observed today (\(\sim\)1-200 MHz) to a different time in the history of the universe, with lower frequencies corresponding to earlier times. However, observational measurement, particularly of the sky-averaged "global signal," requires separating it from strong systematics, most notably foreground emission from both galactic and extragalactic sources.
The Dark Ages portion of the 21 cm signal (below \(\sim\) 50 MHz) is particularly difficult to measure due to the
increased magnitude of the foreground sky and increased ionospheric effects at lower frequencies for ground-based instruments. However, several upcoming experiments are preparing for 21 cm Dark Ages observations by performing measurements from the Moon. These experiments, Radiowave Observations at the Lunar Surface of the photoElectron Sheath (ROLSES; expected launch 2023) and the Lunar Surface Electromagnetics Experiment (LuSEE), will conduct low frequency observations that will be free of terrestrial ionospheric effects. ROLSES will be delivered to the nearside in 2023. The LuSEE experiment is composed of two separate payloads: LuSEE-lite, which will launch to the south pole farside in 2024, and LuSEE-Night, which will land at a mid-latitude farside location in 2025. All three of these experiments have been selected for flight through NASA's Commercial Lunar Payload Services (CLPS) program (Burns et al., 2021). Although lunar observations are free of Earth's ionosphere, any potential measurement of the global 21 cm signal still requires extremely careful data analysis.
Some analysis schemes for extracting the global signal from low frequency observations (such as that used by the EDGES team in Bowman et al., 2018, the first reported detection of the global signal) use a linear, polynomial-based model to fit the foreground component. Tauscher et al. (2020) found, however, that this model of the foreground can produce false detections, particularly when used in conjunction with the flattened Gaussian signal model utilized by Bowman et al. (2018). The alternative is to forward-model the foreground component of observations, requiring detailed knowledge of the spatial and spectral structure of the sky (as well as the antenna beam).
Previous efforts to combine measurements and theoretical knowledge of the characteristics of the emission to produce a model of the sky at low radio frequencies include the Global Sky Model (GSM; de Oliveira-Costa et al., 2008; Zheng et al., 2017) and the Ultralong-wavelength Sky Model with Absorption Effect (ULSA; Cong et al., 2021). The GSM decomposes sky maps into five physical components which are then interpolated in frequency in order to output simulated maps between 10 MHz and 5 THz. A key limitation of the GSM, however, is that it does not account for free-free absorption, which becomes significant below 10 MHz, preventing the GSM from producing maps below this frequency. ULSA seeks to extend sky simulations down to 1 MHz by incorporating absorption due to free electrons within the galaxy, as well as in the circumgalactic medium (CGM) and intergalactic medium (IGM).
Limiting the accuracy of simulations of the sky below 10 MHz is the lack of sky surveys at these frequencies, even partial ones. Although a number of observations of the radio sky below 10 MHz have been attempted (see Page et al. (2022) for a more detailed overview of previous measurements), ground-based instruments are severely limited by the ionosphere, which becomes opaque to incoming radiation at the plasma frequency (\(\sim\)10 MHz). Some experiments have been able to take advantage of particularly favorable ionospheric conditions in some locations to observe the emission spectrum (e.g. Ellis, 1982 and Cane and Whitham, 1977 both present sky measurements below 16.5 MHz), but these measurements are derived from only a small portion of the sky. Space-based instruments have the advantage of being above Earth's ionosphere and in general have access to a larger portion of the sky due to the lack of blockage from the ground, but previous experiments have not had the capacity to perform high spatial resolution measurements as would be possible with an interferometric array. The result is that no high resolution sky surveys exist in the free-free absorption frequency regime as they do at higher frequencies (e.g. Guzman et al., 2011 at 45 MHz or Haslam et al., 1982 at 408 MHz).
We do, however, have a sense of the general characteristic of the spectrum and spatial distribution of the radio sky below 10 MHz. The ground-based measurements presented in Cane (1978) show that the brightness of the sky increases as frequency decreases until reaching \(\sim\)3 MHz, where the spectrum reaches a maximum before decreasing at lower frequencies1. In complementary space-based measurements using the WIND-WAVES instrument, Manning and Dulk (2001) showed that this maximum in the spectrum near 3 MHz corresponds to a shift in the region of apparent maximum brightness from the lower galactic latitudes (above \(\sim\)3 MHz) to higher galactic latitudes (below \(\sim\)3 MHz). Both of these features are consistent with free-free absorption, which is concentrated in the plane of the galaxy and increases at lower frequencies.
Footnote 1: To be clear, brightness as used here refers to a quantity with units W m\({}^{-2}\) Hz\({}^{-1}\) sr\({}^{-1}\). The spectrum in the commonly used quantity of brightness temperature (with units K) continues to increase below 3 MHz due to the Rayleigh-Jeans law having a \(\nu^{-2}\) dependence.
In an effort to more precisely measure the radio sky at frequencies below 10 MHz, Page et al. (2022) (hereafter referred to as P22) analyzes data from the FIELDS instrument on board Parker Solar Probe (PSP). Specifically, P22 decomposes the spectra recorded during a series of "coning roll" spacecraft maneuvers into spherical harmonic components. The analysis is able to constrain the spatial distribution of the emission through the \(l=0\) and \(l=2\) spherical harmonic functions. P22 confirms the presence of a maximum in the monopole component near 3 MHz and a shift in the maximum apparent brightness from the galactic plane to the galactic poles, consistent with the analysis of Manning & Dulk (2001), and indicates that the FIELDS antennas can be treated as ideal short dipoles in the 0.5 - 7 MHz band.
Although P22 finds that the spherical harmonic coefficients extracted from the FIELDS coning roll data are roughly consistent with synthetic sky maps in which the emissivity follows a power law spectral dependence and free-free absorption is modeled using an existing free-electron model, P22 does not attempt to fit a physical model of the sky to the data directly. Instead, the spherical harmonic decomposition presented in P22 is a model-agnostic analysis that is not dependent on any knowledge or assumptions about the emission and absorption. This paper is intended to be a companion to P22, expanding on it by utilizing the same FIELDS coning roll data, with the goal of constraining physical parameters through forward-modeling of the observations.
## 2 Data
The High Frequency Receiver (HFR) of the Radio Frequency Spectrometer (RFS; Pulupa et al., 2017), one component of the FIELDS instrument suite (Bale et al., 2016), provides spectral measurements of the sky from 1.3 to 19.2 MHz. FIELDS employs four electric field sensors (V1-V4), each consisting of a 2 m long "whip" that is 1/8 inch in diameter and extends perpendicularly from the PSP spacecraft, in a nearly orthogonal configuration2. The four sensors make up a crossed dipole antenna configuration with tip-to-tip lengths of 6.975 m (V1-V2) and 6.889 m (V3-V4). For a more detailed description of the FIELDS instrument and signal processing chain, see Section 2 of P22.
Footnote 2: Antennas V1 and V2, as well as V3 and V4, are oriented 180\({}^{\circ}\) apart, but V1 and V3 are separated by only 85\({}^{\circ}\).
In order to extract any spatial information from FIELDS, we utilize FIELDS observations made at different positions relative to the Galaxy. Since PSP is pointed towards the Sun the vast majority of the time, there is very little relative motion between the FIELDS antennas and the sky. Although a Sun-pointing vector will move relative to a galactic coordinate system over the period of a PSP orbit, analysis is complicated by the changing plasma environment and its effect on quasi-thermal noise (QTN; Meyer-Vernet et al., 2017) in the RFS band as PSP moves closer to or farther from the Sun. However, the coning roll maneuvers performed by PSP at solar distances of approximately 0.8 AU provide an ideal opportunity in which the pointing of the FIELDS antennas moves significantly relative to a galactic coordinate system in a regular and predictable manner. During a coning roll, the PSP \(z\) axis (the vector orthogonal to the plane of the FIELDS antennas) is pointed off of the Sun and rotates with a period of 24 minutes about the Sun-spacecraft line, tracing out a conical shape with the body of the spacecraft.
Though each coning roll maneuver generally lasts for between 10 and 24 hours, transients such as solar bursts or Jovian emission events can contaminate observations. In an attempt to limit contamination from these transients, we excise data surrounding the events by eye. While the excision limits the amount of data to be analyzed, increasing the noise level and thus potentially reducing the constraining power, our goal is to restrict the FIELDS observations to the sky emission.
\begin{table}
\begin{tabular}{c c c c} \hline Year & Month & \begin{tabular}{c} Start \\ (DD HH:MM) \\ \end{tabular} &
\begin{tabular}{c} Stop \\ (DD HH:MM) \\ \end{tabular} \\ \hline \hline
2020 & Dec & 03 12:00 & 03 17:00 \\ \hline
2020 & Apr & 23 01:30 & 23 10:00 \\ & & 23 12:00 & 24 00:00 \\ \hline
2020 & Mar & 14 06:30 & 14 14:00 \\ & & 14 15:00 & 14 18:00 \\ \hline
2019 & Jul & 21 01:00 & 21 08:00 \\ & & 21 10:00 & 22 07:00 \\ \hline
2018 & Dec & 17 12:10 & 18 02:00 \\ & & 18 03:30 & 18 12:00 \\ \hline \end{tabular}
\end{table}
Table 1: Time intervals during coning roll maneuvers omitting transient Jovian or solar emission. Start and stop times are given in UTC. Adapted from P22.
Figure 1: Pointings of the FIELDS V1 antenna during each of the five coning roll maneuvers: 12/03/2020 (blue), 04/23/2020 (orange), 03/14/2020 (red), 07/21/2019 (green), 12/17/2018 (purple). The curves are plotted over the Haslam et al. (1982) 408 MHz map to provide a sense of the orientations of the antennas relative to the galaxy.
Table 1 indicates the time intervals during roll maneuvers that were deemed to be clean of enhanced Jovian or solar emission. Since the five coning rolls that we will analyze are spread over a period of nearly two years and multiple PSP orbits, the orientation of the spacecraft relative to the galaxy during the roll changes each time. The pointing of the V1 antenna over the course of a full rotation period during each of the five coning roll maneuvers is shown in Figure 1.
Autocorrelation spectra \(\langle VV^{*}\rangle\), i.e. power spectral density, with units nV\({}^{2}\)/Hz are calculated for each effective dipole. Though the antenna configuration also permits the calculation of cross-correlations between the two effective dipoles, we leave analysis of these data to future work3. \(\langle VV^{*}\rangle\) can be straightforwardly converted to brightness \(B_{\nu}\) (with units W m\({}^{-2}\) Hz\({}^{-1}\) sr\({}^{-1}\)) through the relation
Footnote 3: P22 incorporates cross-correlation data in the spherical harmonic decomposition analysis.
\[\langle VV^{*}\rangle=\frac{4\pi}{3}Z_{0}\Gamma^{2}l_{\rm eff}^{2}B_{\nu}, \tag{1}\]
where \(Z_{0}=\sqrt{\mu_{0}/\epsilon_{0}}\) is the impedance of vacuum, \(\Gamma\) is the gain factor, and \(l_{\rm eff}\) is the effective length of the dipole (Zaslavsky et al., 2011). Lab measurements of a model FIELDS antenna (Pulupa et al., 2017) indicate that \(\Gamma\approx 0.32\). P22 found that \(l_{\rm eff,V1-V2}=3.3\pm 0.1\) m provided the best agreement with the spectrum published in Novaco & Brown 1978. For the results presented in this work, a constant value of \(l_{\rm eff,V1-V2}=3.3\) m is assumed. Given \(l_{\rm eff,V1-V2}\), \(l_{\rm eff,V3-V4}\) can be determined by the ratio of the V1-V2 and V3-V4 autocorrelations. Again following P22, we adopt \(l_{\rm eff,V3-V4}/l_{\rm eff,V1-V2}=0.99\pm 0.01\).
Brightness temperature is also a commonly used quantity to characterize low frequency radiation. To convert from power spectral density to brightness temperature \(T_{b}\), the relation is
\[\langle VV^{*}\rangle=\frac{8\pi}{3}Z_{0}k_{B}\Gamma^{2}\bigg{(}\frac{l_{\rm eff }}{\lambda}\bigg{)}^{2}T_{b}, \tag{2}\]
where \(k_{B}\) is Boltzmann's constant and \(\lambda\) is the wavelength. The models that we will use are evaluated in brightness temperature before being converted to power spectral density to fit the data.
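For reference, a small Python helper implementing the conversion of Equation 2 might look as follows; the function name, the SI-unit input (V\({}^{2}\)/Hz rather than nV\({}^{2}\)/Hz), and the default \(\Gamma\) and \(l_{\rm eff}\) values quoted above are illustrative assumptions rather than a description of the actual pipeline.

```python
import numpy as np

Z0 = 376.730313668   # impedance of free space [ohm]
K_B = 1.380649e-23   # Boltzmann constant [J/K]
C = 2.99792458e8     # speed of light [m/s]

def psd_to_brightness_temperature(vv, nu_hz, gamma=0.32, l_eff=3.3):
    """Convert an autocorrelation spectrum <VV*> [V^2/Hz] to brightness temperature [K], Eq. (2)."""
    lam = C / nu_hz  # wavelength [m]
    return vv / ((8.0 * np.pi / 3.0) * Z0 * K_B * gamma**2 * (l_eff / lam)**2)
```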
### Systematic Noise Subtraction
After the autocorrelation spectrum is formed, but before any analysis is performed, two known sources of systematic noise are subtracted from the FIELDS spectra: electronic noise from the instrument receiver and QTN from the local plasma environment. Based on pre-flight testing, noise of the form
\[\begin{split}\Big{(}7.4+0.38(\nu/1\ {\rm MHz})^{-2}\Big{)} \bigg{(}\frac{T_{\rm PA1}+T_{\rm PA2}}{298\ K}\bigg{)}+\\ \Big{(}6.0\Big{)}\bigg{(}\frac{T_{\rm DCB}}{298\ K}\bigg{)}\ {\rm nV^{2}/Hz},\end{split} \tag{3}\]
where \(T_{\rm PA1}\), \(T_{\rm PA2}\), and \(T_{\rm DCB}\) are the temperature of pre-amp 1, pre-amp 2, and the digital control board, respectively, is subtracted from the V1-V2 autospectrum. For the V3-V4 autospectrum, the pre-amp temperatures for antennas V3 and V4 are used. The QTN is modeled with a spectrum of the form
\[A\bigg{(}\frac{\nu}{1\ {\rm MHz}}\bigg{)}^{-b}\ {\rm nV^{2}/Hz}. \tag{4}\]
After the electronic noise is subtracted from the spectrum, \(A\) and \(b\) are fit using the 300 - 400 kHz region of the FIELDS Low Frequency Receiver (LFR) band, where QTN is dominant. These best-fit values are then used to subtract the QTN spectrum from the HFR band.
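A possible implementation of this step is sketched below; fitting the power law of Equation 4 by linear least squares in log-log space and the array names are assumptions for illustration only.

```python
import numpy as np

def subtract_qtn(nu_lfr_mhz, psd_lfr, nu_hfr_mhz, psd_hfr):
    """Fit A*(nu/1 MHz)^(-b) in the 300-400 kHz LFR band and remove it from the HFR band.
    Spectra are in nV^2/Hz after the electronic noise of Eq. (3) has been subtracted."""
    mask = (nu_lfr_mhz >= 0.3) & (nu_lfr_mhz <= 0.4)
    slope, intercept = np.polyfit(np.log(nu_lfr_mhz[mask]), np.log(psd_lfr[mask]), 1)
    A, b = np.exp(intercept), -slope
    return psd_hfr - A * nu_hfr_mhz ** (-b)
```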
### Roll-averaged Spectra
In order to perform the analysis, we assemble the FIELDS data into two different formats: roll-averaged and phase-binned. Averaging the data over the course of each roll maneuver produces a set of five spectra, one for each of the rolls indicated in Table 1. Note that the roll-averaged component differs slightly from the all-sky monopole component (i.e. the \(l=0\), \(m=0\) spherical harmonic) due to the fact that the FIELDS antennas do not "see" the entire sky with equal sensitivity over the course of a roll. The roll-averaged data are most useful in assessing the spectral behavior of the model, particularly the frequency at which free-free absorption produces a maximum in brightness.
Figure 2: _Top_: roll-averaged spectra for the 04/23/2020 and 07/21/2019 coning rolls. _Bottom_: residual between the two spectra plotted in the top panel. The dashed line indicates the assumed error level for a roll averaged spectrum consisting of both systematic and statistical components.
We take the error on a single auto spectrum \(\langle VV^{*}\rangle\) to be
\[\sigma_{\langle VV^{*}\rangle}=\frac{\langle VV^{*}\rangle}{\sqrt{240}}. \tag{5}\]
The factor of \(240=80\times 3\) comes from the fact that 80 spectra recorded over an interval of 2 seconds along with 3 adjacent frequency channels are averaged together on board the spacecraft to produce the spectrum that is ultimately telemetered to the ground. Many of these raw auto spectra are averaged together to obtain the roll-averaged spectrum. The statistical error on the roll-averaged mean is
\[\sigma_{\text{roll-average,statistical}}=\frac{\langle VV^{*}\rangle}{\sqrt{240 \times n_{\text{spectra}}}}, \tag{6}\]
where \(n_{\text{spectra}}\) is the number of spectra that have been averaged together. In general, using data from the time periods specified in Table 1, \(n_{\text{spectra}}\approx 5,000\), meaning that the normalized statistical error approaches 0.1% or smaller.
Figure 2 compares the roll-averaged spectra from the 04/23/2020 and 07/21/2019 roll maneuvers. As shown in Figure 1, the orientations of the FIELDS antennas during these rolls are very similar. As expected, the two roll-averaged spectra are nearly identical with the exception of some scatter that appears to be uncorrelated in frequency. Since the synchrotron emission that makes up most of the sky is expected to be spectrally smooth, as well as the fact that the spectra are derived from very similar views of the sky, it is unlikely that the scatter is intrinsic to the emission. The apparently uncorrelated nature of the scatter also seems unlikely to be caused by transient events that have escaped our cleaning process. The statistical error (calculated with Equation 6) is not large enough to account for the observed scatter, so it must be caused by some source of systematic error. We can still account for this error in our analysis without absolute knowledge of its source. Fortunately, since the scatter appears to be uncorrelated in frequency there will be little overlap between this systematic error and the models we will be fitting to the data. We can straightforwardly estimate the magnitude of the error and include it as a diagonal term in the covariance matrix. We estimate this systematic error as a 0.6% fractional error. The total error (statistical + systematic) is plotted in Figure 6 relative to the spectral residuals.
To fit the five roll-averaged spectra simultaneously, we concatenate the spectra into a single data vector \(\boldsymbol{y}^{4}\) with \(5\left(n_{\text{rolls}}\right)\times 36\) (\(n_{\text{frequencies}}\)) = 180 channels. We also construct a covariance matrix \(\boldsymbol{C}\) which accounts for all sources of error, both statistical and systematic. \(\boldsymbol{C}\) is a square matrix with shape \(180\times 180\), with
\[C_{ij}=\begin{cases}\text{Var}\big{[}y_{i}\big{]}&i=j\\ \text{Cov}\big{[}y_{i},y_{j}\big{]}&i\neq j\end{cases}, \tag{7}\]
where \(C_{ij}\) is the element in the \(i\)th row and \(j\)th column of \(\boldsymbol{C}\). In this case, since both the statistical and systematic error are assumed to be uncorrelated, the covariance matrix is diagonal with all off-diagonal elements equal to 0. This covariance matrix is used in the likelihood function to perform the nonlinear fits described in Section 3.6.
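For illustration, a minimal sketch of this assembly is given below; the argument names and the quadrature combination of the statistical error of Equation 6 with the 0.6% systematic term are assumptions consistent with the description above.

```python
import numpy as np

def assemble_roll_averaged_data(roll_spectra, n_spectra_per_roll, frac_sys=0.006):
    """Concatenate the roll-averaged spectra into y and build the diagonal covariance C of Eq. (7)."""
    y = np.concatenate(roll_spectra)  # 5 rolls x 36 channels = 180 elements
    n_avg = 240.0 * np.concatenate([np.full(len(s), n)
                                    for s, n in zip(roll_spectra, n_spectra_per_roll)])
    sigma_stat = y / np.sqrt(n_avg)   # statistical error, Eq. (6)
    sigma_sys = frac_sys * y          # uncorrelated 0.6% systematic term
    C = np.diag(sigma_stat**2 + sigma_sys**2)
    return y, C
```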
### Phase-binned Spectra
To more clearly illustrate the modulation in the FIELDS observations induced by the PSP coning roll maneuvers, we phase-fold and bin the RFS spectra at each frequency. While the phase-folded data are derived from the same observations as the roll-averaged data, we find it useful to inspect each separately in order to better assess the performance of the model. The roll-averaged spectra can be thought of as a single phase bin, while increasing the number of bins breaks the roll into increasingly smaller sections.
Figure 3: Phase-binned V1-V2 (top) and V3-V4 (bottom) autocorrelations at 5.97 MHz using 40 phase bins during the April 23, 2020 coning roll maneuver. Error bars show the \(1\sigma\) uncertainty calculated from Equation 8.
Errors for each phase bin at each frequency are given by the standard error of the bin mean, i.e.
\[\sigma_{\rm bin}=\frac{\sigma}{\sqrt{N}}, \tag{8}\]
where \(N\) is the number of data points within the bin and \(\sigma\) is their standard deviation. Figure 3 shows an example of the phase-binned data for a single frequency channel. In this example, V3-V4 appears to lag V1-V2 by a phase of about \(\pi/2\) or so, which makes sense given the orientation of the antennas. Comparing the magnitude of the peak-to-trough distance to the average magnitude of the phase-binned data indicates that the roll-induced modulations are about 10% or less of the average sky magnitude. Although only a single frequency channel is shown in this plot, the phase-binned data can be concatenated into a single data vector with shape \(2\times n_{\rm bins}\times n_{\rm frequencies}\), allowing all frequencies and both autocorrelations to be fit simultaneously. The covariance matrix is formed using the square of the error given by Equation 8 for the diagonal elements. Again \(\mathbf{C}\) is diagonal with off-diagonal elements equal to 0.
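The phase-folding and binning step could be implemented as in the short sketch below; the 24-minute roll period is taken from the text, while the argument names and the assumption that every bin is populated are illustrative.

```python
import numpy as np

def phase_bin(times_s, psd, roll_period_s=24 * 60, n_bins=40):
    """Fold a single-frequency time series on the roll period and return bin means
    with the standard error of the mean, Eq. (8)."""
    phase = (times_s % roll_period_s) / roll_period_s           # roll phase in [0, 1)
    idx = np.minimum((phase * n_bins).astype(int), n_bins - 1)  # bin index per sample
    means = np.array([psd[idx == i].mean() for i in range(n_bins)])
    errors = np.array([psd[idx == i].std(ddof=1) / np.sqrt(np.sum(idx == i))
                       for i in range(n_bins)])
    return means, errors
```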
## 3 Modeling and fits
Our forward model of the FIELDS observations consists of three main components: the antenna beam, which quantifies the sensitivity of the antennas to incoming radiation; an emissivity function, which provides the inherent brightness of the sky along a given line of sight; and an absorption model, which describes how the emission is attenuated by free-free absorption. The synthetic model makes use of the SPICE toolkit (Acton, 1996; Acton et al., 2018) to obtain the attitude of the PSP spacecraft over the course of a coning roll maneuver. SPICE kernel data provide the pointing of the FIELDS antennas in galactic coordinates, which is used to orient the antenna beams relative to the emission and absorption maps. Since each coning roll involves a periodic movement of the antennas, to make the synthetic model we retrieve the orientations of the antennas from the SPICE toolkit at the phase bin centers during a single roll period and use these orientations to construct simulated observations. This model is then fit to either the roll-averaged or phase-binned observations as described in Section 4.
### Antennas
We employed the CST5 electromagnetic simulation software to compute the farfield radiation pattern of PSP/FIELDS' four antennas. The spacecraft and antenna 3D models were constructed in CST based on general dimensions extracted from the publicly available PSP's 3D CAD model6. Since the antennas operate at the low frequency (i.e., large wavelength) regime, the antenna beam patterns are primarily affected by the physical size of the major components on the spacecraft. Only five major components are included in the simplified CST model: four antennas, a front heatshield, a hexagonal payload, two radiators, and two main solar panels. The material properties of those components (Table 2) were also simplified to be either lossy7 aluminum or idealized insulation foam.
Footnote 5: [https://www.3ds.com/products-services/simulia/products/cst-studio-suite/](https://www.3ds.com/products-services/simulia/products/cst-studio-suite/)
Footnote 6: [https://solarsystem.nasa.gov/system/resources/usdz_files/2356_PSP.usdz](https://solarsystem.nasa.gov/system/resources/usdz_files/2356_PSP.usdz)
Footnote 7: Accounted for skin depth effect.
Footnote 8: The 3D full-wave solver approximates solutions for the complete set of Maxwell’s equations without any simplifying assumption, such as 2D quasi-static.
Each of the four antennas was simulated in situ separately, using the CST's 3D full-wave8 Time Domain Solver9, for each frequency channel between 1 and 6 MHz. The farfield beam patterns of the two oppositely located monopole antennas were combined coherently with 180\({}^{\circ}\) phase offset to produce the toroidal free-space dipole beam pattern (shown in Figures 4 and 5). For this work, we calculate only the total power Stokes I beam, though the crossed-dipole system does permit Stokes Q, U, and V polarization beams to be calculated. The equations governing the convolution of the beam pattern with the sky are given in Section 3.5.
Footnote 9: The Time Domain Solver utilizes the Finite Integration Techniques solver method. The PSP model was simulated with open perfectly match layer boundary conditions in all six directions of the simulation domain box, resulting in a total meshcell number of 18,203,328.
### Emission
Previous efforts to model galactic emission, particularly above \(\sim\)40 MHz (such as the GSM mentioned above), rely on the interpolation of partial or full sky maps from the measured frequency to the frequency of interest. However, for lower frequencies, interpolation poses several potential problems. First, there is no measured sky map below the FIELDS band with which to anchor the interpolation from higher to lower frequencies. Second, interpolation does not offer a straightforward way to parameterize the model such that it can be varied in order to fit the FIELDS observations.
\begin{table}
\begin{tabular}{l l} \hline Component & \multicolumn{1}{c}{Assigned Material} \\ \hline \hline Antennas & Lossy Aluminum \\ Heathield & Foam (\(\epsilon_{r}=1\)) \\ Hexagonal Payload & Lossy Aluminum \\ Radiators & Lossy Aluminum \\ Solar Panels & Lossy Aluminum \\ \hline \end{tabular}
\end{table}
Table 2: Assumed material properties for the simplified PSP components used in the CST electromagnetic simulation.
Instead, we adopt an analytic function for the emissivity of the galaxy in a galactocentric cylindrical coordinate system,
\[\varepsilon(\nu,R,Z)=A\bigg{(}\frac{R+r_{1}}{R_{0}}\bigg{)}e^{-R/R_{0}}e^{-|Z/Z _{0}|}\bigg{(}\frac{\nu}{\nu_{0}}\bigg{)}^{\beta}\ \mathrm{K/kpc}, \tag{9}\]
where \(r_{1}=0.1\) kpc is an offset to prevent a value of 0 occurring at the origin and \(\nu_{0}=408\) MHz is the reference frequency for which the spectral index \(\beta\) is defined. This is a modified version of the function used in the ULSA model10 (Cong et al., 2021) and describes an axisymmetric emissivity that falls off exponentially in the radial and \(Z\) directions, with scale heights \(R_{0}\) and \(Z_{0}\), respectively. \(A\), \(R_{0}\), \(Z_{0}\), and \(\beta\) are treated as free parameters, while \(r_{1}\) and \(\nu_{0}\) are fixed.
Figure 4: 3D rendered models of the PSP spacecraft and major components included in the CST electromagnetic simulation. Note the commonly recognizable toroidal beam pattern for a dipole in free space.
Figure 5: Polar plots of two different cuts through the FIELDS Stokes I antenna beams from numerical simulations performed with CST Microwave Studio software for a subset of 6 frequency channels from the FIELDS RFS band. The beams are plotted in dB and are normalized such that the maximum value is 0. There is little variation in the normalized Stokes I beams with frequency, causing the curves to overlap significantly.
Footnote 10: The ULSA emissivity function contains two additional parameters \(\alpha\) and \(\gamma\), which were omitted here due to their strong covariance with \(R_{0}\) and \(Z_{0}\). Tests with the ULSA emissivity function indicated that these additional parameters did not improve the goodness-of-fit.
In addition to galactic emission, there is also an extragalactic component of emission from unresolved sources that is approximately isotropic (before accounting for free-free absorption). Again, we follow Cong et al. (2021) and adopt
\[T_{E}(\nu)=1.2\bigg{(}\frac{\nu}{1~{}\mathrm{GHz}}\bigg{)}^{-2.58}~{}\mathrm{K} \tag{10}\]
as the brightness temperature of the isotropic extragalactic emission. This emission will be attenuated by the same free-free absorption that affects galactic emission, as described in Section 3.3.
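In code, the emission model of Equations 9 and 10 is a direct transcription; the function names and interface below are illustrative assumptions, with frequencies in MHz, coordinates in kpc, and the fixed constants \(r_{1}\) and \(\nu_{0}\) quoted above.

```python
import numpy as np

R1_KPC = 0.1      # fixed offset r_1 [kpc]
NU0_MHZ = 408.0   # reference frequency for the spectral index [MHz]

def galactic_emissivity(nu_mhz, R_kpc, Z_kpc, A, R0, Z0, beta):
    """Axisymmetric galactic emissivity of Eq. (9) in K/kpc."""
    return (A * (R_kpc + R1_KPC) / R0 * np.exp(-R_kpc / R0)
            * np.exp(-np.abs(Z_kpc / Z0)) * (nu_mhz / NU0_MHZ) ** beta)

def extragalactic_temperature(nu_mhz):
    """Isotropic extragalactic brightness temperature of Eq. (10) in K."""
    return 1.2 * (nu_mhz / 1000.0) ** (-2.58)
```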
### Absorption
At frequencies below \(\sim\)10 MHz, free-free absorption results in a significant attenuation of incident radiation. The optical depth of the absorption can be approximated by
\[\tau=3.28\times 10^{-7}\bigg{(}\frac{T_{e}}{10^{4}~{}\mathrm{K}}\bigg{)}^{-1.35 }\bigg{(}\frac{\nu}{\mathrm{GHz}}\bigg{)}^{-2.1}\bigg{(}\frac{\mathrm{EM}}{ \mathrm{pc~{}cm^{-6}}}\bigg{)}, \tag{11}\]
where \(T_{e}\) is the electron temperature and \(\mathrm{EM}\) is the emission measure (Condon and Ransom, 2016). Since the absorption will be dominated by free electrons in the warm interstellar medium (WIM), we use \(T_{e}=8000~{}\mathrm{K}\). \(\mathrm{EM}\) is given by the integral of the squared electron density along a line-of-sight (LOS), i.e.
\[\mathrm{EM}=\int n_{e}^{2}ds. \tag{12}\]
Several different models of the galactic free electron distribution exist, namely NE2001 (Cordes and Lazio, 2002) and YMW16 (Yao et al., 2017). Both models derive estimates of \(n_{e}\) from the measured dispersion of pulse arrival times from known pulsars. For this work we will use the YMW16 model of the free electron density due to its ability to better estimate pulsar distances, particularly at high galactic latitudes. Complicating the estimation of \(\mathrm{EM}\) from the model, however, is that pulsar measurements can only be used to calculate the dispersion measure (DM), given by
\[\mathrm{DM}=\int n_{e}ds. \tag{13}\]
If the free electron density is constant along the LOS, then DM can be converted directly to \(\mathrm{EM}\). Unfortunately, free electrons are generally clumped into clouds, such that the free electron density is not constant. Suppose (as an illustrative example) that the free electron density along a LOS can be described by \(n_{e}(s)=\overline{n_{e}}+\delta_{n_{e}}(s)\), where \(\delta_{n_{e}}(s)\) is a fluctuation on the average density \(\overline{n_{e}}\). Assuming that the fluctuations average to 0 over a given LOS, then \(\mathrm{DM}=\int\overline{n_{e}}ds\) and thus the fluctuation term can effectively be ignored. However, these fluctuations cannot be ignored when calculating \(\mathrm{EM}\), which gives \(\mathrm{EM}=\int(\overline{n_{e}}+\delta_{n_{e}})^{2}ds=\int\overline{n_{e}}^{ 2}ds+\int\delta_{n_{e}}^{2}ds\).11
Footnote 11: Note that a third term with a single power of \(\delta_{n_{e}}\) on the right-hand-side of this equation is 0 because of the assumption that the fluctuations average to 0 over the line of sight.
A detailed derivation of the \(\mathrm{EM}\) (see P22 Section 5), taking into account the clumping of free electrons into discrete clouds as well as intra-cloud density fluctuations, provides an expression for the \(\mathrm{EM}\) calculated from the synthetic density \(\overline{n}_{e}\), provided by the pulsar-based models:
\[\mathrm{EM} =\int f^{-1}\overline{n}_{e}^{2}ds, \tag{14a}\] \[f =\big{(}\eta^{-1}\zeta+Fl_{0}^{2/3}\big{)}^{-1}. \tag{14b}\]
In the above expression, \(\eta\) is the fraction of the LOS occupied by clouds, \(\zeta=\langle n_{e}^{2}\rangle/\langle n_{e}\rangle^{2}\) quantifies the intercloud variations in density12, \(F=\eta^{-1}\zeta\epsilon^{2}l_{0}^{-2/3}\) is the so-called "fluctuation parameter," and \(\epsilon^{2}=\delta n_{e}^{2}/n_{e}^{2}\) describes the intra-cloud variations. This expression assumes that the intra-cloud fluctuations follow a power law relationship in wavenumber with index 11/3 between an inner scale \(l_{1}\) and an outer scale \(l_{0}\gg l_{1}\). Generally, \(l_{0}\approx 1~{}\mathrm{pc}\). For most portions of the galaxy, \(f\) is dominated by the inter-cloud variations, i.e. \(f\approx\eta\zeta^{-1}\). Although sometimes density fluctuations within clouds can become significant (e.g. near the galactic center), we will generally refer to \(f\) as the "filling factor" due to its dependence on \(\eta\), the fraction of the LOS filled with clouds.
Footnote 12: Brackets indicate an average over the portion of the LOS filled by clouds.
Unfortunately, the filling factor is relatively unconstrained, making conversion from DM to \(\mathrm{EM}\) difficult. In this paper, we divide the filling factor into four galactic components: the thick disk, the thin disk, the spiral arms, and the galactic center (GC). Yao et al. (2017)
models the radial dependence of electron density in the thin disk, spiral arms, and galactic center using sech\({}^{2}\) functions that peak at each component's central radius and fall off with some characteristic scale length. For this work, a point is deemed to be "within" each component (for the purposes of the filling factor) if it is within two scale lengths of the center of the component, as given by the relevant equations of Yao et al. (2017). In cases where the components overlap with each other, we use the filling factor of the component that comes first in the list [galactic center, spiral arms, thin disk]. For these three components, the filling factor is assumed to be constant.
The rest of the galactic volume is assumed to be the thick disk. Motivated by Gaensler et al. (2008), which found that the filling factor increases exponentially from the galactic plane with a scale height of approximately 0.7 kpc, we assume that the filling factor is dependent on the \(z\) galactic coordinate. The authors of Gaensler et al. (2008) also suggest that above \(\sim\)2 kpc, the filling factor decreases until \(\sim\)5 kpc, where it reaches a constant value. We take a functional form for the filling factor of the thick disk that increases exponentially from a value \(a\) in the plane to \(ae^{2}\) at two scale heights above the plane and subsequently decreases exponentially to an asymptotic value \(b\) at high latitudes, i.e.
\[f=\begin{cases}ae^{|z|/0.7}&|z|<1.4\text{ kpc}\\ b\big{[}1-\big{(}1-\frac{ae^{2}}{b}\big{)}e^{-(|z|-1.4)/0.7}\big{]}&|z|\geq 1. 4\text{ kpc}\end{cases}. \tag{15}\]
Simply due to its larger volume relative to the thin disk, spiral arms, or galactic center, the thick disk and its filling factor are more important in terms of the magnitude of the free-free absorption compared to the other galactic components. Specifically \(a\), which determines the filling factor near the plane where the free electron density is highest, is likely to be the most important quantity with regards to the absorption. \(b\), which describes the filling factor at high latitudes, where the free electron density is much lower, has a much smaller impact on the absorption. In order to minimize the dimensionality of the parameter space as much as possible, which increases the efficiency of nonlinear sampling methods, we treat only \(a\) as a free parameter, while holding \(b=0.01\), \(f_{\text{thin}}^{-1}=7\), \(f_{\text{arm}}^{-1}=3\), and \(f_{\text{GC}}^{-1}=1\times 10^{5}\) at constant values. These values are consistent with those used in P22.
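To make the absorption model concrete, the following sketch evaluates the thick-disk filling factor of Equation 15 and accumulates the optical depth of Equations 11-14 along a sampled line of sight; the array-based interface, the unit conversions, and the fixed \(T_{e}=8000\) K are assumptions consistent with the text.

```python
import numpy as np

def thick_disk_filling_factor(z_kpc, a, b=0.01):
    """Thick-disk filling factor f(z) of Eq. (15)."""
    z = np.abs(z_kpc)
    rising = a * np.exp(z / 0.7)
    falling = b * (1.0 - (1.0 - a * np.exp(2.0) / b) * np.exp(-(z - 1.4) / 0.7))
    return np.where(z < 1.4, rising, falling)

def free_free_optical_depth(nu_mhz, ne_bar, f, ds_kpc, T_e=8000.0):
    """Optical depth of Eq. (11) with EM = sum f^-1 * ne_bar^2 * ds, Eq. (14).
    ne_bar [cm^-3] and f are sampled along the LOS with constant step ds_kpc."""
    em = np.sum(ne_bar**2 / f) * ds_kpc * 1.0e3  # EM in pc cm^-6 (1 kpc = 1000 pc)
    return (3.28e-7 * (T_e / 1.0e4) ** (-1.35)
            * (nu_mhz / 1.0e3) ** (-2.1) * em)
```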
### The Sun
Assuming that any transient solar emission, such as flares or bursts, has been removed through the excision process described above, the quiescent Sun contributes a brightness temperature of approximately 5800 K, the blackbody temperature of the solar photosphere. The brightness temperature of the Sun is dwarfed by that of the galaxy, but the Sun also acts as a blocking body, occulting the galaxy behind it. Table 3 provides the distance, apparent angular diameter, and apparent galactic coordinates of the Sun relative to PSP. The angular sizes of the Sun in Table 3 assume the Sun ends at the photosphere. While plasma at higher solar radii may still obscure the sky at these frequencies (the plasma frequency at 10 solar radii can be \(\sim\)1 MHz), even increasing the apparent angular size of the Sun by an order of magnitude would still result in an extremely small effect, as described in the following paragraph. While the Sun appears slightly larger from PSP than it does from Earth, its angular size is still relatively small due to the fact that the coning rolls are performed near the apocenter of PSP's orbit.
The Sun will have the biggest effect on the sky brightness if it is blocking the brightest portion of the sky. Assuming that the portion of sky blocked by the Sun is constant at this maximum brightness \(T_{\text{max}}\), then the fractional portion of the all-sky brightness blocked by the Sun is
\[\frac{\Omega_{\odot}}{4\pi}\frac{T_{\text{max}}}{T_{\text{mean}}}. \tag{16}\]
Given the solid angles in Table 3, \(\Omega_{\odot}/4\pi\approx 10^{-5}\). Estimating the \(T_{\text{max}}/T_{\text{mean}}\) is a bit more difficult since low resolution maps will tend to underestimate this ratio. For the high resolution (\(n_{\text{side}}=128\)) Haslam map at 408 MHz (Haslam et al., 1982), the ratio is \(T_{\text{max}}/T_{\text{mean}}\approx 20\). The max to mean ratio is unlikely to be larger than this in the FIELDS band because the absorption will likely reduce the maximum apparent brightness. Using \(T_{\text{max}}/T_{\text{mean}}=20\), the fractional difference in the all-sky brightness caused by the Sun is (at worst) of order \(10^{-4}\), or a hundredth of a percent. While differences of this order may be important for extremely sensitive analyses, such as extraction of the 21-cm signal, it is dwarfed by both the statistical and systematic uncertainties of the FIELDS observations. We do implement the presence of the Sun in our modeling (using the quantities in Table 3), but it is not a significant component of the sky brightness and its contribution to any uncertainty in the model can be considered negligible.
\begin{table}
\begin{tabular}{c c c c c} \hline date & \(d_{\odot}\) (AU) & \(D_{\odot}\) (\({}^{\prime}\)) & \(\Omega_{\odot}\) (sr) & \((l,\,b)_{\odot}\) \\ \hline \hline
12/03/2020 & 0.795 & 40.2 & 1.076e-4 & (48.26, -53.69) \\
04/03/2020 & 0.815 & 39.3 & 1.024e-4 & (68.95, -60.86) \\
03/14/2020 & 0.816 & 39.2 & 1.023e-4 & (34.92, -44.56) \\
07/21/2019 & 0.809 & 39.6 & 1.041e-4 & (71.34, -61.23) \\
12/17/2018 & 0.801 & 40.0 & 1.061e-4 & (23.56, -30.68) \\ \hline \end{tabular}
\end{table}
Table 3: Quantities related to the Sun relative to PSP for each of the five coning roll days. The quantities are the distance between the Sun and PSP, the apparent angular diameter of the Sun in arcminutes (the angular diameter of the Sun as viewed from Earth is approximately \(30^{\prime}\)), the solid angle subtended by the Sun, and the apparent galactic longitude and latitude coordinates of the Sun.
While differences of this order may be important for extremely sensitive analyses, such as extraction of the 21-cm signal, it is dwarfed by both the statistical and systematic uncertainties of the FIELDS observations. We do implement the presence of the Sun in our modeling (using the quantities in Table 3), but it is not a significant component of the sky brightness and its contribution to any uncertainty in the model can be considered negligible.
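For reference, the worst-case blocked fraction of Equation 16 can be checked in a couple of lines (using the 03/14/2020 solid angle from Table 3 and the Haslam-based ratio of 20):

```python
import numpy as np

omega_sun = 1.023e-4                 # sr, from Table 3 (03/14/2020)
t_max_over_t_mean = 20.0             # estimated from the 408 MHz Haslam map
fraction = omega_sun / (4 * np.pi) * t_max_over_t_mean
print(f"worst-case blocked fraction: {fraction:.1e}")   # ~1.6e-4
```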
### Synthetic Observations
This section describes how each of the components described above are used to construct the synthetic FIELDS observations that make up the model that is fit to the data. The sky viewed by the antennas (before accounting for the beam pattern) is given by an integral along the LOS:
\[\begin{split} T_{\text{sky}}(l,b,\nu)=&\int_{0}^{s_ {g}}\varepsilon(l,b,s,\nu)e^{-\tau(l,b,s,\nu)}ds\\ &+T_{E}(\nu)e^{-\tau(l,b,s_{g},\nu)},\end{split} \tag{17}\]
where \(s_{g}\) is the pathlength to the edge of the galaxy. The beam of either the V1-V2 or V3-V4 effective dipole is then rotated to the correct position on the sky according to the SPICE orientation data, and the brightness temperature of the synthetic observation is given by
\[T_{\text{ant}}(\nu)=\frac{\int_{4\pi}B(\theta,\phi,\nu)T_{\text{sky}}(\theta, \phi,\nu)d\Omega}{\int_{4\pi}B(\theta,\phi,\nu)d\Omega}. \tag{18}\]
The Stokes I beam function \(B(\theta,\phi,\nu)\), describing the sensitivity of the antennas to incoming unpolarized radiation, is provided by the CST simulation described in Section 3.1.13
Footnote 13: Though a small amount of polarized radiation can potentially couple into the Stokes I spectrum through “polarization leakage,” we do not account for this effect here.
The synthetic antenna temperature spectrum is calculated for 500 evenly spaced points over the course of a single roll period. The antenna temperature in Kelvin is then converted to power spectral density through Equation 2. For fitting the roll-averaged spectra, these 500 synthetic spectra are averaged together, whereas for the phase-binned data a cubic spline interpolation is performed in order to evaluate the model at the center of the phase bins.
Although the above equations that govern real observations involve continuous integrals, to calculate the synthetic observations we must rely on discrete sums that estimate the true values of the integrals. We utilize the HEALPix formalism (Gorski et al., 2005) to discretize maps of the antenna beam, absorption, and emission. Each time the parameters of the absorption or emission change, we must re-calculate the integrals in Equations 17 and 18. For each pixel, we sample \(s\) in steps of 0.1 kpc out to a maximum distance of 50 kpc. Each time we wish to create a synthetic observation, we must evaluate both the emission and absorption model at each step in \(s\), for each map pixel, and for each frequency channel. In order to perform a nonlinear fit, which involves constructing potentially hundreds of thousands of synthetic observations, we use \(n_{\text{side}}=4\), which corresponds to \(n_{\text{pixels}}=3,072\). This results in relatively low spatial resolution maps, but the loss of resolution is mitigated by the extremely large size of the beam. Higher values of \(n_{\text{side}}\) result in intractable computation times to perform a fit, even with high performance computing resources.
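The discretized versions of Equations 17 and 18 amount to a sum over HEALPix pixels and line-of-sight steps. The sketch below (Python with healpy; the `beam`, `emissivity`, and `opacity` callables are placeholders for the components described above, evaluated at a single frequency) shows the structure of the calculation; because HEALPix pixels are equal-area, the solid-angle element cancels in the beam-weighted average.

```python
import numpy as np
import healpy as hp

def synthetic_antenna_temperature(beam, emissivity, opacity, T_E=0.0,
                                  nside=4, ds=0.1, s_max=50.0):
    """Discrete estimate of Eqs. (17)-(18). `opacity(theta, phi, s)` returns
    the cumulative free-free optical depth out to each distance s (kpc),
    `emissivity(theta, phi, s)` the emissivity, and `beam(theta, phi)` the
    Stokes I beam, all at one frequency."""
    npix = hp.nside2npix(nside)
    theta, phi = hp.pix2ang(nside, np.arange(npix))
    s = np.arange(ds, s_max + ds, ds)              # steps along the LOS

    T_sky = np.empty(npix)
    for i in range(npix):
        tau = opacity(theta[i], phi[i], s)
        eps = emissivity(theta[i], phi[i], s)
        # LOS integral of attenuated emission plus the attenuated extragalactic term
        T_sky[i] = np.sum(eps * np.exp(-tau) * ds) + T_E * np.exp(-tau[-1])

    B = beam(theta, phi)
    return np.sum(B * T_sky) / np.sum(B)           # Eq. (18), equal-area pixels
```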
### Fitting
Let \(\mathbf{y}\) be the data vector of the five concatenated roll-averaged spectra and let \(\mathbf{\mathcal{M}}(\mathbf{\theta})\) be the synthetic model of the data with parameters \(\mathbf{\theta}\), which is constructed from the components described above. To fit the model to the data we construct a likelihood function
\[\begin{split}\mathcal{L}(\mathbf{y}|\mathbf{\theta})=&|2 \pi\mathbf{C}|^{-1/2}\\ &\times\exp\biggl{\{}-\frac{1}{2}[\mathbf{y}-\mathbf{\mathcal{M}}(\mathbf{ \theta})]^{T}\mathbf{C}^{-1}[\mathbf{y}-\mathbf{\mathcal{M}}(\mathbf{\theta})]\biggr{\}}, \end{split} \tag{19}\]
where \(\mathbf{C}\) is the covariance matrix described in Equation 7. The data vector, covariance matrix, and the model are derived from either the roll-averaged data or the phase-binned data, which are fit separately, though the parameters of the model are the same. Ultimately, we want to explore the posterior distribution of the parameters given the data, \(p(\mathbf{\theta}|\mathbf{y})\), which is given by Bayes' theorem,
\[p(\mathbf{\theta}|\mathbf{y})=\frac{\mathcal{L}(\mathbf{y}|\mathbf{\theta})\cdot\pi(\mathbf{ \theta})}{\mathcal{Z}}, \tag{20}\]
where \(\pi(\mathbf{\theta})\) is the prior distribution on the parameters and the normalization term \(\mathcal{Z}\) is often referred to as the evidence.
To sample from the posterior distribution and perform parameter estimation, we use a nested sampling algorithm, which is able to efficiently sample parameter spaces with multi-modal distributions or significant degeneracies between parameters.14 Specifically, we use the PyMultiNest15 implementation (Buchner et al., 2014) of the MultiNest algorithm (Feroz et al., 2009, 2019).
Footnote 14: Nested sampling is better in this regard compared to Markov Chain Monte Carlo (MCMC) algorithms, another commonly used sampling method.
Footnote 15: [https://github.com/JohannesBuchner/PyMultiNest](https://github.com/JohannesBuchner/PyMultiNest)
The parameters of the model and their prior distributions that are given to PyMultiNest to begin the sampling are shown in Table 4. \(a\) describes the filling factor of the absorption in the thick-disk component at the galactic plane (see Section 3.3), while \(A\), \(R_{0}\), \(Z_{0}\), and \(\beta\) describe the magnitude, radial scale height, vertical scale height, and spectral index (in brightness temperature) of the emission (see Section 3.2). The primary hyper-parameters of the nested sampling algorithm are \(n_{\rm live}\) and \(\mathcal{Z}_{\rm tol}\), which describe the number of active samples utilized by the algorithm and the tolerance on the Bayesian evidence, which is used as a stopping criterion. For the results presented in the following section, we use \(n_{\rm live}=5000\) and \(\mathcal{Z}_{\rm tol}=0.1\).16
Footnote 16: While the default value for \(n_{\rm live}\) is 400, we found that we needed to increase this value significantly to obtain accurate results, likely due to the degeneracies present in the parameter space.
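A minimal sketch of how this fit can be set up with PyMultiNest is shown below (assuming `y`, `C`, and a `model(theta)` function implementing the synthetic observations already exist; the prior bounds and sampler settings follow Table 4 and the values quoted above):

```python
import numpy as np
from pymultinest.solve import solve

# y: concatenated roll-averaged spectra; C: covariance matrix of Eq. (7);
# model(theta): synthetic observation M(theta). All assumed to be defined.
C_inv = np.linalg.inv(C)
_, log_det = np.linalg.slogdet(2 * np.pi * C)

def prior_transform(cube):
    """Map the unit hypercube to the uniform priors of Table 4,
    with theta = (a, A, R0, Z0, beta)."""
    lo = np.array([0.0,   0.0,  0.0,  0.0, -3.0])
    hi = np.array([1.0, 200.0, 30.0, 30.0, -2.0])
    return lo + cube * (hi - lo)

def log_likelihood(theta):
    r = y - model(theta)
    return -0.5 * (r @ C_inv @ r + log_det)        # log of Eq. (19)

result = solve(LogLikelihood=log_likelihood, Prior=prior_transform, n_dims=5,
               n_live_points=5000, evidence_tolerance=0.1,
               outputfiles_basename='fields_roll_avg_')
```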
## 4 Results and Discussion
### Roll-averaged Spectra
We first fit the roll-averaged spectra, fitting all five rolls simultaneously. A quantitative assessment of the validity of fitting data from different roll maneuvers simultaneously is contained in Appendix B. Figure 6 compares the 99.7% confidence interval of the best-fit reconstruction to the FIELDS roll-averaged spectra for each of the 5 coning roll maneuvers. Examining the reconstruction intervals, we find that they agree quite well with the FIELDS spectra. In each of the five spectra, the model is able to reproduce the maximum in the brightness between 3 and 4 MHz as well as the slope both above and below the maximum. As discussed in Section 2.2, there is noticeable scatter, particularly above 3 MHz, but this is unlikely to be intrinsic to the sky and is accounted for by the covariance matrix used to perform the fit. Since we are fitting a nonlinear model, we choose not to use the traditional chi squared goodness-of-fit statistic because of the difficulty in estimating the number of degrees of freedom for nonlinear models (see, e.g., Andrae et al., 2010). Instead, we examine the distribution of the normalized residuals in Appendix A.
While the model reconstructions are contained in a small interval shown by the gray contours in Figure 6, the parameter values can still vary significantly and there exist strong covariances between some of the parameters. Figure 7 displays the parameter posterior distribution in detail through a corner plot.
\begin{table}
\begin{tabular}{c c c} \hline Parameter & Units & Prior \\ \hline \hline \(a\) & – & Unif(0, 1) \\ \hline \(A\) & K kpc\({}^{-1}\) & Unif(0, 200) \\ \hline \(R_{0}\) & kpc & Unif(0, 30) \\ \hline \(Z_{0}\) & kpc & Unif(0, 30) \\ \hline \(\beta\) & – & Unif(-3, -2) \\ \hline \end{tabular}
\end{table}
Table 4: Parameters of the model \(\boldsymbol{\mathcal{M}}(\boldsymbol{\theta})\) for both the roll-averaged and phase-binned data and their prior distributions.
\begin{table}
\begin{tabular}{c c c} \hline Parameter & Units & MAP Estimate \\ \hline \hline \(a\) & – & \(0.046^{+0.007}_{-0.009}\) \\ \hline \(R_{0}\) & kpc & \(1.746^{+0.232}_{-0.390}\) \\ \hline \(Z_{0}\) & kpc & \(7.775^{+9.064}_{-2.831}\) \\ \hline \(\beta\) & – & \(-2.431^{+0.065}_{-0.085}\) \\ \hline \end{tabular}
\end{table}
Table 5: One-dimensional maximum a posteriori estimates of the four non-marginalized free parameters from the roll-averaged fit. Uncertainties on the MAP estimates encompass the 68% confidence interval.
Figure 6: FIELDS HFR roll-averaged spectra for the five coning roll maneuvers listed in Table 1 (solid dots). The gray contours indicate the 99.7% confidence intervals obtained by fitting the simulated observation model to the FIELDS observations using PyMultiNest.
The corner plot shows the posterior of 4 of the 5 free parameters. The fifth free parameter, \(A\), has been marginalized because of its direct degeneracy with the \(\Gamma\) and \(l_{\rm eff}\) calibration parameters (see Equation 1) and its lack of physical relevance compared to the other free parameters. The most notable covariances agree with general expectations of how the model should behave. For example, the spectral index \(\beta\) and the thick disk filling factor \(a\) have a strong positive covariance. As \(a\) increases, the free-free optical depth decreases, leading to less absorption, which must be balanced by a more positive spectral index.
One-dimensional constraints on each parameter are given in Table 5. The values of each of the parameters are mostly in agreement with expectations. Gaensler et al. (2008) estimated \(a=0.04\pm 0.01\), which is consistent with the constraint of \(a=0.046^{+0.007}_{-0.009}\) obtained from our fit.
Figure 7: Corner plot of the posterior distribution of 4 model parameters for a fit to the roll-averaged FIELDS data. While \(A\) (see Equation 9) is fit as a free parameter, we marginalize over it. Data from all 5 roll maneuvers are fit simultaneously. The two contours in each panel show the 68 and 95% confidence areas, while the shaded grid squares plot the two-dimensional histogram for each pair of parameters. The one-dimensional histograms show the full marginalized posterior distribution for each parameter.
\(R_{0}=1.746^{+0.232}_{-0.390}\) and \(Z_{0}=7.775^{+9.064}_{-2.831}\) appear to be reasonable given that the radius of the Milky Way is \(\sim 30\) kpc. Note, however, the large upper uncertainty interval for \(Z_{0}\) caused by the tail of the distribution, which extends to the prior bound at \(Z_{0}=30\) kpc. Perhaps an important discrepancy in these two parameters is that \(Z_{0}>R_{0}\) for the entire posterior distribution, which is inconsistent with Cong et al. (2021), which found \(R_{0}=3.41\) kpc and \(Z_{0}=1.12\) kpc.17 Again, though, these values were obtained by fitting a map that is two orders of magnitude higher in frequency than the FIELDS HFR band. \(\beta=-2.5\) is generally used as a fiducial value for the spectral index, which is consistent with our result of \(\beta=-2.431^{+0.065}_{-0.085}\).
Footnote 17: Cong et al. (2021) did not publish uncertainties for these estimates.
### Phase-binned Spectra
Now that we have fit the roll-averaged spectra with our model, we turn to the phase-binned data. If a model is an accurate representation of the true sky, the same parameter values should be able to fit both the roll-averaged and phase-binned data, since both are derived from the same observations viewing the same sky. To this end, in Figure 8 we compare our forward model of the phase-binned observations, using the maximum a posteriori (MAP) values of the roll-averaged fit from Figure 7, to the phase-binned data from the 03/14/2020 roll maneuver. A visual assessment demonstrates a clear mismatch between the data and the model. Even though the peaks and troughs of both the model and the data appear to coincide in phase, their magnitudes are generally quite different. We show data from only one roll maneuver in Figure 8, but comparisons to data from the other four roll maneuvers yield similar levels of (dis)agreement.
If, instead of using the roll-averaged MAP parameters for the forward model, we fit the phase-binned data itself, we find that the model generally appears to be insufficient to fully fit the phase-binned data. Utilizing the same nested sampling algorithm that was described in Section 3.6, we find that the posterior distributions of the model parameters tend to converge towards the edge of the prior boundaries, even when the prior bounds are increased to nonphysical regions of parameter space. For example, \(a\), the parameter describing the filling factor of free electrons in the thick disk, prefers values \(>1\), which corresponds to greater than 100% of the line-of-sight being filled with ionized hydrogen, clearly an unrealistic scenario. For this reason, we do not reproduce any posterior distributions or fits to the phase-binned data in this work; instead, we argue that the model, particularly the analytic function for the galactic emissivity, is insufficient to accurately describe the phase-binned data. In the following section we describe aspects of the model that can potentially be improved in the future to better represent the true spatial distribution of the sky.
## 5 Conclusions
The purpose of this work is to utilize PSP/FIELDS observations to investigate the low frequency sky between 1 and 6 MHz. While P22, a companion to this paper, uses a model-agnostic spherical harmonic decomposition to describe the spatial distribution of the sky in the \(l=0\) and \(l=2\) modes, in this work we develop a nonlinear forward model of the observations and fit the model to the FIELDS data using a Bayesian nested sampling algorithm, yielding constraints on the underlying parameters. Before performing the fits, we split the FIELDS data into two different components: the roll-averaged spectrum, in which we average data from the entire roll, and the phase-binned spectrum, in which the roll period is broken into segments of equal phase.
Fitting the roll-averaged data with our five parameter model (one parameter associated with absorption and four parameters associated with emission) produces well-behaved posterior constraints for all five parameters. Comparing the MAP model reconstruction with the roll-averaged spectra shows good agreement. In general, the parameter constraints agree with previously published estimates, with the exception of the \(R_{0}\) and \(Z_{0}\) scale heights of the emissivity function, which prefer slightly larger values than were given in Cong et al. (2021).
While our forward model appears to be able to represent well the roll-averaged data, this is not the case for the phase-binned data. Using the MAP parameter values from the roll-averaged fit gives a reconstruction that is in poor agreement with the phase-binned data. If instead we fit the phase-binned data itself using the same nested sampling algorithm, the posterior parameter distributions become bunched near the edge of the prior bounds, attempting to reach unphysical portions of the parameter space. Even after significantly increasing the prior bounds, the model is still unable to provide a good fit. This inability to fit is likely caused by the greater amount of information about the spatial structure of the sky in the phase-binned data compared to the roll-averaged data. Averaging spectra together over the entire rolls smears the structure of the sky together, discarding some information in the process. This smearing effect seems to smooth out the sky to a sufficient level that it can be fit by our model. The phase-binned data,
in contrast, does not appear to smooth the sky to a sufficient level, revealing the inadequacy of the model.
There are several ways in which our forward model is likely insufficient and may be improved in the future to better fit the phase-binned FIELDS data (and other future low frequency experiments, such as upcoming lunar-based observations). Perhaps the most obvious shortcoming of the model is that the emissivity function is quite smooth in terms of spatial structure, whereas the true emission at these frequencies likely has significant small scale structure. While the analytic emissivity function used in this work has a significant advantage in terms of ease of evaluation, it cannot accurately represent small-scale structure. The emissivity function is also symmetric between the northern and southern galactic hemispheres. In reality, galactic features such as the northern polar spur and Loop I are likely to break this symmetry.
Figure 8: V1-V2 (left) and V3-V4 (right) antenna autocorrelations from the 03/14/2020 roll maneuver binned into 40 equal segments of roll phase compared to a forward model of the FIELDS observations (red curves) using the maximum a posteriori parameter values from the roll-averaged fit (Figure 7). Each binned curve is a different frequency channel, with the color of the curve corresponding to the frequency indicated by the color bar on the right. Each binned curve is mean subtracted before an artificial offset of 2 nV\({}^{2}\)/Hz times the index of the frequency channel is added to separate the curves such that they can be viewed simultaneously. The error bars on each bin are calculated according to Equation 8.
In order to better represent the true spatial structure of the sky, the emissivity function likely needs to be altered in some way to take these two features into account. Finally, the free-free absorption model may also need to be improved. Though the YMW16 model claims to be more accurate than the earlier NE2001 model, estimates of the optical depth are still a large source of potential error. Even with our implementation of the galactic component model for the filling factor, estimating the emission measure, the operative term in the optical depth calculation, is still highly uncertain. Even if the filling factor is known exactly, the model must infer the electron density distribution for the entire galaxy from a sample of only 189 lines-of-sight. A better forward model of the FIELDS observations will likely require a better absorption model, whether that be from a larger number of pulsar dispersion measurements, a more detailed model of galactic structure, or both.
We conclude by noting the implications of this work for future analysis of low frequency observations, particularly efforts to measure the Dark Ages portion of the global 21 cm signal, which lies in the frequency range \(\sim 1-50\) MHz. Measurement of the global signal involves differentiating it from the emission we have discussed here, which in the case of the Dark Ages can be over 6 orders of magnitude brighter than the 21 cm signal. While some (such as the EDGES collaboration in Bowman et al., 2018) have utilized general, polynomial-based models to fit the foreground, these methods have the potential to produce false detections (see Tauscher et al., 2020) even at higher frequencies, where the brightness of the foreground emission relative to the 21 cm signal is smaller. As argued in Tauscher et al. (2020), the most robust analysis method for extracting the 21 cm global signal involves binning observations according to the relative orientation of the antennas on the sky,18 taking advantage of the fact that the isotropic global signal remains constant. The results obtained from fitting our forward model of FIELDS observations, and the inability of the model to fit the phase-binned spectra, suggest that significant progress is needed in order to produce a physical model of the sky below 6 MHz that is accurate enough to be used for robust Dark Ages global 21 cm signal extraction.
Footnote 18: In the case of Tauscher et al. (2020), the binning is done in local sidereal time (LST) for a ground-based instrument, but this is functionally equivalent to the phase bins used in this work.
## 6 Acknowledgements
This work is directly supported by the National Aeronautics and Space Administration (NASA) Solar System Exploration Research Virtual Institute cooperative agreement number 80ARC017M0006. This work was also partially supported by the Universities Space Research Association via DR using internal funds for research development. DR's work was also partially supported by NASA grant 80NSSC23K0013. This work utilized the Blanca condo computing resource at the University of Colorado Boulder. Blanca is jointly funded by computing users and the University of Colorado Boulder. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Bang D. Nhan is a Jansky Fellow of the National Radio Astronomy Observatory. The Parker Solar Probe/FIELDS experiment was developed and is operated under NASA contract NNN06AA01C.
|
2306.13835
|
Computron: Serving Distributed Deep Learning Models with Model Parallel
Swapping
|
Many of the most performant deep learning models today in fields like
language and image understanding are fine-tuned models that contain billions of
parameters. In anticipation of workloads that involve serving many of such
large models to handle different tasks, we develop Computron, a system that
uses memory swapping to serve multiple distributed models on a shared GPU
cluster. Computron implements a model parallel swapping design that takes
advantage of the aggregate CPU-GPU link bandwidth of a cluster to speed up
model parameter transfers. This design makes swapping large models feasible and
can improve resource utilization. We demonstrate that Computron successfully
parallelizes model swapping on multiple GPUs, and we test it on randomized
workloads to show how it can tolerate real world variability factors like
burstiness and skewed request rates. Computron's source code is available at
https://github.com/dlzou/computron.
|
Daniel Zou, Xinchen Jin, Xueyang Yu, Hao Zhang, James Demmel
|
2023-06-24T01:38:23Z
|
http://arxiv.org/abs/2306.13835v1
|
# Computron: Serving Distributed Deep Learning Models
###### Abstract
Many of the most performant deep learning models today in fields like language and image understanding are fine-tuned models that contain billions of parameters. In anticipation of workloads that involve serving many of such large models to handle different tasks, we develop Computron, a system that uses memory swapping to serve multiple distributed models on a shared GPU cluster. Computron implements a model parallel swapping design that takes advantage of the aggregate CPU-GPU link bandwidth of a cluster to speed up model parameter transfers. This design makes swapping large models feasible and can improve resource utilization. We demonstrate that Computron successfully parallelizes model swapping on multiple GPUs, and we test it on randomized workloads to show how it can tolerate real world variability factors like burstiness and skewed request rates. Computron's source code is available at [https://github.com/dlzou/computron](https://github.com/dlzou/computron).
## 1 Introduction
In recent years, researchers and practitioners have dramatically improved the performance of deep learning models, particularly large language models (LLMs), using two techniques: massive parameterization and fine-tuning. Many pre-trained models with billions of parameters have been released, and each of them is being customized for a myriad of tasks through fine-tuned variants. In a plausible scenario, organizations would host many of these large models, each similar in architecture and size but tuned to some specific task, to serve the needs of their internal personnel and external users.
The usual way to serve a model using GPUs is to keep all of its parameters in GPU memory so that inference runs directly on the accelerator device. When a model is too large to fit in a single GPU's memory, a common technique is to distribute it to multiple GPUs through model parallelism. The amount of memory onboard each GPU is limited, so an organization would need to purchase many GPUs to serve all of its models, which could be quite expensive. Worse, the costly hardware is underutilized when some models receive requests at low or irregular rates.
Among existing ML serving systems, some such as AlpaServe [7] and Energon-AI [4] employ model parallelism to serve large models in a distributed fashion. There are also systems like Clockwork [5] that use memory swapping to overcome the limitation of GPU memory and improve utilization.
In this paper, we present Computron, a prototype serving system that unifies model parallelism and swapping. Computron makes it possible to serve multiple large distributed models that, in total, can exceed the memory capacity of the GPU cluster they share. In terms of usability, Computron supports Colossal-AI's [6] functionality for easy model parallelism during model development, and it integrates with asynchronous Python frameworks for service deployment. We discuss several ordering and synchronization problems that constrain the design of such a system, and we explain how our design solves these problems to achieve parallelized swapping on multiple GPUs. We evaluate Computron in two ways. First, we isolate the swapping component to demonstrate that model parallel swapping does in fact reduce the time taken to load a distributed model into GPU memory. Second, we test Computron under more realistic conditions on randomly generated workloads that simulate conditions where requests may be bursty and skewed to a subset of models.
## 2 Background and Problem
Deep learning is being rapidly adopted in countless business and scientific applications. In many of these applications where some form of service deployment is involved, the serving system is a crucial component of the deep learning workflow. These systems generally operate in a request-response paradigm by listening to inference requests, running the requested model on inputs on specialized hardware such as
GPUs, then responding with the output. The design of such a system revolves around the tradeoff of reducing latency so that end users experience less waiting versus increasing efficiency to save operational cost, all without compromising model accuracy.
Recent research in language models has popularized the practice of fine-tuning, where either a portion or all of a pre-trained model's parameters are trained on new data from a specific task in order to achieve higher accuracy. For example, the pre-trained BERT [3] model can be fine-tuned to a variety of language understanding tasks--from text classification to part-of-speech tagging to natural language inference--just by retraining the last layers, and GPT-3 [2] has been fine-tuned on human feedback so that the resulting InstructGPT [10] model can better align to user intentions. Fine-tuning generally involves none or minor modifications of the model architecture. As fine-tuned models become commonplace, so will workloads that involve serving multiple models with highly similar memory footprints and access patterns.
A second trend spearheaded by the language modeling community has been to increase model size in pursuit of better accuracy and generalizability. At the extreme, Megatron-Turing NLG [11] contains 530 billion parameters, and the comparatively space-efficient LLAMA [12] still contains up to 65 billion parameters. On Chatbot Arena [14], a platform where humans rate the quality of language model outputs, most models with competitive performance have at least 6 billion parameters.
Serving multiple instances of such large models would exceed the memory capacity of all consumer GPUs and many high-end ones. The standard solution is to distribute large models across multiple GPUs using model parallelism. Two forms, tensor parallelism (TP) and pipeline parallelism (PP), are well-studied and commonly used in training workloads [8], but they are just recently beginning to see use in serving systems as well. Even when all models can fit into aggregate GPU memory, the AlpaServe [7] team found that there is still reason to use model parallelism in a serving system because it can reduce latency in real world workloads.
Another challenge of serving deep learning models is that real world serving workloads are often unpredictable. Request arrival distributions may be bursty. Furthermore, across multiple models, request rates may be skewed--some models may receive a lot more requests than others--and the rates may shift over time. Hosting all of these models in GPU memory leads to underutilization in the face of dynamic request patterns, as resources are over-provisioned to models with low request rates. The resource allocation imbalance is worse for localized services that expect irregular traffic, and the hardware cost is higher when larger models distributed across multiple devices are involved. Both of these factors act against the interests of smaller organizations.
We survey a number of prior works and find that while there are many designs from which to take inspiration, to the best of our knowledge, no single system addresses both the model parallelism and the resource utilization challenges that we have outlined. AlpaServe [7] and Energon-AI [4] are capable of serving large models using model parallelism, but they host all models on GPUs following some form of static assignment and are thus bounded by the amount of available GPU memory. Clockwork [5] serves many deep learning models on a limited number of GPUs by swapping models between CPU and GPU memory, which works well when the models are on the order of millions of parameters or less. However, this approach is not suited for larger models with billions of parameters that can take several seconds to transfer. ZeRO-Inference [1] parallelizes parameter transfers across GPUs in order to multiply CPU-GPU bandwidth, but it is meant for individual layers within a single massive model.
## 3 Design
In §2, we provide motivation for the problem of serving multiple distributed deep learning models, and we identify the key issue of resource underutilization when handling bursty, skewed requests. To deal with these challenges, we propose a serving system that co-locates multiple models on the same cluster of GPUs and dynamically swaps distributed model parameters between CPU and GPU memories. Active models are swapped into GPU memory so that requests can be served quickly, while unused models are swapped out to reduce the unnecessary consumption of resources.
The underlying technique of offloading unused models to CPU memory has already been applied by state-of-the-art serving systems like Clockwork [5] to great effect. Our particular approach is comparable to demand paging in the context of virtual memory management; when a model is requested whose parameters do not currently reside in GPU memory, a replacement policy is used to pick another model to swap out, and then the requested model is swapped in. Like other systems, we assume the existence of large CPU memory to hold unused model parameters. More sophisticated fetching algorithms can be used here instead, but are beyond the scope of this paper.
Inspired by prior works, we seek to investigate whether model parallelism can also be used to reduce the latency of model swapping. We hypothesize that on systems where GPUs have independent PCIe links to the CPU, by specifying higher degrees of TP and PP to distribute model shards across more GPUs, model parameter shards can be loaded in parallel to take advantage of greater aggregate link bandwidth between the CPU and GPUs. A similar optimization is used by ZeRO-Inference. Should this prove true in practice, it would become feasible to perform dynamic swapping while serving large models that share a group of devices, just like smaller models. However, a number of design considerations arise when we put our hypothesis to the test.
### Architecture
Our system uses an engine-worker architecture to manage multiple distributed model instances at the same time. The centralized engine receives, queues, and waits for the completion of all model requests. Workers are launched per GPU in accordance with a user-provided parallel configuration (TP and PP dimensions) to manage shards of model parameters. Because we assume all models have a similar size and architecture, we make the simplification to co-locate each distributed model instance onto the GPU cluster using the same configuration. Fig. 1 gives an example of the architecture for a \(TP=2\), \(PP=2\) configuration.
When the engine receives a request for some model, it pushes the request object along with a timestamp into a queue specifically for that model. Concurrently, the engine repeatedly picks a queue from which to pop the oldest request objects, then packs and submits them to workers in the first pipeline stage as a single batch entry. Workers at each pipeline stage evaluate batch entries in submitted order, up to the last stage, at which point the last-stage workers send the batch output back to the engine. PP communication occurs through FIFO pipes, while TP communication is done through distributed collectives, as represented by the arrows in Fig. 1.
### Model Parallel Swapping
In our design, the responsibility of making swapping decisions is delegated to the engine, so in addition to submitting batch entries, the engine can initiate another type of action through what we refer to as load entries. A load entry commands a worker to either load or offload the parameters of an instance.
Challenges arise when designing how load entries should be submitted to and processed by distributed workers. A model can only be evaluated on a batch entry after the model's parameters are loaded into GPU memory, so as the engine schedules batch and load entries in some order it deems correct, workers must respect the load dependencies of that schedule. Furthermore, data dependencies between adjacent pipeline stage workers delay when later stage workers receive batch entries, ruling out certain designs like broadcasting the load entry, as illustrated in Fig. 2. These load and data dependencies are resolved if workers synchronously process load entries in pipeline order just like batch entries, but this naive solution has two issues: a batch entry to some model is unnecessarily blocked by load entries to another unrelated model, and no loading parallelism is achieved by workers of different stages in the same pipeline, as shown in Fig. 3.
We propose an asynchronous mechanism for handling load entries that mitigates these issues. After being submitted by the engine, load entries are pipelined through worker stages just like batch entries, but a worker does not wait for loading to complete before passing the load entry forward to the next stage. This can be done using the stream feature of the CUDA programming model. On top of the default CUDA stream that executes kernels for model inference, each worker launches two additional streams to run loading and offloading operations concurrently. A load entry is completed when every worker finishes loading/offloading and sends a response back to the engine. The engine is responsible for avoiding load dependency violations by making sure batch entries for a model are submitted to workers only after that model has been fully loaded. This design allows a later batch entry to proceed without waiting for a previous load entry involving another model to complete, and this also enables workers of different stages to load shards of a model's parameters in parallel.
Figure 1: Computron architecture for \(TP=2\), \(PP=2\). One worker is launched per GPU. Two models, A and B, are co-located in the same parallel configuration.
Figure 3: Synchronous load entry reduces loading parallelism and causes unnecessary blocking.
Figure 2: Broadcasted load entry violates load dependency.
The paths of batch and load entries through one branch of the system pipeline are depicted in Fig. 4.
One more detail is the use of pinned memory. CUDA requires the CPU-side data buffer to be in page-locked memory during CPU-GPU data transfers to prevent interruptions caused by paging. Data objects on CPU are stored in paged memory by default, so data transfers would incur an extra copy on the CPU side from paged memory to page-locked memory. We eliminate this extra data movement by making sure that when a model is offloaded, the parameters are kept pinned in CPU memory.
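To make the mechanism concrete, the sketch below shows one way the per-worker loading and offloading could look in PyTorch (a simplified illustration, not Computron's actual implementation): each shard keeps a persistent page-locked CPU buffer, and transfers run on dedicated CUDA streams so they can overlap with inference on the default stream.

```python
import torch

class ShardSwapper:
    """Simplified per-worker swapping of one model shard (illustrative only)."""
    def __init__(self, module, device='cuda'):
        self.module, self.device = module, device
        self.load_stream = torch.cuda.Stream(device)
        self.offload_stream = torch.cuda.Stream(device)
        # Page-locked staging buffers, allocated once and reused across swaps.
        self.cpu_copy = {n: torch.empty(p.shape, dtype=p.dtype, pin_memory=True)
                         for n, p in module.named_parameters()}

    def offload(self):
        with torch.cuda.stream(self.offload_stream):
            for n, p in self.module.named_parameters():
                self.cpu_copy[n].copy_(p.data, non_blocking=True)
        self.offload_stream.synchronize()
        for n, p in self.module.named_parameters():
            p.data = self.cpu_copy[n]              # GPU copy is now released

    def load(self):
        with torch.cuda.stream(self.load_stream):
            for n, p in self.module.named_parameters():
                p.data = self.cpu_copy[n].to(self.device, non_blocking=True)

    def wait_loaded(self):
        # The engine submits batch entries only after this returns.
        self.load_stream.synchronize()
```

A full worker would additionally have to order these operations with respect to incoming batch entries, as described above.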
## 4 Implementation
We build Computron as a serving system that supports model parallel swapping based on the considerations presented in §3. We borrow some components from Energon-AI [4], such as the RPC-based FIFO pipe implementation used for communication between pipelined worker stages. Just like Energon-AI, Computron is compatible with Colossal-AI [6] functionality, meaning that users can easily incorporate model parallelism in their models with minimal changes to PyTorch source code.
As Computron launches, Colossal-AI automatically handles setting up the context and communication groups for model parallelism, and it does so using the same configuration for each instance. The engine is implemented using Python's asyncio library, and request scheduling is done in a completely asynchronous fashion. Because of this, Computron integrates with asynchronous Python web frameworks such as FastAPI. Requests are scheduled in batches based on the oldest timestamp, and model swapping uses an LRU replacement policy.
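For illustration, the scheduling and replacement policy just described can be sketched as follows (a toy version, not Computron's actual engine code): per-model FIFO queues, oldest-request-first batching, and LRU replacement of the models resident in GPU memory.

```python
import time
from collections import OrderedDict, deque

class Scheduler:
    """Toy sketch of oldest-request-first batching with LRU model replacement."""
    def __init__(self, models, max_resident, max_batch_size):
        self.queues = {m: deque() for m in models}
        self.resident = OrderedDict()                  # models kept in LRU order
        self.max_resident = max_resident
        self.max_batch_size = max_batch_size

    def enqueue(self, model, request):
        self.queues[model].append((time.time(), request))

    def next_batch(self):
        pending = [(q[0][0], m) for m, q in self.queues.items() if q]
        if not pending:
            return None, []
        _, model = min(pending)                        # oldest head-of-queue wins
        q = self.queues[model]
        batch = [q.popleft()[1] for _ in range(min(self.max_batch_size, len(q)))]
        self._touch(model)
        return model, batch

    def _touch(self, model):
        if model not in self.resident and len(self.resident) >= self.max_resident:
            self.resident.popitem(last=False)          # evict LRU model (offload it)
        self.resident[model] = None
        self.resident.move_to_end(model)               # model becomes most recent
```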
## 5 Evaluation
We design two separate sets of experiments in order to characterize Computron's performance. In the first set of experiments, we intentionally induce the worst case for handling each request and measure how the time to swap models scales with model parallelism. In the second set, we generate simulated request workloads using a random arrival process to study how Computron handles more realistic scenarios.
We conduct experiments on a single GPU node of the Perlmutter supercomputer managed by NERSC. The GPU node has one AMD EPYC 7763 CPU and four NVIDIA A100 GPUs, each connected to the CPU through a PCIe 4.0 x16 link [9].
### Swapping Latency
In §3, we hypothesize that model parallelism linearly decreases the time taken to load and offload model parameters between CPU and GPU memory. To check this hypothesis, we design an experiment that forces the worst case scenario where each request must perform a swap. We launch two models concurrently and configure the engine to allow only one model to reside in GPU memory at any given time. We then send alternating blocking requests to the two models and measure the times taken to swap and execute a model at each request. The model size is fixed in order to test the strong scaling properties of CPU-GPU swapping. The model chosen is OPT-13B [13], an open-source pre-trained transformer language model released by Meta AI. Using half-precision floats, OPT-13B has a memory footprint of about 24 GB. This model is chosen because it can fit into the memory of a single A100 GPU, which serves as a baseline for swapping time.
Before running the experiment, we estimate the lower bound for how long swapping should take for comparison. Each CPU-GPU link has a bandwidth of 32 GB/s, so a single GPU is expected to load or offload an OPT-13B model instance in \(24/32=0.75\) seconds. On our test system, aggregate CPU-GPU bandwidth increases linearly with the number of GPUs, so as the model is distributed to more GPUs using either TP or PP, the swapping time is expected to inversely decrease. Swapping time includes both the offloading of one model and the loading of another, and because our asynchronous implementation overlaps the two, we measure from when the offload entry is submitted to when both offload and load entries are completed.
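The bandwidth-limited lower bound scales trivially with the number of links:

```python
# Ideal (bandwidth-limited) transfer time assumed as the lower bound above:
# 24 GB of fp16 OPT-13B parameters over 32 GB/s PCIe 4.0 x16 links per GPU.
model_gb, link_gb_per_s = 24, 32
for n_gpus in (1, 2, 4):
    print(f"{n_gpus} GPU(s): {model_gb / (link_gb_per_s * n_gpus):.3f} s")
```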
With the theoretical lower bound in mind, we first run the experiment with three trials that scale the degree of model TP. We use a small input token length of 2. Fig. 5 visualizes the results of these trials on \(TP=1\), \(TP=2\), and \(TP=4\), all with \(PP=1\). The left plot examines how average time spent swapping scales with TP. These trials confirm that the swap latency does decrease as TP increases, as we hypothesized.
Figure 4: Comparison of how batch entries and load entries are processed in a linear worker pipeline.
However, the latency on a single GPU is noticeably higher than the lower bound, and the scaling appears to be less than linear; this difference may be explained by the alpha-beta communication model. Model parameters are transmitted not as one long stream, but as separate messages for the individual tensors. Each TP shard still contains the same number of tensors as the original model, albeit smaller ones, so the same number of messages must be sent by each worker when loading. Taking the expression \(\alpha+\beta n\) for total latency, while the message size \(n\) is reduced per worker, the per-message latency term \(\alpha\) remains the same, leading to sublinear scaling.
The right plot shows swapping and execution times in proportion to the end-to-end latency. From the plot, it is clear that swapping latency remains the bottleneck in all cases, but as the number of GPUs increases, the proportion of overall time spent swapping decreases; this highlights how model parallelism benefits swapping even more than execution.
Next, we run an experiment that varies PP degree between 1, 2, and 4 worker stages. Similar to the TP experiments, Fig. 6 shows that increasing PP also decreases the swapping latency. We postulate that in this case, sublinear scaling stems from delays as a load entry is pipelined through workers. Since workers process batch entries synchronously, load entries, despite being asynchronous, must still wait for their turn.
TP and PP are often used together in practice, so we ran an additional trial with \(TP=2\), \(PP=2\). From Fig. 7, we see that the mixed parallelism configuration has lower latency than both pure TP and pure PP for the same number of workers, and it in fact approaches the ideal scaling target. The positive effect of mixing parallelism may be because the previously described overheads in the TP case and in the PP case are lessened at smaller degrees.
### Simulated Workloads
We characterize the practical performance of our model parallel swapping design with simulated workloads of serving multiple OPT-13B models on the same cluster of four A100 GPUs. For all experiments conducted, we follow the configuration of \(TP=2\), \(PP=2\).
Each simulation trial begins with several warm up requests that are not recorded. Then, requests are sent to all models over a 30 second period, with the distribution of requests to each model following a random independent Gamma arrival process. Each request has an input token length of 8. Across simulations, we vary two parameters: the assignment of mean arrival rates to each model, and the coefficient of variation (CV) that is shared by all models. For our purposes, assigning different mean arrival rates simulates how request rates skew toward a subset of models, while CV adjusts the burstiness of requests. For instance, \(CV=4\) is a high degree of burstiness, and \(Rates=(10,1,1)\) represents a skew toward the first model relative to the other two.
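The workload generator can be sketched as follows (a minimal version consistent with the description above, not necessarily the actual simulation code): a Gamma renewal process with shape \(1/CV^{2}\) reproduces the requested mean rate and burstiness.

```python
import numpy as np

def gamma_arrivals(mean_rate, cv, duration_s, seed=0):
    """Arrival times (s) from a Gamma renewal process: the inter-arrival
    distribution has mean 1/mean_rate and coefficient of variation cv."""
    rng = np.random.default_rng(seed)
    shape = 1.0 / cv**2                 # CV of a Gamma variate is 1/sqrt(shape)
    scale = cv**2 / mean_rate           # so the mean inter-arrival time is 1/rate
    times, t = [], 0.0
    while t < duration_s:
        t += rng.gamma(shape, scale)
        times.append(t)
    return np.array(times[:-1])         # drop the sample past the window

# e.g. skewed rates (10, 1, 1) req/s with shared CV = 4 over a 30 s window
workload = {m: gamma_arrivals(r, 4.0, 30.0, seed=i)
            for i, (m, r) in enumerate(zip("ABC", (10, 1, 1)))}
```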
Our first set of simulations serves three models at once, limiting to at most two models in GPU memory at all times, and we check that GPU memory usage approximately matches the footprint of two OPT-13B models. The maximum batch size is 8. Tab. 1 summarizes the average end-to-end latencies for the grid of parameters we measured, with three variations in skew and three variations in CV. Fig. 8 visualizes request latency CDFs of all models combined for each pair of (Rates, CV).
We observe a common pattern that as CV increases from 0.25 to 4, the latency tends to decrease, which can be seen both in the table of average latencies and in the CDF curves in each plot shifting toward the top left corner. This suggests Computron performs better when request patterns are bursty. Intuitively, bursty request distributions mean a higher likelihood of consecutive requests to the same model, so fewer model swaps occur because the engine schedules the oldest request first with LRU replacement.
In the three-model simulation, changing the skew of request rates only marginally increases the maximum latency and in general has little impact on the overall latency distribution. This provides evidence that Computron can tolerate workloads with imbalanced request rates.
Figure 5: Swapping latency with changing TP scale.
Figure 6: Swapping latency with changing PP scale.
Figure 7: Swapping latency for \(TP=2\), \(PP=2\).
Though our simulations only test static request rate assignments, we expect that this tolerance can also extend to dynamic scenarios where the skewness of request rates changes over time.
The second set of simulations serves six models at once, limiting to at most four models in GPU memory at all times and with the maximum batch size set to 32. The results in Fig. 9 show similar patterns as the previous three-model simulations. When \(CV=4\), the latency distribution of serving six models is actually lower than serving three models on average based on Tab. 2, which indicates that good resource utilization can be achieved when requests are bursty. On the other hand, latencies of lower CV trials are scaled by approximately a factor of two. A possible explanation for this is that in lower CV trials, GPUs have already been maximally utilized conditioned on the request distribution and scheduling order, so doubling the workload leads to doubled latency.
Looking at the bigger picture, both the three-model and the six-model simulations reveal that many requests take longer than the isolated latency measurements from §5.1 would suggest. Two possible causes for this are the scheduling algorithm and the choice of maximum batch size. Our simple oldest-request-first scheduling algorithm overlooks global information that may be used to reduce average latency. The maximum batch size trades off between the rate at which a model's request queue is drained and the compute time of that batch, and it may have some interactions with the request arrival distributions. We defer more thorough investigation of these effects to future work.
## 6 Conclusion
We design and implement Computron, a system that is capable of serving multiple deep learning models with billions of parameters on the same cluster of GPUs, and it can exceed aggregate GPU memory capacity through model parallel swapping. In isolated tests, we demonstrate that our design takes advantage of both TP and PP to speed up the swapping of distributed models. Simulated random workloads show that our system can tolerate bursty and skewed request patterns. These features enable organizations to efficiently serve many cutting-edge large models for different tasks when compute resources are limited.
An optimization that may significantly improve serving performance is to speculatively load or offload models. In real world scenarios, requests to different models are often not independent processes, but instead have predictable patterns, such as the same model being requested many times consecutively to generate a sequence, a subset of models often being requested in some fixed order, or a model being more frequently requested at a certain time of day. More sophisticated load scheduling algorithms with predictive capabilities can drastically reduce the number of on-demand swaps, and by extension, serving latency.
A problem that has not been resolved in this work is handling models with different sizes, and even different model parallelism configurations. Our system currently assumes that every model instance is evenly distributed across the cluster in the same way and with the same memory footprint. Removing that assumption brings many complexities such as the decision problem of what to load/offload when swapping and whether workers should handle each model differently.
|
2302.10613
|
Approximating Bin Packing with Conflict Graphs via Maximization
Techniques
|
We give a comprehensive study of bin packing with conflicts (BPC). The input
is a set $I$ of items, sizes $s:I \rightarrow [0,1]$, and a conflict graph $G =
(I,E)$. The goal is to find a partition of $I$ into a minimum number of
independent sets, each of total size at most $1$. Being a generalization of the
notoriously hard graph coloring problem, BPC has been studied mostly on
polynomially colorable conflict graphs. An intriguing open question is whether
BPC on such graphs admits the same best known approximation guarantees as
classic bin packing.
We answer this question negatively, by showing that (in contrast to bin
packing) there is no asymptotic polynomial-time approximation scheme (APTAS)
for BPC already on seemingly easy graph classes, such as bipartite and split
graphs. We complement this result with improved approximation guarantees for
BPC on several prominent graph classes. Most notably, we derive an asymptotic
$1.391$-approximation for bipartite graphs, a $2.445$-approximation for perfect
graphs, and a $\left(1+\frac{2}{e}\right)$-approximation for split graphs. To
this end, we introduce a generic framework relying on a novel interpretation of
BPC allowing us to solve the problem via maximization techniques.
Our framework may find use in tackling BPC on other graph classes arising in
applications.
|
Ilan Doron-Arad, Hadas Shachnai
|
2023-02-21T11:46:05Z
|
http://arxiv.org/abs/2302.10613v1
|
# Approximating Bin Packing with Conflict Graphs via Maximization Techniques
###### Abstract
We give a comprehensive study of _bin packing with conflicts_ (BPC). The input is a set \(I\) of items, sizes \(s:I\rightarrow[0,1]\), and a conflict graph \(G=(I,E)\). The goal is to find a partition of \(I\) into a minimum number of independent sets, each of total size at most \(1\). Being a generalization of the notoriously hard graph coloring problem, BPC has been studied mostly on polynomially colorable conflict graphs. An intriguing open question is whether BPC on such graphs admits the same best known approximation guarantees as classic bin packing.
We answer this question negatively, by showing that (in contrast to bin packing) there is no APTAS for BPC already on seemingly easy graph classes, such as _bipartite_ and _split_ graphs. We complement this result with improved approximation guarantees for BPC on several prominent graph classes. Most notably, we derive an asymptotic \(1.391\)-approximation for bipartite graphs, a \(2.445\)-approximation for perfect graphs, and a \(\left(1+\frac{2}{e}\right)\)-approximation for split graphs. To this end, we introduce a generic framework relying on a novel interpretation of BPC allowing us to solve the problem via _maximization_ techniques. Our framework may find use in tackling BPC on other graph classes arising in applications.
## 1 Introduction
We study the _bin packing with conflicts (BPC)_ problem. We are given a set \(I\) of \(n\) items, sizes \(s:I\rightarrow[0,1]\), and a conflict graph \(G=(I,E)\) on the items. A _packing_ is a partition \((A_{1},\ldots,A_{t})\) of \(I\) into independent sets called _bins_, such that for all \(b\in\{1,\ldots,t\}\) it holds that \(s\left(A_{b}\right)=\sum_{\ell\in A_{b}}s(\ell)\leq 1\). The goal is to find a packing in a minimum number of bins. Let \(\mathcal{I}=(I,s,E)\) denote a BPC instance. We note that BPC is a generalization of _bin packing (BP)_ (where \(E=\emptyset\)) as well as the graph coloring problem (where \(s(\ell)=0\)\(\forall\ell\in I\)).1 BPC captures many real-world scenarios such as resource clustering in parallel computing [2], examination scheduling [21], database storage [16], and product delivery [4]. As the special case of graph coloring cannot be approximated within a ratio better than \(n^{1-\varepsilon}\)[31], most of the research work on BPC has focused on families of conflict graphs which can be optimally colored in polynomial time [24, 17, 16, 23, 8, 6, 7, 15].
Footnote 1: See the formal definitions of _graph coloring_ and _independent sets_ in Section 2.
Let \(\mathrm{OPT}=\mathrm{OPT}(\mathcal{I})\) be the value of an optimal solution for an instance \(\mathcal{I}\) of a minimization problem \(\mathcal{P}\). As in the bin packing problem, we distinguish
between _absolute_ and _asymptotic_ approximation. For \(\alpha\geq 1\), we say that \(\mathcal{A}\) is an absolute \(\alpha\)-approximation algorithm for \(\mathcal{P}\) if for any instance \(\mathcal{I}\) of \(\mathcal{P}\) we have \(\mathcal{A}(\mathcal{I})/\mathrm{OPT}(\mathcal{I})\leq\alpha\), where \(\mathcal{A}(\mathcal{I})\) is the value of the solution returned by \(\mathcal{A}\). Algorithm \(\mathcal{A}\) is an _asymptotic_\(\alpha\)-approximation algorithm for \(\mathcal{P}\) if for any instance \(\mathcal{I}\) it holds that \(\mathcal{A}(\mathcal{I})\leq\alpha\mathrm{OPT}(\mathcal{I})+o(\mathrm{OPT}( \mathcal{I}))\). An APTAS is a family of algorithms \(\{\mathcal{A}_{\varepsilon}\}\) such that, for every \(\varepsilon>0\), \(\mathcal{A}_{\varepsilon}\) is a polynomial time asymptotic \((1+\varepsilon)\)-approximation algorithm for \(\mathcal{P}\). An _asymptotic fully polynomial-time approximation scheme (AFPTAS)_ is an APTAS \(\{\mathcal{A}_{\varepsilon}\}\) such that \(\mathcal{A}_{\varepsilon}(\mathcal{I})\) runs in time \(\mathrm{poly}(|\mathcal{I}|,\frac{1}{\varepsilon})\), where \(|\mathcal{I}|\) is the encoding length of the instance \(\mathcal{I}\).
It is well known that, unless P=NP, BP cannot be approximated within ratio better than \(\frac{3}{2}\)[10]. This ratio is achieved by First-Fit Decreasing (FFD) [28].2 Also, BP admits an AFPTAS [19], and an additive approximation algorithm which packs any instance \(\mathcal{I}\) in at most \(\mathrm{OPT}(\mathcal{I})+O(\log(\mathrm{OPT}(\mathcal{I})))\) bins [14]. Despite the wide interest in BPC on polynomially colorable graphs, the intriguing question whether BPC on such graphs admits the same best known approximation guarantees as classic bin packing remained open.
Footnote 2: We give a detailed description of Algorithm FFD in Appendix B.
We answer this question negatively, by showing that (in contrast to bin packing) there is no APTAS for BPC even on seemingly easy graph classes, such as _bipartite_ and _split_ graphs. We complement this result with improved approximation guarantees for BPC on several prominent graph classes. For BPC on bipartite graphs, we obtain an asymptotic \(1.391\)-approximation. We further derive improved bounds of \(2.445\) for perfect graphs, \(\left(1+\frac{2}{e}\right)\) for split graphs, and
| Graph class | Absolute lower bound | Absolute upper bound | Asymptotic lower bound | Asymptotic upper bound |
| --- | --- | --- | --- | --- |
| General graphs | \(n^{1-\varepsilon}\) [31] | \(O\left(\frac{n(\log\log n)^{2}}{(\log n)^{3}}\right)\) [13] | \(n^{1-\varepsilon}\) [31] | \(O\left(\frac{n(\log\log n)^{2}}{(\log n)^{3}}\right)\) [13] |
| Perfect graphs | \(\cdot\) | **2.445** (\(2.5\) [8]) | **\(c>1\)** | **2.445** (\(2.5\) [8]) |
| Chordal graphs | \(\cdot\) | \(\frac{7}{3}\) [8] | **\(c>1\)** | \(\frac{7}{3}\) [8] |
| Cluster graphs | \(\cdot\) | \(2\) [1] | | \(1\) [7] |
| Cluster complement | \(\cdot\) | **\(3/2\)** | **\(3/2\)** | **\(3/2\)** |
| Split graphs | \(\cdot\) | **\(1+2/e\)** (\(2\) [15]) | **\(c>1\)** | **\(1+2/e\)** (\(2\) [15]) |
| Bipartite graphs | \(\cdot\) | \(\frac{5}{3}\) [15] | **\(c>1\)** | **1.391** (\(\frac{5}{3}\) [15]) |
| Partial \(k\)-trees | \(\cdot\) | \(2+\varepsilon\) [17] | | \(1\) [16] |
| Trees | \(\cdot\) | \(\frac{5}{3}\) [15] | | \(\cdot\) |
| No conflicts | \(\frac{3}{2}\) [10] | \(\frac{3}{2}\) [29] | | \(1\) [26] |

Table 1: Known results for Bin Packing with Conflict Graphs
\(\frac{5}{3}\) for bipartite graphs.3 Finally, we obtain a tight \(\frac{3}{2}\)-asymptotic lower bound and an absolute \(\frac{3}{2}\)-upper bound for graphs that are the complements of cluster graphs (we call these graphs below _complete multi-partite_).
Footnote 3: Recently, Huang et al. [15] obtained a \(\frac{5}{3}\)-approximation for bipartite graphs, simultaneously and independently of our work. We note that the techniques of [15] are different than ours, and their algorithm is more efficient in terms of running time.
Table 1 summarizes the known results for BPC on various classes of graphs. New bounds given in this paper are shown in boldface. Entries that are marked with \(\boldsymbol{\cdot}\) follow by inference, either by using containment of graph classes (trees are partial \(k\)-trees), or since the hardness of BPC on all considered graph classes follows from the hardness of classic BP. Empty entries for lower bounds follow from tight upper bounds. We give a detailed overview of previous results in Appendix A.
**Techniques:** There are several known approaches for tackling BPC instances. One celebrated technique introduced by Jansen and Ohring [17] relies on finding initially a minimum coloring of the given conflict graph, and then packing each color class using a bin packing heuristic, such as First-Fit Decreasing. A notable generalization of this approach is the sophisticated integration of _precoloring extension_[17, 8], which completes an initial partial coloring of the conflict graph, with no increase to the number of color classes. Another elegant technique is a matching-based algorithm, applied by Epstein and Levin [8] and by Huang et al. [15].
The best known algorithms (prior to this work), e.g., for perfect graphs [8] and split graphs [15] are based on the above techniques. While the analyses of these algorithms are tight, the approximation guarantees do not match the existing lower bounds for BPC on these graph classes; thus, obtaining improved approximations requires new techniques.
In this paper we present a novel point of view of BPC involving the solution of a maximization problem as a subroutine. We first find an _initial packing_ of a subset \(S\subseteq I\) of items, which serves as a baseline packing with _high potential_ for adding items (from \(I\setminus S\)) without increasing the number of bins used. The remaining items are then assigned to extra bins using a simple heuristic. Thus, given a BPC instance, our framework consists of the following main steps.
1. Find an initial packing \(\mathcal{A}=(A_{1},\ldots,A_{m})\) of high potential for \(S\subseteq I\).
2. Maximize the total size of items in \(\mathcal{A}\) by adding items in \(I\setminus S\).
3. Assign the remaining (unpacked) items to extra bins using a greedy approach respecting the conflict graph constraints.
The above generic framework reduces BPC to cleverly finding an initial packing of high potential, and then efficiently approximating the corresponding maximization problem, while exploiting structural properties of the given conflict graph. One may view classic approaches for solving BP (e.g., [20]) as an application of this technique: find an initial packing of high potential containing the _large_ items; then add the _small_ items using First-Fit. In this setting, the tricky part is to find an initial high-potential packing, while adding the small items is trivial. However,
in the presence of a conflict graph, solving the induced maximization problem is much more challenging.
Interestingly, we are able to obtain initial packings of high potential for BPC on several conflict graph classes. To solve the maximization problem, we first derive an efficient approximation for maximizing the total size of items within a _single_ bin. Our algorithm is based on finding a maximum weight independent set of _bounded_ total size in the graph, combined with enumeration over items of large sizes. Using the single bin algorithm, the maximization problem is solved via an application of the _separable assignment problem (SAP)_ [9] framework, adapted to our setting. Combined with a hybrid of several techniques (to efficiently handle different types of instances), this leads to improved bounds for BPC on perfect, split, and bipartite graphs (see Sections 3, 4, and Appendix F). Our framework may find use in tackling BPC on other graph classes arising in applications.
**Organization:** In section 2 we give some definitions and preliminary results. Section 3 presents an approximation algorithm for BPC on perfect graphs and an asymptotic approximation on bipartite graphs. In Section 4 we give an algorithm for split graphs. We present our hardness results in Section 5 and conclude in Section 6. Due to space constraints, some of our results are deferred to the Appendix. For convenience, the last page of the paper includes a table of contents.
## 2 Preliminaries
For any \(k\in\mathbb{R}\), let \([k]=\{1,2,\ldots,\lfloor k\rfloor\}\). Also, for a function \(f:A\rightarrow\mathbb{R}_{\geq 0}\) and a subset of elements \(C\subseteq A\), we define \(f(C)=\sum_{e\in C}f(e)\).
**Coloring and Independent Sets:** Given a graph \(G=(V,E)\), an _independent set_ in \(G\) is a subset of vertices \(S\subseteq V\) such that for all \(u,v\in S\) it holds that \((u,v)\notin E\). Let \(\mathsf{IS}(G)\) be the collection of all independent sets in \(G\). Given weight function \(w:V\rightarrow\mathbb{R}_{\geq 0}\), a _maximum independent set w.r.t._\(w\) is an independent set \(S\in\mathsf{IS}(G)\) such that \(w(S)\) is maximal. A _coloring_ of \(G\) is a partition \((V_{1},\ldots,V_{t})\) of \(V\) such that \(\forall i\in[t]:V_{i}\in\mathsf{IS}(G)\); we call each subset of vertices \(V_{i}\)_color class_\(i\). Let \(\chi(G)\) be the minimum number of colors required for a coloring of \(G\). A graph \(G\) is _perfect_ if for every induced subgraph \(G^{\prime}\) of \(G\) the cardinality of the maximal clique of \(G^{\prime}\) is equal to \(\chi(G^{\prime})\); note that \(G^{\prime}\) is also a perfect graph. The following well known result is due to [12].
Lemma 2.1: _Given a perfect graph \(G=(V,E)\), a minimum coloring of \(G\) and a maximum weight independent set of \(G\) can be computed in polynomial time._
**Bin Packing with Conflicts:** Given a BPC instance \(\mathcal{I}\), let \(G_{\mathcal{I}}=(I,E)\) denote the conflict graph of \(\mathcal{I}\). A _packing_ of a subset of items \(S\subseteq I\) is a partition \(\mathcal{B}=(B_{1},\ldots,B_{t})\) of \(S\) such that, for all \(i\in[t]\), \(B_{i}\) is an independent set in \(G_{\mathcal{I}}\), and \(s(B_{i})\leq 1\). Let \(\#\mathcal{B}\) be the number of bins (i.e., entries) in \(\mathcal{B}\).
In this paper we consider BPC on several well studied classes of perfect graphs and the acronym BPC refers from now on to perfect conflict graphs. For _bin packing with bipartite conflicts (BPB)_, where the conflict graph is bipartite, we assume a bipartition of \(V\) is known and given by \(X_{V}\) and \(Y_{V}\). Recall that
\(G=(V,E)\) is a split graph if there is a partition \(K,S\) of \(V\) into a clique and an independent set, respectively. We call this variant of BPC _bin packing with split graph conflicts (BPS)_.
The following notation will be useful while enhancing a partial packing by new items. For two packings \(\mathcal{B}=(B_{1},\ldots,B_{t})\) and \(\mathcal{C}=(C_{1},\ldots,C_{r})\), let \(\mathcal{B}\oplus\mathcal{C}=(B_{1},\ldots,B_{t},C_{1},\ldots,C_{r})\) be the _concatenation_ of \(\mathcal{B}\) and \(\mathcal{C}\); also, for \(t=r\) let \(\mathcal{B}+\mathcal{C}=(B_{1}\cup C_{1},\ldots,B_{t}\cup C_{t})\) be the _union_ of the two packings; note that the latter is not necessarily a packing. We denote by \(\mathsf{items}(\mathcal{B})=\bigcup_{i\in[t]}B_{i}\) the set of items in the packing \(\mathcal{B}\). Finally, let \(\mathcal{I}=(I,s,E)\) be a BPC instance and \(T\subseteq I\) a subset of items. Define the BPC instances \(\mathcal{I}\cap T=(T,s,E_{T})\) and \(\mathcal{I}\setminus T=(I\setminus T,s,E_{I\setminus T})\), where for all \(X\in\{T,I\setminus T\}\), \(E_{X}=\{(u,v)\in E\mid u,v\in X\}\).

**Bin Packing Algorithms:** We use \(\mathcal{I}=(I,s)\) to denote a BP instance, where \(I\) is a set of \(n\) items for some \(n\geq 1\), and \(s:I\rightarrow[0,1]\) is the size function. Let \(L_{\mathcal{I}}=\{\ell\in I\mid s(\ell)>\frac{1}{2}\}\) be the set of _large_ items, \(M_{\mathcal{I}}=\{\ell\in I\mid\frac{1}{3}<s(\ell)\leq\frac{1}{2}\}\) the set of _medium_ items, and \(S_{\mathcal{I}}=\{\ell\in I\mid s(\ell)\leq\frac{1}{3}\}\) the set of _small_ items. Our algorithms also use algorithms for BP as building blocks. The results in the next two lemmas are tailored for our purposes. We give the detailed proofs in Appendix B.4
Footnote 4: For more details on algorithms FFD and AsymptoticBP see, e.g., [30].
**Lemma 2.2**.: _Given a BP instance \(\mathcal{I}=(I,s)\), there is a polynomial-time algorithm First-Fit Decreasing (FFD) which returns a packing \(\mathcal{B}=(B_{1},\ldots,B_{t})\) of \(\mathcal{I}\) where \(\#\mathcal{B}\leq(1+2\cdot\max_{\ell\in I}s(\ell))\cdot s(I)+1\). Moreover, it also holds that \(\#\mathcal{B}\leq|L_{\mathcal{I}}|+\frac{3}{2}\cdot s(M_{\mathcal{I}})+\frac{ 4}{3}\cdot s(S_{\mathcal{I}})+1\)._
**Lemma 2.3**.: _Given a BP instance \(\mathcal{I}=(I,s)\), there is a polynomial-time algorithm AsymptoticBP which returns a packing \(\mathcal{B}=(B_{1},\ldots,B_{t})\) of \(\mathcal{I}\) such that \(t=\mathrm{OPT}(\mathcal{I})+o(\mathrm{OPT}(\mathcal{I}))\). Moreover, if \(\mathrm{OPT}(\mathcal{I})\geq 100\) then \(t\leq 1.02\cdot\mathrm{OPT}(\mathcal{I})\)._
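To make the role of FFD in Lemma 2.2 concrete, here is a minimal Python sketch of classic First-Fit Decreasing for plain bin packing (no conflict graph); the function name and the representation of bins as lists of sizes are ours, not the paper's.

```python
def first_fit_decreasing(sizes):
    """Pack items with sizes in [0, 1] into unit-capacity bins using First-Fit Decreasing."""
    bins, loads = [], []
    for s in sorted(sizes, reverse=True):      # consider items from largest to smallest
        for b, load in enumerate(loads):
            if load + s <= 1.0:                # place the item in the first open bin where it fits
                bins[b].append(s)
                loads[b] += s
                break
        else:                                  # no open bin fits: open a new bin
            bins.append([s])
            loads.append(s)
    return bins
```

In the presence of a conflict graph, the same scan would additionally have to skip any bin already containing an item adjacent to the current one.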
## 3 Approximations for Perfect and Bipartite Graphs
In this section we consider the bin packing problem with a perfect or bipartite conflict graph. Previous works (e.g., [17], [8]) showed the usefulness of the approach based on finding first a minimal coloring of the given conflict graph, and then packing each color class as a separate bin packing instance (using, e.g., algorithm FFD). Indeed, this approach yields efficient approximations for BPC; however, it does reach a certain limit. To enhance the performance of this _coloring based_ approach, we design several subroutines. Combined, they cover the problematic cases and lead to improved approximation guarantees (see Table 1).
Our first subroutine is the coloring based approach, with a simple modification to improve the asymptotic performance. For each color class \(C_{i},i=1,\ldots,k\) in a minimal coloring of the given conflict graph, we find a packing of \(C_{i}\) using FFD, and another packing using AsymptoticBP (see Lemma 2.3). We choose the packing which has smaller number of bins. Finally, the returned packing is the concatenation of the packings of all color classes. The pseudocode of Algorithm Color_Sets is given in Algorithm 1.
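The pseudocode of Algorithm 1 is not reproduced in this excerpt; the following Python sketch only mirrors the verbal description above, with the minimum-coloring, FFD, and AsymptoticBP subroutines assumed as black boxes and all names ours.

```python
def color_sets(color_classes, ffd, asymptotic_bp):
    """Pack each color class separately and keep, per class, whichever packing uses fewer bins."""
    packing = []
    for c in color_classes:          # each color class is an independent set in the conflict graph
        a = ffd(c)                   # First-Fit Decreasing packing of the class (Lemma 2.2)
        b = asymptotic_bp(c)         # AsymptoticBP packing of the class (Lemma 2.3)
        packing.extend(a if len(a) <= len(b) else b)
    return packing
```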
For the remainder of this section, fix a BPC instance \(\mathcal{I}=(I,s,E)\). The performance guarantees of Algorithm Color_Sets are stated in the next lemma.
Lemma 3.1: _Given a BPC instance \(\mathcal{I}=(I,s,E)\), Algorithm Color_Sets returns in polynomial time in \(|\mathcal{I}|\) a packing \(\mathcal{B}\) of \(\mathcal{I}\) such that \(\#\mathcal{B}\leq\chi(G_{\mathcal{I}})+|L_{\mathcal{I}}|+\frac{3}{2}\cdot s(M_ {\mathcal{I}})+\frac{4}{3}\cdot s(S_{\mathcal{I}})\). Moreover, if \(\mathcal{I}\) is a BPB instance then \(\#\mathcal{B}\leq\frac{3}{2}\cdot|L_{\mathcal{I}}|+\frac{4}{3}\cdot(\mathrm{ OPT}(\mathcal{I})-|L_{\mathcal{I}}|)+o(\mathrm{OPT}(\mathcal{I}))\)._
Note that the bounds may not be tight for instances with many large items. Specifically, if \(|L_{\mathcal{I}}|\approx\mathrm{OPT}(\mathcal{I})\) then a variant of Algorithm Color_Sets was shown to yield a packing of at least \(2.5\cdot\mathrm{OPT}(\mathcal{I})\) bins [8]. To overcome this, we use an approach based on the simple yet crucial observation that there can be at most one large item in a bin. Therefore, we view the large items as _bins_ and assign items to these bins to maximize the total size packed in bins including large items. We formalize the problem initially on a single bin.
Definition 3.2: In the _bounded independent set problem (BIS)_ we are given a graph \(G=(V,E)\), a weight function \(w:V\rightarrow\mathbb{R}_{\geq 0}\), and a budget \(\beta\in\mathbb{R}_{\geq 0}\). The goal is to find an independent set \(S\subseteq V\) in \(G\) such that \(w(S)\) is maximized and \(w(S)\leq\beta\). Let \(\mathcal{I}=(V,E,w,\beta)\) be a BIS instance.
Towards solving BIS, we need the following definitions. For \(\alpha\in(0,1]\), \(\mathcal{A}\) is an \(\alpha\)-approximation algorithm for a maximization problem \(\mathcal{P}\) if, for any instance \(\mathcal{I}\) of \(\mathcal{P}\), \(\mathcal{A}\) outputs a solution of value at least \(\alpha\cdot OPT(\mathcal{I})\). A _polynomial-time approximation scheme (PTAS)_ for \(\mathcal{P}\) is a family of algorithms \(\{A_{\varepsilon}\}\) such that, for any \(\varepsilon>0\), \(A_{\varepsilon}\) is a polynomial-time \((1-\varepsilon)\)-approximation algorithm for \(\mathcal{P}\). A _fully PTAS (FPTAS)_ is a PTAS \(\{A_{\varepsilon}\}\) where, for all \(\varepsilon>0\), \(A_{\varepsilon}\) is polynomial also in \(\frac{1}{\varepsilon}\). We now describe a PTAS for BIS. Fix a BIS instance \(\mathcal{I}=(V,E,w,\beta)\) and \(\varepsilon>0\). As there can be at most \(\varepsilon^{-1}\) items with weight at least \(\varepsilon\cdot\beta\) in some optimal solution OPT for \(\mathcal{I}\), we can _guess_ this set \(F\) of items via enumeration. Then, to add smaller items to \(F\), we define a residual graph \(G_{F}\) of items with weights at most \(\varepsilon\cdot\beta\) which are not adjacent to any item in \(F\). Formally, define \(G_{F}=(V_{F},E_{F})\), where
\[V_{F}=\{v\in V\backslash F\mid w(v)\leq\varepsilon\cdot\beta,\forall u\in F:(v, u)\notin E\},\,E_{F}=\{(u,v)\in E\mid u,v\in V_{F}\}\]
Now, we find a maximum weight independent set \(S\) in \(G_{F}\). Note that this can be done in polynomial time for perfect and bipartite graphs. If \(w(F\cup S)\leq\beta\) then
we have an optimal solution; otherwise, we discard iteratively items from \(S\) until the remaining items form a feasible solution for \(\mathcal{I}\). Since we discard only items with relatively small weights, we lose only an \(\varepsilon\)-fraction of the weight relative to the optimum. The pseudocode for the scheme is given in Algorithm 2.
```
1:Initialize \(A\leftarrow\emptyset\).
2:for all independent sets \(F\subseteq V\) in \((V,E)\) s.t. \(|F|\leq\varepsilon^{-1},w(F)\leq\beta\)do
3: Define the residual graph \(G_{F}=(V_{F},E_{F})\).
4: Find a maximum independent set \(S\) of \(G_{F}\) w.r.t. \(w\).
5:while\(w(F\cup S)>\beta\)do
6: Choose arbitrary \(z\in S\).
7: Update \(S\gets S\setminus\{z\}\).
8:endwhile
9:if\(w(A)<w(F\cup S)\)then
10: Update \(A\gets F\cup S\).
11:endif
12:endfor
13:Return \(A\).
```
**Algorithm 2**\(\mathsf{PTAS}((V,E,w,\beta),\varepsilon)\)
**Lemma 3.3**.: _Algorithm 2 is a \(\mathsf{PTAS}\) for \(\mathsf{BIS}\)._
We now define our maximization problem for multiple bins. We solve a slightly generalized problem in which we have an initial partial packing in \(t\) bins. Our goal is to add to these bins (from unpacked items) a subset of items of maximum total size. Formally,
**Definition 3.4**.: _Given a \(\mathsf{BPC}\) instance \(\mathcal{I}=(I,s,E)\), \(S\subseteq I\), and a packing \(\mathcal{B}=(B_{1},\ldots,B_{t})\) of \(S\), define the maximization problem of \(\mathcal{I}\) and \(\mathcal{B}\) as the problem of finding a packing \(\mathcal{B}+\mathcal{C}\) of \(S\cup T\), where \(T\subseteq I\setminus S\) and \(\mathcal{C}=(C_{1},\ldots,C_{t})\) is a packing of \(T\), such that \(s(T)\) is maximized._
Our solution for \(\mathsf{BIS}\) is used to obtain a \((1-\frac{1}{e}-\varepsilon)\)-approximation for the maximization problem described in Definition 3.4. This is done using the approach of [9] for the more general _separable assignment problem (SAP)_.
**Lemma 3.5**.: _Given a \(\mathsf{BPC}\) instance \(\mathcal{I}=(I,s,E)\), \(S\subseteq I\), a packing \(\mathcal{B}=(B_{1},\ldots,B_{t})\) of \(S\), and a constant \(\varepsilon>0\), there is an algorithm \(\mathsf{MaxSize}\) which returns in time polynomial in \(|\mathcal{I}|\) a \((1-\frac{1}{e}-\varepsilon)\)-approximation for the maximization problem of \(\mathcal{I}\) and \(\mathcal{B}\). Moreover, given an \(\mathsf{FPTAS}\) for \(\mathsf{BIS}\) on the graph \((I,E)\), the weight function \(s\), and the budget \(\beta=1\), \(\mathsf{MaxSize}\) is a \((1-\frac{1}{e})\)-approximation algorithm for the maximization problem of \(\mathcal{I}\) and \(\mathcal{B}\)._
We use the above to obtain a feasible solution for the instance. This is done via a reduction to the maximization problem of the instance with a singleton packing of the large items and packing the remaining items in extra bins. Specifically, in
the subroutine \(\mathsf{MaxSolve}\), we initially put each item in \(L_{\mathcal{I}}\) in a separate bin. Then, additional items from \(S_{\mathcal{I}}\) and \(M_{\mathcal{I}}\) are added to the bins using Algorithm \(\mathsf{MaxSize}\). The remaining items are packed using Algorithm \(\mathsf{Color\_Sets}\). The pseudocode of the subroutine \(\mathsf{MaxSolve}\) is given in Algorithm 3.
```
1:Define \(T\leftarrow(\{\ell\}\ |\ \ell\in L_{\mathcal{I}})\).
2:\(\mathcal{A}\leftarrow\mathsf{MaxSize}(\mathcal{I},L_{\mathcal{I}},T,\varepsilon)\).
3:\(\mathcal{B}\leftarrow\mathsf{Color\_Sets}(\mathcal{I}\setminus\mathsf{items}( \mathcal{A}))\).
4:Return \(\mathcal{A}\oplus\mathcal{B}\).
```
**Algorithm 3**\(\mathsf{MaxSolve}(\mathcal{I}=(I,s,E),\varepsilon)\)
The proof of Lemma 3.6 uses Lemmas 3.1, 3.3, and 3.5.
**Lemma 3.6**.: _Given a BPC instance \(\mathcal{I}=(I,s,E)\) and an \(\varepsilon>0\), Algorithm \(\mathsf{MaxSolve}\) returns in polynomial time in \(|\mathcal{I}|\) a packing \(\mathcal{C}\) of \(\mathcal{I}\) such that there are \(0\leq x\leq s(M_{\mathcal{I}})\) and \(0\leq y\leq s(S_{\mathcal{I}})\) such that the following holds._
1. \(x+y\leq\mathrm{OPT}(\mathcal{I})-|L_{\mathcal{I}}|+\left(\frac{1}{e}+\varepsilon \right)\cdot\frac{|L_{\mathcal{I}}|}{2}\)_._
2. \(\#\mathcal{C}\leq\chi(G_{\mathcal{I}})+|L_{\mathcal{I}}|+\frac{3}{2}\cdot x+ \frac{4}{3}\cdot y\)_._
Lemma 3.6 improves significantly the performance of Algorithm \(\mathsf{Color\_Sets}\) for instances with many large items. However, Algorithm \(\mathsf{MaxSize}\) may prefer small over medium items; the latter items will be packed by Algorithm \(\mathsf{Color\_Sets}\) (see Algorithm 3). The packing of these medium items may harm the approximation guarantee. Thus, to tackle instances with many medium items, we use a reduction to a maximum matching problem for packing the large and medium items in at most \(\mathrm{OPT}(\mathcal{I})\) bins.5 Then, the remaining items can be packed using Algorithm \(\mathsf{Color\_Sets}\). The graph used for the following subroutine \(\mathsf{Matching}\) contains all large and medium items; there is an edge between any two items which can be assigned to the same bin in a packing of the instance \(\mathcal{I}\). Formally,
Footnote 5: We note that a maximum matching based technique for BPC is used also in [8, 15].
**Definition 3.7**.: _Given a BPC instance \(\mathcal{I}=(I,s,E)\), the auxiliary graph of \(\mathcal{I}\) is \(H_{\mathcal{I}}=(L_{\mathcal{I}}\cup M_{\mathcal{I}},E_{H})\), where \(E_{H}=\{(u,v)\ |\ u,v\in L_{\mathcal{I}}\cup M_{\mathcal{I}},s(\{u,v\})\leq 1,(u,v) \notin E\}\)._
Algorithm \(\mathsf{Matching}\) finds a maximum matching in \(H_{\mathcal{I}}\) and outputs a packing of the large and medium items where pairs of items taken to the matching are packed together, and the remaining items are packed in extra bins using Algorithm \(\mathsf{Color\_Sets}\). The pseudocode of the subroutine \(\mathsf{Matching}\) is given in Algorithm 4.
The proof of Lemma 3.8 follows by noting that the cardinality of a maximum matching in \(H_{\mathcal{I}}\) in addition to the number of unmatched vertices in \(L_{\mathcal{I}}\cup M_{\mathcal{I}}\) is at most \(\mathrm{OPT}(\mathcal{I})\).
```
1:Find a maximum matching \(\mathcal{M}\) in \(H_{\mathcal{I}}\).
2:\(\mathcal{B}\leftarrow(\{u,v\}\ |\ (u,v)\in\mathcal{M})\oplus(\{v\}\ |\ v\in M_{ \mathcal{I}}\cup L_{\mathcal{I}},\forall u\in M_{\mathcal{I}}\cup L_{\mathcal{I} }:(u,v)\notin\mathcal{M})\).
3:Return \(\mathcal{B}\oplus\mathsf{Color\_Sets}(\mathcal{I}\setminus(M_{\mathcal{I}} \cup L_{\mathcal{I}}))\).
```
**Algorithm 4**\(\mathsf{Matching}(\mathcal{I}=(I,s,E))\)
**Lemma 3.8**.: _Given a BPC instance \(\mathcal{I}=(I,s,E)\), Algorithm \(\mathsf{Matching}\) returns in polynomial time in \(|\mathcal{I}|\) a packing \(\mathcal{A}\) of \(\mathcal{I}\) such that \(\#\mathcal{A}\leq\mathrm{OPT}(\mathcal{I})+\chi(G_{\mathcal{I}})+\frac{4}{3} \cdot s(S_{\mathcal{I}})\)._
We now have the required components for the approximation algorithm for BPC and the asymptotic approximation for BPB. Our algorithm, \(\mathsf{ApproxBPC}\), applies all of the above subroutines and returns the packing which uses the smallest number of bins. We use \(\varepsilon=0.0001\) for the error parameter in \(\mathsf{MaxSolve}\). The pseudocode of \(\mathsf{ApproxBPC}\) is given in Algorithm 5.
```
1:Let \(\varepsilon=0.0001\).
2:Compute \(\mathcal{A}_{1}\leftarrow\mathsf{Color\_Sets}(\mathcal{I})\), \(\mathcal{A}_{2}\leftarrow\mathsf{MaxSolve}(\mathcal{I},\varepsilon)\), \(\mathcal{A}_{3}\leftarrow\mathsf{Matching}(\mathcal{I})\).
3:Return \(\arg\min_{\mathcal{A}\in\{\mathcal{A}_{1},\mathcal{A}_{2},\mathcal{A}_{3}\}}\# \mathcal{A}\).
```
**Algorithm 5**\(\mathsf{ApproxBPC}(\mathcal{I})\)
We give below the main result of this section. The proof follows by the argument that the subroutines \(\mathsf{Color\_Sets}\), \(\mathsf{MaxSolve}\), and \(\mathsf{Matching}\) handle together most of the difficult cases. Specifically, if the instance contains many large items, then \(\mathsf{MaxSolve}\) produces the best approximation. If there are many large and medium items, then \(\mathsf{Matching}\) improves the approximation guarantee. Finally, for any other case, our analysis of the \(\mathsf{Color\_Sets}\) algorithm gives us the desired ratio. We summarize with the next result.
**Theorem 3.9**.: _Algorithm 5 is a \(2.445\)-approximation for BPC and an asymptotic \(1.391\)-approximation for BPB._
## 4 Split Graphs
In this section we enhance the use of maximization techniques for BPC to obtain an absolute approximation algorithm for BPS. In particular, we improve upon the recent result of Huang et al. [15]. We use as a subroutine the maximization technique as outlined in Lemma 3.5. Specifically, we start by obtaining an FPTAS for the BIS problem on split graphs. For the following, fix a BPS instance \(\mathcal{I}=(I,s,E)\). It is well known (see, e.g., [11]) that a partition of the vertices of a split graph into a clique and an independent set can be found in polynomial time. Thus, for simplicity we assume that such a partition of the split graph \(G\) is known and given by \(K_{G},S_{G}\). We note that an FPTAS for the BIS problem on split graphs follows from a result of Pferschy and Schauer [25] for _knapsack with conflicts_, since split graphs are a subclass of chordal graphs. We give a simpler FPTAS for our problem in Appendix D.
**Lemma 4.1**.: _There is an algorithm_ FPTAS-BIS _that is an_ FPTAS _for the_ BIS _problem on split graphs._
Our next goal is to find a suitable initial packing \(\mathcal{B}\) to which we apply MaxSize. Clearly, the vertices \(K_{G_{\mathcal{I}}}\) must be assigned to different bins. Therefore, our initial packing contains the vertices of \(K_{G_{\mathcal{I}}}\) distributed to \(|K_{G_{\mathcal{I}}}|\) bins as \(\{\{v\}\mid v\in K_{G_{\mathcal{I}}}\}\). In addition, let \(\alpha\in\{0,1,\ldots,\lceil 2\cdot s(I)\rceil+1\}\) be a _guess_ of \(\operatorname{OPT}(\mathcal{I})-|K_{G_{\mathcal{I}}}|\); then, \((\emptyset)_{i\in[\alpha]}\) is a packing of \(\alpha\) bins that do not contain items. Together, the two above packings form the initial packing \(\mathcal{B}_{\alpha}\). Our algorithm uses MaxSize to add items to the existing bins of \(\mathcal{B}_{\alpha}\) and packs the remaining items using FFD. Note that we do not need an error parameter \(\varepsilon\), since we use MaxSize with an FPTAS (see Lemma 3.5). For simplicity, we assume that \(\operatorname{OPT}(\mathcal{I})\geq 2\) (else we can trivially pack the instance in a single bin). We give the pseudocode of our algorithm for BPS in Algorithm 6.
```
1:for\(\alpha\in\{0,1,\ldots,\lceil 2\cdot s(I)\rceil+1\}\)do
2: Define \(\mathcal{B}_{\alpha}=\{\{v\}\mid v\in K_{G_{\mathcal{I}}}\}\oplus(\emptyset)_ {i\in[\alpha]}\)
3:\(\mathcal{A}_{\alpha}\leftarrow\textsf{MaxSize}(\mathcal{I},K_{G_{\mathcal{I}} },\mathcal{B}_{\alpha})\).
4:\(\mathcal{A}_{\alpha}^{*}\leftarrow\mathcal{A}_{\alpha}\oplus\textsf{FFD}( \mathcal{I}\setminus\textsf{items}(\mathcal{A}_{\alpha}))\).
5:endfor
6: Return \(\operatorname*{arg\,min}_{\alpha\in\{0,1,\ldots,\lceil 2\cdot s(I)\rceil+1\}}\# \mathcal{A}_{\alpha}^{*}\).
```
**Algorithm 6** Split-Approx\((\mathcal{I}=(I,s,E))\)
By Lemmas 4.1 and 3.5 we have a \(\left(1-\frac{1}{e}\right)\)-approximation for the maximization problem of the BPS instance \(\mathcal{I}\) and an initial partial packing \(\mathcal{B}\). Hence, for a correct guess \(\alpha=\operatorname{OPT}(\mathcal{I})-|K_{G_{\mathcal{I}}}|\), the remaining items to be packed by FFD are of total size at most \(\frac{s(I)}{e}\) and can be packed in \(\frac{2\cdot\operatorname{OPT}(\mathcal{I})}{e}\) bins. Thus, we have
**Theorem 4.2**.: _Algorithm 6 is a \(\left(1+\frac{2}{e}\right)\)-approximation for BPS._
## 5 Asymptotic Hardness for Bipartite and Split Graphs
In this section we show that there is no APTAS for BPB and BPS, unless \(P=NP\). We use a reduction from the _Bounded 3-dimensional matching (B3DM)_ problem, that is known to be MAX SNP-complete [18].
For the remainder of this section, let \(c>2\) be some constant. A B3DM instance is a four-tuple \(\mathcal{J}=(X,Y,Z,T)\), where \(X,Y,Z\) are three disjoint finite sets and \(T\subseteq X\times Y\times Z\); also, for each \(u\in X\cup Y\cup Z\) there are at most \(c\) triples in \(T\) to which \(u\) belongs. A _solution_ for \(\mathcal{J}\) is \(M\subseteq T\) such that for all \(u\in X\cup Y\cup Z\) it holds that \(u\) appears in at most one triple of \(M\). The objective is to find a solution \(M\) of maximal cardinality. Let \(\operatorname{OPT}(\mathcal{J})\) be the value of an optimal solution for \(\mathcal{J}\). We use in our reduction a _restricted_ instance of B3DM defined as follows.
**Definition 5.1**.: _For \(k\in\mathbb{N}\), a_ B3DM _instance \(\mathcal{J}\) is \(k\)-restricted if \(\operatorname{OPT}(\mathcal{J})\geq k\)._
In the next lemma we show the hardness of \(k\)-restricted B3DM. Intuitively, since B3DM instances \(\mathcal{J}\) with \(\mathrm{OPT}(\mathcal{J})\leq k\) are polynomially solvable for a fixed \(k\) (e.g., by exhaustive enumeration), it follows that restricted-B3DM must be hard to approximate, by the hardness result of Kann [18].
**Lemma 5.2**.: _There is a constant \(\alpha>1\) such that for any \(k\in\mathbb{N}\) there is no \(\alpha\)-approximation for the \(k\)-restricted B3DM problem unless P=NP._
We give below the main idea of our reduction, showing the asymptotic hardness of BPB and BPS. A more formal description and the proof of Lemma 5.2 are given in Appendix E. For a sufficiently large \(n\in\mathbb{N}\), let \(\mathcal{J}=(X,Y,Z,T)\) be an \(n\)-restricted instance of B3DM, and let the components of \(\mathcal{J}\), together with appropriate indexing, be \(U=X\cup Y\cup Z\) and \(T\), where
\[X=\{x_{1},\ldots,x_{\tilde{x}}\},Y=\{y_{1},\ldots,y_{\tilde{y}}\},Z=\{z_{1}, \ldots,z_{\tilde{z}}\},T=\{t_{1},\ldots,t_{\tilde{t}}\}.\]
We outline our reduction for BPB and later show how it can be modified to yield the hardness result for BPS. Given an \(n\)-restricted B3DM instance, we construct a sequence of BPB instances. Each BPB instance contains an item for each element \(u\in U\), and an item for each triple \(t\in T\). There is an edge \((u,t)\) if \(u\in U\) and \(t\in T\), and \(u\) does not appear in \(t\), i.e., we forbid packing an element \(u\) in the same bin with a triple not containing \(u\), for any \(u\in U\). Since we do not know the exact value of \(\mathrm{OPT}(\mathcal{J})\), we define a family of instances with different number of _filler items_; these items are packed in the optimum of our constructed BPB instance together with elements not taken to the solution for \(\mathcal{J}\).
Specifically, for a _guess_\(i\in\{n,n+1,\ldots,|T|\}\) of \(\mathrm{OPT}(\mathcal{J})\), we define a BPB instance \(\mathcal{I}_{i}=(I_{i},s,E)\). The set of items in \(\mathcal{I}_{i}\) is \(I_{i}=U\cup P_{i}\cup T\cup Q_{i}\), where \(P_{i},Q_{i}\) are a set of \(\tilde{t}-i\) (filler) items and a set of \(\tilde{x}+\tilde{y}+\tilde{z}-3\cdot i\) (filler) items, respectively, such that \(P_{i}\cap U=\emptyset\) and \(Q_{i}\cap U=\emptyset\). The bipartite (conflict) graph of \(\mathcal{I}_{i}\) is \(G_{i}=(I_{i},E)\), where \(E=E_{X}\cup E_{Y}\cup E_{Z}\) is defined as follows.
\[E_{X} =\{(x,t)\ |\ x\in X,t=(x^{\prime},y,z)\in T,x\neq x^{\prime}\}\] \[E_{Y} =\{(y,t)\ |\ y\in Y,t=(x,y^{\prime},z)\in T,y\neq y^{\prime}\}\] \[E_{Z} =\{(z,t)\ |\ z\in Z,t=(x,y,z^{\prime})\in T,z\neq z^{\prime}\}\]
Finally, define the sizes of items in \(\mathcal{I}_{i}\) to be
\[\forall u\in U,p\in P_{i},q\in Q_{i},t\in T:\ s(u)=0.15,s(p)=0.45,s(q)=0.85,s(t )=0.55.\]
By the above, the only way to pack three items \(x,y,z\in U\) together with a triple \(t\in T\) is if \((x,y,z)=t\); also, \(s\left(\{x,y,z,t\}\right)=1\). For an illustration of the reduction see Figure 1.
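As a sanity check of the size gadget (ours, not part of the paper), one can enumerate which multisets of the four item sizes fill a bin to exactly capacity \(1\); working in hundredths avoids floating-point issues.

```python
from itertools import product

sizes = {"element (U)": 15, "filler (P_i)": 45, "filler (Q_i)": 85, "triple (T)": 55}  # hundredths
names = list(sizes)
full_bins = []
for counts in product(range(7), repeat=4):          # at most 6 items of size 0.15 fit in one bin
    if sum(c * sizes[n] for c, n in zip(counts, names)) == 100 and any(counts):
        full_bins.append({n: c for n, c in zip(names, counts) if c})
print(full_bins)
# Only three ways to fill a bin exactly: three elements with one triple (a "useful" bin),
# one P_i filler with one triple, or one Q_i filler with one element.
```

This matches the intended role of the filler items: leftover triples pair with \(P_{i}\) items and leftover elements pair with \(Q_{i}\) items in a fully packed optimal solution.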
Given a packing \((A_{1},\ldots,A_{q})\) for the BPB instance \(\mathcal{I}_{i}\), we consider all _useful bins_\(A_{b}\) in the packing, i.e., \(A_{b}=\{x,y,z,t\}\), where \(x\in X,y\in Y,z\in Z\) and \(t=(x,y,z)\). The triple \(t\) from bin \(A_{b}\) is taken to our solution for the original \(n\)-restricted B3DM instance \(\mathcal{J}\). Note that taking all triples as described above forms a feasible solution for \(\mathcal{J}\), since each element is packed only once. Thus, our goal becomes to find a packing for the reduced BPB instance with a maximum
number of useful bins. Indeed, since \(s(A_{b})=1\) for any useful bin \(A_{b}\), finding a packing with many useful bins coincides with an efficient approximation for BPB.
For the optimal guess \(i^{*}=\mathrm{OPT}(\mathcal{J})\), it is not hard to see that the optimum for the BPB instance \(\mathcal{I}_{i^{*}}\) satisfies \(s(I_{i^{*}})=\mathrm{OPT}(\mathcal{I}_{i^{*}})\); that is, all bins in the optimum are _fully_ packed. For a sufficiently large \(n\), and assuming there is an APTAS for BPB, we can find a packing of \(\mathcal{I}_{i^{*}}\) with a large number of bins that are fully packed. A majority of these bins are useful, giving an efficient approximation for the original B3DM instance. A similar reduction to BPS is obtained by adding to the bipartite conflict graph of the BPB instance an edge between any pair of vertices in \(T\); thus, we have a _split_ conflict graph. We summarize the above discussion in the next result (the proof is given in Appendix E).
**Theorem 5.3**.: _There is no APTAS for BPB and BPS, unless P=NP._
## 6 Discussion
In this work we presented the first theoretical evidence that BPC on polynomially colorable graphs is harder than classic bin packing, even in the special cases of bipartite and split graphs. Furthermore, we introduced a new generic framework for tackling BPC instances, based on a reduction to a maximization problem. Using this framework, we improve the state-of-the-art approximations for BPC on several well studied graph classes.
We note that better bounds for the maximization problems solved within our framework will imply improved approximation guarantees for BPC on perfect, bipartite, and split graphs. It would be interesting to apply our techniques to improve the known results for other graph classes, such as chordal graphs or partial \(k\)-trees.
|
2308.05523
|
Efficient detection of multidimensional single-photon time-bin
superpositions
|
The ability to detect quantum superpositions lies at the heart of fundamental
and applied aspects of quantum mechanics. The time-frequency degree of freedom
of light enables encoding and transmitting quantum information in a
multi-dimensional fashion compatible with fiber and integrated platforms.
However, the ability to efficiently detect time-frequency superpositions is not
yet available. Here we show that multidimensional time-bin superpositions can
be detected using a single time-resolved photon detector. Our approach uses
off-the-shelf components and is based on the temporal Talbot effect -- a
time-frequency counterpart of the well-known near-field diffraction effect. We
provide experimental results and discuss the possible applications in quantum
communication, quantum information processing, and time-frequency quantum state
tomography.
|
Adam Widomski, Maciej Ogrodnik, Michał Karpiński
|
2023-08-10T12:05:08Z
|
http://arxiv.org/abs/2308.05523v1
|
# Efficient detection of multidimensional single-photon time-bin superpositions
###### Abstract
The ability to detect quantum superpositions lies at the heart of fundamental and applied aspects of quantum mechanics. The time-frequency degree of freedom of light enables encoding and transmitting quantum information in a multi-dimensional fashion compatible with fiber and integrated platforms. However, the ability to efficiently detect time-frequency superpositions is not yet available. Here we show that multidimensional time-bin superpositions can be detected using a single time-resolved photon detector. Our approach uses off-the-shelf components and is based on the temporal Talbot effect - a time-frequency counterpart of the well-known near-field diffraction effect. We provide experimental results and discuss the possible applications in quantum communication, quantum information processing, and time-frequency quantum state tomography.
[http://dx.doi.org/10.1364/ao.XX.XXXXXX](http://dx.doi.org/10.1364/ao.XX.XXXXXX)
## 1 Introduction
The ability to detect quantum superpositions is key for most of quantum experiments, from quantum communication [1, 2, 3, 4], quantum state and process tomography [5, 6, 7, 8, 9, 10, 11, 12], entanglement verification [13, 14, 15, 16, 17] to quantum computation [18]. Given the growing use of the time-frequency degree of freedom of quantum light, which enables multidimensional encoding of quantum information in a single spatial mode, compatible with fiber optic and integrated optical setups [19, 20, 21, 22, 23], a necessity arises to efficiently detect single-photon superpositions in time and frequency [24, 25, 20, 19].
The measurement of temporal mode superpositions was performed by means of the optically nonlinear quantum pulse gate [26, 27], enabling quantum tomography [28], and by means of electro-optic sideband generation in frequency bins, enabling time-frequency entanglement detection [7, 8, 21]. However, those approaches require active spectral modification, either electro-optically or by nonlinear optical interactions. Franson interferometers [29, 30] are an alternative for the detection of time-bin superpositions, but a single interferometer detects the phase between only two time bins. Building a setup enabling multidimensional state discrimination requires nesting the interferometers [31, 32]. This implies higher complexity, cost, and in particular, losses. In the Franson interferometer tree method the probability of detecting a \(d\)-dimensional superposition scales as \(1/d\), due to the increasing number of possible paths in the nested interferometer [31].
Another possibility to approximately detect time-bin superpositions is to directly resolve their spectrum. However, resolving the signals in time and frequency simultaneously at the single-photon level is challenging due to the limits given by the time-frequency uncertainty relation [20, 33]. The highest resolution of efficient multiplexed single-photon spectral measurements is offered by the dispersive Fourier transform [34, 35, 36, 37, 38]; however, the large values of group delay dispersion needed to reach the required spectral resolution can result in prohibitively high losses [20].
Here we show that the temporal Talbot effect [39], a time-frequency equivalent of the well-known near-field diffraction effect [40], enables discrimination between \(d\) orthogonal states of a \(d\)-dimensional time-bin superposition. Our method requires only a passive experimental setup comprising a dispersive medium and a single time-correlated single-photon counter. We show that, although the discrimination is not perfect, the information content increases with the dimension. The setup can be constructed with off-the-shelf components, requires significantly less dispersion than a conventional dispersive Fourier spectrometer, and offers constant detection loss regardless of the dimension \(d\).
## 2 Methods
### Temporal Talbot effect
The Talbot effect is a phenomenon of self-imaging occurring for periodically placed intensity distributions [40]. Those self-images are created due to diffraction, and repeat themselves
after a certain characteristic distance. The optical space-time duality [41] lets us translate spatial effects within the paraxial approximation to temporal effects in short pulse propagation in a second-order dispersive medium, in the narrowband regime [42]. This means that a superposition of equally spaced pulses is self-imaged by propagation in a medium with a certain group delay dispersion (GDD) (which we will further refer to as the Talbot GDD) [39], just as equally spaced pulses would be self-imaged after some propagation distance due to diffraction (Fig. 1a). This temporal Talbot effect enables multiple bright-light applications such as multiplication of the repetition rate of a periodic pulse train [43], factorization of numbers [44], or real-time calculation of microwave spectrograms [45].
In analogy to the spatial case, the temporal location of the self-images depends on the relative phases of the sources. This enables us to employ the temporal Talbot effect to detect different time bin superpositions. In particular, we use the fact that the temporal Talbot effect can be seen as a discrete Fourier transform of input pulse trains [46, 47, 48].
### Detecting superpositions
Let us consider optical pulses in 4 time bins:
\[\ket{t_{0}},\ket{t_{1}},\ket{t_{2}},\ket{t_{3}}. \tag{1}\]
Those time bins are approximately Gaussian-shaped, 40 ps wide, and separated by 284 ps (the experimental details are presented in Sec. 2C).
Those states can be directly measured in time by means of time-correlated single photon counting (TCSPC). We also created 4 superpositions of those states that form a discrete Fourier transform [49] of the time bin basis:
\[\ket{f_{n}}=\frac{1}{\sqrt{d}}\sum_{m=0}^{d-1}e^{-\frac{2\pi i\,nm}{d}}\ket{t_{m}}, \tag{2}\]
with the dimension \(d=4\).
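A minimal numerical sketch of Eq. 2, assuming the conventional discrete-Fourier-transform phase; the array names are ours.

```python
import numpy as np

d = 4
# Row n holds the amplitudes of |f_n> in the time-bin basis {|t_m>}, as in Eq. 2.
F = np.array([[np.exp(-2j * np.pi * n * m / d) for m in range(d)] for n in range(d)]) / np.sqrt(d)
assert np.allclose(F @ F.conj().T, np.eye(d))   # the four superpositions are mutually orthogonal
print(np.round(np.angle(F[1]) / np.pi, 2))      # relative phases (in units of pi) imprinted for |f_1>
```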
Let us now consider propagation through a medium with group delay dispersion corresponding to the Talbot GDD (\(\beta_{2}\)). The Talbot GDD is linked to the separation between the pulses (\(\tau\)) [43] as:
\[\tau=\sqrt{\frac{2\pi\beta_{2}}{s}},s=1,2... \tag{3}\]
Finally, the signals are detected by time-resolved single photon counting at the output of the GDD medium. When the condition in Eq. 3 is met, a train of pulses in time will be observed (cf. the dashed line in Fig. 1a) due to the temporal Talbot effect.
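As a quick consistency check (ours, not the paper's), Eq. 3 evaluated for the dispersion module used in the experiment reproduces the quoted symbol separation.

```python
import numpy as np

beta2 = 12900.0                              # group delay dispersion of the DCM, in ps^2
tau_first = np.sqrt(2 * np.pi * beta2 / 1)   # first Talbot separation (s = 1) from Eq. 3
print(f"{tau_first:.1f} ps")                 # ~285 ps, consistent with the ~284 ps bin separation
```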
In Fig. 1b we show the dependence of the output time-resolved photon detection probabilities on the symbol separation for a given GDD value of 12900 ps\({}^{2}\), corresponding to the first Talbot separation in the experiment. A fringe pattern can be observed for symbol separations matching the condition in Eq. 3.
In Fig. 2a we present the optical pulses representing the 4 time bins. Those pulses were further used to form superpositions. The appropriate phases are adjusted with the voltage signals (blue rectangles) driving an electro-optic phase modulator. The histograms corresponding to the photon times of arrival of the states from the basis \(\{\ket{f_{i}}\}\) are presented in Fig. 2b.
In our method only a single detector is used and all detections are taken into account, but for a fraction of detections the result may be ambiguous. This is in contrast to the Franson interferometer tree method [32], where only a fraction of detections yields an unambiguous measurement. For single-shot state discrimination in the Talbot effect method we need a procedure to assign a symbol to a measured time of arrival. We want to do this with maximal _correctness_, that is, the ratio of correct outcomes to errors. We could try to do so by some probabilistic procedure based on the conditional probabilities of each symbol for each time of arrival. Next we show that the optimal strategy is to simply take the most probable symbol at a given time.
Figure 1: a) A Talbot carpet displaying the probability of finding a photon at a specified instant of time as a function of group delay dispersion (GDD). Color intensity depicts the probability density. Horizontal dashed lines represent the GDD values necessary to measure the distribution in the temporal far field and by means of the temporal Talbot effect. The carpet was generated assuming no phase difference between the pulses. b) An orthogonal Talbot carpet representing the probability to detect a photon in time as a function of symbol separation, calculated for \(\text{GDD}=12900\) ps\({}^{2}\) and \(\tau=284\) ps corresponding to the first Talbot separation.

Figure 2: a) Numerical calculation of the signals' temporal profiles. Yellow rectangles represent temporal symbols generated by means of fast electro-optic amplitude modulation. Blue rectangles represent voltage signals used for phase modulation. b) Simulated single-photon time-of-arrival distributions for the temporal Talbot effect, for the experimental parameters provided in the text.

With a fixed experimental setup this is a task of distinguishing classical random variables, which is a simple case of Bayesian decision theory [50]. Let us denote the random variable associated with the symbol \(i\) as \(X_{i}\). Symbol number \(i\) is chosen randomly, described by the random variable \(\sigma\) with the uniform distribution on \(\{0,1,\ldots,d-1\}\). Next, a random time-of-arrival value \(X_{i}=t_{0}\) is drawn. Our task is to decide from which \(X_{i}\) the value \(t_{0}\) was drawn. Here we consider the case of discrete time, which corresponds to a realistic setup with finite time resolution. We want to guess the outcome of \(\sigma\) given the outcome \(t_{0}\) of \(X_{\sigma}\). Based on the probabilities \(P(X_{i}=t_{0})\) we may decide the outcome probabilistically: given the result \(t_{0}\) we give answer \(i\) with probability \(a_{t_{0}}(i)\). We want to maximize the correctness, the ratio of correct results to wrong results, given by
\[C=\sum_{t}\sum_{i}a_{t}(i)P(\sigma=i|X_{\sigma}=t). \tag{4}\]
By the Bayes rule we have
\[P(\sigma=i|X_{\sigma}=t)=\frac{P(X_{\sigma}=t|\sigma=i)P(\sigma=i)}{P(X_{ \sigma}=t)}=\frac{P(X_{i}=t)\frac{1}{d}}{P(X_{\sigma}=t)}, \tag{5}\]
so the function to optimize takes the form
\[C=\sum_{t}\frac{1}{d\cdot P(X_{\sigma}=t)}\sum_{i}a_{t}(i)P(X_{i}=t). \tag{6}\]
We have
\[\sum_{i}a_{t}(i)P(X_{i}=t)\leq\max_{j}P(X_{j}=t)\sum_{i}a_{t}(i)=\max_{j}P(X_{ j}=t), \tag{7}\]
and this bound is saturated for the deterministic strategy where \(a_{t}(i)=1\) for \(i\) such that \(P(X_{i}=t)\) is maximal. For this game the optimal strategy is to always choose the outcome \(i\) such that
\[P(X_{i}=t_{0})=\max_{j}P(X_{j}=t_{0}). \tag{8}\]
Depending on the application it may be advantageous to have a lower error rate at the cost of lower efficiency. We can achieve this in postprocessing by taking only measurement times at which one symbol is much more probable than the others.
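A short sketch of the resulting decision rule, Eq. 8, operating on conditional time-of-arrival histograms; the histograms used here are random placeholders, whereas in the experiment they would come from the measured distributions of the \(\ket{f_{i}}\) states.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(64), size=4)        # placeholder P(X_i = t): 4 symbols, 64 arrival-time bins

decision = P.argmax(axis=0)                   # Eq. 8: assign each arrival time to the likeliest symbol
correct_prob = P.max(axis=0).sum() / P.shape[0]   # probability of a correct guess for a uniform prior
print(decision[:10], round(correct_prob, 3))
```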
### Experimental setup
Our experimental setup is shown in Fig. 3. A laser operating at the telecom wavelength of 1560 nm in the continuous wave (CW) mode was used as the source of light. For technical reasons, we amplified the optical signal with an erbium-doped fiber amplifier (EDFA, Pritel, HPP-PMFA-22-10) and employed a bandpass filter to reduce the noise. Optical pulses were carved by means of fast amplitude electro-optic modulation with a Mach-Zehnder modulator (MZM, Thorlabs, LNA6213). Those pulses were used to form superpositions consisting of \(d=4\) single states in a predefined phase relation. The superpositions were directed to the electro-optic phase modulator (EOPM, EOSpace), which was used to impose the appropriate phases on every component of the superposition. Both modulators support 40 GHz of analog bandwidth, and were driven with an amplified radio-frequency (RF) signal generated with the fast arbitrary waveform generator (AWG, Keysight, M1896A) providing 33 GHz of analog bandwidth and a sampling rate of up to 92.16 Gs/s. The full-width at half maximum (FWHM) duration of a one-bit signal was therefore \(\sim\) 12 ps. The phase factors were adjusted with the EOPM by programming four driving voltage signals consisting of approximately 150-ps-wide rectangular pulses (Fig. 2a) such that their amplitudes corresponded to fractions or multiples of the half-wave voltage (\(V_{\pi}\)). To generate optical signals we assigned 3 bits from the AWG memory per symbol. The separation between the symbols was equal to 282 ps instead of 284 ps due to the finite sampling frequency. The AWG was also the source of the 10 MHz clock (CLK) signal, distributed over an electrical cable to the detection section. We biased the MZM for minimal transmission with the direct current (DC) voltage from the power supply (Keysight, E36313A) using a feedback loop consisting of a 90:10 fiber optic beam splitter and a power meter (Thorlabs, PM400). Finally, the pulses were attenuated to the single photon level with the electronic variable optical attenuator (EVOA, Thorlabs, EVOA1550F).
On the detection side, we reconnected the optical path in order to either measure the photon counts directly in time, or to detect the superpositions. Time-domain measurement was achieved by directing the signals to niobium nitride superconducting nanowire single-photon detectors (SNSPDs, Single Quantum) and histogramming the time tags using a time-to-digital converter (Swabian Instruments, Time Tagger Ultra). The time tagger exhibited a jitter of \(\sim\) 10 ps root mean square (RMS), and the SNSPDs \(\sim\) 5 ps RMS. Thus, the total jitter of the receiver system was \(\sim\) 11 ps RMS. For the superposition measurement, photons were additionally transmitted through a chirped-fiber-Bragg-grating-based dispersion compensating module (DCM), resulting in dispersive stretching of the pulses. We used a DCM providing group delay dispersion (GDD) equivalent to 12900 ps\({}^{2}\) (Proximion, DCMHDC-100H-P510). This amount of dispersion was not enough to perform the dispersive Fourier transform, but sufficed to observe the temporal Talbot effect (see Fig. 1a), which we used to distinguish superpositions of the optical signals with different phases.
Figure 3: Experimental setup. A continuous wave (CW) telecommunications laser is modulated with an electro-optic Mach-Zehnder modulator (MZM) and electro-optic phase modulator (EOPM) to generate optical pulses forming superpositions. The optical signals are attenuated to the single photon level with an electronic variable optical attenuator (EVOA). The signals are detected either directly in the time domain (T) or in the superposition domain (S) by means of the temporal Talbot effect performed with the dispersion compensating module (DCM) providing group delay dispersion (GDD). Photon time-of-arrival histograms were acquired by means of time-correlated single photon counting (TCSPC). Yellow lines represent optical fiber connections.
## 3 Results
We demonstrated our technique with four four-dimensional superpositions. We analysed the histograms' widths, shapes, and locations on the time axis to verify our method. The temporal symbols were approximately Gaussian, 46 ps wide, and separated by approximately 284 ps, as expected. We examined these temporal properties using the \(|f_{0}\rangle\) state, since all the components of this superposition were equally modulated in phase (Fig. 4a). Fig. 4b presents the histograms of all the \(\{f_{i}\}\) states after propagation through the DCM. The obtained fringes were distinguishable in time, yielding the possibility to detect each state from the \(\{f_{i}\}\) basis.
To display the essence of utilising the Talbot effect for efficient detection, we measured the probability of successful measurement for symbol separations smaller and greater than the first Talbot separation. The outcomes of this measurement are presented in Fig. 4c. We evaluated the correctness according to the aforementioned selection criterion, Eq. 8. The result indicates that any deviation from the Talbot separation decreases the correctness, as the measurements are then performed outside the temporal far-field regime.
## 4 Conclusion and discussion
We propose a new method to detect time-bin states from the discrete Fourier transform basis. Those states could also be detected by a tree of Franson interferometers, as proposed in [32, 51]. For a system of dimension \(d\) this method requires \(d-1\) aligned Franson interferometers and \(d\) detectors, and uses postselection of results that discards \(\frac{d-1}{d}\) of the measurements. The real efficiency is further limited by the insertion loss of the interferometers in the optical path. In our method only one dispersion module and one detector are used for the measurement. It offers constant efficiency irrespective of the dimension, at the cost of an error rate increasing with the number of different states to be distinguished. One can recognize a trade-off between the Franson interferometer tree method, with a constant error rate, decreasing efficiency and increasing complexity, and our method, with constant efficiency, constant complexity and an error rate increasing with dimension. The error rate in our method can also be decreased by postselection of the results without any change in the experimental setup.
With a larger alphabet we send more bits of information, but we also witness higher error rates. To quantify this trade-off we use the mutual information to evaluate how the amount of information that can be sent using the Talbot effect detection method scales with the dimension. In particular, we calculate the mutual information between the random variable \(\sigma\) of the prepared state and the random variable of the time of arrival \(X_{\sigma}\). In Fig. 5 we show how it increases with the dimension of the prepared superposition.
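A possible way to compute this quantity from the same conditional histograms (again a sketch, assuming a uniform prior on the prepared symbol):

```python
import numpy as np

def mutual_information_bits(P):
    """I(sigma; X_sigma) in bits, for conditional distributions P[i, t] = P(X_i = t) and a uniform prior."""
    d = P.shape[0]
    joint = P / d                              # joint distribution of (symbol, arrival time)
    p_t = joint.sum(axis=0, keepdims=True)     # marginal distribution of the arrival time
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / np.broadcast_to(p_t / d, P.shape)[nz])).sum())
```

Evaluating this for conditional histograms obtained at increasing dimension gives the kind of scaling shown in Fig. 5.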
In summary, we show that the temporal Talbot effect in the single-photon-counting regime enables efficient detection of time-bin superpositions in an all-fiber setup. The proposed selection algorithm allowed us to distinguish the superpositions for all detected photons with an error rate of 36.5% for the Talbot separation between pulses. Compared to the Franson interferometer tree method we have higher efficiency at the cost of a higher error rate. Other postselection strategies, including rejection of ambiguous measurements, can alter that trade-off. Our technique can be easily expanded to higher dimensions with an increase in bits of information per pulse. It offers low complexity and high flexibility, which is key for quantum-photonic applications. We expect it will enable the development of robust and efficient quantum communication methods, as well as techniques for time-frequency entanglement detection and characterization and quantum state and process tomography.
Figure 4: a) Measured histogram of the \(|f_{0}\rangle\) state in the time domain. b) Measured histograms of the \(|f_{i}\rangle\) states after frequency-to-time mapping. c) Measured correctness of the \(|f_{i}\rangle\) state discrimination, compared to the theoretical dependencies taking into account the no-jitter (noiseless) and 11 ps root mean square (RMS) jitter (noisy) cases. The error bars indicate the sampling error due to the finite sampling frequency of the AWG. Imperfections in symbol preparation (not included in the simulation) lead to a small decrease in the measured correctness.

Figure 5: Numerical simulation of the mutual information of the input state and the (noisy) time-of-arrival random variables.

Funding. A part of this work was carried out within the Project QuLCHE, supported by the National Science Centre of Poland project no. 2019/32/Z/ST2/00018, under QuantERA, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no 731473. The authors appreciate funding from the University of Warsaw within the "Excellence Initiative - Research University" framework.
###### Acknowledgements.
We thank M. Mikolajczyk, R. Demkowicz-Dobrzanski and F. Sosnicki for insightful discussions.
## Disclosures
The authors declare no conflicts of interest.
|
2303.00729
|
Adiabatic eigenstate deformations and weak integrability breaking of
Heisenberg chain
|
We consider the spin-1/2 XXX chain weakly perturbed away from integrability
by an isotropic next-to-nearest neighbor exchange interaction. Recently, it was
conjectured that this model possesses an infinite tower of quasiconserved
integrals of motion (charges) [D. Kurlov et al., Phys. Rev. B 105, 104302
(2022)]. In this work we first test this conjecture by investigating how the
norm of the adiabatic gauge potential (AGP) scales with the system size, which
is known to be a remarkably accurate measure of chaos. We find that for the
perturbed XXX chain the behavior of the AGP norm corresponds to neither an
integrable nor a chaotic regime, which supports the conjectured
quasi-integrability of the model. We then prove the conjecture and explicitly
construct the infinite set of quasiconserved charges. Our proof relies on the
fact that the XXX chain perturbed by next-to-nearest exchange interaction can
be viewed as a truncation of an integrable long-range deformation of the
Heisenberg spin chain.
|
Pavel Orlov, Anastasiia Tiutiakina, Rustem Sharipov, Elena Petrova, Vladimir Gritsev, Denis V. Kurlov
|
2023-03-01T18:44:04Z
|
http://arxiv.org/abs/2303.00729v3
|
# Adiabatic eigenstate deformations and weak integrability breaking of Heisenberg chain
###### Abstract
We consider the spin-\(\frac{1}{2}\) Heisenberg chain (XXX model) weakly perturbed away from integrability by an isotropic next-to-nearest neighbor exchange interaction. Recently, it was conjectured that this model possesses an infinite tower of _quasiconserved_ integrals of motion (charges) [D. Kurlov _et al._, Phys. Rev. B **105**, 104302 (2022)]. In this work we first test this conjecture by investigating how the norm of the adiabatic gauge potential (AGP) scales with the system size, which is known to be a remarkably accurate measure of chaos. We find that for the perturbed XXX chain the behavior of the AGP norm corresponds to neither an integrable nor a chaotic regime, which supports the conjectured quasi-integrability of the model. We then prove the conjecture and explicitly construct the infinite set of quasiconserved charges. Our proof relies on the fact that the XXX chain perturbed by next-to-nearest exchange interaction can be viewed as a truncation of an integrable long-range deformation of the Heisenberg spin chain.
## I Introduction
Quantum chaos has become a subject of intensive research over the last decades. Despite significant efforts and numerous milestones achieved, there still is a large number of open questions (for a review, see, e.g. Refs. [1; 2; 3; 4]). On the contrary, in classical systems, chaos is a well understood phenomenon that relies on the exponential sensitivity of the phase space trajectories to initial conditions [5]. This does not occur in integrable systems, because their trajectories are confined to certain subregions (tori) of the phase space, due to the presence of many conservation laws [6]. Moreover, the renowned Kolmogorov-Arnold-Moser (KAM) theorem states that classical integrable systems under weak integrability-breaking perturbations remain stable for a sufficiently long time, because such perturbations do not destroy and only slightly deform most of the phase-space tori [7; 8; 9; 10; 11].
Extending the physical picture of chaos from classical systems to the quantum ones is far from being straightforward, already due to the fact that the notion of phase space trajectories does not apply to quantum systems. Thus, in the quantum case one has to define chaos differently. A particularly successful and widely accepted approach to quantum chaos is based on the random matrix theory (RMT) [12; 13] and the celebrated eigenstate thermalization hypothesis (ETH) [14; 15; 16], which describes how isolated quantum systems relax locally to thermal equilibrium. In the context of RMT and ETH, chaotic quantum systems are most commonly characterized by their spectral properties, such as the level spacing statistics [17; 18; 19], mean gap ratio [20; 21], and spectral form factor [22; 23]. For instance, according to the Bohigas-Giannoni-Schmit conjecture [24] chaotic systems exhibit Wigner-Dyson level spacing statistics due to the repulsion between the energy levels. On the contrary, in integrable quantum systems the energy levels are uncorrelated, so that the corresponding level spacing statistics is Poissonian [25]. Intuitively this can be understood from the fact that integrable systems possess a macroscopic number of local conserved quantities (charges) that commute with the Hamiltonian and one another, which is a widely accepted criterion for quantum integrability. This is also the reason why quantum integrable systems do not follow the ETH. Instead, their thermalization is governed by the so-called generalized Gibbs ensemble (GGE) that takes into account the additional conservation laws [26; 27; 28].
Recently, an alternative approach to quantum chaos has been formulated, which utilizes concepts of quantum geometry [29] and relies on the rate of deformations of eigenstates under infinitesimal perturbations. It turns out that the generator of these deformations, dubbed the adiabatic gauge potential (AGP), provides an exceptionally sensitive measure of chaos [30; 31]. Indeed, the Frobenius norm of the AGP is nothing other than the distance between nearby eigenstates (the so-called Fubini-Study metric) [32; 33; 34]. It can be easily shown that this norm scales with the system size in a drastically different manner for chaotic and integrable quantum systems. Namely, the AGP norm exhibits an exponential scaling for chaotic systems described by the ETH, whereas for integrable systems the scaling is polynomially
bounded [30]. Therefore, one can say that quantum chaos manifests itself in the exponential sensitivity of the eigenstates to the integrability-breaking perturbations, which provides a certain analogy with the classical chaos. The AGP norm is remarkably accurate in distinguishing the chaotic systems from the integrable ones, since it is sensitive to integrability-breaking perturbations that are exponentially small in the system size. This by far exceeds the sensitivity of standard probes of chaos, such as the spectral form factor or level statistics. Moreover, from the practical perspective, the difference between the polynomially bounded and exponential scaling of the AGP norm is easy to detect numerically, even for relatively small system sizes. These and other arguments demonstrate that the AGP norm is an extremely useful tool for detecting chaotic behavior in quantum many-body systems. Over the past few years the AGP-based approach has led to a number of important achievements and insights on quantum chaos [35] and quantum control [36; 37; 38]. It was also used in the context of the many-body localization [39; 40; 41; 42; 43; 44; 45; 46; 47].
One of the most important questions in the field of quantum chaos is related to the fate of many-body systems under weak integrability-breaking perturbations. Generalizing the KAM theorem to the quantum case is a long-standing problem. Despite recent findings demonstrating some progress in this direction [48], a complete understanding is missing. To some extent this can be explained by the fact that even the very definition of quantum integrability is subtle [49]. Extensive research shows that weakly-nonintegrable quantum systems do not thermalize for sufficiently long times \(t_{\rm th}\sim\lambda^{-2}\), where \(\lambda\ll 1\) is the strength of the perturbations [50; 51], as can be understood using the Pauli master equation and Fermi's golden rule-like arguments [52]. At times smaller than \(t_{\rm th}\), weakly-nonintegrable systems exhibit a different, so-called prethermal, behaviour [53; 54; 55; 56]. It is believed that the prethermal phase should be described by some effective GGE [57; 58].
Remarkably, in some cases the thermalization time turns out to be much larger than the naive \(t_{\rm th}\sim\lambda^{-2}\) scaling [52; 59]. For instance, the spin-\(\frac{1}{2}\) isotropic Heisenberg chain (XXX model), weakly perturbed by an isotropic next-to-nearest neighbor exchange interaction, was found to exhibit transport behavior consistent with the thermalization time \(t_{\rm th}\sim\lambda^{-4}\) [60]. This anomalously large thermalization time was attributed to the existence of an _approximate_ integral of motion, conserved with the accuracy \(O(\lambda^{2})\). This question has been further addressed in Ref. [61], which explicitly constructed a few higher-order quasiconserved charges for the spin-\(\frac{1}{2}\) XXX model, weakly perturbed by an isotropic next-to-nearest neighbor exchange interaction. In the same work it was conjectured that one can construct as many quasiconserved charges for the perturbed XXX chain as there are exactly conserved charges for the unperturbed integrable model (infinitely many in the thermodynamic limit). Similarly, the first few quasiconserved charges were constructed for an isotropic XY chain perturbed by a next-to-nearest neighbor XY interaction [62]. There have been numerous studies of spectral properties in weakly-nonintegrable models, for instance showing a crossover from Poissonian to Wigner-Dyson level statistics, see e.g. Refs. [63; 64; 17; 65; 66]. However, since the AGP norm has proven to be much more sensitive and efficient in detecting quantum chaos, it is natural to ask whether it can provide further insight into the behavior of weakly-nonintegrable systems.
The aim of this work is twofold. First, we investigate the perturbed XXX chain using the AGP-based approach. Steps in this direction (albeit with a different logic - see discussion in Sec. III) were already performed in Ref. [30]. There, it was observed that apart from the polynomially bounded and exponential scaling for integrable and chaotic systems, respectively, the AGP norm can exhibit yet another regime. Namely, for weakly-nonintegrable systems one has a sharp crossover between the regimes of a polynomially bounded and exponential scaling of the AGP norm. The latter regime has been associated with the emergence of exponentially slow relaxation dynamics [30]. The crossover was found to occur at a critical perturbation strength that is exponentially small in the system size. This picture agrees with our findings presented in this paper. We find that the AGP norm for an integrable model (XXX chain) weakly perturbed by an integrability-breaking perturbation (isotropic next-to-nearest neighbor exchange interaction) exhibits the crossover between the polynomially bounded and exponential scaling at a critical perturbation strength, which is exponentially small in the length of the chain. The observed behavior of the AGP norm is distinct from both integrable and purely chaotic regimes. We argue that this strongly supports the conjecture on the quasi-integrability of the perturbed XXX model, put forward in Ref. [61].
In the second part of the paper we proceed with the analytic proof of this conjecture. We present an explicit construction of the infinite set of quasi-conserved charges for the isotropic Heisenberg Hamiltonian perturbed by a weak next-to-nearest neighbor exchange interaction. We employ the idea of integrability-preserving long-range deformations, introduced in Ref. [67], and show its direct relation to the notion of AGP. Our proof of quasi-integrability relies on the fact that the perturbed XXX chain can be viewed as a truncation of an integrable spin chain with long-range interactions.
The rest of the paper is organized as follows. In Sec. II we introduce the perturbed XXX model and discuss the conjecture on its quasi-integrability [68]. In Sec. III we briefly review the notion of the AGP. We numerically investigate the scaling of the AGP norm for our model and demonstrate that the results are consistent with the conjectured quasi-integrability. Then, in Sec. IV we proceed with the analytic proof of the conjecture and present an explicit construction of an infinite set of the quasiconserved charges for the perturbed XXX chain. Finally, in Sec. V we discuss our results and conclude. For
the sake of completeness the paper is supplemented with an appendix where we briefly discuss another weakly-nonintegrable model - isotropic XY chain perturbed by the next-to-nearest neighbor XY interaction, studied in Ref. [62].
## II Perturbed XXX model
In this section, we introduce the model and briefly discuss the conjecture on its quasi-integrability, put forward in Ref. [61]. We consider the Hamiltonian
\[H(\lambda)=H_{0}+V(\lambda), \tag{1}\]
where \(H_{0}\) is the integrable part and \(V(\lambda)\) is the perturbation. The real parameter \(\lambda\) controls the perturbation strength and we assume \(\lambda\ll 1\), so that the perturbation is weak. The term \(H_{0}\) corresponds to the spin-\(\frac{1}{2}\) isotropic Heisenberg chain and reads
\[H_{0}=J\sum_{j}\mathbf{\sigma}_{j}\cdot\mathbf{\sigma}_{j+1}, \tag{2}\]
where \(\mathbf{\sigma}_{j}=\{\sigma_{j}^{x},\sigma_{j}^{y},\sigma_{j}^{z}\}\) is the vector of Pauli matrices, the dot denotes the scalar product, and \(J\) is the exchange coupling constant. Hereinafter we work in units with \(J=1\). The Hamiltonian (2) is integrable and can be solved exactly by the Bethe ansatz [69; 70]. The model possesses an infinite number of conserved charges that commute with the Hamiltonian and one another
\[[H_{0},\mathcal{Q}_{n}]=[\mathcal{Q}_{m},\mathcal{Q}_{n}]=0. \tag{3}\]
We stress that the conserved charges \(\mathcal{Q}_{n}\) are required to be local in the sense that they are given by a sum of operators with finite support. By convention, \(\mathcal{Q}_{1}\) is the total magnetization, which is clearly conserved by \(H_{0}\), and the second conserved charge coincides with the Hamiltonian itself, \(\mathcal{Q}_{2}=H_{0}\). The higher charges can be iteratively generated from \(\mathcal{Q}_{2}\) as [71; 72; 73]
\[\mathcal{Q}_{n+1}=\big{[}\mathcal{B}[\mathcal{Q}_{2}],\mathcal{Q}_{n}\big{]}, \tag{4}\]
where \(\mathcal{B}[\mathcal{Q}_{2}]\) is the so-called boost operator. Explicitly it is given by
\[\mathcal{B}[\mathcal{Q}_{2}]=\frac{1}{2i}\sum_{j}j\mathbf{\sigma}_{j}\cdot\mathbf{ \sigma}_{j+1}, \tag{5}\]
and we see that \(\mathcal{B}[\mathcal{Q}_{2}]\) is constructed out of the second charge (Hamiltonian), cf. Eq. (2). Note that every next charge has a larger support as compared to the previous one. For the Heisenberg model the \(n\)th charge \(\mathcal{Q}_{n}\) generated by Eq. (4) is a sum of terms with support on up to \(n\) lattice sites. In addition, \(\mathcal{Q}_{n}\) usually contains terms that are also present in the previous charges [71]. It is convenient to work with a different basis \(\{Q_{m}^{(0)}\}\) in which every next charge does not contain the terms from the previous charges. The first charges are the same in both bases (\(\mathcal{Q}_{m}=Q_{m}^{(0)}\) for \(m=1,2\)), whereas the next two conserved charges in this basis read
\[Q_{3}^{(0)} =\sum_{j}\big{(}\mathbf{\sigma}_{j}\times\mathbf{\sigma}_{j+1}\big{)} \cdot\mathbf{\sigma}_{j+2}, \tag{6}\] \[Q_{4}^{(0)} =\sum_{j}\big{(}(\mathbf{\sigma}_{j}\times\mathbf{\sigma}_{j+1})\times \mathbf{\sigma}_{j+2}\big{)}\cdot\mathbf{\sigma}_{j+3}+\sum_{j}\mathbf{\sigma}_{j}\cdot \mathbf{\sigma}_{j+2}, \tag{7}\]
where the cross denotes the vector product and the general form of \(Q_{n}^{(0)}\) for the XXX model can be found in Ref. [71].
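As a concrete illustration of the commutation relations (3), the following sketch (not from the original paper; plain NumPy with dense matrices and our own helper names, so it is only practical for small \(L\)) builds \(H_{0}\) and the charge \(Q_{3}^{(0)}\) of Eq. (6) for a small periodic chain and checks that they commute.

```python
import numpy as np
from functools import reduce

# single-site Pauli matrices and identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]
id2 = np.eye(2, dtype=complex)

def site_op(L, ops_at_sites):
    """Tensor product of single-site operators on an L-site chain;
    ops_at_sites maps a site index (taken mod L, i.e. periodic) to a 2x2 matrix."""
    ops = [id2] * L
    for j, op in ops_at_sites.items():
        ops[j % L] = op
    return reduce(np.kron, ops)

def exchange(L, i, j):
    """Exchange term sigma_i . sigma_j on an L-site periodic chain."""
    return sum(site_op(L, {i: s, j: s}) for s in paulis)

def xxx_hamiltonian(L):
    """H0 of Eq. (2) with J = 1 and periodic boundary conditions."""
    return sum(exchange(L, j, j + 1) for j in range(L))

# Levi-Civita symbol used for (sigma_i x sigma_j) . sigma_k
EPS = {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
       (2, 1, 0): -1, (0, 2, 1): -1, (1, 0, 2): -1}

def cross_dot(L, i, j, k):
    """(sigma_i x sigma_j) . sigma_k on an L-site periodic chain."""
    return sum(sign * site_op(L, {i: paulis[a], j: paulis[b], k: paulis[c]})
               for (a, b, c), sign in EPS.items())

def charge_q3(L):
    """Q3^(0) of Eq. (6)."""
    return sum(cross_dot(L, j, j + 1, j + 2) for j in range(L))

L = 8
H0, Q3 = xxx_hamiltonian(L), charge_q3(L)
print(np.linalg.norm(H0 @ Q3 - Q3 @ H0))   # vanishes up to round-off: Q3 is conserved
```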
Let us now turn to the second term in Eq. (1). Following Ref. [61], for the perturbation \(V(\lambda)\) we take the isotropic next-to-nearest neighbor exchange interaction:
\[V(\lambda)=\lambda\sum_{j}\mathbf{\sigma}_{j}\cdot\mathbf{\sigma}_{j+2}. \tag{8}\]
Note that both \(H_{0}\) and \(V(\lambda)\) are translation and \(SU(2)\) invariant and we assume that the system is in the thermodynamic limit. The perturbation (8) breaks the integrability and the quantities \(Q_{n}^{(0)}\) are no longer conserved since they do not commute with the total Hamiltonian \(H(\lambda)\). Moreover, one clearly has \(\|[H(\lambda),Q_{n}^{(0)}]\|\propto\lambda\), so that the quantities \(Q_{n}^{(0)}\) change significantly over times much shorter than \(t_{\rm th}\). Thus, they can not be responsible for the existence of the prethermal phase. One can try to deform \(Q_{n}^{(0)}\) into
\[\tilde{Q}_{n}(\lambda)=Q_{n}^{(0)}+\lambda\,Q_{n}^{(1)}, \tag{9}\]
where the correction \(Q_{n}^{(1)}\) is chosen such that the deformed charges \(\tilde{Q}_{n}(\lambda)\) satisfy
\[\|[H(\lambda),\tilde{Q}_{n}(\lambda)]\|\propto\lambda^{2}, \tag{10}\]
and also commute with each other with the accuracy \(O(\lambda^{2})\). If this is possible, then the _quasi-conserved_ charges \(\tilde{Q}_{n}(\lambda)\) constrain the dynamics and prevent the system from being truly chaotic during the prethermal phase. With a tedious but straightforward brute-force approach the authors of Ref. [61] have constructed the first four nontrivial quasi-conserved charges, \(\tilde{Q}_{n}(\lambda)\) with \(3\leq n\leq 6\), for the perturbed Heisenberg model. For instance, the correction to \(Q_{3}^{(0)}\) was found to be
\[Q_{3}^{(1)}=\sum_{j}\left(\mathbf{\sigma}_{j}\times\mathbf{\sigma}_{j+1}\right)\cdot\mathbf{ \sigma}_{j+3}+\sum_{j}\left(\mathbf{\sigma}_{j}\times\mathbf{\sigma}_{j+2}\right)\cdot \mathbf{\sigma}_{j+3}. \tag{11}\]
It was then conjectured that one has as many quasi-conserved charges for the perturbed model as there are exact conservation laws for the unperturbed one. We prove this conjecture in Sec. IV, where we explicitly construct an infinite tower of quasi-conserved charges. However, before doing so, let us first test the conjecture of Ref. [61] using an extremely sensitive probe of chaos - the AGP norm.
## III Adiabatic gauge potential and integrability breaking
In this section we briefly review the notion of adiabatic gauge potential and its power in detecting chaos. For an in-depth discussion see Refs. [30] and [34]. We then numerically investigate the scaling of AGP norm with the system size for the perturbed Heisenberg model. We show that the results are consistent with the conjectured quasi-integrability of the model.
### Adiabatic gauge potential
Consider a Hamiltonian \(H(\lambda)\) depending on a parameter \(\lambda\). Let \(\{\ket{n(\lambda)}\}\) be its orthonormal eigenbasis, so that one has
\[H(\lambda)\ket{n(\lambda)}=E_{n}(\lambda)\ket{n(\lambda)}. \tag{12}\]
Then, there exists a unitary transformation that adiabatically rotates the eigenstates as
\[\ket{n(\lambda)}=U(\lambda)\ket{n_{0}}, \tag{13}\]
where \(\ket{n_{0}}=\ket{n(0)}\). The generator of this transformation is the so-called adiabatic gauge potential defined as
\[\mathcal{A}_{\lambda}=i\left[\partial_{\lambda}U(\lambda)\right]U^{\dagger}( \lambda), \tag{14}\]
so that its action on the eigenstates is \(\mathcal{A}_{\lambda}\ket{n(\lambda)}=i\partial_{\lambda}\ket{n(\lambda)}\). It can be easily shown that the AGP satisfies the following operator equation [74; 34]
\[i\partial_{\lambda}H(\lambda)=\left[\mathcal{A}_{\lambda},H(\lambda)\right]-i \mathcal{F}(\lambda), \tag{15}\]
where the operator \(\mathcal{F}(\lambda)\) is diagonal in the eigenbasis of \(H(\lambda)\). Explicitly, it is given by
\[\mathcal{F}(\lambda)=-\sum_{n}\frac{\partial E_{n}(\lambda)}{\partial\lambda} \ket{n(\lambda)}\!\bra{n(\lambda)}. \tag{16}\]
The relation (15) can be easily derived from the fact that the rotated Hamiltonian \(\tilde{H}(\lambda)=U^{\dagger}(\lambda)H(\lambda)U(\lambda)\) commutes with its derivative \(\partial_{\lambda}\tilde{H}(\lambda)\). Similarly, Eq. (16) follows immediately from the Schrodinger equation (12). Differentiating both sides of Eq. (12) with respect to \(\lambda\) and substituting \(\partial_{\lambda}H(\lambda)\) from Eq. (15), one arrives at Eq. (16).
With the help of the Hellmann-Feynman theorem one can easily see that the matrix elements of the AGP calculated between the eigenstates of \(H(\lambda)\) read
\[\bra{m(\lambda)}\mathcal{A}_{\lambda}\ket{n(\lambda)}=-\frac{i}{\omega_{mn}} \bra{m(\lambda)}\partial_{\lambda}H(\lambda)\ket{n(\lambda)}, \tag{17}\]
where \(\omega_{mn}=E_{m}(\lambda)-E_{n}(\lambda)\). In order to allow for degeneracies (accidental or not), one has to regularize the AGP as
\[\bra{m}\mathcal{A}_{\lambda}(\mu)\ket{n}=-\frac{i\,\omega_{mn}}{\omega_{mn}^ {2}+\mu^{2}}\bra{m}\partial_{\lambda}H(\lambda)\ket{n}, \tag{18}\]
where \(\mu\) is a small energy cutoff and we suppressed the dependence on \(\lambda\) for brevity. The results of Ref. [30] show that the optimal cutoff choice is \(\mu\sim L\mathcal{D}^{-1}\). In order to lighten the notations, from now on we drop the argument \(\mu\), so that \(\mathcal{A}_{\lambda}\) refers to the regularized AGP.
The Frobenius norm of the regularized AGP reads
\[\|\mathcal{A}_{\lambda}\|^{2}=\frac{1}{\mathcal{D}}\sum_{n}\sum_{m\neq n}|\bra {m}\mathcal{A}_{\lambda}\ket{n}|^{2}, \tag{19}\]
where \(\mathcal{D}\) is the dimension of the Hilbert space. It is then straightforward to see that for chaotic systems, described by the ETH, the AGP norm scales exponentially with the system size. Indeed, according to the ETH, the off-diagonal matrix elements of any local operator scale as \(e^{-S/2}\), where \(S\) is the entropy of the system [14]. Similarly, for the level spacings one has \(\omega_{mn}\sim e^{-S}\). Thus, provided that the cutoff is chosen as \(\mu\sim e^{-S}\), we immediately see that for chaotic systems \(\|\mathcal{A}_{\lambda}\|^{2}\sim e^{\kappa L}\), with some \(\kappa>0\). On the contrary, for integrable models the AGP norm behaves differently and its scaling with the system size is bounded polynomially, as was demonstrated in Ref. [30]. It turns out that for _weakly_ nonintegrable systems the AGP norm exhibits yet another behavior, as we demonstrate below.
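For readers who wish to reproduce this kind of diagnostic, the regularized AGP norm of Eqs. (18)-(19) can be evaluated directly from exact diagonalization. The following minimal sketch (our own function name; dense linear algebra, so limited to small Hilbert spaces) takes the Hamiltonian and the perturbation direction \(\partial_{\lambda}H\) as matrices and returns \(\|\mathcal{A}_{\lambda}\|^{2}\).

```python
import numpy as np

def agp_norm_squared(H, dH, mu):
    """Regularized AGP norm of Eqs. (18)-(19):
    |<m|A(mu)|n>|^2 = omega_mn^2 / (omega_mn^2 + mu^2)^2 * |<m|dH|n>|^2,
    summed over m != n and divided by the Hilbert-space dimension."""
    E, V = np.linalg.eigh(H)            # spectrum and eigenbasis of H(lambda)
    dH_eig = V.conj().T @ dH @ V        # matrix elements <m| dH |n>
    omega = E[:, None] - E[None, :]     # omega_mn = E_m - E_n
    weight = omega**2 / (omega**2 + mu**2)**2
    np.fill_diagonal(weight, 0.0)       # exclude the diagonal m = n
    return np.sum(weight * np.abs(dH_eig)**2) / len(E)
```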
### Scaling of the AGP norm for the perturbed XXX model
We now proceed with calculating the AGP norm for the spin-\(\frac{1}{2}\) XXX model weakly perturbed by an isotropic next-to-nearest neighbor interaction. The Hamiltonian is \(H(\lambda)=H_{0}+V(\lambda)\), where \(H_{0}\) is the integrable XXX Hamiltonian (2) and the perturbation \(V(\lambda)\) is given by Eq. (8). In this section we consider the system on a finite lattice of \(L\) sites and impose periodic boundary conditions in order to retain the translation invariance of the model. For the AGP norm in Eq. (19) we include only the eigenstates of \(H(\lambda)\) belonging to the Hilbert space sector with zero magnetization [75]. Accordingly, for the normalization factor in Eq. (19) we take the size \(\mathcal{D}_{0}\) of zero magnetization sector. Similarly, the cutoff is chosen as \(\mu=L/\mathcal{D}_{0}\). Then, using Eq. (19) we calculate numerically the AGP norm as a function of system size for \(8\leq L\leq 20\). Since in our case the perturbation \(V(\lambda)\) is extensive, we rescale the AGP norm as \(\|\mathcal{A}_{\lambda}\|^{2}/L\). The results are presented in Fig. 1 for different values of perturbation strength \(\lambda\). One can clearly see that the rescaled AGP norm enters the regime of exponential scaling at a certain system size-dependent critical perturbation strength \(\lambda^{*}(L)\). For \(\lambda\lesssim\lambda^{*}(L)\), the (rescaled) AGP norm scaling is bounded by a polynomial in \(L\), as one would expect for an integrability-preserving perturbation [30]. Interestingly, we find that in our case the scaling is logarithmic, \(\|\mathcal{A}_{\lambda}\|^{2}/L\sim\log L\). In the opposite case, for \(\lambda\gtrsim\lambda^{*}(L)\), the rescaled AGP norm scales exponentially with \(L\). In this regime one has
\(\|\mathcal{A}_{\lambda}\|^{2}/L\propto e^{\kappa L}\), and our results give \(\kappa=1.62\pm 0.05\). As we demonstrate in the inset of Fig. 1, the critical coupling decreases exponentially with the system size and we find \(\lambda^{*}\sim e^{-0.44L}\).
The fact that the AGP norm in Fig. 1 scales in a drastically different way from both the integrability-preserving perturbations and the genuinely chaotic ones supports the conjecture on the quasi-integrability of the perturbed XXX chain [61]. Moreover, our results suggest that the AGP norm can be used as a very useful tool not only for detecting chaos but also for distinguishing the chaotic perturbations from the weak integrability-breaking ones, in the spirit of the KAM theorem. Indeed, consider again the Hamiltonian \(H(\lambda)=H_{0}+V(\lambda)\), where \(H_{0}\) is integrable and \(V(\lambda)\) is the perturbation. Assume that \(V(\lambda)\) is known to break the integrability, which is usually easy to check. Then, in order to tell whether the integrability is strongly or weakly broken, all one needs is to calculate the AGP norm in the _integrability-breaking_ direction \(V(\lambda)\) for sufficiently large system size and zero value of \(\lambda\) [i.e. one calculates the AGP using the eigenstates of the unperturbed Hamiltonian, cf. Eq. (19)]. If the AGP norm scales exponentially, then the perturbation \(V(\lambda)\) is chaotic and it completely breaks the integrability of \(H_{0}\). On the contrary, if the scaling of the AGP norm at \(\lambda=0\) is bounded polynomially, then \(V(\lambda)\) only weakly breaks the integrability.
We illustrate this idea in Fig. 2, which shows the scaling of the AGP norm for the XXX chain perturbed by the operators of the form \(V(\lambda)=\lambda\sum_{j}\mathbf{\sigma}_{j}\cdot\mathbf{\sigma}_{j+m}\) with \(2\leq m\leq 5\). For each \(m\), we calculate the AGP at \(\lambda=0\). The results unambiguously demonstrate that the case \(m=2\), which corresponds to the next-to-nearest exchange interaction, is special, since the scaling of the AGP norm is bounded polynomially. On the contrary, for less local perturbations with \(3\leq m\leq 5\) the AGP norm at \(\lambda=0\) scales exponentially with the system size. Therefore, the next-to-nearest exchange interaction breaks the integrability of the XXX model only weakly, whereas the perturbations with larger support are genuinely chaotic as they break the integrability more strongly.
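A minimal version of the comparison shown in Fig. 2 can be assembled from the helpers sketched earlier (site_op, exchange, xxx_hamiltonian) together with agp_norm_squared above. For brevity the snippet works in the full Hilbert space with cutoff \(\mu=L/2^{L}\) rather than in the zero-magnetization sector used in the paper, so the absolute values will differ, but the qualitative contrast between \(m=2\) and \(m>2\) perturbations should already be visible at small \(L\).

```python
# Rescaled AGP norm at lambda = 0 for V = lambda * sum_j sigma_j . sigma_{j+m},
# reusing exchange(), xxx_hamiltonian() and agp_norm_squared() defined above.
for m in (2, 3):
    for L in (6, 8, 10):
        H0 = xxx_hamiltonian(L)
        dH = sum(exchange(L, j, j + m) for j in range(L))   # dH/dlambda at lambda = 0
        mu = L / 2**L                 # cutoff ~ L / (Hilbert-space dimension)
        print(m, L, agp_norm_squared(H0, dH, mu) / L)
```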
Let us finish this section with a remark. Naively, one may conclude the scaling of the AGP norm in Fig. 1 strongly resembles the one presented in Fig. 3 in Ref. [30]. While it is true that the effects of integrability-breaking perturbations on the behavior of the AGP norm have already been studied in Ref. [30], their protocol is very different from ours. They consider a weakly-nonintegrable system and calculate the AGP for a perturbation that, unlike in our case, is _different from the one breaking the integrability_. Then, they find that the scaling of the AGP norm in the _integrable_ direction also demonstrates the crossover between the regimes of polynomially bounded and exponential scalings, similar to the one in Fig. 1. We would like to emphasize that our protocol (where the AGP is calculated in the direction of the same perturbation that breaks the integrability) allows for a transparent physical interpretation, as demonstrated in this section.
## IV Long-range deformations and quasiconserved charges
In this section we proceed with proving the conjecture on the quasi-integrability of the Heisenberg chain (2) weakly perturbed by the next-to-nearest neighbor exchange interaction (8). First, we briefly review the idea of integrability-preserving long-range deformations, discussed in Ref. [67], and relate it to the AGP. Then, we show how truncating the formal series for the charges of a long-range deformed integrable spin chain leads to a quasi-integrable model with quasiconserved charges. We then present an explicit construction of the quasi-conserved charges for the perturbed XXX chain.
### AGP and integrability-preserving deformations
We consider the Hamiltonian \(H(\lambda)=H_{0}+V(\lambda)\) from Eq. (1). The term \(H_{0}\) is the integrable part, and one has an infinite set of mutually commuting conserved charges \(Q_{n}^{(0)}\). Then, let \(\{|n_{0}\rangle\}\) and \(\{|n(\lambda)\rangle\}\) be the eigenbasis of \(H_{0}\) and \(H(\lambda)\), respectively, so that the transformation \(U(\lambda)\) from Eq. (13) connects the unperturbed basis with the perturbed one. The transformation \(U(\lambda)\)
Figure 1: Main panel: The rescaled AGP norm \(\|\mathcal{A}_{\lambda}\|^{2}/L\) as a function of the system size \(L\) for the XXX Hamiltonian \(H_{0}\) in Eq. (2), weakly perturbed by the isotropic next-to-nearest exchange interaction \(V(\lambda)\), given by Eq. (8). The AGP is calculated using Eq. (19). One can clearly see a crossover from the polynomially bounded to the exponential scaling, which occurs at a system-size dependent critical perturbation strength \(\lambda^{*}(L)\). The solid lines are the exponential fits \(\|\mathcal{A}_{\lambda}\|^{2}/L\propto e^{\kappa L}\), with \(\kappa=1.62\pm 0.05\). Inset: The scaling of the critical perturbation strength \(\lambda^{*}\) with the system size. The solid line is the exponential fit with \(e^{-0.44L}\).
is generated by the AGP \(\mathcal{A}_{\lambda}\), as follows from Eq. (14). The AGP satisfies the operator relation (15).
Then, following Ref. [67], let us assume that the perturbation \(V(\lambda)\) does not break the integrability of the total Hamiltonian \(H(\lambda)\). In this case one has an infinite set of deformed conserved charges \(Q_{n}(\lambda)\) that satisfy
\[\big{[}H(\lambda),Q_{n}(\lambda)\big{]}=\big{[}Q_{m}(\lambda),Q_{n}(\lambda) \big{]}=0, \tag{20}\]
along with \(Q_{n}(0)=Q_{n}^{(0)}\). Keeping in mind that \(H(\lambda)\) satisfies the relation (15), and the Hamiltonian is a conserved charge itself \([H(\lambda)=Q_{2}(\lambda)\) by convention], let us try a deformation similar to (15) for the other charges:
\[i\partial_{\lambda}Q_{n}(\lambda)=\big{[}\mathcal{A}_{\lambda},Q_{n}(\lambda )\big{]}-i\mathcal{C}_{n}(\lambda), \tag{21}\]
where \(\mathcal{C}_{n}(\lambda)\) is an operator commuting with _all_ charges \(Q_{m}(\lambda)\). Denoting by \(\mathcal{E}_{n,m}(\lambda)\) the eigenvalue of \(Q_{n}(\lambda)\) that corresponds to the eigenstate \(|m(\lambda)\rangle\), we have the spectral decomposition \(\mathcal{C}_{n}(\lambda)=-\sum_{m}\partial_{\lambda}\mathcal{E}_{n,m}(\lambda)\,|m(\lambda)\rangle\langle m(\lambda)|\), which is similar to that in Eq. (16). On the other hand, \(\mathcal{C}_{n}(\lambda)\) can be written as a linear combination of conserved charges:
\[\mathcal{C}_{n}(\lambda)=\sum_{m}\alpha_{n,m}(\lambda)Q_{m}(\lambda). \tag{22}\]
Moreover, due to the fact that the charges are defined up to an arbitrary linear transformation, the coefficients \(\alpha_{n,m}(\lambda)\) in Eq. (22) can be arbitrary functions of \(\lambda\). Note that for \(n=2\) Eq. (21) reduces to Eq. (15).
Using the Jacobi identity, we immediately obtain
\[i\partial_{\lambda}\big{[}Q_{m}(\lambda),Q_{n}(\lambda)\big{]}=\Big{[} \mathcal{A}_{\lambda},\big{[}Q_{m}(\lambda),Q_{n}(\lambda)\big{]}\Big{]}. \tag{23}\]
Because the initial condition is \(\big{[}Q_{m}(0),Q_{n}(0)\big{]}=0\), the solution to Eq. (23) is identically zero, i.e. \(\big{[}Q_{m}(\lambda),Q_{n}(\lambda)\big{]}=0\), in agreement with Eq. (20). In other words, for a set of mutually commuting unperturbed charges \(Q_{n}^{(0)}\) there always exists a commutativity-preserving deformation (21). Of course, this does not imply that an arbitrary perturbation \(V(\lambda)\) is integrability-preserving. Indeed, in general the deformed charges \(Q(\lambda)\) from Eq. (21) at non-zero \(\lambda\) lose the _locality_ property. In order for the deformed charges \(Q_{n}(\lambda)\) to be local,
\[Q_{n}(\lambda)=\sum_{j}q_{n,j}(\lambda), \tag{24}\]
the AGP \(\mathcal{A}_{\lambda}\) in Eq. (21) must belong to certain special classes of operators, as was shown in Ref. [67]. Namely, \(\mathcal{A}_{\lambda}\) can be (_i_) local and \(\lambda\)-independent; (_ii_) a Boost operator constructed out of one of the charges \(Q_{n}(\lambda)\); or (_iii_) the so-called bilocal operator. If the AGP in Eq. (21) belongs to one of these three cases, then the deformed charges \(Q_{n}(\lambda)\) are of the form (24). For the sake of completeness, below we briefly discuss these three cases, before we proceed to explicitly construct the infinite family of quasi-conserved charges for the perturbed XXX chain.
The most obvious choice for the integrability-preserving deformation is the local and \(\lambda\)-independent AGP, i.e.
\[\mathcal{A}_{\lambda}^{(\text{loc})}=\sum_{j}A_{j}, \tag{25}\]
where \(A_{j}\neq A_{j}(\lambda)\) has a finite support. This is a trivial case since it corresponds to merely a basis transformation for the unperturbed Hamiltonian \(H_{0}\) with the unitary operator \(U(\lambda)=e^{-i\lambda\sum_{j}A_{j}}\). Another option for the integrability-preserving long-range deformation is the AGP of the form
\[\mathcal{A}_{\lambda}^{(\text{boost})}\propto\mathcal{B}[Q_{n}(\lambda)]= \frac{1}{2i}\sum_{j}jq_{n,j}(\lambda), \tag{26}\]
i.e. the AGP is the boost operator of the \(n\)th conserved charge (24). Finally, one can show [67] that taking the so-called _bilocal_ operator for the AGP also yields an integrability-preserving deformation:
\[\mathcal{A}_{\lambda}^{(\text{biloc})}\propto[Q_{m}(\lambda)|Q_{n} (\lambda)]\\ \equiv\frac{1}{2}\sum_{j}\{q_{m,j}(\lambda),q_{n,j}(\lambda)\}+ \sum_{i<j}\{q_{m,i},q_{n,j}\}, \tag{27}\]
where \(\{\cdot,\cdot\}\) is the anticommutator. It is straightforward to check that the commutator in Eq. (21) results in a local operator if the AGP is given by Eq. (25), (26), or (27).
Figure 2: The rescaled AGP norm \(\|\mathcal{A}_{\lambda}\|^{2}/L\) versus the system size \(L\) for the XXX model perturbed by the interaction \(V(\lambda)=\lambda\sum_{j}\boldsymbol{\sigma}_{j}\cdot\boldsymbol{\sigma}_{j+m}\) for \(2\leq m\leq 5\). In all cases the AGP norm is calculated in the direction of \(V(\lambda)\) at \(\lambda=0\). For \(m=2\), which corresponds to the next-to-nearest neighbor exchange in Eq. (8), the perturbation is weakly integrability-breaking, so that the rescaled AGP norm exhibits a polynomially bounded scaling (black dots). Data points for \(m=2\) are the same as the \(\lambda=0\) points in Fig 1. For less local perturbations with \(3\leq m\leq 5\), the AGP norm scales exponentially, which indicates that these perturbations are truly chaotic as they break the integrability strongly. The blue solid line is the exponential fit \(\|\mathcal{A}_{\lambda}\|^{2}/L\propto e^{\kappa L}\), with \(\kappa\approx\ln 2\).
For a given AGP, one can easily solve Eq. (21) perturbatively in \(\lambda\). Indeed, the long-range Hamiltonian \(H(\lambda)=H_{0}+V(\lambda)\) can be written as a formal power series in \(\lambda\) as
\[H(\lambda)=H_{0}+\sum_{k=1}^{+\infty}\lambda^{k}V_{k}. \tag{28}\]
The deformation \(V(\lambda)\) is generated by some AGP \(\mathcal{A}_{\lambda}\). Then, for this AGP and the corresponding deformed charges \(Q_{n}(\lambda)\) we write similar formal expansions:
\[\mathcal{A}_{\lambda}=\sum_{k=0}^{+\infty}\lambda^{k}\mathcal{A} ^{(k)}, \tag{29}\] \[Q_{n}(\lambda)=\sum_{k=0}^{+\infty}\lambda^{k}Q_{n}^{(k)}. \tag{30}\]
Then, from Eq. (21) we immediately obtain
\[\sum_{k=0}^{+\infty}i(k+1)\lambda^{k}Q_{n}^{(k+1)}=\sum_{p,r=0}^{ +\infty}\lambda^{p+r}\left[\mathcal{A}^{(p)},Q_{n}^{(r)}\right]\\ -i\sum_{p,r=0}^{+\infty}\frac{\lambda^{p+r}}{p!}\sum_{m}\alpha_{ n,m}^{(p)}(0)Q_{m}^{(r)}, \tag{31}\]
which allows one to construct \(Q_{n}(\lambda)\) to the desired order in \(\lambda\) iteratively. Let us emphasize that \(Q_{n}(\lambda)\) commute with each other and \(H(\lambda)\) only if one takes into account the complete infinite series in Eqs. (28), (29), and (30). Truncating the series would lead to _approximate_ conservation laws, which we are going to use below.
We also note that the construction of long-range deformed integrable Hamiltonians outlined above is deeply connected with the so-called \(T\bar{T}\)-deformations, well known in the field theory literature, see e.g. Ref. [76].
### Quasiconserved charges in the perturbed XXX model
We now return to the Heisenberg model \(H_{0}\) in Eq. (2) perturbed by the next-to-nearest exchange interaction \(V(\lambda)=\lambda V_{1}\), where
\[V_{1}=\sum_{j}\mathbf{\sigma}_{j}\cdot\mathbf{\sigma}_{j+2}, \tag{32}\]
as described in detail in Sec. II. Our aim is to prove that the perturbed XXX Hamiltonian \(H(\lambda)=H_{0}+\lambda V_{1}\) has as many quasi-conserved charges as there are exactly conserved ones for the unperturbed XXX model, a conjecture put forward in Ref. [61].
The idea of the proof is extremely simple. Let us consider an integrability-preserving long-range deformation of \(H_{0}\), generated by some appropriate AGP \(\mathcal{A}_{\lambda}\). Further, assume that one can choose \(\mathcal{A}_{\lambda}\) in such a way that to the first order in \(\lambda\) the solution to Eq. (15) is nothing else than \(H(\lambda)=H_{0}+\lambda V_{1}\). Keeping in mind that \(H(\lambda)=Q_{2}(\lambda)\), so that \(H_{0}=Q_{2}^{(0)}\) and \(V_{1}=Q_{2}^{(1)}\), one can equivalently work with Eq. (31), after setting there \(n=2\) and truncating both sides to the _zeroth_ order in \(\lambda\) [one power is lost after taking the derivative in Eq. (15)]. This yields
\[V_{1}=-i\big{[}\mathcal{A}^{(0)},H_{0}\big{]}-\sum_{m}\alpha_{m}Q_{m}^{(0)}, \tag{33}\]
where we denoted \(\alpha_{m}\equiv\alpha_{2,m}(0)\). Thus, our goal is to find the operator \(\mathcal{A}^{(0)}\) and the coefficients \(\alpha_{m}\) such that Eq. (33) is satisfied for a given perturbation \(V_{1}\). If this is possible, one can immediately construct _all_ quasi-conserved charges by truncating the series in Eq. (30) as
\[\tilde{Q}_{n}(\lambda)=Q_{n}^{(0)}+\lambda Q_{n}^{(1)}, \tag{34}\]
and then solving for \(Q_{n}^{(1)}\) from Eq. (31), truncated to the zeroth order in \(\lambda\):
\[Q_{n}^{(1)}=-i\big{[}\mathcal{A}^{(0)},Q_{n}^{(0)}\big{]}-\sum_{m}\alpha_{n,m }Q_{m}^{(0)}, \tag{35}\]
with \(\alpha_{n,m}\equiv\alpha_{n,m}(0)\) for brevity. The quasi-conserved charges \(\tilde{Q}_{n}\) commute with each other and the Hamiltonian \(H(\lambda)=H_{0}+\lambda V_{1}\) with the accuracy \(O(\lambda^{2})\)_by construction_.
We are now finally in the position to demonstrate that the algorithm outlined above can be performed for the perturbed XXX chain. Indeed, consider the zeroth order AGP of the form
\[\mathcal{A}^{(0)}=i\mathcal{B}\big{[}Q_{3}^{(0)}\big{]}=\frac{1}{2}\sum_{j}j \left(\mathbf{\sigma}_{j}\times\mathbf{\sigma}_{j+1}\right)\cdot\mathbf{\sigma}_{j+2}, \tag{36}\]
which is nothing other than the boost operator constructed from the conserved charge \(Q_{3}^{(0)}\) of the Heisenberg Hamiltonian, see Eq. (6). It is straightforward to check that for the Heisenberg Hamiltonian \(H_{0}\) from Eq. (2) one has
\[-i\big{[}\mathcal{A}^{(0)},H_{0}\big{]}=V_{1}-2H_{0}-Q_{4}^{(0)}, \tag{37}\]
where \(Q_{4}^{(0)}\) is the conserved charge of the Heisenberg model, explicitly given by Eq. (7). Therefore, given the perturbation \(V_{1}\) from Eq. (32) and the AGP from Eq. (36), one can satisfy Eq. (33) if we set the coefficients \(\alpha_{m}=2\delta_{m,2}+\delta_{m,4}\). We then immediately obtain the infinite tower of quasi-conserved charges \(\tilde{Q}_{n}(\lambda)\) for the perturbed XXX model, given in terms of the charges \(Q_{n}^{(0)}\) of the unperturbed Hamiltonian. Explicitly, from Eq. (35) for any \(n\geq 3\) we have
\[\tilde{Q}_{n}(\lambda)=Q_{n}^{(0)}-\lambda\sum_{m}\alpha_{n,m}Q_{m}^{(0)}+ \lambda\left[\mathcal{B}\big{[}Q_{3}^{(0)}\big{]},Q_{n}^{(0)}\right], \tag{38}\]
where \(\alpha_{n,m}\) are arbitrary real numbers. Therefore, we have proven the conjecture of Ref. [61] on the quasi-integrability of the perturbed XXX chain. In the same way one can easily construct other quasi-integrable models and their quasi-conserved charges. It is also straightforward to generalize this procedure to an arbitrary order in \(\lambda\).
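The construction can also be checked numerically on a small periodic chain. Reusing the dense-matrix helpers from the sketch in Sec. II (exchange, xxx_hamiltonian, cross_dot, charge_q3), the snippet below assembles \(\tilde{Q}_{3}(\lambda)=Q_{3}^{(0)}+\lambda Q_{3}^{(1)}\) with \(Q_{3}^{(1)}\) taken from Eq. (11) and monitors the Frobenius norm of \([H(\lambda),\tilde{Q}_{3}(\lambda)]\) as \(\lambda\) is reduced; if the transcribed coefficients are mutually consistent, the first-order contribution cancels and the norm should drop roughly as \(\lambda^{2}\).

```python
import numpy as np

L = 8
H0 = xxx_hamiltonian(L)
V1 = sum(exchange(L, j, j + 2) for j in range(L))            # Eq. (32)
Q3_0 = charge_q3(L)                                          # Eq. (6)
Q3_1 = sum(cross_dot(L, j, j + 1, j + 3) + cross_dot(L, j, j + 2, j + 3)
           for j in range(L))                                # Eq. (11)

for lam in (1e-1, 1e-2, 1e-3):
    H = H0 + lam * V1
    Qt3 = Q3_0 + lam * Q3_1
    comm = H @ Qt3 - Qt3 @ H
    print(lam, np.linalg.norm(comm))   # expected to decrease roughly as lam**2
```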
For completeness, in App. A we demonstrate an example of a weakly-nonintegrable model (isotropic XY chain perturbed by the next-to-nearest XY interaction), whose quasiconserved charges are constructed with the help of the AGP from the class of bilocal operators.
## V Conclusions
In this work, we have first studied weak integrability breaking in the spin-\(\frac{1}{2}\) Heisenberg chain (XXX model) perturbed by the isotropic next-to-nearest neighbor exchange interaction. We have shown that in this model the AGP norm scales with the system size in a distinct way, different from both the polynomially bounded scaling characteristic of integrability-preserving perturbations and the exponential scaling for the chaotic ones. Instead, for the perturbed XXX model we find that the (rescaled) AGP norm exhibits a sharp crossover from the regime of polynomially bounded to the exponential scaling. The crossover occurs at a critical perturbation strength that is exponentially small in the system size, and we have \(\lambda^{*}\sim e^{-0.44L}\). In the regime of exponential scaling, i.e. for \(\lambda\gtrsim\lambda^{*}\), for the rescaled AGP norm we have found \(\|\mathcal{A}_{\lambda}\|^{2}/L\propto e^{\kappa L}\), where \(\kappa=1.62\pm 0.05\). On the contrary, for \(\lambda\lesssim\lambda^{*}\) the scaling of the AGP norm is bounded polynomially, just like it is for integrability-preserving perturbations. These findings strongly support the quasi-integrability of the perturbed XXX chain conjectured in Ref. [61].
In the second part of this work we have presented an analytic proof of this conjecture. Using the algebraic methods developed in Ref. [67], we explicitly constructed an infinite tower of quasi-conserved charges, which commute with one another and the Hamiltonian of the perturbed XXX model with the accuracy \(O(\lambda^{2})\). The main idea of our proof is based on the fact that the perturbed XXX model can be viewed as the truncation to the first order in \(\lambda\) of an integrable long-range spin chain with the Hamiltonian \(H(\lambda)=H_{0}+\sum_{k=1}^{+\infty}\lambda^{k}V_{k}\), where the operators \(V_{k}\) have an increasingly large support. The generator of this long-range deformation is nothing other than the AGP, satisfying certain additional requirements needed to preserve locality. In order to generate the deformation that to the first order in \(\lambda\) gives the next-to-nearest neighbor exchange interaction, one has to take the AGP whose value at \(\lambda=0\) yields the boost operator constructed from the third conserved charge of the unperturbed XXX model.
Together, our results demonstrate that the AGP norm can be used not only to detect the onset of chaos, but also as a useful tool for distinguishing different types of integrability-breaking perturbations, i.e. the truly chaotic perturbations from those that only weakly break the integrability. In the latter case the system possesses a macroscopic number of quasi-conserved charges that can be found using the approach discussed in Sec. IV. In order to tell whether a given perturbation \(V(\lambda)\) is chaotic or weakly integrability-breaking, one simply needs to calculate the AGP in the direction of \(V(\lambda)\) at \(\lambda=0\). Then, the (rescaled) AGP norm scales exponentially for the chaotic perturbations, whereas for weakly integrability-breaking ones the scaling is bounded polynomially. We expect that our findings can be useful for the studies of transport properties in weakly-nonintegrable models, see e.g. [77; 60; 78]. It would be interesting to further investigate the spectral properties of the weakly perturbed XXX chain in the regime of \(\lambda\lesssim\lambda^{*}(L)\), and other quasi-integrable models in similar settings. For instance, from the results of Refs. [64; 65] one can expect that in this regime the level spacing statistics should not deviate significantly from the Poissonian, whereas for \(\lambda\gtrsim\lambda^{*}(L)\) it should gradually deform into the Wigner-Dyson statistics. We leave these questions for future work.
Before we finish, let us make a general remark [79] on the spectral-based criteria for quantum chaos. It is important to keep in mind that the many-body eigenstates are not observables, and it takes an exponentially large (in the system size) time to resolve the individual eigenstates corresponding to exponentially close energies. As a result, the properties of the eigenstates can also be exponentially sensitive to perturbations, which makes detecting chaos much harder. For instance, it is well known that some _non-chaotic_ systems can have Wigner-Dyson-like level spacing distributions, such as quadratic systems with two bosonic modes [80], or spin systems with a single impurity [81]. Finally, a common drawback of all spectral-based measures, including the AGP, is that they are limited to relatively small system sizes, amenable to exact diagonalization.
_Note added._ While finishing the manuscript, we have learned that the authors of Ref. [82] obtained similar results on the existence of quasi-conserved charges for weakly nonintegrable Hamiltonians. As far as our studies overlap, our results are in agreement with each other. Key differences between Ref. [82] and our work are the following: (i) we additionally investigate the scaling of the AGP norm and find that for a weakly-nonintegrable Heisenberg model it behaves in a distinct way; (ii) the authors of Ref. [82] extend the proof of quasi-integrability to Hamiltonians with higher order perturbations and illustrate the procedure with a number of different models.
## Acknowledgements
The numerical computations in this work were performed using QuSpin [83; 84]. We acknowledge useful discussions with Igor Aleiner, Boris Altshuler, Jacopo de Nardis, Anatoli Polkovnikov, and Gora Shlyapnikov. We
thank Piotr Sierant and Dario Rosa for drawing our attention to Refs. [42; 46; 31] and Ref. [47], respectively. We are grateful to an anonymous referee for very useful comments and for drawing our attention to Refs. [80; 81]. The work of VG is part of the DeltaITP consortium, a program of the Netherlands Organization for Scientific Research (NWO) funded by the Dutch Ministry of Education, Culture and Science (OCW). VG is also partially supported by RSF 19-71-10092. The work of AT was supported by the ERC Starting Grant 101042293 (HEPIQ). RS acknowledges support from Slovenian Research Agency (ARRS) - research programme P1-0402.
## Appendix A Quasi-conserved charges of the perturbed isotropic XY chain
In this appendix we discuss the construction of quasi-conserved charges for the isotropic XY chain weakly perturbed by the next-to-nearest XY interaction, studied in Ref. [62]. The Hamiltonian is given by
\[H(\lambda)=H_{0}+\lambda V_{1}, \tag{10}\]
where \(H_{0}\) is the integrable part
\[H_{0}=\sum_{j}\left(\sigma_{j}^{x}\sigma_{j+1}^{x}+\sigma_{j}^{y}\sigma_{j+1} ^{y}\right), \tag{11}\]
and \(V_{1}\) is the perturbation of strength \(\lambda\ll 1\), which reads
\[V_{1}=\sum_{j}\left(\sigma_{j}^{x}\sigma_{j+2}^{x}+\sigma_{j}^{y}\sigma_{j+2} ^{y}\right). \tag{12}\]
At \(\lambda=0\) the model is integrable (it can be mapped onto free fermions) and it has two families of conserved quantities [72]. Explicitly, the first family is given by
\[Q_{n}^{(0)}=\begin{cases}\sum\limits_{j}\left(e_{n,j}^{xx}+e_{n,j}^{yy}\right),&\quad\text{$n$ even};\\ \sum\limits_{j}\left(e_{n,j}^{xy}-e_{n,j}^{yx}\right),&\quad\text{$n$ odd}, \end{cases} \tag{13}\]
and the second family can be written as
\[I_{n}^{(0)}=\begin{cases}\sum\limits_{j}\left(e_{n,j}^{xy}-e_{n,j}^{yx}\right),&\quad\text{$n$ even};\\ \sum\limits_{j}\left(e_{n,j}^{xx}+e_{n,j}^{yy}\right),&\quad\text{$n$ odd}, \end{cases} \tag{14}\]
where \(n\geq 2\). In Eqs. (13) and (14) we introduced the operators
\[e_{n,j}^{\alpha\beta}=\sigma_{j}^{\alpha}\sigma_{j+1}^{z}...\sigma_{j+n-2}^{z }\sigma_{j+n-1}^{\beta}. \tag{15}\]
For instance, from Eq. (13) with \(n=2\) one has
\[Q_{2}^{(0)}=\sum_{j}\left(\sigma_{j}^{x}\sigma_{j+1}^{x}+\sigma_{j}^{y}\sigma_ {j+1}^{y}\right) \tag{16}\]
which is simply the isotropic XY Hamiltonian \(H_{0}\) itself, and Eq. (14) for \(n=2\) gives
\[I_{2}^{(0)}=\sum_{j}\left(\sigma_{j}^{x}\sigma_{j+1}^{y}-\sigma_{j}^{y}\sigma_ {j+1}^{x}\right), \tag{17}\]
which is the Dzyaloshinski-Moriya interaction. The conserved charges \(Q_{n}^{(0)}\) are invariant under the parity transformation \(\sigma_{j}^{\alpha}\rightarrow-\sigma_{j}^{\alpha}\), whereas the charges \(I_{n}^{(0)}\) change their sign.
As was shown in Ref. [62], the perturbed Hamiltonian (10) possesses a quasi-conserved quantity that commutes with \(H(\lambda)\) with the accuracy \(O(\lambda^{2})\). Let us construct the AGP that generates this quasiconserved charge and then construct more such charges. In analogy with the perturbed XXX model, discussed in Sec. IV, one could try to use the AGP proportional to the boost operator constructed from \(Q_{3}^{(0)}\) or \(I_{3}^{(0)}\). However, it turns out that for the isotropic XY model the boost operators \(\mathcal{B}[Q_{n}^{(0)}]\) and \(\mathcal{B}[I_{n}^{(0)}]\) with _any_ \(n\) simply generate other charges from Eqs. (13) and (14). For this reason, one cannot generate a nontrivial long-range deformation of the isotropic XY chain using only boost operators.
Therefore, we have to look for the AGP in the class of bilocal operators. Let us try the following one:
\[\mathcal{A}_{\lambda}=\frac{1}{4}[S_{z}|I_{2}(\lambda)], \tag{18}\]
where \(S_{z}=\sum_{j}\sigma_{j}^{z}\) is the \(z\)-projection of total spin and the factor of \(\frac{1}{4}\) is included for later convenience. To zeroth order in \(\lambda\), the explicit expression for the AGP reads
\[\mathcal{A}^{(0)}=\frac{1}{4}[\sigma^{z}|I_{2}^{(0)}]\\ =\frac{1}{2}\sum_{j}\sum_{r>0}\sigma_{j}^{z}(\sigma_{j+r}^{x}\sigma _{j+r+1}^{y}-\sigma_{j+r}^{y}\sigma_{j+r+1}^{x}). \tag{19}\]
Then, using Eq. (33) we obtain
\[V_{1}=\sum_{j}\sum_{\alpha=x,y}\sigma_{j}^{\alpha}\sigma_{j+2}^ {\alpha}+2\sum_{j}\sigma_{j}^{z}\sigma_{j+1}^{z}\\ -\sum_{m}\left(\beta_{m}Q_{m}^{(0)}+\gamma_{m}I_{m}^{(0)}\right), \tag{20}\]
where we took into account that the unperturbed model has two families of conserved charges. The first term on the right hand side of Eq. (20) is precisely the perturbation \(V_{1}\) from Eq. (12). However, the second term is the nearest neighbor Ising interaction, and it is clearly impossible to cancel it using the charges \(Q_{m}^{(0)}\) and \(I_{m}^{(0)}\).
The trick here is to add a correction to the AGP, which would eliminate the Ising interaction from Eq. (20). One immediately observes that this correction is exactly the AGP that generates the deformation of the isotropic XY chain into the XXZ chain with the Hamiltonian
\[H_{\text{XXZ}}=\sum_{j}\left(\sigma_{j}^{x}\sigma_{j+1}^{x}+\sigma_{j}^{y}\sigma _{j+1}^{y}-2\lambda\sigma_{j}^{z}\sigma_{j+1}^{z}\right). \tag{21}\]
Taking into account Eq. (21), let us rewrite the perturbation (12) as
\[V_{1}=-i\big{[}\mathcal{A}^{(0)},H_{0}\big{]}+\partial_{\lambda}H_{\text{XXZ}}. \tag{104}\]
Keeping in mind that the Hamiltonian \(H(\lambda)=H_{0}+\lambda V_{1}\) is a (quasiconserved) charge itself, one can see that the remaining charges are constructed in a way similar to Eq. (104). Thus, we write
\[Q_{n}^{(1)}=-i\big{[}\mathcal{A}^{(0)},Q_{n}^{(0)}\big{]}+\big{(}\partial_{ \lambda}Q_{n}^{\text{XXZ}}\big{)}\big{|}_{\lambda=0}, \tag{105}\]
where \(\mathcal{A}^{(0)}\) is given by Eq. (19), \(Q_{n}^{\text{XXZ}}\) is the \(n\)th conserved charge of the XXZ model (21), and we only keep the terms in \(Q_{n}^{\text{XXZ}}\) that are linear in \(\lambda\). The conserved charges \(Q_{n}^{\text{XXZ}}\) can be generated with the help of the boost operator as
\[Q_{n+1}^{\text{XXZ}}\propto\big{[}\mathcal{B}[H_{\text{XXZ}}],Q_{n}^{\text{ XXZ}}\big{]}. \tag{106}\]
Then, the family of charges in Eq. (13) gets deformed as
\[\tilde{Q}_{n}(\lambda)=Q_{n}^{(0)}+\lambda Q_{n}^{(1)}. \tag{107}\]
Note that the second family of charges, given by Eq. (14), is destroyed by the perturbation. It is straightforward to check that the quasiconserved charges (107) commute with each other and with the Hamiltonian of the perturbed isotropic XY chain (10) with the accuracy \(O(\lambda^{2})\).
|
2308.15730
|
Fully Embedded Time-Series Generative Adversarial Networks
|
Generative Adversarial Networks (GANs) should produce synthetic data that
fits the underlying distribution of the data being modeled. For real valued
time-series data, this implies the need to simultaneously capture the static
distribution of the data, but also the full temporal distribution of the data
for any potential time horizon. This temporal element produces a more complex
problem that can potentially leave current solutions under-constrained,
unstable during training, or prone to varying degrees of mode collapse. In
FETSGAN, entire sequences are translated directly to the generator's sampling
space using a seq2seq style adversarial auto encoder (AAE), where adversarial
training is used to match the training distribution in both the feature space
and the lower dimensional sampling space. This additional constraint provides a
loose assurance that the temporal distribution of the synthetic samples will
not collapse. In addition, the First Above Threshold (FAT) operator is
introduced to supplement the reconstruction of encoded sequences, which
improves training stability and the overall quality of the synthetic data being
generated. These novel contributions demonstrate a significant improvement to
the current state of the art for adversarial learners in qualitative measures
of temporal similarity and quantitative predictive ability of data generated
through FETSGAN.
|
Joe Beck, Subhadeep Chakraborty
|
2023-08-30T03:14:02Z
|
http://arxiv.org/abs/2308.15730v2
|
# Fully Embedded Time-Series Generative Adversarial Networks
###### Abstract
Generative Adversarial Networks (GANs) should produce synthetic data that fits the underlying distribution of the data being modeled. For real valued time-series data, this implies the need to simultaneously capture the _static_ distribution of the data, but also the full _temporal_ distribution of the data for any potential time horizon. This temporal element produces a more complex problem that can potentially leave current solutions under-constrained, unstable during training, or prone to varying degrees of mode collapse. In _FETSGAN_, entire sequences are translated directly to the generator's sampling space using a _seq2seq_ style adversarial autoencoder (AAE), where adversarial training is used to match the training distribution in both the feature space and the lower dimensional sampling space. This additional constraint provides a loose assurance that the temporal distribution of the synthetic samples will not collapse. In addition, the First Above Threshold (FAT) operator is introduced to supplement the reconstruction of encoded sequences, which improves training stability and the overall quality of the synthetic data being generated. These novel contributions demonstrate a significant improvement to the current state of the art for adversarial learners in qualitative measures of temporal similarity and quantitative predictive ability of data generated through _FETSGAN_.
## 1 Introduction
Generative modeling is the field of research concerned with producing new and unique data that is similar to the data that was used to produce the model. More specifically, we can say that this similarity is defined in terms of the ability to model the underlying distribution represented by the training data. In the case of sequential vector \(x_{1:T}\) with length \(T\), the data distribution is characterized by the temporal distribution \(p(x_{1},...,x_{T})\). Even with relatively simple datasets where the vector \(x_{t}\) is low dimensional, compounding dependencies in the temporal distribution increase with \(T\) until it becomes difficult to measure or even visualize the similarity or differences between the temporal distributions of the training data and the generated data. Generative Adversarial Networks (GANs) have demonstrated exceptional ability in modeling complex distributions such as these. However, GANs are notoriously difficult to train, with instability often preventing convergence and the final generative models featuring some degree of mode collapse, where only a portion of the full target distribution is represented in the synthetic samples.
Like much of the work surrounding GANs, the novel process presented here provides additional constraints to the adversarial learning process that regularizes learning, resulting in greater stability during training, higher quality data, and less susceptibility to mode collapse, specifically in the temporal distributions. The architecture presented is a modification of RCGAN[1] or C-RNN-GAN[2] that features a _seq2seq_ style AAE as encoder and decoder for data generation [3], [4]. The complete model is visualized in Fig. 1. There are three primary benefits to using an adversarial autoencoder. First, there is an additional constraint that matches the posterior distribution of the encodings to the prior distribution of the samples at inference time, further combating mode collapse beyond feature-level adversarial training. Second, the _seq2seq_ encoder is forced to summarize the entire sequence at once, allowing it to capture relevant time dependencies of arbitrary length. This can be compared to the approach in the TimeGAN, where the regularizing effect of the teacher-forced supervised losses of the encodings are only applied one time-step into the future [5]. Finally, the posterior encodings of adversarial autoencoders are natively interpretable. This allows fine control over the style of the synthetic data being generated, even in the completely unsupervised learning setting of this work.
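The latent-space constraint referred to above is the standard adversarial-autoencoder objective; the exact networks, priors, and loss weights used in _FETSGAN_ are not reproduced here. As a generic, hedged sketch (PyTorch, our own function and argument names, a Gaussian prior assumed), the posterior of the full-sequence encodings can be pushed toward the sampling prior as follows.

```python
import torch
import torch.nn.functional as F

def latent_adversarial_losses(z_posterior, latent_disc, prior_std=1.0):
    """Generic AAE-style latent matching: a discriminator learns to separate
    encoder outputs from prior samples, and the encoder is trained to fool it.
    z_posterior: (batch, latent_dim) encodings of entire sequences."""
    z_prior = prior_std * torch.randn_like(z_posterior)
    # discriminator loss: prior samples are "real", posterior encodings are "fake"
    d_real = latent_disc(z_prior)
    d_fake = latent_disc(z_posterior.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    # encoder loss: make posterior encodings indistinguishable from the prior
    g_fake = latent_disc(z_posterior)
    g_loss = F.binary_cross_entropy_with_logits(g_fake, torch.ones_like(g_fake))
    return d_loss, g_loss
```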
The decoder in our framework can potentially suffer from the same supervised training problems as any other autoregressive model where compounding errors over time across shared weights can cause slow or unstable training, especially during the reconstruction of long encoded sequences. Typically, this is resolved with some variation of teacher-forcing [6], [7], where the network is provided the ground truth from the previous timestep, and learns to predict only one timestep into the future. This method often leaves the network with some degree of exposure bias, where the compounding error of the model's own predictions is neglected. Additionally, in the context of real-valued time-series generation, statistical properties of the data may produce harsh local minima for regression-based optimization. In this work, we do not use an autoregressive decoder at all. Instead, our solution for the reconstruction loss is the First Above Threshold (FAT) loss. In this scheme, stochastic gradient descent is only applied to the model parameters at one time instance per generated sequence. This allows the network to learn progressively longer sequences during training, and can be applied to any element-wise loss function in a supervised or unsupervised manner. The work described here is an
Figure 1: The overall training scheme is shown. Above the dashed line, the RNN style architectures are detailed, showing the model outputs (red) of each network as a function of the inputs. \(h_{t}^{(d)}\) indicates a hidden state of the network, where \(d\) represents the weights associated with a specific depth and \(t\) represents that state at any specific time. FC indicates a fully connected output layer, and these weights are shared for every output across time, i.e. in the case of the generator \(\hat{x}_{1:T}\). Below the dashed line, the training flow is visualized. The mechanisms for producing _FETSGAN_’s five objective functions are shown in red.
extension of RCGAN and a reformulation of TimeGAN that utilizes a supervised loss in the feature space as well as an unsupervised loss on the encodings of the data, which produces improved matching of the temporal distribution of the training data. In addition, FAT loss can improve the quality of the data being generated, as well as produce a standardizing effect on the training dynamics. For our experiments, stocks data, energy usage data, metro traffic data, and a synthetic sine wave dataset are used. The sequences produced are qualitatively analyzed to show significant improvement in preventing mode collapse along the temporal distribution. In addition, we demonstrate the ability to selectively sample our model at inference time to produce realistic data of a specific style. Finally, we measure the performance of our model against the primary adversarial learners in terms of predictive and discriminative scores. _FETSGAN_ shows significant improvement over these methods in all stated dimensions of analysis.
## 2 Related Work
As a generative model, _FETSGAN_ builds on the adversarial training scheme of RCGAN [1] with mixed supervised and unsupervised learning elements, along with a Recurrent Neural Network (RNN) style architecture for the Encoder, Decoder, and Discriminator, which are common for many sequence modeling tasks [8]. The work most closely matching this description is TimeGAN [5], where the key distinction is our use of an AAE for full time sequences, as opposed to the element-wise encodings in TimeGAN. There is also the application-specific R-GAN, which combined time-series heuristics such as Fourier representations with a WGAN approach to produce synthetic energy consumption data [9], [10]. COT-GAN is a recent approach to this problem that defines an adversarial loss through an expansion of the Sinkhorn Divergence into the time domain [11]. Due to the inherent instability of adversarial training, [12] regularizes training with a contrastive model and a training scheme grounded in imitation learning. The experimental comparison here is limited to methods that use the most straightforward approach of applying adversarial learning to the feature space, or to latent encodings of the feature space, namely TimeGAN and RCGAN. This is partly due to the lack of working implementations of the alternative approaches, partly due to the similarity of our approach to these methods, and partly because such direct adversarial approaches remain the preeminent method of time-series data generation in practice, with applications in fields such as medicine [13] and energy management [14], [15].
Beyond the direct comparisons with other models that produce realistic time-series data, _FETSGAN_ also incorporates an interpretive latent space, allowing the selective sampling of the posterior distribution at inference time to reach some desired effect, specified by the user. This bears a direct connection to the field of representation learning, where data is compressed in a meaningful way in order to accomplish some downstream task. This has been accomplished on real valued time series data, as data is embedded in RNN style architectures for the purpose of forecasting [16], supervised learning [17], and data imputation [18]. Perhaps the most well-known interpretive generative models are variational autoencoders (VAEs) [19]. The alternative AAE used here has a close resemblance to this approach, as described in [3], with the primary benefit in our use case being the ability to choose an arbitrary prior distribution instead of a standard Gaussian. The simplistic AAE used in this work has also been extended to utilize Wasserstein loss [20] in the image-generation space, demonstrating some improved stability that is typical of WGANs.
A theme of this work and all recent work on the creation of synthetic time-series is the regularization or complete abandonment of adversarial training as a means to produce more stability, and thus higher quality data. Obviously, this challenge is not unique to the time-series domain, and inspiration can be drawn from any generation process that utilizes adversarial learning, particularly image creation. Given the _seq2seq_ translation element of _FETSGAN_, it stands to reason that inspiration could be drawn from image-to-image translation techniques. We can see that the regularization effect of reconstruction loss or cycle loss is present in many adversarial approaches [21], [22]. WGANs [9] and LSGANs [23] focus primarily on the output layer of the discriminator and the loss function to produce a regularizing effect that reduces exploding or vanishing gradients. Spectral normalization [24], particularly in the weights of the discriminator, assures Lipschitz continuity in the gradients, further ensuring training stability. While not directly applicable to the RNN architecture itself, batch normalization in the discriminator has also demonstrated an ability to speed up adversarial training [25]. Spectral normalization is utilized in the linear layers of our discriminators during training, while the reconstruction loss bears a resemblance to the cycle-consistency loss that has a regularizing effect on training.
## 3 Proposed Method
In this section, the proposed method is described in full detail. Additional motivation for the work is also provided.
### Problem Formulation
Consider the random vector \(X\in\mathcal{X}\), where individual instances are denoted by \(x\). We operate in a discrete time setting where fixed time intervals exist between samples \(X_{t}\), forming sequences of length \(T\), such that \((X_{1},...,X_{T}):=X_{1:T}\in\mathcal{X}^{T}\). We note that \(T\) may be a constant value or may be a random variable itself, and that the proposed methodology does not differ in either case. The representative dataset of length \(N\) is given by \(\mathcal{D}=\{x_{n,1:T}\}_{n=1}^{N}\). For convenience, the subscript \(n\) is omitted in further notation.
The data distribution \(p(X_{1:T})\) describes the sequences, such that any possible conditional \(p(X_{t}\mid X_{t-i})\) for \(t\leq T\) and \(t-i\geq 0\) is absorbed into \(p(X_{1:T})\). Thus, the overall learning objective is to produce the model distribution \(\hat{p}(X_{1:T})\),
\[\min_{\hat{p}}D(p(X_{1:T})\mid\mid\hat{p}(X_{1:T})) \tag{1}\]
where \(D\) is some measure of divergence between the two distributions. We note here that RCGAN [1] applies adversarial loss that minimizes this divergence directly. In the following works, higher quality data is generated by supplementing this loss function with a supervised loss that is more stable [5], or by avoiding adversarial loss altogether [12]. While the adversarial autoencoder in Section 3.2 is intended to supplement and further stabilize training as well, we also highlight the additional ability of the FAT operator to stabilize the minimization of this objective on reconstructions in Section 3.3.
### Adversarial Autoencoder
In order to represent complex temporal relationships, we would like to leverage the ability to encode the entire time-series as a low dimensional vector in a _seq2seq_ style model. We introduce the random variable \(Z\in\mathcal{Z}\) as an intermediate encoding for producing \(\hat{p}(X_{1:T})\). We also introduce \(\eta_{1:T}\) as a random noise vector from a known distribution \(p_{\eta}\) sampled independent of time. Let \(p_{z}(Z)\) be a known prior distribution, the encoder function \(e:(\mathcal{X}^{T},\eta^{T})\rightarrow\mathcal{Z}\) have the encoding distribution \(q(Z\mid X_{1:T},\eta_{1:T})\), and the generator (or decoder) function \(g:(\mathcal{Z},\eta^{T})\rightarrow\mathcal{X}^{T}\) have the decoding distribution \(\hat{p}(X_{1:T}\mid Z,\eta_{1:T})\).
Also, we can define an ideal mapping function that maps sequences to vectors \(M:(\mathcal{X}^{T},\eta^{T})\rightarrow\mathcal{M}\), such that \(M_{X}=M(X_{1:T},\eta_{1:T})\) and \(p_{M}(M_{X})=p(X_{1:T})p_{\eta}(\eta_{1:T})\). The aggregated posterior can now be defined,
\[q(Z_{X})=\int_{M_{X}}q(Z\mid X_{1:T},\eta_{1:T})p_{M}(M_{X})dM_{X} \tag{2}\]
Similar to Eq. (1), encoder training occurs by matching this aggregated posterior distribution \(q(Z_{X})\) to an arbitrary prior distribution \(p_{Z}(Z)\).
\[\min_{q}D(p_{z}(Z)\mid\mid q(Z_{X})) \tag{3}\]
The posterior distribution \(q(Z_{X})\) is equivalent to the universal approximator function in [3], where the noise \(\eta_{t}\) exists to provide stochasticity to the model in the event that not enough exists in the data \(\mathcal{D}\) for \(q(Z_{X})\) to match \(p_{z}(Z)\) deterministically.
The generator is trained through reconstructing the data distribution \(p(X_{1:T})\) while being conditioned on the encodings.
\[\min_{\hat{p}}D(p(X_{1:T})\mid\mid\hat{p}(X_{1:T}\mid Z_{X},\eta_{1:T})) \tag{4}\]
If divergence is minimized in Eq. (3) and Eq. (4), then the encoder is able to perfectly replicate the prior distribution, and the generator is perfectly able to reconstruct \(X_{1:T}\), resulting in a complete model of \(p(X_{1:T})\) when the prior distribution \(p_{z}(Z)\) is sampled and decoded at inference time.
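As a rough sketch of this structure (the module, dimension names, and the tanh squashing to the assumed \(\mathcal{U}(-1,1)\) prior support are illustrative choices, not the exact implementation), the _seq2seq_ AAE could be written as:

```python
import torch
import torch.nn as nn

class Seq2SeqAAE(nn.Module):
    """Minimal seq2seq adversarial autoencoder: encode x_{1:T} into a single code z,
    then decode by feeding that code (plus noise) to the decoder at every timestep."""
    def __init__(self, x_dim, z_dim, noise_dim, hidden=64):
        super().__init__()
        self.enc_rnn = nn.GRU(x_dim + noise_dim, hidden, batch_first=True)
        self.enc_out = nn.Linear(hidden, z_dim)
        self.dec_rnn = nn.GRU(z_dim + noise_dim, hidden, batch_first=True)
        self.dec_out = nn.Linear(hidden, x_dim)

    def encode(self, x, eta):
        # x: (B, T, x_dim), eta: (B, T, noise_dim) noise sampled independently of time
        _, h = self.enc_rnn(torch.cat([x, eta], dim=-1))
        return torch.tanh(self.enc_out(h[-1]))        # z_x: (B, z_dim), kept in (-1, 1)

    def decode(self, z, eta):
        # non-autoregressive: the same z is concatenated to the input at every timestep
        z_rep = z.unsqueeze(1).expand(-1, eta.size(1), -1)
        out, _ = self.dec_rnn(torch.cat([z_rep, eta], dim=-1))
        return self.dec_out(out)                       # x_bar: (B, T, x_dim)
```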
### First Above Threshold (FAT) Operator
In our formulation, the generator model minimizes two measures of divergence. First, there is the direct matching of the target distribution through adversarial training, characterized by Eq. (1). Then, there is the secondary reconstruction loss from the intermediate encodings, characterized by Eq. (4). Reconstructions of time series data are denoted \(\bar{x}_{1:T}=g(e(x_{1:T}))\). The simplest loss function to enforce reconstruction of the original data might be,
\[\mathcal{L}=\mathbb{E}_{x_{1:T}\sim p}(\sum_{t}\|x_{t}-\bar{x}_{t}\|^{2}) \tag{5}\]
where this objective is minimized in tandem by the generator and the encoder. This may prove to be challenging as our generator model is not an autoregressive model. It takes the intermediate encodings as input at every timestep, as shown in Fig. 1. This reduces the possibility of a vanishing gradient to the encoder itself, and also prevents the generator from forgetting the initial encoding for long sequences. This does, however, make learning through reconstruction more difficult due to the inability to incorporate teacher forcing methods. In addition, time-series reconstruction with real-valued data faces the optimization problem of large local minima, where the model may collapse into producing only mean values across any one time-series or the mean of the entire dataset. To alleviate these problems, we propose the solution of only applying reconstruction loss at _one_ time instance \(t=\tau\) in the sequence, instead of the entire time series at once. Thus, the reconstruction loss,
\[\mathcal{L}_{recon}=\mathbb{E}_{x_{1:T}\sim p}(\|x_{\tau}-\bar{x}_{\tau}\|^{2}) \tag{6}\]
trains the generator in a supervised manner. The question remains which time instance \(\tau\) to choose. We propose a simple solution. Prior to training, we define a real-valued threshold \(\epsilon\), such that any error that has compounded to produce the reconstruction \(\bar{x}_{t}\) is _acceptable_ so long as \(\|x_{t}-\bar{x}_{t}\|^{2}<\epsilon\). In this way, time-series reconstructions are progressively learned from short-term to long-term, instead
Figure 2: For the encoding and reconstruction of \(\bar{x}_{1:T}=g(e(x_{1:T}))\), the reconstruction objectives of Eq. (5) and Eq. (6) are compared for the sines dataset. The average value for each epoch in \(200\) epochs of training is plotted. Here, training occurs under the complete model objective of Eq. (12). In the first row, we can see that the model is unable to apply adversarial learning and simultaneously learn the proper encodings to reconstruct \(x_{1:T}\) under Eq. (5). This causes the optimization to immediately fall into a local minimum. With \(\tau\) gradually increasing to progressively learn longer sequences in the second row, the \(\mathrm{FAT}_{t}\) operation facilitates minimizing reconstruction loss better than applying Eq. (5) directly.
of all at once. To this end, the First Above Threshold (FAT) operator is introduced. This operator takes as input a sequence \(l_{1:T}\) and a threshold \(\epsilon\), such that the minimum value for \(t\) is returned where \(l_{t}>\epsilon\). In the event that \(l_{t}<\epsilon\,\forall\,t\in T\), \(\mathrm{FAT}_{t}(l_{1:T},\epsilon)=\mathrm{arg\,max}_{t}(l_{1:T})\). With this newly defined operator,
\[\tau=\mathrm{FAT}_{t}(\|x_{t}-\bar{x}_{t}\|^{2},\epsilon) \tag{7}\]
defines \(\tau\), thus defining the complete form of Eq. (6). The benefits are twofold. First, a progressive learning approach stabilizes the early portion of training, as the objective function may be less likely to become stuck at a local minimum while trying to encode and reconstruct long sequences all at once. Second, updating the parameters corresponding to only one time instance at a time has a regularizing effect by providing a more granular gradient that is less likely to interfere with the adversarial training for both the encoder and generator. The combination of both of these effects is demonstrated in Fig. 2, as the training dynamics of the model are compared between using the reconstruction loss of Eq. (5) and Eq. (6). We show that the application of the objective in Eq. (6) actually minimizes the loss of Eq. (5) more effectively than applying it directly on the sines dataset. The efficacy of the \(\mathrm{FAT}_{t}\) operator can be expected to grow with longer, more complex sequences.
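A minimal PyTorch sketch of the \(\mathrm{FAT}_{t}\) selection and the resulting reconstruction loss (tensor shapes and names are illustrative) could look like:

```python
import torch

def fat_index(x, x_bar, eps):
    """First Above Threshold: per sequence, return the earliest timestep whose squared
    reconstruction error exceeds eps; if no timestep does, return the argmax error instead."""
    err = ((x - x_bar) ** 2).sum(dim=-1)                     # (B, T) per-timestep error
    B, T = err.shape
    t_idx = torch.arange(T, device=err.device).expand(B, T)
    first = torch.where(err > eps, t_idx, torch.full_like(t_idx, T)).min(dim=1).values
    fallback = err.argmax(dim=1)
    return torch.where(first < T, first, fallback)           # (B,) one index per sequence

def fat_recon_loss(x, x_bar, eps=0.1):
    tau = fat_index(x, x_bar, eps)
    idx = tau.view(-1, 1, 1).expand(-1, 1, x.size(-1))       # gather the chosen timestep
    return ((x.gather(1, idx) - x_bar.gather(1, idx)) ** 2).sum(dim=-1).mean()
```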
### Complete Model
Time series data collected in the physical world such as sensor measurements will have some degree of stochasticity. We cannot replicate this stochasticity using a reconstruction objective alone. To this end, we introduce the feature space discriminator \(d_{x}:\mathcal{X}^{T}\rightarrow\mathcal{Y}^{T}\) which maps sequences to classifications, such that \(y_{1:T}=d_{x}(x_{1:T})\) and \(\hat{y}_{1:T}=d_{x}(\bar{x}_{1:T})\). This loss is applied to the reconstructions \(\bar{x}_{1:T}\), such that the objective can be minimized both through the encoder and the generator. In the Least Squares GAN form, the objective functions of the discriminator and generator are described by,
\[\mathcal{L}_{dx} =\frac{1}{2}\mathbb{E}_{x_{1:T}\sim p}(\sum_{t}\|1-y_{t}\|^{2})+ \frac{1}{2}\mathbb{E}_{x_{1:T}\sim p}(\sum_{t}\|\hat{y}_{t}\|^{2}) \tag{8}\] \[\mathcal{L}_{fx} =\frac{1}{2}\mathbb{E}_{\bar{x}_{1:T}\sim p}(\sum_{t}\|1-\hat{y}_ {t}\|^{2}) \tag{9}\]
We now introduce the encoding discriminator \(d_{z}:\mathcal{Z}\rightarrow\mathcal{Y}_{\mathcal{Z}}\) such that \(y_{z}=d_{z}(z)\) and \(\hat{y}_{z}=d_{z}(z_{x})\). Also in the LSGAN form,
\[\mathcal{L}_{dz} =\frac{1}{2}\mathbb{E}_{z\sim p_{z}}(\|1-y_{z}\|^{2})+\frac{1}{2} \mathbb{E}_{z_{x}\sim q}(\|\hat{y}_{z}\|^{2}) \tag{10}\] \[\mathcal{L}_{ez} =\frac{1}{2}\mathbb{E}_{z_{x}\sim q}(\|1-\hat{y_{z}}\|^{2}) \tag{11}\]
describe the objective functions for the discriminator and encoder, respectively.
Putting everything together, there are three measures of divergence we will minimize with our complete model. The adversarial training between the objective functions described by Eq. (8) and Eq. (9) minimize Eq. (1) directly, where \(D\) is the \(\chi^{2}\)-divergence in the LSGAN formulation. The adversarial training described by the objective functions Eq. (10) and Eq. (11) apply to the divergence of Eq. (3), also minimizing \(\chi^{2}\)-divergence of the intermediate encodings. Finally, Eq. (6) corresponds to Eq. (4), minimizing the Kullback-Leibler (KL) divergence through maximum likelihood (ML) supervised training. In total, all parameter optimization occurs through,
\[\min_{e}\min_{g}(\lambda\mathcal{L}_{recon}+\mathcal{L}_{ez}+\mathcal{L}_{fx})\] \[\min_{d_{z}}(\mathcal{L}_{dz})\] \[\min_{d_{x}}(\mathcal{L}_{dx}) \tag{12}\]
thus describing the complete objective of the _FETSGAN_ architecture.
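Putting the pieces together, one training iteration might be organized as in the sketch below. It reuses the `Seq2SeqAAE` and `fat_recon_loss` sketches above; the discriminators `d_x`, `d_z` and the optimizers are assumed to exist and are named for illustration only, with `d_x` scoring every timestep of a sequence and `d_z` scoring an encoding, both in the LSGAN form.

```python
def train_step(x, eta, z_prior, model, d_x, d_z, opt_g, opt_dx, opt_dz, lam=10.0, eps=0.1):
    # encoder + generator: weighted reconstruction plus both adversarial "fool" terms
    z_x = model.encode(x, eta)
    x_bar = model.decode(z_x, eta)
    loss_g = (lam * fat_recon_loss(x, x_bar, eps)
              + 0.5 * ((1 - d_x(x_bar)) ** 2).sum(dim=1).mean()   # Eq. (9)
              + 0.5 * ((1 - d_z(z_x)) ** 2).mean())                # Eq. (11)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # feature-space discriminator, Eq. (8)
    x_fake = model.decode(model.encode(x, eta), eta).detach()
    loss_dx = (0.5 * ((1 - d_x(x)) ** 2).sum(dim=1).mean()
               + 0.5 * (d_x(x_fake) ** 2).sum(dim=1).mean())
    opt_dx.zero_grad(); loss_dx.backward(); opt_dx.step()

    # encoding discriminator, Eq. (10), with z_prior ~ p_z (e.g. uniform on [-1, 1])
    loss_dz = (0.5 * ((1 - d_z(z_prior)) ** 2).mean()
               + 0.5 * (d_z(model.encode(x, eta).detach()) ** 2).mean())
    opt_dz.zero_grad(); loss_dz.backward(); opt_dz.step()
```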
### Implementation
The parameters of the model are \(\lambda\) and \(\epsilon\). Additionally, while not described in notation, the dimensionality of \(p_{z}\) and \(p_{\eta}\) are also parameters of the model. Beyond noting that the dimensions of \(p_{z}\) should
correspond with the overall complexity of the data for maximum interpretability, the model is robust against choices in these parameters. To demonstrate this, the parameters are fixed for all experiments. The primary parameters are \(\lambda=10\) and \(\epsilon=0.1\). The prior distribution \(p_{z}\) and the noise distribution \(p_{\eta}\) contain four dimensions, and both are sampled from \(p_{z},p_{\eta}\sim\mathcal{U}(-1,1)\). All models are trained with the Adam optimization strategy [26], where the learning rate for the generator, encoder and both discriminators is \(0.001\). These learning rates decay exponentially in the last \(10\%\) of epochs. The full model implementation in PyTorch and instructions for experimental reproduction are provided in the linked repository. 1
Footnote 1: [https://github.com/jbeck9/FETSGAN](https://github.com/jbeck9/FETSGAN)
## 4 Experiments
### Experimental Setup
For our experiments, we compare the performance of our model primarily against TimeGAN [5] and RCGAN [1]. As a baseline comparison, we have also included a purely autoregressive method that was trained using only teacher forcing. The contrastive imitation [12] and causal optimal transport methods [11] are omitted at the time of writing due to a lack of available implementations for reproduction. Due to those limitations, we limit the scope of our conclusions to time series models with adversarial learning applied directly to the feature space, or low dimensional encodings of the feature space.
The efficacy of our model is shown along three dimensions. First, we demonstrate the qualitative similarity between the original data and our generated data in Section 4.2. Then, we demonstrate the unique ability of our method to interpret the prior sampling distribution as a way of providing selective samples that are similar in style to specific samples from the original dataset in Section 4.3. Finally, we demonstrate that under strenuous classification and prediction tasks, our method holds state of the art performance among adversarial learners that apply directly on the feature space for time series data in Section 4.4.
We use four datasets for analysis. First, we generate a one-dimensional sines dataset of length \(T=100\), where the amplitude, frequency, and phase for each sequence are sampled from a uniform distribution. We also use the six-dimensional historical Google stocks dataset, containing
Figure 3: The qualitative visualization results are shown. Sines data is shown on the top row, where the frequency, amplitude, and phase (left to right) of each sequence’s dominant DFT component are shown as both a histogram and a kernel density estimate (KDE). On the bottom row, we show a TSNE visualization for the stock dataset (left) and energy data (right), following the same procedure from [5]. In all graphs, real data \(x_{1:T}\) is shown in green, synthetic data \(\hat{x}_{1:T}\) is in orange, and the synthetic data generated by TimeGAN is shown in blue. _FETSGAN_ produces data that more closely matches the original data by a substantial margin.
stock price information from 2004 to 2019 of various lengths \(T\). Finally, we use 6 dimensions of the UCI Appliances energy dataset [27] and real-valued traffic and weather data from the UCI Metro Interstate dataset [28].
### Distribution Matching
Visualizing synthetic data generated from an adversarial learning process is important to analyze the extent of mode collapse that may have occurred. This task is tricky for time series data, as it is possible that temporal mode collapse could exist, but be obscured if the temporal dimension is flattened for analysis. In the case of the sines dataset, we can reduce the sequences to a single dimension by simply capturing the _dominant frequency_ in the sequence, using the Discrete Fourier Transform (DFT). Here, the dominant frequency is given by \(\arg\max_{f}DFT(x_{1:T})\). Since the original data consists of a sine wave with a single frequency, this provides a valid analysis of the entire sequence. The amplitude and phase of the corresponding dominant frequency are taken as well. A histogram can then be produced, comparing the distribution of frequencies, amplitudes, and phases in each dataset. For the stocks and energy datasets, we settled for a visual comparison in the flattened temporal dimension using TSNE visualization [29], repeating the procedure reported in [5]. The results of these visualizations are shown in Fig. 3. While the results for TimeGAN generally match what is shown in [5], _FETSGAN_ demonstrates a substantial improvement over TimeGAN, RCGAN, and the baseline autoregressive models in matching the underlying distribution for all datasets used.
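A small sketch of that reduction (assuming a 1-D real-valued sequence and unit sample spacing; removing the mean before picking the peak is an illustrative choice to avoid the DC term dominating) is:

```python
import numpy as np

def dominant_component(x):
    """Return (frequency, amplitude, phase) of the dominant DFT component of a 1-D sequence."""
    spec = np.fft.rfft(x - x.mean())           # drop the DC term before picking the peak
    k = int(np.argmax(np.abs(spec)))
    freqs = np.fft.rfftfreq(len(x), d=1.0)     # cycles per sample
    amp = 2.0 * np.abs(spec[k]) / len(x)       # approximate amplitude of a real sinusoid
    return freqs[k], amp, np.angle(spec[k])
```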
### Selective Sampling
An obvious use case is that, at inference time, there may be a need to produce a specific _style_ of data, or to sample from a specific portion of the data distribution \(p(X_{1:T})\). Because our model performs dimensionality reduction through an encoder that is constrained to match \(q\) to \(p_{z}\), we can leverage spatial relationships in the latent space \(z\) to selectively sample from the prior distribution \(p_{z}\) at inference time. This allows us to produce synthetic data that retains a specific style. To demonstrate this, three sine waves were taken from the data, corresponding to sequences \(x=sin(2\pi ft)\) with \(f=2,5,8\). These sequences were then encoded as \(z_{x}=e(x)\). Finally, new sequences were generated by adding noise to these encodings, such that \(x_{\eta}=g(z_{x}+\eta)\), where \(\eta\sim\mathcal{N}(0,0.1)\). The results in Fig. 4 show that, as expected, spatial relationships are maintained in the latent space \(z\). We are able to produce synthetic sine waves that remain close in style to the anchor point \(x\) they were sampled near, allowing the ability to produce synthetic sequences within an expected range of style.
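In code, this selective sampling is only a few lines, as in the sketch below (reusing the `Seq2SeqAAE` sketch above; `eta_enc` and `eta_dec` are noise tensors of shapes `(1, T, noise_dim)` and `(n, T, noise_dim)`):

```python
import torch

def sample_near_anchor(model, x_anchor, eta_enc, eta_dec, n=100, sigma=0.1):
    """Generate n synthetic sequences whose style stays close to an anchor sequence
    by perturbing its encoding: x_eta = g(z_x + eta), eta ~ N(0, sigma^2)."""
    with torch.no_grad():
        z_x = model.encode(x_anchor.unsqueeze(0), eta_enc)     # (1, z_dim)
        z_s = z_x + sigma * torch.randn(n, z_x.size(-1))       # (n, z_dim) perturbed codes
        return model.decode(z_s, eta_dec)                       # (n, T, x_dim)
```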
### Performance Metrics
To compare quantitative performance between models, we apply two testing metrics, _discriminative score_ and _predictive score_. The discriminative score is measured by training an ad hoc RNN classifier to discriminate between the real dataset and a static synthetic dataset generated by each model. The best model will have the lowest score \(0.5-\mathrm{pred}\), corresponding to how far the predictions were below the
Figure 4: Selective sampling of the prior distribution \(p_{z}\) is shown. Sines data is shown, where \(100\) random samples \(z_{s}\) were taken near the encodings of three sine waves with frequencies \(f=2,5,8\) and then fed to the generator. Similar to Fig. 3, the corresponding histogram and KDE plot for \(DFT(g(z_{s}))\) is shown on the left. Projections of the intermediate encodings are shown on the right, demonstrating spatial interpretability.
decision boundary, where a score of \(0\) corresponds to indistinguishable data. In the case of prediction, the "Train on Synthetic, Test on Real" (TSTR) approach is used [1]. A simple RNN is trained as a forecasting model, predicting 1, 3, and 5 steps into the future, given the sequence \(x_{1:t}\) for any valid step where \(t+\mathrm{step}\leq T\) on synthetic data. Then, the MAE prediction error of the trained model is measured on the real dataset. The best model is the one which produces the lowest prediction error on real data. Whenever the model under test calls for an RNN style network, a Gated Recurrent Unit (GRU) of 64 cells and 3 layers was used. For each architecture, 3 models were trained and sampled 5 times. Thus, 15 tests were conducted for each value in Table 1. Training for all models occurred under 1000 epochs using the Adam optimizer with a learning rate of \(0.001\), including the classification and prediction models. Variations of _FETSGAN_ are included to analyze sources of gain. _FETSGAN_-FAT removes the \(\mathrm{FAT}_{t}\) operation by replacing Eq. (6) with Eq. (5). _FETSGAN_-FD trains without the feature space discriminator \(d_{x}\). The complete model scores best overall, and the variations of _FETSGAN_ show statistically significant improvement over RCGAN and TimeGAN in all cases. We note that in the case of the noisy and lengthy Energy dataset, only the complete _FETSGAN_ model was able to produce data realistic enough to completely fool the ad-hoc discriminator with a score under \(0.1\) for all experiments. The number of trainable parameters for each model is also shown in Table 1, which was fixed for all experiments.
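As a rough sketch of the TSTR predictive score (dataset tensors of shape `(N, T, d)` are assumed; full-batch training is used here for brevity, so the exact details differ from the evaluation protocol above):

```python
import torch
import torch.nn as nn

def tstr_mae(synthetic, real, step=1, epochs=1000, hidden=64, layers=3, lr=1e-3):
    """Train-on-Synthetic, Test-on-Real: fit a GRU forecaster `step` ahead on the
    synthetic sequences, then report its MAE on the real sequences."""
    class Forecaster(nn.Module):
        def __init__(self, d):
            super().__init__()
            self.rnn = nn.GRU(d, hidden, layers, batch_first=True)
            self.out = nn.Linear(hidden, d)
        def forward(self, x):
            h, _ = self.rnn(x)
            return self.out(h)          # prediction of x_{t+step} at every position t

    model = Forecaster(synthetic.size(-1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(model(synthetic[:, :-step]), synthetic[:, step:])
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return (model(real[:, :-step]) - real[:, step:]).abs().mean().item()
```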
## 5 Conclusion
In this paper, we introduce _FETSGAN_, a novel approach to real-valued time series generation that combines feature space adversarial learning with the adversarial autoencoder framework. In addition, we introduce the \(\mathrm{FAT}\) operator, which provides a regularizing effect on training complex temporal sequences that are produced from an intermediate encoding. Finally, the method shown here provides an interpretable latent space, allowing higher flexibility for selective sampling at inference time. We demonstrate significant improvement over current adversarial methods applied directly to the feature space or encodings thereof. In future work, we intend to leverage the accuracy and interpretability of this model on a variety of datasets to demonstrate the real-world utility of synthetic data in aiding applied machine learning models for forecasting and classification.
| Model | Metric | Sines | Energy | Stocks | Metro |
|---|---|---|---|---|---|
| T-Forcing (64,134 parameters) | +1 Step Predict | .011 ± .004 | .018 ± .002 | .047 ± .003 | .093 ± .002 |
| | +3 Step Predict | .083 ± .010 | .028 ± .002 | .064 ± .006 | .168 ± .004 |
| | +5 Step Predict | .421 ± .220 | .034 ± .003 | .084 ± .014 | .223 ± .013 |
| | Dis. Score | .494 ± .008 | .246 ± .053 | .243 ± .031 | .264 ± .021 |
| RCGAN (93,717 parameters) | +1 Step Predict | .011 ± .002 | .035 ± .007 | .026 ± .001 | .065 ± .002 |
| | +3 Step Predict | .038 ± .010 | .045 ± .007 | .073 ± .020 | .134 ± .003 |
| | +5 Step Predict | .073 ± .020 | .054 ± .008 | .094 ± .026 | .180 ± .003 |
| | Dis. Score | .467 ± .040 | .490 ± .011 | .459 ± .018 | .114 ± .031 |
| TimeGAN (337,858 parameters) | +1 Step Predict | .046 ± .043 | .021 ± .002 | .039 ± .001 | .146 ± .038 |
| | +3 Step Predict | .133 ± .121 | .026 ± .001 | .048 ± .002 | .271 ± .047 |
| | +5 Step Predict | .167 ± .132 | .031 ± .001 | .053 ± .002 | .330 ± .054 |
| | Dis. Score | .317 ± .105 | .164 ± .066 | .268 ± .054 | .374 ± .104 |
| FETSGAN-FAT (262,686 parameters) | +1 Step Predict | .016 ± .004 | .025 ± .003 | .030 ± .001 | .063 ± .001 |
| | +3 Step Predict | .041 ± .006 | .029 ± .001 | .039 ± .002 | .124 ± .002 |
| | +5 Step Predict | .058 ± .008 | .032 ± .001 | .042 ± .001 | .165 ± .002 |
| | Dis. Score | .249 ± .125 | .237 ± .114 | .055 ± .032 | .056 ± .031 |
| FETSGAN-FD (182,827 parameters) | +1 Step Predict | .008 ± .005 | .018 ± .002 | .031 ± .002 | .077 ± .003 |
| | +3 Step Predict | .019 ± .004 | .026 ± .001 | .037 ± .001 | .155 ± .008 |
| | +5 Step Predict | .028 ± .003 | .031 ± .002 | .041 ± .002 | .192 ± .006 |
| | Dis. Score | .002 ± .007 | .189 ± .041 | .086 ± .031 | .122 ± .066 |
| FETSGAN (262,686 parameters) | +1 Step Predict | **.007 ± .006** | **.016 ± .001** | **.026 ± .001** | **.062 ± .001** |
| | +3 Step Predict | **.017 ± .005** | **.023 ± .001** | **.036 ± .001** | .128 ± .002 |
| | +5 Step Predict | **.026 ± .003** | **.028 ± .001** | **.040 ± .001** | .169 ± .001 |
| | Dis. Score | **.001 ± .004** | **.030 ± .012** | **.005 ± .007** | **.025 ± .014** |

Table 1: Predictive & Discriminative Scores. Best scores shown in **bold**.
|
2301.02747
|
Sample-efficient Surrogate Model for Frequency Response of Linear PDEs
using Self-Attentive Complex Polynomials
|
Linear Partial Differential Equations (PDEs) govern the spatial-temporal
dynamics of physical systems that are essential to building modern technology.
When working with linear PDEs, designing a physical system for a specific
outcome is difficult and costly due to slow and expensive explicit simulation
of PDEs and the highly nonlinear relationship between a system's configuration
and its behavior. In this work, we prove a parametric form that certain
physical quantities in the Fourier domain must obey in linear PDEs, named the
CZP (Constant-Zeros-Poles) framework. Applying CZP to antenna design, an
industrial application using linear PDEs (i.e., Maxwell's equations), we derive
a sample-efficient parametric surrogate model that directly predicts its
scattering coefficients without explicit numerical PDE simulation. Combined
with a novel image-based antenna representation and an attention-based neural
network architecture, CZP outperforms baselines by 10% to 25% in terms of test
loss and also is able to find 2D antenna designs verifiable by commercial
software with $33\%$ greater success than baselines, when coupled with
sequential search techniques like reinforcement learning.
|
Andrew Cohen, Weiping Dou, Jiang Zhu, Slawomir Koziel, Peter Renner, Jan-Ove Mattsson, Xiaomeng Yang, Beidi Chen, Kevin Stone, Yuandong Tian
|
2023-01-06T23:32:07Z
|
http://arxiv.org/abs/2301.02747v3
|
Sample-efficient Surrogate Model for Frequency Response of Linear PDEs using Self-Attentive Complex Polynomials
###### Abstract
Linear Partial Differential Equations (PDEs) govern the spatial-temporal dynamics of physical systems that are essential to building modern technology. When working with linear PDEs, designing a physical system for a specific outcome is difficult and costly due to slow and expensive explicit simulation of PDEs and the highly nonlinear relationship between a system's configuration and its behavior. In this work, we prove a parametric form that certain physical quantities in the Fourier domain must obey in linear PDEs, named the **CZP** (_Constant-Zeros-Poles_) _framework_. Applying **CZP** to antenna design, an industrial application using linear PDEs (i.e., Maxwell's equations), we derive a sample-efficient parametric surrogate model that directly predicts its _scattering coefficients_ without explicit numerical PDE simulation. Combined with a novel image-based antenna representation and an attention-based neural network architecture, **CZP** outperforms baselines by \(10\%\) to \(25\%\) in terms of test loss and also is able to find 2D antenna designs verifiable by commercial software with \(33\%\) greater success than baselines, when coupled with sequential search techniques like reinforcement learning.
Machine Learning, ICML
## 1 Introduction
Natural phenomena in mathematical physics such as heat diffusion, wave propagation, electromagnetic radiation, quantum mechanics and many more, are governed by linear Partial Differential Equations (PDEs) (Treves, 1975) in the following form:
\[\frac{\partial^{n}\psi}{\partial t^{n}}=F(\psi,\nabla_{\mathbf{x}}\psi,\dots; \mathbf{h}) \tag{1}\]
where \(\psi=\psi(\mathbf{x},t)\) is a quantity (e.g., electromagnetic field) that changes over space \(\mathbf{x}\) and time \(t\), \(F\) is a _linear_ function with respect to the quantity \(\psi\) and its spatial derivatives of different orders, and \(\mathbf{h}\) is a _design vector_ that may nonlinearly determine the linear coefficients of \(F\).
Guided by linear PDEs, designing new physical systems with desired properties is the core practice of modern science and engineering. For example, by finding the shape of reflectors, directors, their relative angles, orientations and electric conductivity, one may design an antenna that can receive signals with specific radio frequency, according to Maxwell's equations.
Due to the complicated nonlinear dependency between the design vector \(\mathbf{h}\) and the system's final behavior, it often requires large-scale, high fidelity simulation of the PDEs and many years of domain expertise to find an optimal \(\mathbf{h}\). The process is expensive in terms of both simulation and engineer time as engineers often iterate on system configurations using CPU-intensive commercial software (CST, 2021; XFD). This high computational overhead is a major bottleneck for rapid experimentation with different structures; a single simulation can take dozens of seconds to several weeks depending on the systematic complexity of a device. For this reason, developing a less computationally expensive _surrogate_ model to replace explicit simulation is desirable.
In this work, we take a different path by looking at the temporal Fourier representation of the spatial-temporal quantity \(\psi\) under linear PDEs. Surprisingly, it can be proven that its Fourier representation has a specific parametric form: any of its linear combinations, as well as their ratios, can be written as a rational function of complex polynomials with respect to frequency \(\omega\), regardless of the specific form of the linear PDEs and initial conditions.
This key insight, coined as **CZP** (Constant-Zeros-Poles) framework, enables us to develop sample-efficient surrogate models for important industrial-level applications such as antenna design. Specifically, we show that the _scattering coefficients_\(S_{11}(\omega)\) of an antenna can be written analytically as the ratio of two complex polynomials of the _same_ order, in
which their global **C**onstants, **Z**eros and **P**oles are functions of the design choice \(\mathbf{h}\) that can be predicted by neural networks. Inspired by state-of-the-art mesh-based simulation techniques, to properly represent the design choice \(\mathbf{h}\), we propose a novel image-based representation of antenna geometries that captures important boundary information that is traditionally modeled by high resolution meshes. This representation is then tokenized and sent to a transformer-based encoder (Vaswani et al., 2017) to capture the nonlinear relationship between antenna topology and the constants, zeros and poles to be predicted in the **CZP** framework.
Experiments demonstrate a \(10\%\) to \(25\%\) improvement of the **CZP** framework over multiple baselines on test set loss. Using the parametric form as a domain-specific inductive bias, we achieve better test error with limited data that are expensive to obtain via commercial software. Furthermore, when coupled with an optimization procedure like reinforcement learning, **CZP** can be used to find antenna topologies that meet design specifications, according to commercial EM modeling software, with \(33\%\) greater success than baselines with only \(40K\) training samples. This shows that **CZP** not only generalizes to unseen designs, but is also less likely to produce overoptimistic regions that the optimization procedure may exploit.
## 2 Related Work
**EM surrogate modeling** In EM based microwave circuit design, such as microwave filters, impedance-matching networks, multiplexers, etc., the equivalent-circuit, empirical, and semi-analytical models and combinations thereof have been used as surrogate models with links to the full-wave simulation (Rayas-Sanchez, 2004; Bakr et al., 2000). The same approaches have rarely been applied to antenna modeling, due to the fact that the radiating structures are too complex to lend themselves to analytical and/or circuit modeling. Broadly, there are two approaches to antenna surrogate modeling, coarser approximate physics-driven simulation (Zhu et al., 2007; Koziel and Ogurtsov, 2013) or data-driven methods which model the computation performed by the simulator (Koziel, 2017).
**Deep learning for solving PDEs:** Neural operators are end-to-end methods, formulated to be independent of the underlying mesh discretization and directly approximate the PDE operator between function spaces from samples (Li et al., 2020a;b). These approaches predict the 2D or 3D evolution of systems like Navier-Stokes or Darcy flow but the representations are not used to predict other quantities like the zeros and poles of the \(S_{11}\) scattering coefficients. An orthogonal line of work uses supervised (Pfaff et al., 2021; Bhatnagar et al., 2019; Guo et al., 2016) or sequential methods (Yang et al., 2022) for adaptively refining a mesh to model aerodynamics or fluid flow.
## 3 Parametric formula for linear PDEs in the frequency domain
In this work, we focus on finding the structure of linear PDE in the frequency domain. When solving linear PDE in the form of Eqn. 1, traditional methods discretize the space and convert the PDEs into the following ODEs (Weiland, 1977):
\[\dot{\boldsymbol{\phi}}=A(\mathbf{h})\boldsymbol{\phi} \tag{2}\]
Note that in the original continuous formulation (Eqn. 1), the quantity \(\psi\) is indexed by spatial location \(\mathbf{x}\) and thus is infinite-dimensional. After the discretization used in finite difference methods, \(\boldsymbol{\phi}(t)\) is a vector of dimension \(N\) at each time \(t\), containing the value of \(\psi\) (and its spatial derivative) at specific spatial locations. The linear operator \(F\) now becomes a matrix \(A(\mathbf{h})\) of size \(N\)-by-\(N\). Each of its entries is now related to the design vector \(\mathbf{h}\) and the topological structure of the discretized grid cells. One important property of linear PDEs is that in their discretized form, \(A(\mathbf{h})\) is a constant and does not change with \(\boldsymbol{\phi}\).
For better understanding, here we put a concrete example of Eqn. 2. Consider a one-dimensional wave equation \(\frac{\partial^{2}\psi}{\partial t^{2}}=c^{2}\frac{\partial^{2}\psi}{\partial x ^{2}}\). Then by setting \(\boldsymbol{\phi}=[\psi(x_{1}),\dots,\psi(x_{N}),\frac{\partial\psi}{\partial t }(x_{1}),\dots,\frac{\partial\psi}{\partial t}(x_{N})]^{\top}\in\mathbb{R}^{2N}\), the wave equation can be written in the form of Eqn. 2 with
\[A=\left[\begin{array}{cc}0&1\\ c^{2}B&0\end{array}\right],\]
where \(B\in\mathbb{R}^{N\times N}\) spatially discretizes the operator \(\frac{\partial^{2}}{\partial x^{2}}\).
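As a concrete sketch of this discretization (assuming a centered second-difference stencil for \(B\) and, for simplicity, zero Dirichlet boundaries, both of which are illustrative choices):

```python
import numpy as np

def wave_equation_A(N, dx, c):
    """Assemble the 2N x 2N matrix A of Eqn. 2 for the 1-D wave equation,
    stacking phi = [psi(x_1..x_N), d(psi)/dt(x_1..x_N)]."""
    B = (np.diag(-2.0 * np.ones(N))
         + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / dx ** 2
    A = np.zeros((2 * N, 2 * N))
    A[:N, N:] = np.eye(N)     # d(psi)/dt equals the velocity block
    A[N:, :N] = c ** 2 * B    # d(velocity)/dt equals c^2 * d^2(psi)/dx^2
    return A
```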
From the initial condition \(\phi(\mathbf{x},0)\), classic techniques (e.g., finite element methods (Weiland, 1977)) simply perform temporal integration to get the spatial-temporal signal \(\phi(\mathbf{x},t)\), from which any quantities that are relevant to the design goals can be computed.
In this work, we focus on the property of (single-sided) temporal Fourier transform \(\boldsymbol{\hat{\phi}}(\omega)\) of the spatial-temporal signal \(\boldsymbol{\phi}(t)\):
\[\hat{\boldsymbol{\phi}}(\omega):=\int_{0}^{+\infty}\boldsymbol{\phi}(t)e^{- \mathrm{i}\omega t}\mathrm{d}t \tag{3}\]
where \(\mathrm{i}\) is the imaginary unit and \(\omega\) is the frequency. A surprising finding is that there exist parametric formulas for a family of quantities without numerical integration, as presented formally in the following theorem:
**Theorem 3.1**.: _For discretized linear PDEs in the form of Eqn. 2, if \(A(\mathbf{h})\) is diagonalizable, then any spatially linear combined signals \(\mathbf{b}^{\top}\boldsymbol{\hat{\phi}}(\omega)\) in the Fourier domain is a ratio of two complex polynomials with respect to frequency \(\omega\)._
This leads to the following corollary that is useful to compute any signal from linear PDEs in the frequency domain:
**Corollary 3.2** (Parametric Formula for Linear PDEs).: _For any two linear combination signals \(\mathbf{b}_{1}^{\top}\hat{\boldsymbol{\phi}}(\omega)\) and \(\mathbf{b}_{2}^{\top}\hat{\boldsymbol{\phi}}(\omega)\) in Linear PDEs, there exists constant \(c_{0}\), \(K_{1}\) zeros \(\{z_{k}\}\) (\(1\leq k\leq K_{1}\)) and \(K_{2}\) poles \(\{p_{l}\}\) (\(1\leq l\leq K_{2}\)) so that:_
\[\frac{\mathbf{b}_{1}^{\top}\hat{\boldsymbol{\phi}}(\omega)}{\mathbf{b}_{2}^{ \top}\hat{\boldsymbol{\phi}}(\omega)}=c_{0}\prod_{k=1}^{K_{1}}\left(\omega-z_{ k}\right)\prod_{l=1}^{K_{2}}\left(\omega-p_{l}\right)^{-1} \tag{4}\]
_Note that \(c_{0}\), \(\{z_{k}\}\) and \(\{p_{l}\}\) are all complex functions of the design vector \(\mathbf{h}\) and linear coefficients \(\mathbf{b}_{1}\) and \(\mathbf{b}_{2}\)._
Please check Appendix A for all proofs. Therefore, parametric forms of many useful quantities can be obtained, e.g., the _frequency response_ \(\hat{\phi}(\mathbf{x}_{1},\omega)\) given the initial condition of the linear PDE, the _transfer function_ \(\hat{\phi}(\mathbf{x}_{1},\omega)/\hat{\phi}(\mathbf{x}_{2},\omega)\) between two spatial locations \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), etc. For any specific quantity (e.g., the scattering coefficients \(S_{11}(\omega)\) in antenna design, as mentioned below), learning a neural network that predicts the constants \(c_{0}\), zeros \(\{z_{k}\}\) and poles \(\{p_{l}\}\) from the design vector \(\mathbf{h}\) is our proposed **CZP** framework for linear PDEs.
## 4 Application of CZP to Antenna Design
We now apply our proposed **CZP** framework for linear PDEs to antenna design problems. Finding an antenna design that satisfies the requirements with small dimensions, low power consumption and low cost may enable a reduction in the physical volume and shape of devices, and can lead to seamless wireless connectivity for augmented reality (AR).
In antenna design, the goal is to find a good spatial configuration of materials in a 2D/3D space, so that the overall system demonstrates a specific property in the frequency domain, e.g., strong absorption at a specific frequency. The relationship between an antenna's topology and its properties is governed by the well-known Maxwell's equations, which can be written in the form of linear PDEs satisfying Eqn. 1. In this case, the quantity \(\psi\) (and its discretized version \(\boldsymbol{\phi}\)) now contains electromagnetic quantities (e.g., electric/magnetic field strengths and potentials, voltages and currents, etc.) at each discretized grid cell.
**The Goal of Antenna Design.** Antenna engineers aim to find the right design choice \(\mathbf{h}\) so that specific antenna design targets are met over the frequency bands of interest; for example, there need to be dips (i.e., more absorption) in \(S_{11}(\omega)\) at the WiFi 2.4GHz band and the WiFi 5-7GHz band for WiFi 6E, as shown in Figure 1 bottom row.
The frequency properties of the antenna are described by the logarithm of the modulus of the _scattering coefficients_ \(\log|S_{11}(\omega)|\), typically expressed in decibels (dB). \(S_{11}(\omega)\), as a function of frequency \(\omega\), is defined as the following (Caspers, 2012):
\[S_{11}(\omega):=\frac{Z_{\mathrm{in}}(\omega)/Z_{0}-1}{Z_{\mathrm{in}}(\omega )/Z_{0}+1} \tag{5}\]
where \(Z_{\mathrm{in}}(\omega)\) is the input impedance of the antenna (typically a complex number), determined by the design vector \(\mathbf{h}\). Figure 1 shows an example antenna design and its \(S_{11}(\omega)\). In real-world applications, often other requirements are needed, e.g., low latency, low power consumption and so on, which we leave for future work.
### An analytic formula for computing scattering coefficients \(S_{11}(\omega)\) of antenna
Thanks to the insights given by Corollary 3.2, we arrive at an analytic formula for \(S_{11}(\omega)\) without performing numerical integration of Maxwell's equations:
**Theorem 4.1** (Analytical Structure of Scattering Coefficients).: _If \(A(\mathbf{h})\) in discretized Maxwell's equations are diagonalizable, then \(\log|S_{11}(\omega)|\) has the parametric form:_
\[\log|S_{11}(\omega)|=\log|c_{0}(\mathbf{h})|+\sum_{k=1}^{K}\log\frac{|\omega- z_{k}(\mathbf{h})|}{|\omega-p_{k}(\mathbf{h})|} \tag{6}\]
_where the constant \(c_{0}(\mathbf{h})\), zeros \(\{z_{k}(\mathbf{h})\}_{k=1}^{K}\) and poles \(\{p_{k}(\mathbf{h})\}_{k=1}^{K}\) are complex functions of the design choice \(\mathbf{h}\)._
Note that since almost all square matrices are diagonalizable (Horn & Johnson, 2012), the assumption is not strong. One interesting property is that due to the homogeneous structure of \(S_{11}(\omega)\), its parametric form has the _same_ number of zeros (i.e., roots of the numerator) and poles (i.e., roots of the denominator), removing one extra hyperparameter to tune. To learn these complex functions, we match the predicted \(S_{11}(\omega)\) from the formula with the ground truth one provided by existing commercial software, and train in an end-to-end manner. With this formulation, we avoid any
Figure 1: **Top:** An instance of an antenna from the five patch example in a AR device. Yellow corresponds to patches of metallic substrate and purple corresponds to the board on which the antenna sits. **Bottom:** The corresponding frequency response of the given antenna.
forward numerical integration and arrive at the quantity we want in one inference pass.
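Given a predicted constant, zeros, and poles, Eq. (6) can be evaluated directly without any PDE solve, as in this small sketch (variable names and the frequency grid are illustrative):

```python
import numpy as np

def log_abs_s11(omega, c0, zeros, poles):
    """Evaluate Eq. (6): log|S11(omega)| from a complex constant, zeros and poles."""
    omega = np.asarray(omega, dtype=complex).reshape(-1, 1)            # (F, 1)
    num = np.sum(np.log(np.abs(omega - np.asarray(zeros))), axis=1)    # sum_k log|w - z_k|
    den = np.sum(np.log(np.abs(omega - np.asarray(poles))), axis=1)    # sum_k log|w - p_k|
    return np.log(np.abs(c0)) + num - den                              # natural log

# e.g. on a 0.2-7.0 GHz grid, converted from natural log to dB:
# s11_db = (20.0 / np.log(10.0)) * log_abs_s11(np.linspace(0.2, 7.0, 69), c0, z, p)
```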
### Parameterization of design vector \(\mathbf{h}\)
In this work, we mainly focus on 2D antenna design (Fig. 1 top row). We specify our design choice vector \(\mathbf{h}\) as follows:
**Substrate.** The rectangular printed circuitry board of width \(S_{x}\) and height \(S_{y}\) on which the other components sit. The substrate has thickness \(S_{z}\) and dielectric permittivity \(\epsilon_{r}\).
**Ground plane.** A solid rectangle extending through the entire substrate in the \(x\) direction and partially in the \(y\) direction.
**Discrete port.** The port location is the coordinate \(p_{x}\), \(p_{y}\) and is dependent on one of the front-side metallic patches.
**Front-side metallic patches.** The antenna contains \(M\) rectangular metallic patches which can freely move within the substrate area or pre-determined ranges. The \(m\)-th patch \(p_{m}\) is defined by its width and height \(s_{m,x}\), \(s_{m,y}\) and the coordinate of the bottom left corner \(l_{m,x}\), \(l_{m,y}\). When the boundary of a metallic patch goes beyond the substrate, the excess is simply clipped. When patches overlap, there is no increase in the thickness; they combine to make a single metallic patch that is no longer rectangular.
Combining all these specifications, we now have an overall design choice vector defined as
\[\mathbf{h}=\{S_{x},S_{y},S_{z},p_{x},p_{y},\{s_{m,x},s_{m,y},l_{m,x},l_{m,y}\}_ {m=1}^{M}\}. \tag{7}\]
**Five Patch Example.** In this work, we consider an antenna example with an FR-4 substrate that is \(30\)mm by \(6\)mm and \(5\) front-side metallic patches with fixed dimensions and location boundaries (see Appendix for details). Additionally, we assume that the only degrees of freedom are the locations \(\{l_{m,x},l_{m,y}\}\) of each of the \(5\) patches as defined by the coordinates of the bottom left corner. We constrain the problem as such for experimental simplicity and acknowledge this is a simplified setting with respect to production-tier antenna optimization. However, the proposed surrogate model is agnostic to the assumption on patches and the optimization procedure can be easily extended to variable patches of varying dimensions.
## 5 Network Architecture
In this section, we discuss the details of the neural network models used to predict the constant, zeros and poles. Specifically, we propose a novel image representation for a 2D antenna inspired by the mesh representations commonly used by EM simulators. Then, following the analysis in the previous section, we introduce our image-based transformer architecture which predicts the zeros and poles of scattering coefficients.
### Image representation
Mesh-based finite element methods underpin many of the available simulation tools in electromagnetics and other fields (Pardo et al., 2007). The mesh converts the underlying PDEs of the system into an ODE solvable by finite element methods (Weiland, 1977). Mesh representations use the fact that an antenna's resonance characteristics are directly related to its local and global topological structure. This motivates the use of images for learning a surrogate model, as an image contains the same local and global spatial information. A model given a naive representation (i.e., the coordinates of the front-side metallic patches) would instead have to learn these spatial relationships on its own.
A critical component of successful meshing is to generate non-uniform, adaptive meshes which allocate high resolution, dense meshing to areas in which the quantity \(\mathbf{\phi}\) may change rapidly (e.g., at sharp corners). Adaptive meshing enables the simulation of systems unsolvable by traditional discretization methods (Pfaff et al., 2021). Guided by this, we posit that an image representation of an antenna should provide the key regions (i.e., boundaries and corners of substrate) explicitly so that a neural network does not need to spend unnecessary computation learning these features.
We propose a three channel image representation. The first two channels provide the boundary locations in the \(x\) and \(y\) directions, where pixel values \(v\in[0,1]\) are floating-point values representing the distance to the nearest pixel in the \(x\) or \(y\) directions, respectively. For example, given the bottom left \((x_{bl},y_{bl})\) and top right \((x_{tr},y_{tr})\) floating-point coordinates of a rectangular patch, we compute the pixel indices
Figure 2: Three channel image representation. **Top:** Boundary values represent distance to nearest pixel in the \(x\)-direction. **Middle:** Boundary values represent distance to nearest pixel in the \(y\)-direction. **Bottom:** Binary interior of the antenna. This channel does not contain boundaries.
as the floor,
\[\bar{x}_{bl}=\lfloor x_{bl}\rfloor,\ \bar{y}_{bl}=\lfloor y_{bl}\rfloor,\ \bar{x}_{tr}=\lfloor x_{tr}\rfloor,\ \bar{y}_{tr}=\lfloor y_{tr}\rfloor.\]
Then, we compute the values
\[v_{l}=1-(x_{bl}-\bar{x}_{bl}),\ v_{r}=x_{tr}-\bar{x}_{tr}\] \[v_{b}=1-(y_{bl}-\bar{y}_{bl}),\ v_{t}=y_{tr}-\bar{y}_{tr}\]
where \(v_{l}\), \(v_{r}\), \(v_{b}\), \(v_{t}\) correspond to the left, right, bottom and top boundary values, respectively. The left/bottom boundary is subtracted from \(1\) whereas the right/top is not because the floor function has a subtly different semantic meaning between these cases; without loss of generality, the floor of the left/bottom boundary _is not_ contained inside the interior of the patch whereas the floor of the right/top _is._ We chose this design as it enables sensible image dimensions (i.e. \(60\times 300\) for a \(6\)mm \(\times 30\)mm image with a resolution of \(10\) pixels to \(1\)mm), though other choices are possible. Finally, note that separating the \(x\) and \(y\) boundaries into two channels enables explicit representation of the corners of patches.
Finally, a third channel provides the _interior_ of the antenna as a binary image, where \(v=1\) for all index pairs \(x,\ y\) such that \(x\in[\bar{x}_{bl}+1,\bar{x}_{tr}-1]\) and \(y\in[\bar{y}_{bl}+1,\bar{y}_{tr}-1]\). Please see Algorithm 1 in Appendix B for pseudocode of the process and Figure 2 for an example image. Some details are omitted, such as patch dimensions which go beyond the board or overlapping patches, but these are straightforwardly handled via clipping and masking.
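A minimal sketch of this rasterization for a single patch (coordinates are assumed to already be in pixel units; clipping to the board and patch overlaps are omitted; see Algorithm 1 in Appendix B for the full procedure):

```python
import numpy as np

def rasterize_patch(img, x_bl, y_bl, x_tr, y_tr):
    """Write one rectangular patch into a 3-channel image laid out as (channel, y, x):
    x-boundary distances, y-boundary distances, binary interior."""
    xb, yb = int(np.floor(x_bl)), int(np.floor(y_bl))
    xt, yt = int(np.floor(x_tr)), int(np.floor(y_tr))
    v_l, v_r = 1.0 - (x_bl - xb), x_tr - xt
    v_b, v_t = 1.0 - (y_bl - yb), y_tr - yt
    img[0, yb:yt + 1, xb] = v_l           # left boundary (x-distance channel)
    img[0, yb:yt + 1, xt] = v_r           # right boundary
    img[1, yb, xb:xt + 1] = v_b           # bottom boundary (y-distance channel)
    img[1, yt, xb:xt + 1] = v_t           # top boundary
    img[2, yb + 1:yt, xb + 1:xt] = 1.0    # interior only, boundaries excluded
    return img

# e.g. a 6mm x 30mm board at 10 pixels per mm:
# img = rasterize_patch(np.zeros((3, 60, 300)), 123.0, 17.0, 189.0, 42.0)
```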
### Surrogate Model
In this section, we propose an architecture for a surrogate model which predicts the zeros and poles directly from the image representation which is then used to compute scattering coefficients. The architecture is based on the Visual Transformer (Wu et al., 2020) which is motivated by the insight that local, spatial components such as boundaries between substrate of the antenna should be tokenized and then used by a transformer (Vaswani et al., 2017) to compute the global characteristics.
Given the input image \(\mathbf{I}=\mathbf{I}(\mathbf{h})\in\mathbb{R}^{3\times HW}\) as a function of design choice \(\mathbf{h}\), we first augment it with two additional channels of linearly spaced \(x\) and \(y\) coordinates (Liu et al., 2018), to yield augmented image \(\hat{\mathbf{I}}\in\mathbb{R}^{5\times HW}\). This is because the specific location of antenna components, in addition to its topology, determines the corresponding frequency response. Then, a CNN takes \(\hat{\mathbf{I}}\) as an input and generates feature maps \(\mathbf{X}\in\mathbb{R}^{HW\times C}\) where \(H\), \(W\) and \(C\) are the height, width and channel dimension, respectively. A filter-based tokenizer (Zhang et al., 2019) generates \(L\) visual tokens \(\mathbf{T}\in\mathbb{R}^{L\times C}\) by mapping each pixel via a point-wise convolution to \(L\) groups with matrix \(\mathbf{W}\in\mathbb{R}^{C\times L}\) and computes a softmax in the pixel dimension
\[\mathbf{A}=\mathrm{Softmax}_{HW}(\mathbf{X}\mathbf{W})\]
where \(\mathbf{A}\in\mathbb{R}^{HW\times L}\) is referred to as an attention map. Visual tokens \(\mathbf{T}\) are computed via \(\mathbf{T}=\mathbf{A}^{T}\mathbf{X}\) which is the weighted average of pixels in the original feature map \(\mathbf{X}\). Intuitively, the tokens \(\mathbf{T}\) capture semantics such as relative boundary and corner locations and from this the transformer computes the global characteristic of the antenna configuration. Please see Figure 9 in Appendix D for a subset of the learned attention maps for a specific antenna instance which demonstrate this.
After that, \(\mathbf{T}\) is then passed through a multi-layer transformer encoder (Vaswani et al., 2017), flattened and passed through a fully connected layer and a non-linearity. From this representation, three separate complex-valued fully connected layers predict the constant, zeros and poles. Concretely, let \(C_{\theta},\ Z_{\theta},\ P_{\theta}\) be linear layers parameterized by \(\theta\). Then,
\[\mathbf{v}=\mathrm{FC}(\mathrm{Transformer}(\mathbf{T}))\] \[c_{0}(\mathbf{h}):=C_{\theta}(\mathbf{v}),\ \mathbf{z}(\mathbf{h}):=Z_{\theta}( \mathbf{v}),\ \mathbf{p}(\mathbf{h}):=P_{\theta}(\mathbf{v})\]
where \(\mathbf{h}\) is the design choice and \(c_{0}(\mathbf{h}),\ \mathbf{z}(\mathbf{h}),\ \mathbf{p}(\mathbf{h})\) are the constant and vectors (of length \(K\)) of zeros and poles, respectively, used to compute the frequency response as per Equation 6. We refer to this architecture which outputs the constant, zeros and poles as **CZP** models / architectures, or just **CZP** as abbreviation.
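A compact sketch of this pipeline is given below, with illustrative sizes; the CNN producing the feature map \(\mathbf{X}\) is assumed to exist, and the complex-valued output layers are realized here simply as real layers predicting real and imaginary parts, which is one possible reading of the description above rather than the exact implementation.

```python
import torch
import torch.nn as nn

class CZPHead(nn.Module):
    """Tokenize CNN feature maps, run a transformer encoder, and predict c0, zeros, poles."""
    def __init__(self, C=256, L=16, K=16, depth=4, heads=8):
        super().__init__()
        self.token_proj = nn.Linear(C, L)                     # W in A = Softmax_HW(X W)
        layer = nn.TransformerEncoderLayer(d_model=C, nhead=heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, depth)
        self.fc = nn.Linear(L * C, C)
        self.c_head = nn.Linear(C, 2)                         # Re/Im of c0
        self.z_head = nn.Linear(C, 2 * K)                     # Re/Im of K zeros
        self.p_head = nn.Linear(C, 2 * K)                     # Re/Im of K poles

    def forward(self, X):                                     # X: (B, HW, C) feature map
        A = torch.softmax(self.token_proj(X), dim=1)          # softmax over the pixel dim
        T = A.transpose(1, 2) @ X                             # (B, L, C) visual tokens
        v = torch.relu(self.fc(self.transformer(T).flatten(1)))
        to_c = lambda o: torch.complex(o[..., 0::2], o[..., 1::2])
        return to_c(self.c_head(v)), to_c(self.z_head(v)), to_c(self.p_head(v))
```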
### Model training
When training **CZP** models, we do not have direct supervision for \(c_{0}(\mathbf{h})\), zeros \(\mathbf{z}(\mathbf{h})\), and poles \(\mathbf{p}(\mathbf{h})\), but only the \(S_{11}(\omega)\) provided by CST Microwave Studio (CST, 2021) as ground truth. Therefore, we leverage Eqn. 6 to compute an estimated \(S_{11}(\omega)\) from \(c_{0}\), \(\mathbf{z}\), and \(\mathbf{p}\), so that it can be matched to the ground truth. We then train the model via back-propagation in an end-to-end manner to minimize the Mean Squared Error (MSE) for frequencies in the range \([0.2-7.0]\)GHz at increments of \(0.1\) GHz (i.e., 69 dimensions). We use a shrinkage loss (Li et al., 2018) variant of MSE, as we found that with vanilla MSE the model had higher error on the crucial parts of the scattering coefficients (i.e., the resonances).
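The end-to-end loss can be sketched as follows, assuming that Eq. 6 has the familiar pole–zero product form \(S_{11}(\omega)=c_{0}\prod_{k}(j\omega-z_{k})/(j\omega-p_{k})\) and using one common form of the shrinkage weighting; both the exact form of Eq. 6 and the loss variant are specified elsewhere in the paper, so this is an illustrative reconstruction only.

```python
import torch

def predicted_log_s11(c0, zeros, poles, omegas):
    """Assemble log|S11| from the predicted constant, zeros and poles, assuming
    (as a simplification) the product form c0 * prod_k (j*w - z_k) / (j*w - p_k)."""
    jw = 1j * omegas.view(1, -1, 1)                        # (1, F, 1) complex frequencies
    num = torch.prod(jw - zeros.unsqueeze(1), dim=-1)      # (B, F)
    den = torch.prod(jw - poles.unsqueeze(1), dim=-1)      # (B, F)
    s11 = c0.unsqueeze(1) * num / den
    return torch.log(torch.abs(s11) + 1e-8)

def shrinkage_mse(pred, target, a=10.0, c=0.2):
    """Shrinkage-weighted MSE (one common form): easy points are down-weighted so that
    the resonant dips, which carry the largest errors, dominate the loss."""
    err = torch.abs(pred - target)
    weight = 1.0 / (1.0 + torch.exp(a * (c - err)))
    return (weight * err ** 2).mean()
```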
## 6 Experiments
In this section, we demonstrate the impact of our architectural choices and image representation on the five-patch example antenna discussed in Section 4.2. Specifically, we show that:
* Our proposed image representation is a significant improvement over reasonable coordinate-based inputs as well as a naive binary image input.
* The **CZP** formulation outperforms raw prediction when using the same transformer architecture proposed in Section 5 and the proposed image representation.
* The transformer architecture outperforms a CNN for the image representation and an MLP for coordinate based input.
* **CZP** generalizes well to unseen antenna designs, not only on a held-out dataset, but also as a surrogate model for designs proposed by the reinforcement learning (RL) based search procedure, as verified by commercial software to meet specific resonance requirements.
### Surrogate Modeling
We use \(48K\) total samples, uniformly sampled and simulated with CST Microwave Studio (CST, 2021) where each sample takes between \(90\) and \(120\) seconds to simulate. \(90\%\) of samples are used for training and \(10\%\) are used for testing. From the training set, \(10\%\) are randomly sampled and used for validation. Each experiment is run for 3 random seeds. Appendix C provides all experimental hyperparameters.
Figure 3 illustrates the first set of experiments, in which we demonstrate the effectiveness of (1) our novel image representation and (2) the proposed **CZP** when using the transformer architecture. To show (1), we compare against a coordinate-based method which concatenates the normalized bottom-left \(x,y\) coordinate of each patch with a one-hot vector to distinguish between patches. When using the transformer architecture, this generates \(5\) tokens, each with dimension \(7\). Before being processed by the transformer, each token is projected into a \(256\)-dimensional vector by a \(2\)-layer MLP with a hidden layer of width \(256\). Additionally, to demonstrate (2), for both coordinate and image input, we compare against directly predicting the raw \(69\)-dimensional frequency response with a fully connected layer, referred to as Raw in the figures. Additionally, we ablate over different degrees \(K\) of **CZP** with values \(8\), \(12\), \(16\), \(20\) and numbers of attention layers \(L\) with values \(8\), \(6\), \(4\), \(2\).
First, within each configuration, the image representation improves over its coordinate counterpart by a minimum of \(9.6\%\) with \(L=4\) and raw prediction and a maximum of \(26.8\%\) with \(L=2\) and \(K=12\). Second, for the image representation, **CZP** improves over raw prediction by a minimum of \(9.2\%\) with \(L=8\) and \(K=12\) and a maximum of \(28.0\%\) with \(L=2\) and \(K=12\). For the coordinate representation, **CZP** improves over raw prediction by a minimum of \(4.6\%\) with \(L=8\) and \(K=12\) and a maximum of \(25.1\%\) with \(L=2\) and \(K=8\). Finally, increasing the transformer depth from 2 to 8 layers improves raw prediction for the image and coordinate representations by \(29.5\%\) and \(35.9\%\), respectively. Increased depth improves **CZP** by an average of \(15.4\%\) and \(20.9\%\) for the image and coordinate representations, respectively.
From these statistics, we can extract the following insights, which support the **CZP** architecture and image representation as powerful inductive biases: (1) with fewer transformer layers, **CZP** yields greater improvement over raw prediction, and (2) **CZP** and the image representation benefit _the least_ from increasing the complexity of the model while, at the opposite extreme, raw prediction and the coordinate representation benefit the most. These two points show that without these inductive biases, deeper models are required, as shallower models are likely to fall into local minima.
In Figure 4, we provide results for \(4\) other baselines to show the impact of the transformer and image representation over reasonable alternatives: a fully connected MLP with coordinate input, a CNN with our image input, the transformer with a naive single-channel binary image input, and the Fourier Neural Operator (FNO) (Li et al., 2020), developed to solve other PDEs such as Navier-Stokes, with image input. In the last row we reproduce the results of the 8-layer transformer with image input from Figure 3. The 8-layer transformer is a \(40\%+\) improvement over these baselines.
Finally, in Figure 5, we provide a data ablation with raw prediction and **CZP**\(K=20\). The trend of **CZP** outperforming raw prediction holds in this setting as well, although the differences in test loss are small. However, in the next section, we show that our model greatly outperforms the baseline when used for optimization, even when trained with less data, making it more robust to unseen designs. For other qualitative results, such as attention map visualizations, please see Appendix D.
### Optimization
In this section, we demonstrate the utility of the proposed model by showing it can be used by an optimization procedure to find antenna configurations that have specific resonance characteristics. This is a significant test of the generalization and robustness of the model since (1) an antenna with the desired resonances is _not_ contained in the training set and (2) an optimization procedure can very easily find adversarial configurations that exploit the weaknesses of the surrogate model (Yuan et al., 2017). We hypothesize that **CZP** will be far more robust than raw prediction to these kinds of samples because it is smooth by design (i.e., a ratio of two polynomials), whereas raw prediction has no built-in bias encouraging this property. Please see Figure 8 in Appendix D for qualitative intuition regarding this. In this section, we provide results which demonstrate that our proposed model, when used by an optimization procedure, has a significantly higher success rate and is more robust to dataset size than the baseline.
We frame antenna design as a reinforcement learning (RL) (Sutton & Barto, 1998) problem where an agent is
tasked with sequentially placing each of the 5 patches such that the frequency response of the final antenna meets the resonance characteristics. Recall from Section 4, this means that the corresponding \(S_{11}\) is below a certain threshold at specific frequency ranges. In this problem, the frequency ranges are \(2.4\) GHz-\(2.5\) GHz and \(5.1\) GHz-\(7.0\)GHz and the target thresholds are \(t_{[2.4-2.5]}=-6.0\) dB and \(t_{[5.1-7.0]}=-6.0\) dB, the spectrum for WiFi 6E.
Formally, we define the state and action of the Markov Decision Process (MDP) (Puterman, 1994) as:
* **State:** A one-hot identifier and \((x,y)\) coordinates of the bottom left corner of the patches which have been placed and a one-hot vector for the next patch to be placed.
* **Action:**\((x,y)\) coordinates of the bottom-left corner of the next patch to be placed.
After all patches have been placed, the coordinates are converted to the image representation and the surrogate model predicts the frequency response \(\log|S_{11}(\omega)|\). From the final \(\log|S_{11}(\omega)|\), we compute the following reward components for each resonance target.
\[r_{[2.4-2.5]}=\min\left(t_{[2.4-2.5]}-\log|S_{11}(\omega)|_{[2.4-2.5]}\right) \tag{8}\]
\[r_{[5.1-7.0]}=\min\left(t_{[5.1-7.0]}-\log|S_{11}(\omega)|_{[5.1-7.0]}\right) \tag{9}\]
where the subscripts correspond to list slicing. The sum \(r=r_{[2.4-2.5]}+\min(1.0,r_{[5.1-7.0]})\) is then the reward given at the final timestep and at all previous timesteps the reward is zero. Note, we prevent the second reward component from being greater than \(1.0\) because in experiments the higher band (\(5.1\) GHz-\(7.0\) GHz) seemed to be easier to optimize and often led to local minima that did not optimize the lower band (\(2.4\) GHz-\(2.5\) GHz). To optimize, we use the implementation of Soft Actor Critic (SAC) (Haarnoja et al., 2018) from Stable-Baselines3 (Raffin et al., 2021) and build the environment using the Gym API (Brockman et al., 2016). Default hyperparameters are used except we perform two updates at the end of each episode as opposed to one or more updates per step.
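A minimal sketch of the terminal reward of Eqs. 8–9 is given below; the frequency grid and the slicing logic are assumptions based on the 0.2–7.0 GHz grid at 0.1 GHz steps stated in Section 5.

```python
import numpy as np

# Frequency grid assumed from Section 5: 0.2-7.0 GHz at 0.1 GHz steps (69 points).
FREQS = np.round(np.arange(0.2, 7.01, 0.1), 2)

def band(lo, hi):
    """Boolean mask selecting grid frequencies inside [lo, hi] GHz."""
    return (FREQS >= lo) & (FREQS <= hi)

def episode_reward(log_s11, t_low=-6.0, t_high=-6.0):
    """Terminal reward from the predicted log|S11| (Eqs. 8-9), with the upper-band
    component clipped at 1.0 as described in the text."""
    r_low = np.min(t_low - log_s11[band(2.4, 2.5)])
    r_high = np.min(t_high - log_s11[band(5.1, 7.0)])
    return r_low + min(1.0, r_high)
```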
Figure 4: Mean and standard deviation of the test loss over 3 seeds for ablations of architectural components of the proposed model and baselines. Results reported are for raw prediction and **CZP** with degree \(K=8,12,16,20\) for the following configurations: 6-layer MLP with coordinate input, 5-layer CNN with image input, 8-layer transformer with binary image input, and 4-layer FNO with image input.
Figure 5: Mean and standard deviation of the test loss over 3 seeds for **CZP**\(K=20\) and raw prediction with the 8-layer transformer architecture and image input with randomly sampled subsets of the training data for portions \(25\%\), \(50\%\) and \(75\%\). **CZP** has a lower test loss.
Figure 3: Mean and standard deviation of the test loss over 3 seeds with the transformer architecture for _image_ and _coordinate_ input representations and \(L=8,6,4,2\) attention layers. Results are reported for raw frequency prediction and the **CZP** architecture with degree \(K=8,12,16,20\). In all configurations, the image representation outperforms coordinates and **CZP** outperforms raw prediction.
For these experiments, we use the **CZP**\(K\!=\!20\) and raw prediction architectures with an 8-layer transformer as these achieved the lowest test losses. For each of the 3 seeds for each architecture trained in the previous section, we run 3 seeds of optimization for a total of 9 experiments per configuration. In each experiment, we deploy the SAC agent for \(25\)K total episodes or \(125\)K total timesteps (since the agent places 1 of 5 patches each step). We also investigate the robustness of this process to dataset size which is critical in antenna design as sample collection is expensive.
Generally, SAC is able to find configurations which meet the requirements with both architectures and in Figure 6 we provide examples. The top row provides the found antenna configuration and the bottom row the frequency responses predicted by the model (red) and CST (green).
However, in terms of _success rate_ (i.e., how many configurations or optimization runs actually produce an antenna which meets the constraints), **CZP** significantly outperforms raw prediction. Specifically, in Figure 7, we provide the percentage of the top 3 configurations (i.e., 3 per seed for a total of 27) found over all seeds which meet the constraints, and also the percentage of runs where _any_ of the top 3 meet the constraints. Additionally, we perform a data ablation to show that our **CZP** model is more robust to less data, demonstrating its strength as an inductive bias.
## 7 Conclusion
In this work, we theoretically derived a novel parametric form, in terms of complex-valued zeros and poles, that physical quantities must obey in linear PDEs. Based on this, we proposed the **CZP** framework, which uses a neural network to predict these zeros and poles. Applying this to the problem of industrial antenna design, we showed that an antenna's frequency response can be predicted by our **CZP** model and proposed an efficient novel image representation of an antenna from which to do so. We then demonstrated experimentally that **CZP** and the image representation are significant advances through architecture and data ablation studies. Finally, we showed that our **CZP** model has significantly higher utility, in terms of success rate, for optimization of antenna design than the baselines.
Although the results are significant, the problem investigated in this work is still relatively simple compared to production-level antenna systems. Future work will involve solving more complicated 2D problems as well as generalizing to 3D antennas. Additionally, in this line of work, we plan to explore other tokenization schemes that are as information-rich as images but more computationally efficient, since images require convolutions to featurize. Lastly, we plan to pursue further applications involving frequency responses obtained by solving linear PDEs.
Figure 6: Two successful antenna configurations (top row) and corresponding frequency responses (bottom row) predicted by the model (red) and computed by CST (green) found by optimization via RL for **(a) CZP** and **(b)** Raw.
Figure 7: Success rate for the \(\%\) of top 3 configurations and if any of the top 3 configurations meet design requirements. Experiments performed for 3 seeds for RL for each of the 3 seeds of **CZP**\(K\!=\!20\) and raw prediction with the 8-layer transformer architecture and image input from the previous section. Experiments conducted also with randomly sampled subsets of the training data for portions \(25\%\), \(50\%\) and \(75\%\). **CZP** is more robust than raw prediction to the optimization procedure and to dataset size.
|
2305.01961
|
Design and Control of a Micro Overactuated Aerial Robot with an Origami
Delta Manipulator
|
This work presents the mechanical design and control of a novel small-size
and lightweight Micro Aerial Vehicle (MAV) for aerial manipulation. To our
knowledge, with a total take-off mass of only 2.0 kg, the proposed system is
the most lightweight Aerial Manipulator (AM) that has 8-DOF independently
controllable: 5 for the aerial platform and 3 for the articulated arm. We
designed the robot to be fully-actuated in the body forward direction. This
allows independent pitching and instantaneous force generation, improving the
platform's performance during physical interaction. The robotic arm is an
origami delta manipulator driven by three servomotors, enabling active motion
compensation at the end-effector. Its composite multimaterial links help reduce
the weight, while their flexibility allows for compliant aerial interaction with
the environment. In particular, the arm's stiffness can be changed according to
its configuration. We provide an in-depth discussion of the system design and
characterize the stiffness of the delta arm. A control architecture to deal
with the platform's overactuation while exploiting the delta arm is presented.
Its capabilities are experimentally illustrated both in free flight and
physical interaction, highlighting advantages and disadvantages of the
origami's folding mechanism.
|
Eugenio Cuniato, Christian Geckeler, Maximilian Brunner, Dario Strübin, Elia Bähler, Fabian Ospelt, Marco Tognon, Stefano Mintchev, Roland Siegwart
|
2023-05-03T08:23:33Z
|
http://arxiv.org/abs/2305.01961v1
|
# Design and Control of a Micro Overactuated Aerial Robot with an Origami Delta Manipulator
###### Abstract
This work presents the mechanical design and control of a novel small-size and lightweight Micro Aerial Vehicle (MAV) for aerial manipulation. To our knowledge, with a total take-off mass of only \(2.0\,\mathrm{kg}\), the proposed system is the most lightweight Aerial Manipulator (AM) that has 8-DOF independently controllable: 5 for the aerial platform and 3 for the articulated arm. We designed the robot to be fully-actuated in the body forward direction. This allows independent pitching and instantaneous force generation, improving the platform's performance during physical interaction. The robotic arm is an origami delta manipulator driven by three servomotors, enabling active motion compensation at the end-effector. Its composite multimaterial links help reduce the weight, while their flexibility allows for compliant aerial interaction with the environment. In particular, the arm's stiffness can be changed according to its configuration. We provide an in-depth discussion of the system design and characterize the stiffness of the delta arm. A control architecture to deal with the platform's overactuation while exploiting the delta arm is presented. Its capabilities are experimentally illustrated both in free flight and physical interaction, highlighting advantages and disadvantages of the origami's folding mechanism.
## I Introduction
Nowadays, the interest in aerial platforms able to perform manipulation tasks is constantly increasing [1]. Many inspection applications require specifically trained operators working at elevated locations to perform interaction and manipulation tasks. The use of AMs, on the other hand, would reduce costs and operation time while also improving safety. Multi-directional thrust platforms have proved fundamental for Aerial Physical Interaction (APhI) tasks, as they can exert forces and torques independently. Different configurations of multi-directional thrust systems [2] have already been tested in the past: examples are tricopters [3], quadcopters [4, 5], hexacopters [6] and even octocopters [7].
By enhancing the mobility of a MAV with the dexterity of a robot manipulator, new possibilities are unlocked [8]. Among all possible robotic arms, delta manipulators are of particular interest for aerial manipulation because most of their weight is at the base, reducing the inertia and thus the reaction forces on the aerial platform during motion. Additionally, they possess very fast dynamics in their three translational degrees of freedom (DOF), allowing them to compensate for possible base position offsets or oscillations. This has made them a popular choice, both for APhI with quadcopters [9, 10, 11] and for end-effector tracking with an omnidirectional platform [12]. However, due to the number of required actuators, joints and linkages, the addition of actively driven end-effectors often results in a large and heavy system. The work in [13] gives an overview of several AM designs. In particular, it shows that \(60\)% of the reviewed platforms weigh more than \(2.0\,\mathrm{kg}\). Considering only platforms with manipulators having at least 3 actuated DOF, the lightest setup is based on a standard quadrotor and weighs \(1.9\,\mathrm{kg}\)[9]. Instead, with only \(100\,\mathrm{g}\) more (total weight of \(2.0\,\mathrm{kg}\)), we propose a novel overactuated platform, capable of independent pitching, with double the payload.
Apart from the weight, compliance plays an important role in the contact stability during APhI, as already shown in [14, 15]. Despite its importance, the current state-of-the-art platforms still employ rigid-link delta manipulators, sometimes adding small spring elements at the end-effector [9, 11]. This further increases the complexity and weight of the mechanical structure. On the other hand, origami manufacturing allows for the lightweight construction of complex 3D structures through folding composites of rigid and flexible layers, generating links and joints with inherent flexibility [16]. Specifically for delta robots, origami manufacturing facilitates ease of monolithic construction or miniaturization, such as for haptic user interfaces [17] or centimeter [18] and millimeter scale [19] delta robots.

Fig. 1: The overactuated MAV with the origami delta manipulator. The body \(\mathcal{F}_{B}\), world \(\mathcal{F}_{W}\), delta base \(\mathcal{F}_{D}\) and end-effector \(\mathcal{F}_{E}\) frames are shown. The \(\mathbf{x}\), \(\mathbf{y}\), \(\mathbf{z}\) axes are represented in red, blue, green, respectively. The arm tilt angles around the body \({}_{B}\mathbf{y}\) axis are indicated with \(\alpha_{0}\), \(\alpha_{1}\).
Despite the compliance and reduced weight of origami manipulators, which make them well-suited for aerial applications, their use for aerial manipulation has remained mostly unexplored. In [20], a one-DOF unarticulated origami arm was used as an extensible gripper on a MAV, by storing the arm flat during take-off and extending it during flight.
In this work we describe the design and control of a novel small-size, lightweight, and overactuated AM, representing a highly versatile platform for inspection tasks. Its core elements are a tri-tiltrotor MAV with 5 DOFs and an origami delta arm providing 3 additional DOFs (see Fig. 1). The entire system has a take-off mass of \(2.0\,\mathrm{kg}\) and its longest side spans a length of only \(56\,\mathrm{cm}\). To the best of the authors' knowledge, this represents the lightest 8-DOF AM in the state-of-the-art. We demonstrate the use of an inherently compliant origami delta manipulator on an aerial robot, which allows for precise motion compensation tasks (as for rigid delta arms), while providing additional compliance during interaction. In particular, we first compute the exact delta kinematics, taking into account the non-idealities of the universal joint approximation. Then, we experimentally characterize the arm compliance and show how this affects the maximum force that the AM can exert on the environment. Moreover, we discuss possible unwanted foldings of the origami joints (which we refer to as _critical configurations_) and provide some insights on how to improve the prototype's robustness.
## II System design
We design the system with the following goals in mind: (i) a small and lightweight platform, to allow operations in confined areas while increasing safety when operating close to humans; (ii) power efficiency for long flight times; (iii) versatility and suitability for inspection tasks, including the ability to accurately touch an arbitrarily oriented surface at a desired location. In order to achieve these goals, we design an overactuated platform augmented with an articulated end-effector. In our case, the overactuation has two main advantages: (i) it allows instantaneous force compensation in the forward interaction direction, and (ii) it allows hovering and performing interaction tasks at different pitch angles. An overview of the AM is illustrated in Fig. 1.
### _Aerial platform_
We use the North-East-Down (NED) convention to describe the body frame \(\mathcal{F}_{B}=\{{}_{B}O,{}_{B}\boldsymbol{x},{}_{B}\boldsymbol{y},{}_{B} \boldsymbol{z}\}\), fixed to the Center of Mass (CoM) of the MAV, as well as the inertial world frame \(\mathcal{F}_{W}=\{{}_{W}O,{}_{W}\boldsymbol{x},{}_{W}\boldsymbol{y},{}_{W} \boldsymbol{z}\}\), as illustrated in Fig. 1. The chosen multirotor configuration consists of two groups of coaxial rotors, which can tilt around the \({}_{B}\boldsymbol{y}\) axis, and a rear motor with a bidirectional propeller. The propellers and tilt arms have complete control authority over the generated body torques, as well as forces along the \({}_{B}\boldsymbol{z}\) and \({}_{B}\boldsymbol{x}\) axes. Since forces along \({}_{B}\boldsymbol{y}\) cannot be generated with the chosen configuration (they are always zero), the linear and angular dynamics in this direction are coupled.
The MAV is intentionally built to fit inside small manholes and to operate in closed environments. Without propeller guards it measures \(55\,\mathrm{cm}\) in length and \(56\,\mathrm{cm}\) in width. The propellers of the two main rotor groups are 9"x4.7 while the rear 3D propeller measures 8"x4.5. It weighs \(1.7\,\mathrm{kg}\) and can transport a maximum additional payload of \(1\,\mathrm{kg}\). The main frame and all the other structural parts are printed with Nylon PA12 using a HP Multi Jet Fusion 3D printer, apart from the motor arms and tail, which are made of carbon tubes. The biggest contribution to the weight comes from a \(5000\,\mathrm{mAh}\) 4S battery (\(540\,\mathrm{g}\)), which experimentally translated into around 18 minutes of flight time without additional payload. Another experimental test with \(0.8\,\mathrm{kg}\) of payload, for a total weight of \(2.5\,\mathrm{kg}\), resulted in 7 minutes of flight time. A summary of the weight contributions of the different components is given in Table I. A voltage buck converter provides \(5\,\mathrm{V}\) to power the Pixhawk flight controller, onboard computer and delta arm, while another provides \(7.5\,\mathrm{V}\) for the two Dynamixel XL-320 servos moving the arms. Table II gives an overview of the platform's electrical components.
### _Origami-based delta manipulator_
The design of the origami manipulator is shown in Fig. 2. Delta parallel arms consist of three identical legs connecting the moving end-effector plate to the fixed base platform. Each of the legs is driven by a one-DOF rotary servomotor connected to the base platform. The legs consist of proximal and distal limbs, the latter formed by parallel bars, creating a quadrilateral parallelogram-like structure. This results in purely translational motion between the base and the end-effector [21].
TABLE I: Weight (in grams) of the AM components.

| Part | MAV (g) | Delta (g) |
| --- | --- | --- |
| Battery | 540 | – |
| Motors | 375 | 135 |
| Structure | 345 | 130 |
| Electronics | 230 | 25 |
| Others | 150 | 10 |
| Bumpers | 60 | – |
| **Total** | 1700 | 300 |

TABLE II: Electrical components of the AM.

| Component | Name | Qty. |
| --- | --- | --- |
| Motor | KDE2315XF-885 | 5 |
| ESC | Tekk302 F3 | 5 |
| Origami servo | Dynamixel XL-330 | 3 |
| Tiltrotor servo | Dynamixel XL-320 | 2 |
| Buck Converter | Henge 8A UBEC | 2 |
| Flight controller | Pixhawk Cube Grey | 1 |
| Battery | Zop 4S 5000mAh | 1 |
| PDB | Pixhawk 4 Mini PDB | 1 |
| RC Receiver | Jeti PPM Receiver | 1 |
| Onboard computer | Raspberry Pi 4B | 1 |

In ideal delta manipulators, the joints between the proximal limbs, the parallelogram linkages and the end-effector plate are universal joints. For the origami manipulator, the
universal joints are approximated using the solution proposed in [17, 19]. The side linkages of the parallelogram are folded upwards and an additional fold is added close to both ends of the linkages (Fig. 2(A)). This results in perpendicular revolute joints at the knee (R2, R3 and R4) and ankle (R5, R6 and R7) of the parallelogram, approximating the universal joints. Unlike conventional delta robots, here the rotation axes do not coincide, resulting in a different kinematic model. Even though new origami designs have been proposed to remove the universal joint approximation [18], we prefer to avoid a single monolithic arm structure. The origami limbs are attached to the 3D-printed motor interface and end-effector plate with screws and alignment pins, which allow easy replacement in case an arm element breaks [22]. Moreover, we derive the exact arm kinematics even in the presence of non-universal joints.
The origami structures are made of a three layer, multi-material composite. The top and bottom layers are made of fiberglass (FR-4-HF, 0.3mm), which provides the necessary stiffness. The middle Kapton layer with adhesive on both sides (DuPont Pyralux LF0111, 0.05mm) adds the necessary flexibility at the joints and bonds layers together. Each layer is individually laser-cut with a \(CO_{2}\) laser (Trotec Speedy 360), stacked, and then aligned by pins and holes. The layers are then bonded together by the adhesive on the surfaces of the Kapton layer using a hydraulic heat press (Fontjine LabManual 300). The upper and lower limbs are built separately and then screwed together at the knee joints, reducing material waste and simplifying the pattern. The other structural parts were printed with ABS.
The use of foldable joints instead of mechanical joints gives a compliant behavior to the delta robot. This inherent flexibility depends not only on the design parameters (e.g., joint geometry and material properties), but also on the configuration assumed by the manipulator. This allows adjusting the compliance of the manipulator to the requirements of the task. For example, a soft configuration is conducive to safer interactions, while a stiffer configuration may be preferred to achieve greater accuracy and repeatability. In Sec. IV we characterize the axial stiffness of the manipulator and show how it influences a simple interaction task.
## III Control design
In this section we present the control design of the AM, which is schematically presented in Fig. 3. First, we develop a pose control law for the overactuated MAV by modifying the geometric controller for quadrotors in [23]. Second, we derive the Inverse Kinematics (IK) for the origami delta arm, coupling it to the MAV's pose.
### _Aerial platform pose control_
Consider the inertial world frame \(\mathcal{F}_{W}\) and the body frame \(\mathcal{F}_{B}\) attached to the MAV in its CoM. We define \({}_{W}\boldsymbol{p}\) as the position of the body frame's origin in \(\mathcal{F}_{W}\) and \(\mathbf{\boldsymbol{R}}_{BW}\in\mathrm{SO}(3)\) as the rotation matrix from \(\mathcal{F}_{W}\) to \(\mathcal{F}_{B}\). The position and attitude dynamics of the MAV are then given by
\[m\left({}_{B}\dot{\boldsymbol{v}}+{}_{B}\boldsymbol{\omega} \times{}_{B}\boldsymbol{v}\right)={}_{B}\boldsymbol{f}_{g}+{}_{B}\boldsymbol{ f}_{c} \tag{1a}\] \[\boldsymbol{J}\,{}_{B}\dot{\boldsymbol{\omega}}+{}_{B} \boldsymbol{\omega}\times\boldsymbol{J}\,{}_{B}\boldsymbol{\omega}=\boldsymbol {\tau}_{c}, \tag{1b}\]
where \(m\in\mathbb{R}\) is the total mass of the platform, \(\boldsymbol{J}\in\mathbb{R}^{3\times 3}\) is the inertia matrix in \(\mathcal{F}_{B}\), \(\boldsymbol{f}_{g}\in\mathbb{R}^{3}\) is the gravity force vector, \(\boldsymbol{v},\boldsymbol{\omega}\in\mathbb{R}^{3}\) are the platform's linear and angular velocity, and \(\boldsymbol{f}_{c},\boldsymbol{\tau}_{c}\in\mathbb{R}^{3}\) are the force and torque commands, respectively. Since the system cannot produce instantaneous thrust along its body \({}_{B}\boldsymbol{y}\) axis, we employ a cascaded control structure with an outer loop position controller and an inner loop attitude controller. Consider the position and velocity errors of the linear dynamics as
\[{}_{B}\boldsymbol{e}_{p} =\boldsymbol{R}_{BW}\left({}_{W}\boldsymbol{p}^{\text{ref}}-{}_{ W}\boldsymbol{p}\right) \tag{2a}\] \[{}_{B}\boldsymbol{e}_{v} =\boldsymbol{R}_{BW}\,{}_{W}\boldsymbol{v}^{\text{ref}}-{}_{B} \boldsymbol{v}, \tag{2b}\]
where \(\boldsymbol{e}_{p},\boldsymbol{e}_{v}\in\mathbb{R}^{3}\) are the position and velocity errors, respectively, and the quantities \((\cdot)^{\text{ref}}\) are generated by a suitable trajectory planner. Based on (2) we define the Proportional-Derivative (PD) control law as
\[{}_{B}\boldsymbol{f}_{c}=\boldsymbol{K}_{p}\,{}_{B}\boldsymbol{e} _{p}+\boldsymbol{D}_{p}\,{}_{B}\boldsymbol{e}_{v}-{}_{B}\boldsymbol{f}_{g}+\\ +m\left(\boldsymbol{R}_{BW}\,{}_{W}\dot{\boldsymbol{v}}^{\text{ ref}}+{}_{B}\boldsymbol{\omega}\times{}_{B}\boldsymbol{v}\right). \tag{3}\]
For the attitude control loop, consider the reference orientation given by \(\boldsymbol{R}_{WB}^{\text{ref}}=\left[{}_{B}\boldsymbol{x}^{\text{ref}}\;\;{}_{B}\boldsymbol{y}^{\text{ref}}\;\;{}_{B}\boldsymbol{z}^{\text{ref}}\right]\in\mathrm{SO}(3)\), which contains the pitch and yaw angle references of \(\mathcal{F}_{B}\) w.r.t. \(\mathcal{F}_{W}\) (note that, given the platform's actuation, only yaw and pitch can be tracked individually).

Fig. 2: Design of the origami-based delta manipulator with the main components highlighted. The details (A) and (B) show the approximation of the universal joints using perpendicular revolute joints. The dashed lines indicate the rotation axes of each joint.

Fig. 3: Control scheme of the Aerial Manipulator.
We then construct the attitude-loop target orientation, denoted by \(\boldsymbol{R}^{d}_{WB}\in\mathrm{SO}(3)\), as follows. We first define a new command vector \({}_{B}\bar{\boldsymbol{f}}_{c}=\left[0\;\;{}_{B}f_{c,y}\;\;{}_{B}f_{c,z}\right]^{\top}\), from which the commanded force along \({}_{B}\boldsymbol{x}\) has been removed. We then rotate it into \(\mathcal{F}_{W}\) and compute the desired body-frame z-axis, expressed in the world frame, as \({}_{B}\boldsymbol{z}^{d}\coloneqq\frac{{}_{W}\bar{\boldsymbol{f}}_{c}}{\|{}_{W}\bar{\boldsymbol{f}}_{c}\|}\). Lastly, we compute \({}_{B}\boldsymbol{y}^{d}=\frac{{}_{B}\boldsymbol{z}^{d}\times{}_{B}\boldsymbol{x}^{\text{ref}}}{\|{}_{B}\boldsymbol{z}^{d}\times{}_{B}\boldsymbol{x}^{\text{ref}}\|}\) and obtain the desired rotation matrix for the inner attitude control loop as
\[\boldsymbol{R}^{d}_{WB}=\left[{}_{B}\boldsymbol{x}^{\text{ref}}\quad{}_{B}\boldsymbol{y}^{d}\quad\frac{{}_{B}\boldsymbol{x}^{\text{ref}}\times{}_{B}\boldsymbol{y}^{d}}{\|{}_{B}\boldsymbol{x}^{\text{ref}}\times{}_{B}\boldsymbol{y}^{d}\|}\right]. \tag{4}\]
Note how the desired rotation matrix in (4) preserves the reference pitch and yaw angles, while exploiting the roll dynamics to pursue the position tracking task. We now define the inner attitude loop control errors as
\[{}_{B}\mathbf{e}_{R} =\frac{1}{2}\left[\mathbf{R}^{d}_{WB}{}^{\top}\mathbf{R}_{WB}-\mathbf{R}^{ \top}_{WB}\mathbf{R}^{d}_{WB}\right]^{\vee}, \tag{5a}\] \[{}_{B}\mathbf{e}_{\omega} ={}_{B}\mathbf{\omega}-\mathbf{R}_{WB}\,{}_{W}\mathbf{\omega}^{\text{ref}}, \tag{5b}\]
where \((\cdot)^{\vee}:\mathfrak{so}(3)\rightarrow\mathbb{R}^{3}\) is the vee operator, which transforms a skew-symmetric matrix into a vector. Then, the control torque command \({}_{B}\mathbf{\tau}_{c}\in\mathbb{R}^{3}\) can be computed as
\[{}_{B}\boldsymbol{\tau}_{c}=\boldsymbol{K}_{R}\boldsymbol{e}_{R}+\boldsymbol{D}_{\omega}\boldsymbol{e}_{\omega}+\boldsymbol{\omega}\times\boldsymbol{J}\boldsymbol{\omega}-\boldsymbol{J}\left[\boldsymbol{\omega}\times\boldsymbol{R}^{\top}_{WB}\boldsymbol{R}^{d}_{WB}\boldsymbol{\omega}^{\text{ref}}-\boldsymbol{R}^{\top}_{WB}\boldsymbol{R}^{d}_{WB}\dot{\boldsymbol{\omega}}^{\text{ref}}\right] \tag{6}\]
where \(\mathbf{K}_{R},\ \mathbf{D}_{\omega}\in\mathbb{R}^{3\times 3}\) are diagonal and positive gain matrices. All quantities are expressed in \(\mathcal{F}_{B}\) and the \({}_{B}(\cdot)\) subscript has been omitted for brevity.
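For illustration, the construction of the desired attitude in (4) and the rotation error in (5a) can be sketched as follows; this is a simplified reconstruction (sign conventions of the NED frames are glossed over), not the flight code.

```python
import numpy as np

def desired_attitude(f_c_body, R_WB, x_ref_world):
    """Desired rotation matrix R^d_WB of Eq. 4 from the commanded body force.
    x_ref_world is the reference body x-axis expressed in the world frame."""
    f_bar = np.array([0.0, f_c_body[1], f_c_body[2]])     # drop the force along B_x
    f_world = R_WB @ f_bar                                 # rotate the command into F_W
    z_d = f_world / np.linalg.norm(f_world)                # desired body z-axis in F_W
    y_d = np.cross(z_d, x_ref_world)
    y_d /= np.linalg.norm(y_d)
    z_col = np.cross(x_ref_world, y_d)
    z_col /= np.linalg.norm(z_col)
    return np.column_stack([x_ref_world, y_d, z_col])      # Eq. 4

def attitude_error(R_d, R):
    """Rotation error e_R of Eq. 5a via the vee map of the skew-symmetric part."""
    E = 0.5 * (R_d.T @ R - R.T @ R_d)
    return np.array([E[2, 1], E[0, 2], E[1, 0]])
```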
_Actuator allocation:_ In order to compute the actuator commands from the force and torque control commands, we derive the _actuator allocation_ as follows. We define \(T_{12}\) and \(T_{34}\) as the thrusts produced by the respective motor groups, and \(\alpha_{0}\) and \(\alpha_{1}\) as the tilt angles of the two arms, as shown in Fig. 1. Furthermore, \(l_{1}\) is the distance from the center of the two frontal motor groups to the origin of \(\mathcal{F}_{B}\), and \(l_{2}\) the distance from the center of the tail motor to the origin of \(\mathcal{F}_{B}\). Considering the MAV geometry, the actuator allocation is given by the equations
\[\begin{bmatrix}\mathbf{f}^{r}_{c}\\ \mathbf{\tau}_{c}\end{bmatrix}=\underbrace{\begin{bmatrix}1&0&-1&0&0\\ 0&-1&0&-1&-1\\ 0&l_{1}&0&-l_{1}&0\\ 0&0&0&0&-l_{2}\\ l_{1}&0&l_{1}&0&-k_{d}\end{bmatrix}}_{\mathbf{A}}\underbrace{\begin{bmatrix}T_{12} \sin(\alpha_{1})\\ T_{12}\cos(\alpha_{1})\\ T_{34}\sin(\alpha_{0})\\ T_{34}\cos(\alpha_{0})\\ T_{5}\end{bmatrix}}_{\mathbf{u}}, \tag{7}\]
with \(\mathbf{f}^{r}_{c}=\left[{}_{B}f_{c,x}\;\;{}_{B}f_{c,z}\right]^{\top}\) containing only the command forces along \({}_{B}\mathbf{x}\) and \({}_{B}\mathbf{z}\). Given a set of control forces and torques (i.e., a _wrench_), we compute the input vector \(\mathbf{u}\) as
\[\mathbf{u}=\mathbf{A}^{-1}\begin{bmatrix}\mathbf{f}^{r}_{c}\\ \mathbf{\tau}_{c}\end{bmatrix}, \tag{8}\]
and solve for the individual actuator commands:
\[\begin{split}T_{12}=\sqrt{u_{0}^{2}+u_{1}^{2}};\quad T_{34}=\sqrt{u_{2}^{2}+u_{3}^{2}};\quad T_{5}=u_{4}\\ \alpha_{1}=\mathrm{atan2}(u_{0},u_{1});\quad\alpha_{0}=\mathrm{atan2}(u_{2},u_{3}).\end{split} \tag{9}\]
From there the rotational speeds of the motors can be calculated by using the motor coefficients \(k_{f}\), \(k_{f,\text{rear}}\in\mathbb{R}\) and assuming a quadratic relationship
\[\begin{split}\omega_{1}=\omega_{2}=\sqrt{\frac{T_{12}}{2k_{f}}}; \quad\omega_{3}=\omega_{4}=\sqrt{\frac{T_{34}}{2k_{f}}}\\ \omega_{5}=\mathrm{sign}(T_{5})\sqrt{\frac{\|T_{5}\|}{k_{f,\text{rear }}}}.\end{split} \tag{10}\]
Note that the sign of the rear thrust \(T_{5}\) must be specifically taken into account, since the rear motor is bidirectional.
Also, with \(k_{f}=8.1\times 10^{-6}\,\mathrm{N\,s^{2}}\), \(k_{f,\text{rear}}=4.05\times 10^{-6}\,\mathrm{N\,s^{2}}\) and maximum rotor speed \(\omega_{\text{max}}=1143\,\mathrm{rad\,s^{-1}}\), the platform achieves a maximum total thrust of \(T_{\text{max}}=4.85\,\mathrm{kg}\), which corresponds to a thrust-to-weight ratio of \(2.43\) considering the full AM. This relatively high ratio meant the actuators never entered saturation in the proposed experiments.
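A minimal sketch of the allocation in (7)–(9) is given below; the geometric constants are placeholders and the matrix is copied from (7).

```python
import numpy as np

def allocate(f_cx, f_cz, tau, l1, l2, k_d):
    """Invert the allocation matrix of Eq. 7 and recover thrusts and tilt angles (Eq. 9).
    tau is the 3-vector of commanded body torques; l1, l2, k_d are placeholder constants."""
    A = np.array([
        [1.0,  0.0, -1.0,  0.0,  0.0],
        [0.0, -1.0,  0.0, -1.0, -1.0],
        [0.0,   l1,  0.0,  -l1,  0.0],
        [0.0,  0.0,  0.0,  0.0,  -l2],
        [ l1,  0.0,   l1,  0.0, -k_d],
    ])
    u = np.linalg.solve(A, np.array([f_cx, f_cz, *tau]))
    T12, T34, T5 = np.hypot(u[0], u[1]), np.hypot(u[2], u[3]), u[4]
    alpha1 = np.arctan2(u[0], u[1])      # tilt of the arm carrying T12
    alpha0 = np.arctan2(u[2], u[3])      # tilt of the arm carrying T34
    return T12, T34, T5, alpha0, alpha1
```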
### _Origami-based delta manipulator inverse kinematics_
In this section we describe an IK approach to find the manipulator joint angles \(\theta_{i}\) as a function of the end-effector target position \({}_{D}\mathbf{p}^{d}_{e}\). We express the position of the end-effector frame \(\mathcal{F}_{E}=\{_{E}O,{}_{E}\mathbf{x},{}_{E}\mathbf{y},{}_{E}\mathbf{z}\}\) in the delta frame \(\mathcal{F}_{D}=\{_{D}O,{}_{D}\mathbf{x},{}_{D}\mathbf{y},{}_{D}\mathbf{z}\}\), which is fixed to the base plate of the arm, as in Fig. 4. We exploit the solution for conventional delta robots [24] with some adjustments to account for the kinematic differences of the origami adaptation.
Consider the geometric description of the origami delta arm in Fig. 4. The solution for a conventional delta robot is given by finding the intersection point \(P_{k,i}\) of a circle around the hip joint \(P_{h,i}\) with radius \(l_{p}\) and a sphere around the ankle joint \(P_{a,i}\) with radius \(l_{d}=l_{m}+2l_{e}\). However, since in the origami design the universal joints do not have aligned rotation axes, the length of the distal link \(l_{d}\) does not remain constant. Therefore, we replace \(l_{d}\) with an expression \(l_{d,i}({}_{D}\mathbf{p}_{e})\), which depends on the dimensions \(l_{m}\) and \(l_{e}\) as before, but is a function of the end-effector position as well.
The kinematic relation for a generic leg \(i=\{1,2,3\}\) is
\[\|\mathbf{l}_{d,i}\|^{2}=\|_{D}\mathbf{p}_{e}+{}_{D}\mathbf{p}_{ai}-{}_{D}\mathbf{p}_{hi}-{}_{D}\mathbf{l} _{ki-hi}(\theta_{i})\|^{2}, \tag{11}\]
with \({}_{D}\boldsymbol{l}_{ki-hi}=\left[0\;\;l_{p}\cos\theta_{i}\;\;l_{p}\sin\theta_{i}\right]^{\top}\) depending on the joint angle.

Fig. 4: Schematics of the origami arm.

From geometric considerations on the distal link parallelogram, we compute its true length as
\[l_{d,i}(_{D}\mathbf{p}_{e})=\sqrt{p_{e,\parallel}^{2}+\left(\sqrt{l_{m}^{2}-p_{e, \parallel}^{2}}+2l_{e}\right)^{2}}, \tag{12}\]
with \(p_{e,\parallel}\) the component of \({}_{D}\mathbf{p}_{e}\) parallel to the \(P_{hi}\) joint axis. Then, by combining (11) and (12), we get
\[0=E_{i}\cos\theta_{i}+F_{i}\sin\theta_{i}+G_{i}, \tag{13a}\]
\[E_{i}=2l_{p}(r_{DE}-{}_{D}p_{E,y}), \tag{13b}\]
\[F_{i}=-{}_{D}p_{E,z}\,l_{p}, \tag{13c}\]
\[G_{i}={}_{D}p_{E,x}^{2}+{}_{D}p_{E,y}^{2}+{}_{D}p_{E,z}^{2}+r_{DE}^{2}+l_{p}^{2}-{}_{D}p_{E,y}\,r_{DE}-l_{d,i}^{2}, \tag{13d}\]
with \(r_{DE}=r_{D}-r_{E}\). Finally, in order to compute the desired joint angles \(\theta_{i}\) for the end-effector to reach a target position \({}_{D}\mathbf{p}_{e}^{d}\), we set \({}_{D}\mathbf{p}_{e}={}_{D}\mathbf{p}_{e}^{d}\) and solve (13) as1
Footnote 1: We employed the tangent half-angle substitution to solve this equation.
\[\theta_{i}(_{D}\mathbf{p}_{e}^{d})=2\tan^{-1}\left(\frac{-F_{i}+\sqrt{E_{i}^{2}+F_ {i}^{2}-G_{i}^{2}}}{G_{i}-E_{i}}\right). \tag{14}\]
The desired joint angles are then tracked by the servomotors' integrated PID controllers.
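For one leg, the inverse kinematics of (12)–(14) can be sketched as follows, assuming the end-effector position has already been expressed in that leg's hip-aligned frame (with the x-axis taken parallel to the hip joint axis); the geometric parameters are placeholders.

```python
import numpy as np

def leg_angle(p_e, l_p, l_m, l_e, r_D, r_E):
    """Joint angle of one delta leg from Eqs. 12-14. p_e is the end-effector position
    expressed in that leg's hip-aligned frame (x assumed parallel to the hip joint axis)."""
    px, py, pz = p_e
    r_DE = r_D - r_E
    p_par = px                                                             # component along the hip axis
    l_d = np.sqrt(p_par**2 + (np.sqrt(l_m**2 - p_par**2) + 2.0 * l_e)**2)  # Eq. 12
    E = 2.0 * l_p * (r_DE - py)                                            # Eq. 13b
    F = -pz * l_p                                                          # Eq. 13c
    G = px**2 + py**2 + pz**2 + r_DE**2 + l_p**2 - py * r_DE - l_d**2      # Eq. 13d
    return 2.0 * np.arctan((-F + np.sqrt(E**2 + F**2 - G**2)) / (G - E))   # Eq. 14
```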
### _Kinematic coupling law_
To control both the MAV body and the delta arm, we couple the two frames \(\mathcal{F}_{B}\) and \(\mathcal{F}_{E}\) kinematically. This aims to compensate for any oscillations occurring in the body pose tracking error \({}_{B}\mathbf{e}_{p}\). To this end, we adapt the end-effector position reference \({}_{D}\mathbf{p}_{e}^{\text{ref}}\) according to \({}_{B}\mathbf{e}_{p}\), generating the instantaneous end-effector target \({}_{D}\mathbf{p}_{e}^{d}\).
\[{}_{D}\mathbf{p}_{e}^{d}=_{D}\mathbf{p}_{e}^{\text{ref}}+\mathbf{R}_{DB}\ {}_{B}\mathbf{e}_{p}+\mathbf{R}_{DB}\mathbf{R}_{WB}^{ \text{ref}}{}^{\top}\mathbf{R}_{WB}\ {}_{B}\mathbf{p}_{BD}, \tag{15}\]
where \({}_{B}\mathbf{p}_{BD}\in\mathbb{R}^{3}\) is the distance between the origins of the \(\mathcal{F}_{D}\) and \(\mathcal{F}_{B}\) expressed in \(\mathcal{F}_{B}\) and \(\mathbf{R}_{WB}^{\text{ref}}{}^{\top}\mathbf{R}_{WB}\) accounts for rotational errors. The instantaneous target \({}_{D}\mathbf{p}_{e}^{d}\) is then used to compute the desired joint angles \(\theta_{i}\).
## IV Experimental results
In this section, we focus specifically on three aspects: (i) The end-effector position tracking performance during free flight, (ii) a characterization of the origami manipulator stiffness depending on its configuration, and (iii) the system characteristics during interaction, particularly the achievable interaction forces with different manipulator stiffness.
### _Manipulator kinematic compensation_
In this experiment, we evaluate the positioning accuracy of the AM's end-effector. To perform this analysis, we command a constant end-effector reference pose and track it using only the MAV, with the origami arm in a fixed configuration. After recording a sufficient number of samples, the delta arm is actively commanded to compensate for the floating-base displacements. The end-effector position is obtained from the arm's forward kinematics. The performance in the two scenarios is shown in Fig. 5. The median and interquartile range of the tracking error's norm are more than halved with active compensation with respect to the fixed-arm case.
### _Origami manipulator stiffness characterization_
Here we model the manipulator's stiffness at different positions of the end-effector frame \(\mathcal{F}_{E}\) with respect to the delta base frame \(\mathcal{F}_{D}\). Specifically, the origin of \(\mathcal{F}_{E}\) was always kept along the vertical direction \({}_{D}\mathbf{z}\). These measurements were taken by attaching the manipulator to a load cell and pressing down the end-effector plate in one-millimeter increments along the \({}_{D}\mathbf{z}\) direction. Both the end-effector position displacement \(\delta_{z}\in\mathbb{R}\) and the push force \(F_{z}\in\mathbb{R}\) were recorded for each chosen end-effector height value \({}_{D}p_{E,z}\). In the end, a linear spring model was fitted for different height values as \(F_{z}=k_{s}\delta_{z}\), with \(k_{s}(_{D}p_{E,z})\) the estimated end-effector stiffness. The different stiffness fits are visible in Fig. 6a. In particular, data points were collected at heights in the range \({}_{D}p_{E,z}\in[80,\ 195]\,\mathrm{mm}\), interpolating the resulting stiffness with a second-order polynomial \(k_{s}(_{D}p_{E,z})=c_{0}+c_{1}\,_{D}p_{E,z}+c_{2}\,_{D}p_{E,z}^{2}\), where \(c_{0}\), \(c_{1}\), \(c_{2}\in\mathbb{R}\) are the identified coefficients. We chose a second-order curve to balance complexity and fitting error. The resulting interpolation is shown in Fig. 6b, with the stiffness varying in the range \(k_{s}\in[80,\ 290]\,\mathrm{N}\,\mathrm{m}^{-1}\). Note that the stiffness is greatest when the arm is fully extended, whereas it is most compliant with the arm retracted. Having a mechanically variable-stiffness arm is an advantage when it is not possible to implement a software impedance control action. For instance, this is the case when the mathematical model of the MAV is not known with enough precision for an impedance controller, or when only position control is available without an external force and torque (F/T) sensor or estimator to implement an admittance control scheme. Note that the stiffness characterization is performed considering a centered end-effector, since we assume to interact with the environment only in this condition. A complete characterization, although possible, would require much more experimental data and is beyond the scope of this work. Here we aim at providing preliminary results on how the compliance influences the interaction contact forces, whereas a full exploitation will be addressed in future works.

Fig. 5: Position error norm at the end-effector with and without active compensation. The two distributions are positively skewed with medians \(0.02\,\mathrm{m}\) and \(0.05\,\mathrm{m}\), respectively.

Fig. 6: Origami delta arm stiffness identification.
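The two-stage fit described above can be reproduced with a short script such as the following; the measurement arrays are placeholders for the load-cell data.

```python
import numpy as np

def fit_stiffness(measurements):
    """measurements maps each end-effector height p_Ez to (deltas, forces) arrays from the
    load cell. First fit F_z = k_s * delta_z at each height, then interpolate k_s(p_Ez)
    with a second-order polynomial k_s = c0 + c1*p + c2*p^2."""
    heights, k_values = [], []
    for p_ez, (deltas, forces) in sorted(measurements.items()):
        deltas, forces = np.asarray(deltas), np.asarray(forces)
        k_s = float(deltas @ forces / (deltas @ deltas))   # least-squares slope through the origin
        heights.append(p_ez)
        k_values.append(k_s)
    c2, c1, c0 = np.polyfit(heights, k_values, deg=2)      # np.polyfit returns highest order first
    return (c0, c1, c2), dict(zip(heights, k_values))
```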
### _Physical interaction_
To study the different behaviors and exerted forces depending on the commanded stiffness configuration of the manipulator, a physical interaction experiment was conducted as shown in Fig. 7. We commanded the AM to approach a surface and push against it with the origami manipulator configured at different stiffness levels. The surface was connected to a F/T sensor to provide ground-truth data of the pushing forces. Once in contact, the origami arm's stiffness \(k_{s}\) was increased up to \(290\,\mathrm{N}\,\mathrm{m}^{-1}\). This resulted in a subsequent increase of the pushing force, as shown in Fig. 8a. The origami manipulator was able to sustain a peak force of \(4\,\mathrm{N}\) before the structure folded into a critical configuration. Similarly, another experiment was conducted with the lowest stiffness allowed by the manipulator, while pushing further with the aerial base, as shown in Fig. 8b. Here, the AM was exerting a force of \(2\,\mathrm{N}\) when the critical folding occurred.
The _critical folding configurations_ are phenomena caused by an unnatural bending of the origami joints, from which the AM cannot autonomously recover, as shown in Fig. 9. We refer to _knee_ or _ankle critical folding_ as the folding caused by the bending of one of the knee or ankle joints, respectively. The former is likely to happen when pushing too strongly on the fully extended origami arm, i.e., in a high-stiffness configuration. The latter happens when pushing too hard on the retracted origami arm, in a low-stiffness configuration. Manipulator configurations at the center of the stiffness spectrum were generally less prone to fold into critical configurations. Identifying these particular configurations is of primary importance to describe the arm's feasible workspace and exertable forces, and to avoid criticalities in more complex tasks, where reliability plays a fundamental role. In particular, future designs of the origami manipulator will feature mechanical stoppers at joints \(R5\) to prevent the _ankle critical folding_, while increasing the robustness of joints \(R3\) and \(R4\) will prevent the _knee critical folding_.
## V Conclusions
We realized a small and lightweight AM for inspection purposes. We first described the construction process of both the aerial platform and the origami arm and how a very low weight can be reached with a careful choice of design and building materials. We then derived a pose controller for the body and an IK controller for the manipulator, coupling them to achieve accurate pose tracking of the end-effector. We showed how the addition of the origami manipulator improves the end-effector tracking performance in free flight and how the manipulator compliance can be adjusted during APhI, affecting the generated interaction forces. We validated the use of inherently compliant manipulators as opposed to rigid counterparts with additional spring elements, which would increase the system's weight and complexity. In the end, we also analyzed its limitations when it comes to undesirable arm foldings. We believe that in future work, adjusting the manipulator compliance over such a wide range will be a key element in more complex interaction scenarios, increasing both the robustness and safety of APhI tasks. We will also further address the problem of the critical folding configurations, leading to a more robust mechanical design.
## Acknowledgment
We thank Christoph Gaupp for his help on building the MAV platform prototype.
Fig. 8: Interaction force (blue) generated with different stiffness configurations (red). The dashed vertical line represents the time instant of the origami critical folding. The transparent blue shadow represents the unfiltered force measurements. The solid blue line is a filtered version shown for clarity.
Fig. 7: APhI experiment with the AM pushing on a surface connected to a F/T sensor.
Fig. 9: Knee (left) and ankle (right) critical foldings.
|
2310.14664
|
Data Pruning via Moving-one-Sample-out
|
In this paper, we propose a novel data-pruning approach called
moving-one-sample-out (MoSo), which aims to identify and remove the least
informative samples from the training set. The core insight behind MoSo is to
determine the importance of each sample by assessing its impact on the optimal
empirical risk. This is achieved by measuring the extent to which the empirical
risk changes when a particular sample is excluded from the training set.
Instead of using the computationally expensive leaving-one-out-retraining
procedure, we propose an efficient first-order approximator that only requires
gradient information from different training stages. The key idea behind our
approximation is that samples with gradients that are consistently aligned with
the average gradient of the training set are more informative and should
receive higher scores, which could be intuitively understood as follows: if the
gradient from a specific sample is consistent with the average gradient vector,
it implies that optimizing the network using the sample will yield a similar
effect on all remaining samples. Experimental results demonstrate that MoSo
effectively mitigates severe performance degradation at high pruning ratios and
achieves satisfactory performance across various settings.
|
Haoru Tan, Sitong Wu, Fei Du, Yukang Chen, Zhibin Wang, Fan Wang, Xiaojuan Qi
|
2023-10-23T08:00:03Z
|
http://arxiv.org/abs/2310.14664v2
|
# Data Pruning via Moving-one-Sample-out
###### Abstract
In this paper, we propose a novel data-pruning approach called moving-one-sample-out (MoSo), which aims to identify and remove the least informative samples from the training set. The core insight behind MoSo is to determine the importance of each sample by assessing its impact on the optimal empirical risk. This is achieved by measuring the extent to which the empirical risk changes when a particular sample is excluded from the training set. Instead of using the computationally expensive leaving-one-out-retraining procedure, we propose an efficient first-order approximator that only requires gradient information from different training stages. The key idea behind our approximation is that samples with gradients that are consistently aligned with the average gradient of the training set are more informative and should receive higher scores, which could be intuitively understood as follows: if the gradient from a specific sample is consistent with the average gradient vector, it implies that optimizing the network using the sample will yield a similar effect on all remaining samples. Experimental results demonstrate that MoSo effectively mitigates severe performance degradation at high pruning ratios and achieves satisfactory performance across various settings.
Footnote †: Equal contribution.
## 1 Introduction
The recent advances in AI have been largely driven by the availability of large-scale datasets [31; 34; 41; 3; 37; 34; 41; 10], which enable the training of powerful models [51; 3; 45; 6; 1; 4; 36]. However, such datasets also pose significant challenges in terms of computational and storage resources. It is important to note that these datasets may contain redundant or noisy samples that are either irrelevant or harmful to the model's performance. Data pruning techniques aim to address these issues by removing such samples and retaining a smaller, more compact core set of training samples [11; 28; 42; 27; 48; 38; 15]. This can not only reduce the costs of model training and data storage but also maintain the performance of the model compared to the original dataset.
Existing approaches can be broadly categorized into three major groups: pruning by importance criteria [11; 42; 28; 27; 26; 46], coverage or diversity-driven methods [48; 29; 33; 15], and optimization-based methods [38; 8; 23; 24; 21; 44]. Among these, the first group of studies is the most effective and popular. These studies assume that hard samples are critical and informative core-set samples, and thus, they design difficulty-based metrics to assess sample importance. Such metrics include prediction entropy [11], forgetting [28] or memorization [46] score, gradient norm [27], EL2N (variance of prediction) [27], self-supervised prototype distance [42], diverse ensembles [26], and others.
**Limitations and Motivations.** The methods we discussed have some major drawbacks: (i) Hard samples are not necessarily important samples. For example, noisy samples [48] and outliers [39] often lead to high losses, which makes it difficult for importance criteria [11; 27] to distinguish them from truly important samples. (ii) Training dynamics are rarely considered. The mainstream methods [27; 46; 26; 11; 42; 15; 48; 38] are not aware of training dynamics, as they generally utilize a converged surrogate network for data selection. This may favor samples that are difficult or influential in the later stages of training, but not necessarily in the earlier stages or over the whole training process [50; 12].
**Our Method.** In this paper, we propose a new data pruning algorithm, which involves the newly proposed Moving-one-Sample-out (MoSo) score with an efficient and error-guaranteed estimator. To address the first limitation, MoSo utilizes _the change of the optimal empirical risk when removing a specific sample from the training set_ to measure sample importance instead of only focusing on sample difficulty. By doing so, MoSo can better separate important samples from harmful noise samples, as the former tends to lower the empirical risk, while the latter may increase it. However, MoSo is too costly to compute exactly as it needs brute force leaving-one-out-retraining. Therefore, we propose an efficient first-order approximator with linear complexity and guaranteed approximation error. The proposed approximation is simple: samples whose gradient agrees with gradient expectations at all training stages will get higher scores, which could be intuitively understood as follows: if the gradient from a specific sample is consistent with the average gradient vector, it implies that optimizing the network using the sample will yield a similar effect on all remaining samples. The second limitation is addressed since MoSo comprehensively considers information from different training stages.
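To make the gradient-agreement intuition concrete before the formal definition in Section 3, the following sketch scores samples by how well their per-sample gradients align with the average gradient across several training checkpoints. This illustrates the idea only and is not the exact MoSo estimator derived later; it is also deliberately naive (it stores full per-sample gradients), whereas a practical implementation would batch and subsample.

```python
import torch

def gradient_agreement_scores(checkpoints, dataset, loss_fn):
    """Illustrative scoring: accumulate, over training-stage checkpoints, the inner
    product between each sample's gradient and the average gradient of the whole set.
    Higher scores indicate samples whose gradients agree with the average direction."""
    scores = torch.zeros(len(dataset))
    for model in checkpoints:                       # snapshots from different training stages
        per_sample_grads = []
        for x, y in dataset:                        # (x, y) are tensors; one sample at a time
            model.zero_grad()
            loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
            g = torch.cat([p.grad.flatten() for p in model.parameters() if p.grad is not None])
            per_sample_grads.append(g.detach())
        grads = torch.stack(per_sample_grads)       # (N, P): memory-heavy, for illustration only
        scores += grads @ grads.mean(dim=0)         # agreement with the average gradient
    return scores
```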
We evaluate our MoSo on CIFAR-100 [5], Tiny-ImageNet [49], and ImageNet-1K [31] under different pruning ratios. As shown in Figure 1, our MoSo significantly surpasses the previous state-of-the-art methods, especially for high pruning ratios. Besides, experimental results demonstrate that the coreset selected by our MoSo under one network (such as ResNet) can generalize well to other unseen networks (such as SENet and EfficientNet) (refer to Figure 3(a) and Figure 3(b)). Additionally, we study the robustness of our MoSo on datasets with synthetic noisy labels (refer to Figure 3(c) and Figure 3(d)). It can be seen that our MoSo performs best on average, and surpasses the previous methods based on difficulty-based importance criteria by a large margin.
## 2 Related Work
Finding important samples is not only the purpose of data pruning, but also the core step in many machine learning tasks and problems, like active learning [7; 52; 39; 19; 32], noisy learning [9], and continuous learning [16]. In data-efficient deep learning, there are also some related topics like data distillation [13; 40; 53] and data condensation [17; 47; 20]. Unlike data pruning, they focus on synthesizing a small but informative dataset as an alternative to the original large-scale dataset. Existing data selection/pruning approaches could be broadly divided into several categories, pruning by importance criteria [11; 42; 28; 27; 46], coverage or diversity driven methods [48; 29; 33; 15], optimization-based methods [38; 23; 24; 25; 44; 43; 8].
**Pruning by importance criteria.** This group of studies is the most popular. Generally, they assume that hard samples are critical and informative core-set samples and thus design difficulty-based metrics to assess sample importance. The EL2N score [27] measures the data difficulty by computing the average of the \(\ell_{2}\)-norm error vector from a set of networks. GraNd [27] measures the importance by calculating the expectation of the gradient norm. The Forgetting score [28] counts how many times a model changes its prediction from correct to incorrect for each example during the training process. Memorization [46] assigns a score to each example based on how much its presence or absence in the training set affects the model's ability to predict it correctly. Diverse ensembles [26] gave a score to each sample based on how many models in a group misclassified it. However, hard samples are not necessarily good for model training [42]. For example, noisy samples [48] and outliers [39] often lead to high losses, which makes it difficult for importance criteria [11; 27] to distinguish them from truly important samples. As a comparison, our MoSo score measures sample importance instead of only focusing on sample difficulty by calculating _the change of the optimal empirical risk when removing a specific sample from the training set_. By doing so, it can better separate important samples from harmful noise samples, as the former tends to lower the empirical risk, while the latter may increase it.
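For reference, single-model versions of the two most common difficulty scores mentioned above can be sketched as follows (the original papers average these over an ensemble of networks).

```python
import torch
import torch.nn.functional as F

def el2n_score(model, x, y, num_classes):
    """EL2N-style score: l2 norm of the error between the softmax output and the one-hot label."""
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=-1)         # x: (B, ...) batch of inputs
        one_hot = F.one_hot(y, num_classes).float()     # y: (B,) integer labels
        return torch.norm(probs - one_hot, dim=-1)      # (B,) per-sample scores

def grand_score(model, x, y, loss_fn):
    """GraNd-style score: norm of the loss gradient w.r.t. the parameters for one sample."""
    model.zero_grad()
    loss_fn(model(x), y).backward()                     # x: (1, ...), y: (1,)
    g = torch.cat([p.grad.flatten() for p in model.parameters() if p.grad is not None])
    return g.norm()
```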
**Coverage or diversity driven methods.** Sener et al. [32] applied greedy k-center to choose a coreset with good data coverage. BADGE [19] is a diversity-based selection method in active learning that clusters the gradient embeddings of the current model using k-means++ and selects a subset from each cluster. CCS [15] balances the data distribution and the example importance in selecting data points. Moderate [48] chooses data points with scores near the median score. Note that some diversity-driven methods, such as CCS [15] and Moderate [48], can use any selection criterion, such as the EL2N score [27], as a basis.
**Optimization-based methods.** In addition, a line of recent works proposed to select data by optimization, such as gradient matching [8; 23], bi-level optimization [24; 25], submodularity [21; 44; 43]. One of the most advanced methods, the optimization-based dataset pruning [38], builds an algorithm over the sample-wise influence function [35] to remove samples with minimal impact and guarantees generalization. However, like mainstream methods [27; 46; 26; 11; 42; 15; 48; 38], it does not account for the effect of samples on the training dynamics, as it only uses the information from the final model. This may favor samples that are difficult or influential in the later stages of training, but not necessarily in the earlier stages or the whole training process [50; 12]. In our work, the proposed method is fully training-dynamic-aware since the MoSo's approximator comprehensively considers information from different training stages.
## 3 Method
In the following, we will first present the background knowledge in Section 3.1. Following that, we will elaborate on the proposed MoSo score for assessing sample importance in Section 3.2 and introduce an efficient approximator of MoSo in Section 3.3. Section 3.4 provides the theoretical analysis of the approximation error and complexity, and Section 3.5 presents the data pruning pipeline built on MoSo.
### Background
In this work, we focus on the classification task, where \(\mathcal{S}=\{(x_{i},y_{i})|_{i=1}^{N}\}\) denotes the training set, drawn i.i.d from an underlying data distribution \(P\), with input vectors \(x\in\mathbb{R}^{d}\) and one-hot label vectors \(y\in\{0,1\}^{K}\). Let \(l(\cdot)\) denote the widely used cross-entropy loss function for classification tasks. Given a pruning ratio \(\delta\) and a parameterized deep network \(f_{\mathbf{w}}\), the data pruning task aims to find the most representative training subset \(\hat{S}\subset\mathcal{S}\) while pruning the remaining samples. This can be formulated as:
\[\hat{S}=\operatorname*{arg\,min}_{D\subset\mathcal{S}}\mathbb{E}_{z:(\mathbf{ x},y)\sim P}\Big{[}l(z,\mathbf{w}_{D}^{\mathsf{*}})\Big{]}, \tag{1}\]
where \((|\mathcal{S}|-|D|)/|\mathcal{S}|=\delta\), \(|\cdot|\) represents the cardinality of a set, and \(\mathbf{w}_{D}^{\mathsf{*}}\) indicates the optimal network parameter trained on \(D\) with the stochastic gradient descent (SGD) optimizer. The SGD optimizer updates the parameters as follows:
\[\mathbf{w}_{t}=\mathbf{w}_{t-1}-\eta_{t}\nabla\mathcal{L}(\mathcal{B}_{t}, \mathbf{w}_{t-1}), \tag{2}\]
where \(t\in\{1,...,T\}\), \(\eta_{t}\) is the learning rate at the \(t\)-th step, and \(\mathcal{B}_{t}\) represents the mini-batch, \(\nabla\) is the gradient operator with respect to network parameters, \(\mathcal{L}(\cdot)\) is the average cross-entropy loss on the given set/batch of samples.
### Definition for Moving-one-Sample-out
Here, we will describe the details of Moving-one-Sample-out (MoSo) score.
**Definition 1**.: _The MoSo score for a specific sample \(z\) selected from the training set \(\mathcal{S}\) is_
\[\mathcal{M}(z)=\mathcal{L}\Big{(}\mathcal{S}/z,\mathbf{w}_{\mathcal{S}/z}^{ \mathsf{*}}\Big{)}-\mathcal{L}\Big{(}\mathcal{S}/z,\mathbf{w}_{\mathcal{S}}^{ \mathsf{*}}\Big{)}, \tag{3}\]
_where \(\mathcal{S}/z\) indicates the dataset \(\mathcal{S}\) excluding \(z\), \(\mathcal{L}(\cdot)\) is the average cross-entropy loss on the considered set of samples, \(\mathbf{w}_{\mathcal{S}}^{\mathsf{*}}\) is the optimal parameter trained on the full set \(\mathcal{S}\), and \(\mathbf{w}_{\mathcal{S}/z}^{\mathsf{*}}\) is the optimal parameter on \(\mathcal{S}/z\)._
The MoSo score measures the importance of a specific sample \(z\) by calculating how the empirical risk over \(\mathcal{S}/z\) changes when removing \(z\) from the training set. Specifically, with a _representative
(important and with proper annotation)_ sample \(z\), retaining it can promote training and result in a lower empirical risk while removing it could be harmful to the training and result in a higher empirical risk. Hence, \(\mathcal{M}(z)>0\). On the contrary, when \(z\) is _unrepresentative_, \(\mathcal{M}(z)\approx 0\). Moreover, if the selected data point \(z\) is _harmful (e.g. noisy samples)_, the retention of \(z\) is a hindrance to the learning process on \(S/z\), so the risk would be high; removing the harmful \(z\) could result in a lower risk value. Hence, \(\mathcal{M}(z)<0\).
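For concreteness, here is a deliberately naive sketch of the exact score via brute-force leave-one-out retraining; `train_sgd` (full SGD training on an index subset) and `avg_loss` (average cross-entropy over an index subset) are assumed helper functions, and this is exactly the computation that the approximator of Section 3.3 avoids.

```python
def exact_moso(dataset, z_index, train_sgd, avg_loss):
    """Exact MoSo score of sample z = dataset[z_index], following Eq. (3).
    Each call needs two full trainings, hence scoring a whole dataset this
    way costs O(T n^2)."""
    full_set = list(range(len(dataset)))
    s_minus_z = [i for i in full_set if i != z_index]

    w_full = train_sgd(dataset, full_set)     # w*_S: trained on S
    w_loo = train_sgd(dataset, s_minus_z)     # w*_{S/z}: trained on S \ {z}

    # Empirical risk over S \ {z} with and without z in the training set.
    return avg_loss(dataset, s_minus_z, w_loo) - avg_loss(dataset, s_minus_z, w_full)
```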
### Gradient-based approximator
The exact calculation of MoSo, as shown in Eq.(3), has a quadratic time complexity of \(\mathcal{O}(Tn^{2})\), considering a dataset with \(n\) samples and a total of \(T\) training iterations required to obtain the surrogate network. Such a cost is practically infeasible; for instance, it may take more than 45 years to process the ImageNet-1K dataset using a Tesla-V100 GPU. To address this issue, we propose an efficient first-order approximator for calculating the MoSo score, which reduces the complexity to \(\mathcal{O}(Tn)\). This approximation not only significantly decreases the computational requirements but also maintains the effectiveness of the MoSo score in practical applications.
Proposition 1.1.: _The MoSo score could be efficiently approximated with linear complexity, that is,_
\[\hat{\mathcal{M}}(z)=\mathbb{E}_{t\sim\text{uniform}\{1,\dots,T\}}\Big(\frac{T}{N}\eta_{t}\nabla\mathcal{L}(\mathcal{S}/z,\mathbf{w}_{t})^{\mathrm{T}}\nabla l(z,\mathbf{w}_{t})\Big), \tag{4}\]
_where \(\mathcal{S}/z\) indicates the dataset \(\mathcal{S}\) excluding \(z\), \(l(\cdot)\) is the cross-entropy loss function and \(\mathcal{L}(\cdot)\) means the average cross-entropy loss, \(\nabla\) is the gradient operator with respect to the network parameters, and \(\{(\mathbf{w}_{t},\eta_{t})|_{t=1}^{T}\}\) denotes a series of parameters and learning rate during training the surrogate network on \(\mathcal{S}\) with the SGD optimizer. \(T\) is the maximum time-steps and N is the training set size._
The MoSo score approximator in Eq.(4) essentially represents the mathematical expectation of the inner product between the gradient with respect to network parameters considering only sample \(z\) and the gradient using the training set excluding \(z\) (denoted as \(\mathcal{S}/z\)) over \(T\) learning iterations. A sample \(z\) will be assigned a higher MoSo score if the mathematical expectation of the inner product is larger. This can be intuitively understood as follows: if the gradient \(\nabla l(z,\mathbf{w})\) from sample \(z\) is consistent with the average gradient vector \(\nabla\mathcal{L}(\mathcal{S}/z,\mathbf{w})\), it implies that optimizing the network using sample \(z\) will yield a similar effect on reducing the empirical risk as using all remaining samples. This indicates that sample \(z\) is an important and representative sample. Concurrently, according to Eq.(4), it is also assigned a high MoSo score, which aligns with the intuition.
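A minimal PyTorch-style sketch of this approximator is given below: for a few randomly sampled checkpoints, it accumulates the inner product between each sample's gradient and the average gradient over the data. The helpers `flat_grad` and `model_at`, the checkpoint/learning-rate bookkeeping, and the use of the full-set mean gradient in place of the gradient over \(\mathcal{S}/z\) are simplifying assumptions for illustration, not the authors' released code.

```python
import random
import torch
import torch.nn.functional as F

def flat_grad(loss, params):
    """Flattened gradient of `loss` w.r.t. `params`."""
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def moso_scores(model_at, checkpoints, lrs, dataset, num_sampled_steps=10):
    """Approximate MoSo score (Eq. 4) for every sample, up to the constant T/N.
    `model_at(t)` returns the surrogate network at checkpoint t, `lrs[t]` is the
    learning rate at step t, and `dataset` yields (x, y) pairs."""
    n = len(dataset)
    scores = torch.zeros(n)
    steps = random.sample(checkpoints, k=min(num_sampled_steps, len(checkpoints)))
    for t in steps:
        model = model_at(t)
        params = [p for p in model.parameters() if p.requires_grad]
        xs = torch.stack([x for x, _ in dataset])
        ys = torch.tensor([y for _, y in dataset])
        # Mean gradient over the full set, used as a stand-in for S \ z
        # (removing one sample changes the mean only by O(1/n)).
        g_mean = flat_grad(F.cross_entropy(model(xs), ys), params)
        for i, (x, y) in enumerate(dataset):
            g_i = flat_grad(F.cross_entropy(model(x.unsqueeze(0)),
                                            torch.tensor([y])), params)
            scores[i] += lrs[t] * torch.dot(g_mean, g_i).item()
    return scores / len(steps)
```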
### Theoretical analysis of tightness and complexity
First, we provide a rigorous mathematical justification to show that there is a theoretical guarantee for the error between the approximator we provide and the exact score by the brute-force leave-one-out retraining.
Proposition 1.2.: _By supposing the loss function is \(\ell\)-Lipschitz continuous and the gradient norm of the network parameter is upper-bounded by \(g\), and setting the learning rate as a constant \(\eta\), the approximation error of Eq. (4) is bounded by:_
\[|\mathcal{M}(z)-\hat{\mathcal{M}}(z)|\leq\mathcal{O}\Big{(}(\ell\eta+1)gT+ \eta g^{2}T\Big{)}, \tag{5}\]
_where \(T\) is the maximum iterations._
The proposition shows that the approximation error is positively correlated with several factors, such as the training duration \(T\), the gradient norm \(g\), and the learning rate \(\eta\). To control the impact of the approximation error, in practice we do not train the surrogate network to full convergence; instead, we update it for only a small number of epochs.
**Complexity analysis.** We show that Eq.(4) efficiently approximates the MoSo score with linear complexity. Specifically, calculating the overall gradient information requires a time complexity of \(\mathcal{O}(n)\). Additionally, computing the expectation of the gradient over different training iterations in Eq.(4) takes \(T\) steps, resulting in a total complexity of \(\mathcal{O}(Tn)\). In practice, we can randomly sample a few time steps rather than considering all \(T\) steps to calculate the mathematical expectation, reducing the overall complexity to be less than \(\mathcal{O}(Tn)\). Moreover, the optimized gradient calculation operator,
implemented by advanced deep learning frameworks such as PyTorch [2], further accelerates the computation, making it more feasible for practical applications.
### Data Pruning with MoSo
This section presents the pipeline for utilizing the approximated MoSo score in data pruning and coreset selection. The pseudocode is provided in Algorithm 1. In the **MoSo scoring** step (see line 10 of Algorithm 1), we employ Eq.(4) from Proposition 1 to calculate the MoSo score. In practice, there is no need to sum all the time steps \(\{1,...,T\}\) when calculating the mathematical expectation. Instead, an efficient approach is randomly sampling several time steps for calculating the average or expectation, reducing the overall computational cost. In the **data pruning** step, samples with low scores are pruned, and the proportion of pruned data corresponds to the predefined ratio \(\delta\).
```
0:Dataset \(\mathcal{S}=\{(x_{i},y_{i})|_{i=1}^{N}\}\), pruning ratio \(\delta\);
0:Random initialized the parameter \(\mathbf{w}_{0}\) of a network;
0:cross-entropy loss \(l(\cdot)\), SGD optimizer;
0:Learning-rate scheduler \(\{\eta_{1},...,\eta_{T}\}\), maximum iteration \(T\);
1:Initialize the sample-wise score set \(\mathcal{V}=\phi\) as a null set;
2:if multiple computing devices are available then
3:Partitioning \(\mathcal{S}\) into \(\mathbb{S}:\{S_{1},...,S_{I}\}\); //With parallel acceleration.
4:else
5:\(\mathbb{S}=\{\mathcal{S}\}\) //Without acceleration.
6:endif
7:for\(S_{i}\in\mathbb{S}\)do
8:\(\{(\mathbf{w}_{t},\eta_{t})|_{t=1}^{T}\}=\texttt{SGD}\Big{(}\mathcal{L}(S_{i}, \mathbf{w}_{0}),\ T,\ \{\eta_{1},...,\eta_{T}\}\Big{)}\); //Train the surrogate network.
9:for\(z\in S_{i}\)do
10:\(\mathcal{M}(z)\leftarrow\mathbb{E}_{t\sim\text{uniform}\{1,...,T\}}\Big{(} \eta_{t}\nabla\mathcal{L}(\mathcal{S}/z,\mathbf{w}_{t})^{\text{T}}\nabla l(z, \mathbf{w}_{t})\Big{)}\); //MoSo scoring.
11:Merge into the score set \(\mathcal{V}\leftarrow\mathcal{V}+\{\mathcal{M}(z)\}\)
12:endfor
13:endfor
14:return\(\widehat{S}\leftarrow\texttt{Pruning}(\mathcal{S}|\mathcal{V},\delta)\) //Data pruning.
```
**Algorithm 1**Data Pruning with MoSo.
**Parallel acceleration by dataset partitioning.** To enhance the practical applicability of MoSo on large-scale datasets, we propose a parallel acceleration scheme that can be employed when multiple GPU devices are available. Specifically, before training the surrogate network, we initially divide the original full dataset into a series of **non-overlapping** subsets. This approach enables efficient processing by leveraging the parallel computing capabilities of multiple GPUs, where the number of subsets should be no more than the number of available computing devices. We select a subset from \(\mathbb{S}\) without replacement for each device and then perform training and scoring within the chosen subset, following Algorithm 1. Finally, MoSo scores from different devices are combined. As long as the number of samples in a subset is large enough to approximately represent the overall statistics of the dataset, this partitioning scheme will not compromise performance while significantly reducing computation time by a factor of \(I\). This approach is particularly useful for large-scale datasets.
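The overall flow of Algorithm 1, including the partitioning trick, can be sketched as follows; `train_surrogate` and `moso_scores` are assumed stand-ins for the surrogate-training and scoring steps above, the multi-device dispatch is elided, and this outline is illustrative rather than the released implementation.

```python
import numpy as np

def prune_with_moso(dataset, delta, num_partitions, train_surrogate, moso_scores):
    """Return indices of the retained coreset (a fraction 1 - delta of the data)."""
    n = len(dataset)
    perm = np.random.permutation(n)
    partitions = np.array_split(perm, num_partitions)    # non-overlapping subsets S_i

    scores = np.empty(n)
    for part in partitions:                               # one subset per device in practice
        checkpoints, lrs = train_surrogate(dataset, part)           # surrogate on S_i
        scores[part] = moso_scores(dataset, part, checkpoints, lrs)  # M(z | S_i)

    keep = int(round((1.0 - delta) * n))
    ranked = np.argsort(-scores)                           # high MoSo score = keep
    return ranked[:keep]
```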
## 4 Experiments
**Datasets and baselines.** We evaluate our method on three well-known public benchmarks: the CIFAR-100 [5], which contains 50,000 training examples of 100 categories; the Tiny-ImageNet [49], which has 100,000 images of 200 classes; and the ImageNet-1K [31], which covers 1000 classes with more than 1M training images. We compare our method with a range of baselines, including: (1). Random selection; (2). Herding [29]; (3). Forgetting [28]; (4). GraNd [27]; (5). EL2N [27];
(6). Optimization-based Dataset Pruning (OPT) [38]; (7). Self-supervised pruning (SSP) [42]; (8). Moderate [48].
**Implementation details.** We implement our method in PyTorch [2]. All the experiments are run on a server with 8 Tesla-V100 GPUs. Unless otherwise specified, we use the same network structure, ResNet-50 [22], for both the coreset and the surrogate network on the full data. We keep all hyper-parameters and experimental settings of training before and after dataset pruning consistent. We train the surrogate network on all datasets for 50 epochs. To estimate the mathematical expectation in Eq.(4) from Proposition 1, we randomly sample 10 time steps. Thus, MoSo can compute gradients from multiple epochs without increasing the overall time cost significantly, compared to methods that need to train a network fully (_e.g._ 200 epochs for CIFAR-100) before calculating the scores.
### Main Results
In the following subsections, we present the detailed performance comparison of our method and baselines on three experiments: data pruning, generalization to unseen structures, and robustness to label noise.
Data pruning. As Figure 1(a) and 1(b) show, our method significantly surpasses the SOTA method [48] on CIFAR-100 and Tiny-ImageNet at high pruning ratios. Note that some selected baselines perform worse than random selection, especially at high pruning ratios, _e.g._ the results of the classic EL2N on CIFAR-100, while our method does not suffer from this problem. Furthermore, in Figure 1(c), our method achieves satisfactory performance on ImageNet-1K across different pruning rates, showing its effectiveness on large-scale and complex datasets. These results also indicate that our method can capture the sample importance more accurately and robustly than the existing methods.
To study whether our algorithm improves/hurts certain classes, we visualize the class-wise accuracy before and after applying our MoSo data pruning approach in Figure 2. We observe a significant correlation between the two, with a Spearman correlation coefficient of 0.913 and a P value of 0.0295.
Figure 2: We show the class-wise accuracy before (bars named _Fullset_) and after (bars named _MoSo_) applying our MoSo approach. The experiment is conducted on CIFAR-100. We chose ResNet-18 as the network architecture and set the pruning ratio to be 0.8.
Figure 1: Performance comparison of our proposed MoSo and other baseline methods on three image classification datasets: CIFAR-100 [5], Tiny-ImageNet [49], and ImageNet-1K [31]. The results show that our approach outperforms most of the baselines, especially for the high pruning rate (e.g., 70%, 80%).
This indicates that the performance before and after pruning with MoSo is consistent across categories, and no significant improvement or harm to any particular category is observed.
Generalization test.To test whether the pruning results are overfitting to the specific network architecture, we evaluate MoSo's generalization ability to unseen architectures. Following the protocol in [48], we use ResNet-50 as the surrogate network for scoring, and training different network architectures, SENet [18] and EfficientNet-B0 [30], on the selected data. Figure 3(a) and Figure 3(b) show the experimental results on different network architectures. MoSo exhibits a satisfying generalization ability to unseen models and consistently outperforms or matches the state-of-the-art methods such as SSP [42], and OPT [38].
Robustness test. Label noise is a common challenge in real-world applications. Therefore, how to improve model robustness to label noise is an important and popular problem. In this section, we study the robustness of MoSo to label noise by conducting comparative experiments on CIFAR-100 and Tiny-ImageNet with synthetic label noise. Specifically, we inject label noise [14] into the two datasets by randomly replacing the labels of a percentage of the training data with any of the possible labels. The noise rate is set to 20% for all the experiments. We use ResNet-50 as the network architecture and keep all experimental settings consistent with the previous data pruning experiments. The results are shown in Figure 3(c) and Figure 3(d). We observe that some difficulty-based importance criteria, such as Forgetting and GraNd, do not work very well in either setting, while Herding [29], Moderate [48], and our MoSo perform well and significantly outperform the other baselines in all settings. Our MoSo achieves comparable performance with the best baseline, Herding [29], lagging behind it by less than 1% Top-1 accuracy.
### Computational Efficiency
We evaluated MoSo and the other baseline methods on a server with 8 Tesla V100 GPUs. We used the CIFAR-100 dataset and the ResNet-50 backbone for our experiments. It should be noted here that we also take into account the training time of the surrogate network. This is because not all datasets will have a community-provided network to calculate scores, and private datasets will require practitioners to train a surrogate network. MoSo achieves the best trade-off between computational requirements and performance, making it the best-performing model with reasonable computational demands.
Figure 4: Time-core comparison between our MoSo and other baselines. Please note that when implementing the GraNd method, we don’t take the summation of the gradient norm from all epochs, instead, we use the same time-step sampling scheme as MoSo.
Figure 3: In (a) and (b), we study the generalization performance of MoSo and other baselines on CIFAR-100 from ResNet-50 to SENet (R to S) and ResNet-50 to EfficientNet-B0 (R to E). In (c) and (d), we show the robustness against label noise of MoSo and other baselines on CIFAR-100, where the labels are randomly replaced with any possible label with a 20% probability.
Notably, it outperforms the state-of-the-art method, Moderate, while being more efficient. Because of the use of large-scale linear programming in the scoring phase, OPT is significantly more time-consuming than the other methods.
### Further Study
In this subsection, we perform additional ablation experiments to investigate the effect of the awareness of training dynamics, the effect of time step sampling, the effect of the parallel speed-up scheme (dataset partitioning), and the effect of the number of epochs in the surrogate training stage.
Effect of the awareness of training dynamics. Here we investigate the effect of incorporating the awareness of training dynamics into our MoSo score. To do this, we compare our method with a variant that removes this awareness by only considering the gradient from the very last epoch of the surrogate network. This variant is equivalent to using the gradient norm as a measure of sample importance. The results are shown in Figure 5(a). We can clearly see that our method outperforms the variant on CIFAR-100 across different pruning rates. This indicates that the awareness of training dynamics is crucial for capturing the impact of a sample on the model performance and that the gradient norm alone from a converged network is not sufficient for measuring sample importance.
Effect of time step sampling. We then investigate the effect of time step sampling on the accuracy and efficiency of our method. Time step sampling is a technique that we use to reduce the computational cost of calculating Eq.(4) by randomly selecting a subset of epochs to estimate the MoSo score. However, this technique also introduces variance into the estimation and may even degrade the quality of the selected coreset if the sampling rate is too small. To study this trade-off, we conduct experiments with different sampling rates and measure the performance of the final coreset on CIFAR-100. The results are shown in Figure 5(b). As expected, we observe that the mean performance decreases and the variance increases as the sampling rate decreases. This suggests that time-step sampling is a useful technique for improving the efficiency of our method, but it should be used with caution to avoid sacrificing too much accuracy.
Effect of the parallel speed-up scheme.The most time-consuming aspect of data pruning is training the surrogate network. Our framework utilizes a parallel speed-up scheme, explained in line 3 of Algorithm 1 in the main text. Essentially, we partition the original full set into several non-overlapping subsets with equivalent size \(\mathcal{S}\rightarrow\{S_{1},...,S_{I}\}\), where \(I\) represents the number of computing devices. On each device,
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Subsets number \(I\) & 1 & 2 & **5** & 10 \\ \hline MoSo (ours) & 74.35 & 75.11 & 75.76 & 75.81 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The effect of dataset partitioning on the final data pruning performance, where the pruning ratio is \(0.2\). The bold one represents the settings used in this work.
Figure 5: Ablation study on the effect of (a) incorporating the training-dynamic-awareness (TDA) into the MoSo score, and (b) using different time step sampling rates on the accuracy of the selected coreset (with a pruning ratio of 0.4). The experiments are conducted on CIFAR-100 with ResNet-50 as the network architecture.
we can train a surrogate network on \(S_{i}\). Then, for each sample \(z\in S_{i}\), we perform MoSo scoring within the current set by using \(\mathcal{M}(z|S_{i})\) to approximate \(\mathcal{M}(z|\mathcal{S})\). Implementing this approach reduces the overall time overhead by \(I\) fold. Table 1 demonstrates that partitioning \(\mathcal{S}\) into more subsets can improve data pruning's performance. This means that if the training set is vast, a single sample's impact may be buried, making it challenging to measure. In a relatively small training set, however, the effect of a single sample can be more sensitively reflected and easily captured.
Effect of the number of epochs in the surrogate training stage. To investigate the impact of the surrogate network training epochs on the ultimate data pruning performance, we increased the number of training epochs and present the experimental outcomes on CIFAR-100 in Table 2. It is evident that lengthening the training duration does not lead to a uniform improvement in performance. For instance, increasing the training duration from 50 epochs to 200 epochs only resulted in a meager 0.82 Top-1 accuracy gain, while the time consumed quadrupled. Note that this is consistent with our conclusion in Proposition 1.2 that the approximation error between the approximated MoSo and the exact MoSo value obtained by leave-one-out retraining increases with the training duration \(T\), so longer surrogate training does not necessarily improve data pruning performance.
## 5 Conclusion
This paper introduces a novel metric for measuring sample importance, called the Moving-one-Sample-out (MoSo) score. It quantifies sample importance by measuring the change of the optimal empirical risk when a specific sample is removed from the training set. By doing so, MoSo can better distinguish important samples that contribute to the model performance from harmful noise samples that degrade it, as the former tends to lower the empirical risk, while the latter may increase it. Moreover, we propose an efficient estimator for MoSo with linear complexity and approximation error guarantees. The estimator incorporates the awareness of training dynamics by considering the gradient difference across different epochs. We conduct extensive experiments on various data pruning tasks and demonstrate the effectiveness and generalization of our method.
**Limitations and future work.** First, the MoSo score essentially measures the agreement between the gradient of a single sample and the mathematical expectation of the gradient: the higher the agreement, the higher the score the sample is given. This is based on the important assumption that the amount of information in the dataset is much greater than the amount of noise. However, if the amount of noise is dominant, the usefulness of MoSo is not guaranteed. Therefore, we believe that in the future it is necessary to propose a variant of MoSo that adapts to high-noise conditions with theoretical guarantees. Second, in terms of application, this paper only evaluates the performance of MoSo on classification tasks. Many practical tasks, _e.g._ large-scale multimodal learning, are worth considering in future work.
**Social Impact.** MoSo has potential influences in important applications such as data collection and data-efficient AI. Moreover, it is beneficial for reducing the computational workload during training and the cost of storing datasets, which is of great significance for environmentally friendly and energy-friendly economies. But it may be deployed for inhumane web-data monitoring. The potential negative effects can be avoided by implementing strict and secure data privacy regulations.
## 6 Acknowledgment
This work has been supported by Hong Kong Research Grant Council - Early Career Scheme (Grant No. 27209621), General Research Fund Scheme (Grant No. 17202422), Theme-based Research (T45-702/22-R), and RGC Matching Fund Scheme (RMGS). Part of the described research work is conducted in the JC STEM Lab of Robotics for Soft Materials funded by The Hong Kong Jockey Club Charities Trust.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Training epochs & \(50\) & \(100\) & \(150\) & \(200\) \\ \hline MoSo & 75.76 & 76.41 & 76.19 & 76.58 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The effect of surrogate network training epochs on the final data pruning performance, where the pruning ratio is \(0.2\).
|
2310.03318
|
On Metaverse Application Dependability Analysis
|
Metaverse as-a-Service (MaaS) enables Metaverse tenants to execute their
APPlications (MetaAPP) by allocating Metaverse resources in the form of
Metaverse service functions (MSF). Usually, each MSF is deployed in a virtual
machine (VM) for better resiliency and security. However, these MSFs along with
VMs and virtual machine monitors (VMM) running them may encounter software
aging after prolonged continuous operation. Then, there is a decrease in
MetaAPP dependability, namely, the dependability of the MSF chain (MSFC),
consisting of MSFs allocated to MetaAPP. This paper aims to investigate the
impact of both software aging and rejuvenation techniques on MetaAPP
dependability in the scenarios, where both active components (MSF, VM and VMM)
and their backup components are subject to software aging. We develop a
hierarchical model to capture behaviors of aging, failure, and recovery by
applying Semi-Markov process and reliability block diagram. Numerical analysis
and simulation experiments are conducted to evaluate the approximation accuracy
of the proposed model and dependability metrics. We then identify the key
parameters for improving the MetaAPP/MSFC dependability through sensitivity
analysis. The investigation is also made about the influence of various
parameters on MetaAPP/MSFC dependability.
|
Yingfan Zong, Jing Bai, Xiaolin Chang, Fumio Machida, Yingsi Zhao
|
2023-10-05T05:25:33Z
|
http://arxiv.org/abs/2310.03318v1
|
# On Metaverse Application Dependability Analysis
###### Abstract
Metaverse as-a-Service (MaaS) enables _Metaverse_ tenants to execute their _APP_lications (MetaAPP) by allocating Metaverse resources in the form of Metaverse service functions (MSF). Usually, each MSF is deployed in a virtual machine (VM) for better resiliency and security. However, these MSFs along with VMs and virtual machine monitors (VMM) running them may encounter software aging after prolonged continuous operation. Then, there is a decrease in MetaAPP dependability, namely, the dependability of the MSF chain (MSFC), consisting of MSFs allocated to MetaAPP. This paper aims to investigate the impact of both software aging and rejuvenation techniques on MetaAPP dependability in scenarios where both active components (MSF, VM and VMM) and their backup components are subject to software aging. We develop a hierarchical model to capture behaviors of aging, failure, and recovery by applying Semi-Markov process and reliability block diagram. Numerical analysis and simulation experiments are conducted to evaluate the approximation accuracy of the proposed model and dependability metrics. We then identify the key parameters for improving the MetaAPP/MSFC dependability through sensitivity analysis. We also investigate the influence of various parameters on MetaAPP/MSFC dependability.
Dependability, Hierarchical model, Metaverse service, Reliability block diagram, Semi-Markov process
## 1 Introduction
With recent technological advancements (e.g., extended reality, 5G/6G networks, and edge intelligence) and the substantial endeavors of major corporations like Facebook [1] and Microsoft [2], the Metaverse has been garnering increasing attention from both academia and industry over the past years. Metaverse as-a-Service (MaaS) [3] has sprouted up due to the dynamic and huge resources required by a _Metaverse_ APPlication (MetaAPP), e.g., in education, healthcare, and manufacturing [4]. MaaS not only allows Metaverse users/tenants to pay on demand, but also improves both Metaverse service dependability (availability and reliability) and Metaverse resource utilization.
Both the Metaverse resource demands from Metaverse tenants and the available resources of the MaaS provider are dynamic and uncertain. Decomposing a MetaAPP and then executing it in the form of Metaverse service functions (MSF) can alleviate the impact of this dynamicity and uncertainty on MetaAPP dependability [5], namely, the dependability of the MSF chain (MSFC) consisting of the MSFs allocated to the MetaAPP. MSFC and MetaAPP are used interchangeably in this paper. Fig. 1 illustrates three MetaAPPs/MSFCs in edge-cloud-supported MaaS. The MaaS provider applies Edge-Cloud resources to construct an MSFC for each MetaAPP from end users.
Each MSF is usually deployed in a virtual machine (VM) for better resiliency and security [6]. However, these MSFs, along with the VMs and virtual machine monitors (VMMs) executing them, cannot avoid software aging after prolonged continuous execution. As a result, MSFC dependability (including availability and reliability) decreases, and the MSFC may even crash [7][8]. Software aging can be handled by rejuvenation techniques like MSF failover, VM failover, and VM migration, but these techniques require the support of backup components, which also suffer from software aging.
This paper explores analytical modeling techniques to quantitatively study the impact of both software aging and rejuvenation techniques on VM-based MSFC dependability in scenarios where both the active components (MSF, VM, and VMM) and their corresponding backup components are subject to software aging. We assume that the MSFCs are already established; how to decompose a MetaAPP to set up an MSFC is beyond the scope of this paper.
To the best of our knowledge, this is the first work to quantitatively investigate VM-based MSFC dependability. There exist analytical models [9]-[22] for evaluating service function chains (SFC), which are similar to MSFCs. However, the models in [9]-[22] are not suitable for analyzing the time-dependent interactions between backup and active component behaviors, or between MSF, VM, and VMM behaviors. Moreover, the authors in [9]-[22] ignored the behaviors of backup MSFs, backup VMs, and backup VMMs. At least the following two challenges must be addressed when employing analytical model-based approaches to assess MSFC dependability.
* The time-dependent interactions between active component behaviors and backup component behaviors in the VM-based MSFC are more complex than that in the container-based MSFC. That is, there are backup MSFs, backup VMs and backup VMMs in the VM-based MSFC, and their behaviors interact with the behaviors of active MSFs, active VMs and active VMMs. In addition, these interactions are time-dependent. Therefore, capturing the behaviors of all components and the time-dependent interactions between various behaviors is a challenge.
* Rejuvenation-triggered intervals (RTIs) can affect the effectiveness of rejuvenation techniques. However, various time-dependent interactions in the VM-based MSFC are more complex than that in the container-based MSFC, making it difficult to collaboratively determine RTIs of MSF failover, VM failover and VM migration. Therefore, how to determine the optimal combination of RTIs of MSF failover, VM failover and VM migration and its corresponding optimal dependability is another challenge.
In order to tackle the above challenges, we explore a hierarchical model to quantitatively analyze the availability and reliability of MSFC system, incorporating proactive rejuvenation techniques to alleviate components' aging issues. The principal contributions of this paper can be summarized in the following manner:
* We develop a hierarchical model for an MSFC with \(n\) MSFs. The model includes \(n\) semi-Markov process (SMP) sub-models and a reliability block diagram (RBD) sub-model. Each SMP sub-model captures the behaviors of components in a primary Metaverse host and its backup Metaverse host (See Fig. 2). The RBD sub-model describes the composition of \(n\) SMP sub-models. We focus on the transition from the occurrence of an abnormal event (software aging or failure) to recovery through rejuvenation techniques. In our SMP sub-model, failure and recovery event-occurring times follow non-exponential distribution.
* We assess VM-based MSFC dependability in terms of steady-state availability and reliability measured by mean time to MSFC failure (MTTF). In particular, under the consideration of RTIs, we derive the closed-form formulas for analyzing the steady-state availability and reliability of an MSFC comprising an arbitrary number of MSFs.
We verify the validity of the proposed model and formulas by creating a simulator and conducting extensive numerical experiments. The experimental results indicate:
1) The cumulative distribution function (CDF) type of failure time, active MSF failure time and host fix time are important factors for improving MSFC dependability.
2) As the number of serial MSFs increases, the MSFC dependability decreases. Conversely, as the number of parallel MSFs increases, the MSFC availability increases while the MSFC reliability decreases.
3) There exists the optimal RTI combination that can maximize the MSFC dependability.
4) There is a significant difference between the numerical results of the model that considers the backup components' behaviors and those of the model that ignores them.
The rest of the paper is set up as follows. Section 2 introduces related work. Section 3 introduces the studied MSFC system, the proposed hierarchical model and the formulas of calculating availability and MTTF. Section 4 presents the experiment results. Section 5 concludes the paper.
## 2 Related Work
Our investigation of the public literature finds no analytical model-based study on MSFC dependability. Thus, this section first introduces existing research on the Metaverse, and then discusses analytical model-based approaches for SFC.
### _Metaverse_
Metaverse is a living space and cyberspace that realizes the virtualization and digitization of the real world [23]. The current Metaverse research focuses on the following areas: constructing blockchain-based economic ecosystems [24], offering immersive experiences through interactive technologies [25], generating mirror images of the real world based on digital twins [26], accomplishing data computation, storage, processing and sharing through cloud computing [27], and achieving interconnected intelligence through AI [28] and IoT technologies [29]. However, we are the first to model the MSFC system behaviors for the dependability assessment. Our work complements the existing Metaverse works for better MaaS service provision.
Fig. 1: MetaAPP over Edge-Cloud-supported MaaS
### _Backup Strategies for SFC Dependability Improvement_
Zhang et al. [9] considered the dedicated backup and shared backup strategies and investigated the resource-aware backup allocation problem with the goal of minimizing backup resource consumption while satisfying availability. Wang et al. [10] suggested utilizing a backup SFC to safeguard the active SFC, thereby improving the availability of parallelized SFCs, and assessed the availability of the SFC. Wang et al. [11] took a comprehensive approach, considering from both the end users and edge system to backup SFC, in order to provide services with the lowest latency. Our work can evaluate the availability and reliability of MSFC, which is complementary to the aforementioned studies, so as to help service providers provide better services.
### _Model-based Approaches on SFC Dependability_
The main goal of our work is to evaluate the availability and reliability of MSFC system based on analytical models. We discuss the existing modeling approaches in the following three categories: state-space models, non-state-space models, and multi-level models.
#### 2.3.1 Non-state-space Model
Non-state-space models mainly include: RBD, reliability graph and fault tree (FT). Zhang et al. [9] studied SFC availability based on RBD. The authors in [10] used RBD to model different SFC configurations and analyzed the parallelized SFC availability. The authors in [9] and [10] assumed that the behaviors of individual components in a host are independent. However, in practical systems, there may exist dynamic interactions between the abnormal behaviors and the recovery behaviors of each component. The model introduced in this paper has the capability to encompass the time-dependencies among these behaviors.
#### 2.3.2 State-space Model
Rui et al. [12] investigated the SFC reliability based on Petri net model and further designed the VNF migration strategy. Simone et al. [13] proposed the continuous-time Markov chain (CTMC) model for analyzing the multi-tenant SFC availability. Tola et al. [14] proposed the stochastic activity network (SAN) models to describe different network function virtualization management and orchestration (NFV-MANO), and assessed the impact of software rejuvenation on the NFV-MANO availability. Studies [12]-[14] assumed that all event-occurring times followed the exponential distribution. For the past few years, our research teams have developed various state-space models based on SMP. Studies [15] and [16] respectively investigated the dependability of serial SFC and hybrid serial-parallel SFC. Subsequently, the study [17] considered the impact of RTIs on the container-based SFC dependability. In addition, studies [18] and [19] extended the development of SMP models to capture the behaviors of multiple SFs and operating systems (OSes) in a serial SFC system and a hybrid serial-parallel SFC system, respectively. Furthermore, they [20] examined the influence of backup component behaviors on the container-based SFC dependability. The above studies considered the effect of software aging and assumed that the failure and recovery event-occurring times followed general distribution. But they cannot capture the time-dependent interactions between MSF, VM and
VMM behaviors in the MSFC system.

TABLE I: Comparison of existing analytical model-based works on SFC dependability (SFC system characteristic, distribution, and metric considered by each reference).
#### 2.3.3 Multi-level Model
Besides the non-state space and state-space modeling approaches, there are multi-level modeling approaches to evaluate the dependability of virtualized systems. Mauro et al. [21] evaluated the SFC availability and performance by aggregating stochastic reward networks (SRN) and RBD. Pathirana et al. [22] calculated the availability for 5G-MEC systems based on FT and SAN. Compared with these studies, the model constructed in this paper can describe the time-dependent interactions between active component behaviors and backup component behaviors under the condition of non-exponential failure and recovery event-occurring times.
TABLE I summarizes the comparison of the existing works about SFC dependability analysis by analytical model-based approaches.
## 3 System Description And Hierarchical Model
This section first introduces the serial and parallel MSFC system architectures. Then the hierarchical model and metric formulas are presented. TABLE II and TABLE III give the definition of variables used in the model.
### System Description
The MSFC system investigated in the paper consists of a Control Plane, several Primary and Backup Metaverse hosts. Control Plane is responsible for creating, monitoring and maintaining the MSFC system. Each host only runs one VMM. In a Primary Metaverse host, the VMM can deploy an active VM and a backup VM, which execute an active MSF and a backup MSF, respectively. Backup MSFs and backup VMs are deployed to support failover technique. Backup VMMs in Backup Metaverse hosts are deployed to support VM migration technique.
Fig. 2 illustrates an MSFC system with four MSFs and shows two examples of MSFC system architectures. One is a serial MSFC system architecture consisting of four serial MSFs. The other is a parallel MSFC system architecture consisting of two serial MSFs and two parallel MSFs, where serial MSF1 runs on Primary Metaverse host1, parallel MSF2 runs on Primary Metaverse host2, parallel MSF3 runs on Primary Metaverse host3, and serial MSF4 runs on Primary Metaverse host4. The OS in the figure represents the operating system.
In an MSFC system, we consider that the active MSFs, backup MSFs, active VMs, backup VMs, active VMMs and backup VMMs can suffer from software aging. In addition, when active MSF, active VM or active VMM aging is detected, rejuvenation technique is not triggered immediately but wait for a while before triggering.
At the beginning, all MSFs, VMs and VMMs work properly. After a period of time, MSFs, VMs and VMMs start to suffer from software aging. When one of the active components is detected to be aging, the MSFC system immediately checks the state of the corresponding backup component for that active component. There are three states of a backup component:
* Backup component being healthy. This backup component can take over the work of the active component. The rejuvenation technique (MSF failover/VM failover/VM migration) is triggered after waiting for a period of time.
* Backup component suffering from software aging. This backup component will be restarted immediately to be ready to support rejuvenation technique. After restarting the backup component and waiting for a while, rejuvenation technique will be triggered.
Fig. 2: MSFC system
* Backup component suffering from failure. This backup component will be fixed and then restarted to support rejuvenation technique. After restarting the backup component and waiting for a while, rejuvenation technique will be triggered.
A backup component can still be affected by software aging during the rejuvenation-triggered intervals or during the execution of the rejuvenation technique. Once this happens, the backup component will be restarted. In addition, the following cases can happen before the backup component takes over the work of the active component:
* Before the backup MSF completes the failover, the active VM responsible for running its corresponding active MSF can experience software aging. Before the backup VM completes the failover, the active MSF running on its corresponding active VM can experience software aging. In both cases, the active MSF, backup MSF, this active VM and its backup VM will be restarted.
* Before the backup MSF/VM completes the failover, the active VMM can experience software aging. Before the VM migration technique is completed, the active MSF or active VM in a Primary Metaverse host can experience software aging. In both cases, the active MSF, backup MSF, this active VM and its backup VM will be restarted.
* Before the backup MSF/VM completes the failover, the active VM can experience software aging. Before the VM migration technique is completed, the active MSF or active VM in a Primary Metaverse host can experience software aging. In both cases, all components in this Primary Metaverse host and its backup Metaverse host will be restarted/rebooted.
Then, based on the above analysis, we can study the behaviors of serial and parallel MSFC systems:
Fig. 3: SMP sub-model for capturing the behaviors of a Primary Metaverse host and its Backup Metaverse host
* In a serial MSFC system, if any component crashes, the request processing stops. The serial MSFC system becomes unavailable.
* If one of the parallel components crashes, the parallel MSFC system is still available because requests can be processed as long as there are still functional components in the parallel part of the parallel MSFC system. However, if any serial component in the parallel MSFC system crashes, service also will become unavailable.
We define that if an MSFC system crashes, all MSFs, VMs, and VMMs in this system will be restarted/rebooted in sequence after the failed active component finishes its fixing.
### _System Model_
This section introduces the hierarchical model for evaluating the dependability of MSFC system. The proposed hierarchical model includes two levels: SMP sub-model and RBD sub-model.
#### 3.2.1 SMP Sub-model
Fig. 3 gives the SMP sub-model. In the boxes of Fig. 3, from top to bottom, the first row represents the active MSF and its backup, the second row represents the active VM and its backup, and the third row represents the active VMM and its backup. Furthermore, the letters outside the parentheses indicate the states of active components, while the letters inside the parentheses denote the states of their corresponding backup components. There are nine component states: Healthy, Degradation, Restart/Reboot, Failover, Migration, Failed, Arbitrary, Backup Component Restart/Reboot and Backup Component Recovery, denoted by H, D, R, L, M, F, A, BR and BC, respectively. The detailed meanings are given as follows:
* State H (Healthy): In this state, the component is robust and the requests can be processed normally. The component suffering from software aging or failure can return to this state through recovery operations.
* State D (Degradation): In this state, the component is still working, but at a low-efficient execution phase. The ability of this component to provide services in this state is lower than that of components in Healthy state.
* State R (Restart/Reboot): In this state, the component will be restarted/rebooted.
* State L (Failover): In this state, failover technique is triggered and the backup component is ready to take over the work.
* State M (Migration): In this state, VM migration technique is triggered and backup VMM will take over the work.
* State F (Failed): In this state, the component fails due to software aging. All components must be restarted/rebooted after fixing the failed component to back to the Healthy state.
* State A (Arbitrary): In this state, it is unknown whether the backup component is healthy, aging, or failed. The probabilities of these three cases occurring are \(C_{i1}\), \(C_{i2}\), and \(C_{i3}\), respectively, where \(C_{i1}+C_{i2}+C_{i3}=1\).
* State BR (Backup Component Restart/Reboot): In this state, the restarting/rebooting of backup component is complete.
* State BC (Backup Component Recovery): In this state, the fixing and restarting/rebooting of backup component is complete.
An active component has five states, namely H, D, R, L/M, and F. A backup component has seven states, namely H, D, R, F, A, BR, and BC. Based on the description above, we describe the state of a Primary Metaverse host and its Backup Metaverse host by defining a 6-tuple index \((i_{a}^{\text{msf}}, i_{b}^{\text{msf}}, j_{a}^{\text{vm}}, j_{b}^{\text{vm}}, k_{a}^{\text{vmm}}, k_{b}^{\text{vmm}})\). Here, \(i_{a}^{\text{msf}}\) and \(i_{b}^{\text{msf}}\) are the states of the \(i^{th}\) active MSF and backup MSF, respectively; \(j_{a}^{\text{vm}}\) and \(j_{b}^{\text{vm}}\) are the states of the \(i^{th}\) active VM and backup VM, respectively; and \(k_{a}^{\text{vmm}}\) and \(k_{b}^{\text{vmm}}\) are the states of the \(i^{th}\) active VMM and backup VMM, respectively. Therefore, there are \(5^{3}\times 7^{3}\) states in total, of which \(5^{3}\times 7^{3}-19\) states are meaningless and can be ignored.
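To illustrate the state encoding, the following sketch enumerates the raw 6-tuple state space of one Primary/Backup host pair; the labels follow the definitions above, and the filtering of meaningless tuples (which leaves the 19 states actually used in the sub-model) is model-specific and therefore omitted here.

```python
from itertools import product

ACTIVE_STATES = ["H", "D", "R", "L/M", "F"]            # active MSF / VM / VMM
BACKUP_STATES = ["H", "D", "R", "F", "A", "BR", "BC"]  # backup MSF / VM / VMM

# One state of a Primary Metaverse host and its Backup host is the 6-tuple
# (active MSF, backup MSF, active VM, backup VM, active VMM, backup VMM).
raw_states = list(product(ACTIVE_STATES, BACKUP_STATES,
                          ACTIVE_STATES, BACKUP_STATES,
                          ACTIVE_STATES, BACKUP_STATES))

print(len(raw_states))   # 5**3 * 7**3 = 42875 raw combinations
```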
We assume that the aging event-occurring times are exponentially distributed. Other event-occurring times follow non-exponential distribution.
#### 3.2.2 RBD Sub-model
We model the serial and parallel MSFC as an RBD (Fig. 4). By using RBD, we can better analyze the behaviors of an MSFC consisting of multiple Primary Metaverse hosts and their Backup Metaverse hosts, and better understand how the behaviors of different components in the serial and parallel parts affect the MSFC dependability.
### _System Availability Analysis_
This section presents formulas for calculating the steady-state availability of an MSFC consisting of arbitrary number of MSFs. The details are as follows.
At first, we give the process to calculate the MSF steady-state availability.
Fig. 5 shows the kernel matrix \(\mathbf{K}(t)\). In this kernel matrix, the non-zero element \(k_{s_{i}s_{j}}(t)\) represents the conditional probability that the process, starting in state \(s_{i}\), will enter state \(s_{j}\) by time \(t\). The formulas for calculating the non-zero elements of this kernel matrix are given in Section A of the supplementary file.
Then, we construct the one-step transition probability matrix (TPM) \(\mathbf{P}=[p_{s_{i}s_{j}}]\) to describe the embedded discrete-time Markov chain (EDTMC) of the SMP sub-model. The elements of \(\mathbf{P}=\lim_{t\rightarrow\infty}\mathbf{K}(t)\) are given in Section A of the supplementary file.
Next, the steady-state probability vector \(\mathbf{V}=[V_{s_{i}}]\) of the EDTMC can be obtained by Equation (1). Here, \(e\) denotes the column vector where all entries are equal to 1.
\[\mathbf{V}=\mathbf{V}\mathbf{P}\quad\text{subject to}\quad\mathbf{V}e=1 \tag{1}\]
We give the calculation formula of \(V_{s_{i}}\) in Section A of the supplementary file.
Then, the mean sojourn time \(h_{s_{i}}\) in state \(s_{i}\) can be calculated according to Equation (2).
\[h_{s_{i}}=\int_{0}^{\infty}(1-G_{s_{i}}(t))\,dt \tag{2}\]
Fig. 4: RBD sub-model for MSFC systems
where the distribution function \(G_{i}(t)\) is used to represent the sojourn time distribution in state \(s_{i}\). We give the calculation formula of the mean sojourn time \(h_{s_{i}}\) in Section A of the supplementary file.
We can then obtain the steady-state probability \(\pi_{s_{i}}\) of state \(s_{i}\) according to Equation (3).
\[\pi_{s_{i}}=\frac{V_{s_{i}}h_{s_{i}}}{\sum_{s_{j}}V_{s_{j}}h_{s_{j}}} \tag{3}\]
where \(V_{s_{i}}\) and \(h_{s_{i}}\) can be obtained from Equations (1) and (2).
Finally, the steady-state availability \(\mathrm{A}_{w}\) of the \(w^{th}\) MSF is computed by Equation (4).
\[\mathrm{A}_{w}=1-(\pi_{s_{i}}+\pi_{s_{j}}+\pi_{s_{k}}) \tag{4}\]
where \(\pi_{s_{i}}\), \(\pi_{s_{j}}\), and \(\pi_{s_{k}}\) are the steady-state probabilities of the three states in which the MSF is unavailable.
Then, we give the formulas for calculating the steady-state availability of an MSFC.
For a serial MSFC with \(n\) MSFs, the calculation formula of the steady-state availability \(\mathrm{A}_{s}\) is shown in Equation (5), where \(\mathrm{A}_{w}\) is the steady-state availability of the \(w^{th}\) MSF. For a parallel MSFC with \(m\) serial MSFs and \(n-m\) parallel MSFs, the calculation formula of the steady-state availability \(\mathrm{A}_{p}\) is shown in Equation (6).
\[\mathrm{A}_{s}=\prod_{w=1}^{n}\mathrm{A}_{w} \tag{5}\]
\[\mathrm{A}_{p}=\Big(1-\prod_{w=m+1}^{n}(1-\mathrm{A}_{w})\Big)\prod_{w=1}^{m}\mathrm{A}_{w} \tag{6}\]
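As a worked sketch of Equations (1)-(6), the snippet below computes the steady-state availability of one MSF from an embedded-DTMC transition matrix P and mean sojourn times h, and then composes serial and parallel MSFCs; the matrices, sojourn times, and the set of unavailable states are illustrative placeholders, not the values given in the supplementary file.

```python
import numpy as np

def smp_steady_state(P, h):
    """Steady-state probabilities pi of an SMP from its EDTMC matrix P and
    mean sojourn times h (Eqs. (1)-(3)): solve V = V P with V e = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    V, *_ = np.linalg.lstsq(A, b, rcond=None)       # EDTMC stationary vector
    pi = V * h
    return pi / pi.sum()

def msf_availability(P, h, down_states):
    """Eq. (4): availability = 1 - sum of steady-state probabilities of down states."""
    pi = smp_steady_state(P, h)
    return 1.0 - pi[list(down_states)].sum()

def serial_availability(A_list):                     # Eq. (5)
    return float(np.prod(A_list))

def parallel_availability(A_serial, A_parallel):     # Eq. (6)
    return (1.0 - np.prod([1.0 - a for a in A_parallel])) * np.prod(A_serial)
```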
### System Reliability Analysis
This section presents the formulas for calculating the reliability of MSFC consisting of \(n\) MSFs. The classical reliability assessment metric, MTTF, is calculated in a scenario where recovery operations are not performed after MSFC system failure.
At first, we give the calculation process of MSF MTTF.
We construct a kernel matrix \(\mathbf{K}^{\prime}(t)\), which is derived from the matrix \(\mathbf{K}(t)\) in Section 3.3 by treating the failure states as absorbing states. The formulas for calculating the non-zero elements in this kernel matrix are given in Section B of the supplementary file.
Then we can obtain the one-step TPM \(\mathbf{P}^{\prime}\) describing the EDTMC in the SMP sub-model with absorbing states by the method described in the previous Section 3.3. \(\mathbf{P}^{\prime}\) and the formulas for calculating the non-zero elements in it are given in Section B of the supplementary file.
The expected number of visits \(V^{\prime}_{s_{i^{\prime}}}\) to state \(s_{i^{\prime}}\) until absorption is calculated by applying Equation (7).
\[V^{\prime}_{s_{i^{\prime}}}=\alpha_{s_{i^{\prime}}}+\sum_{j^{\prime}=0}^{15}V^{\prime}_{s_{j^{\prime}}}p^{\prime}_{s_{j^{\prime}}s_{i^{\prime}}} \tag{7}\]
where \(\alpha_{s_{i^{\prime}}}\) is the initial probability in state \(s_{i^{\prime}}\). \(V^{\prime}_{s_{0}}\) and \(V^{\prime}_{s_{i^{\prime}}}\) (\(1\leq i^{\prime}\leq 15\)) are given by Equations (8) and (9), respectively.
\[V^{\prime}_{s_{0}}=\frac{-1}{\sum_{i^{\prime}=1}^{15}p^{\prime}_{s_{0}s_{i^{\prime}}}p^{\prime}_{s_{i^{\prime}}s_{0}}-1} \tag{8}\]
\[V^{\prime}_{s_{i^{\prime}}}=\frac{-p^{\prime}_{s_{0}s_{i^{\prime}}}}{\sum_{i^{\prime}=1}^{15}p^{\prime}_{s_{0}s_{i^{\prime}}}p^{\prime}_{s_{i^{\prime}}s_{0}}-1} \tag{9}\]
The mean sojourn time \(h^{\prime}_{s_{i^{\prime}}}\) in state \(s_{i^{\prime}}\) is given in Section B of the supplementary file.
Finally, the MTTF of the \(w^{th}\) MSF, MTTF\({}_{w}\), is computed by Equation (10).
\[\mathrm{MTTF}_{w}=\sum_{i^{\prime}=0}^{15}V^{\prime}_{s_{i^{\prime}}}h^{\prime}_{s_{i^{\prime}}}\quad(w=1,2,3,...) \tag{10}\]
where \(V^{\prime}_{s_{i^{\prime}}}\) and \(h^{\prime}_{s_{i^{\prime}}}\) can be obtained from Equations (8), (9), and Section B of the supplementary file.
Then, we give the formulas of calculating the MTTF of MSFC.
For a serial MSFC with \(n\) MSFs, the formula for calculating the MTTF (MTTF\({}_{s}\)) is shown in Equation (11). For a parallel MSFC with \(m\) serial MSFs and \(n-m\) parallel MSFs, the formula for calculating the MTTF (MTTF\({}_{p}\)) is shown in Equation (12).
\[\mathrm{MTTF}_{s}=\min(\mathrm{MTTF}_{1},...,\mathrm{MTTF}_{w},...,\mathrm{MTTF}_{n}) \tag{11}\]
\[\mathrm{MTTF}_{p}=\min(\mathrm{MTTF}_{1},...,\mathrm{MTTF}_{m},\max(\mathrm{MTTF}_{m+1},\mathrm{MTTF}_{m+2},...,\mathrm{MTTF}_{n})) \tag{12}\]
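The reliability computation of Equations (7)-(12) can be sketched analogously: the expected numbers of visits before absorption come from a linear system, each MSF's MTTF is the visit-weighted sum of sojourn times, and the MSFC-level MTTF combines the per-MSF values; the transition matrix, sojourn times, and initial distribution below are again illustrative placeholders.

```python
import numpy as np

def expected_visits(P_transient, alpha):
    """Expected visits to each transient state before absorption (Eq. (7)):
    V' = alpha + V' P  <=>  V' (I - P) = alpha."""
    n = P_transient.shape[0]
    return np.linalg.solve((np.eye(n) - P_transient).T, alpha)

def msf_mttf(P_transient, h_transient, alpha):
    """Eq. (10): MTTF of one MSF as the visit-weighted mean sojourn time."""
    V = expected_visits(P_transient, alpha)
    return float(V @ h_transient)

def serial_mttf(mttf_list):                           # Eq. (11)
    return min(mttf_list)

def parallel_mttf(mttf_serial, mttf_parallel):        # Eq. (12): m serial MSFs
    return min(min(mttf_serial), max(mttf_parallel))  # and n - m parallel MSFs
```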
Fig. 5: Kernel matrix \(\mathbf{K}(\mathrm{t})\)
## 4 Experimental Evaluation
This section first conducts simulation experiments to verify our proposed model and formulas. Then, we conduct sensitivity analysis experiments and further conduct numerical analysis experiments for key parameters. Finally, we analyze the effects of MSF number, RTIs, backup components' behaviors and CDF types on the MSFC steady-state availability and MTTF.
### _Experiment Configuration_
TABLE II, TABLE III, and TABLE IV provide the default parameter settings and the CDF types for event-occurring times in the experiments. These settings and CDF types are utilized to showcase the effectiveness of our model and formulas. The default values of parameters are obtained from prior literature [20]. Additionally, our model also applies to other parameter settings and CDF types. The simulation and numerical experiments are performed in MAPLE [30].
MSFC dependability. An increase in the recovery time results in an increasing probability of component failure due to software aging during the recovery process, or an increased probability of the aging of other components, leading to a decrease in the MSFC dependability.
* Among the parameters calculated, \(\mu_{1}^{\mathrm{g}}\) (related to the time for the 1st Primary Metaverse host and its Backup Metaverse host to go from failure to robustness) and \(\mu_{1}^{\mathrm{RM}}\) (related to the time for re-starting/rebooting all MSFs, VMs and VMMs in the 1st Primary Metaverse host and its Backup Metaverse host) are the first and second most important parameters influencing the steady-state availability, respectively. On the other hand, \(\alpha_{1}^{\mathrm{res}}\) (related to the 1st active MSF failure time when the backup MSF is in an arbitrary state) is the most important parameter affecting the MTTF. MSFC dependability optimization should therefore focus on these crucial parameters, which have a significant influence on dependability.
Subsequently, we conduct numerical experiments based on the sensitivity analysis to establish the range of critical parameters required to ensure availability. Fig. 10 shows the steady-state availability of the serial MSFC over \(T_{1}^{\mathrm{res}}\) (the 1st active MSF aging time) and \(T_{1}^{\mathrm{g}}\) (the fix time of the 1st Primary Metaverse host and its Backup Metaverse host). We observe that when \(T_{1}^{\mathrm{res}}\) is 24 months and \(T_{1}^{\mathrm{g}}\) increases from 0.1 hours to 0.35 hours, the steady-state availability of the serial MSFC decreases from 0.99998208225559 to 0.999997684076184. The steady-state availability of the parallel MSFC over \(T_{1}^{\mathrm{res}}\) and \(T_{1}^{\mathrm{g}}\) is given in Section C of the supplementary file.
### Effect of the Number of MSFs on MSFC Dependability
In this section, we investigate the effect of the number of MSFs (\(n\)) on steady-state availability and MTTF of **serial and parallel MSFC**. In this experiment, we take _n_=4, _n_=5, and _n_=6 as examples. The experimental results are shown in Fig. 11 and Fig. 12. It can be observed from Fig. 11 that the steady-state availability of **serial MSFC** decreases as the number of MSFs increases and the **parallel MSFC** steady-state availability increases slightly as the number of parallel MSFs increases. We can observe from Fig. 12 that for **both serial and parallel MSFCs**, the MTTF decreases as the number of MSFs increases. It can be explained that the steady-state availability of MSFC is affected by the probability of the MSFC system being at available states and MTTF is affected by the intervals between failure event-occurring times.
### Effect of RTIs on MSFC Dependability
In this section, we set different RTIs and conduct numerical experiments analyzing the effect of RTIs on MSFC dependability. TABLE VI and TABLE VII show the numerical results of the steady-state availability and MTTF of the serial MSFC under different RTIs, respectively. The steady-state availability and MTTF of the parallel MSFC are given in Section C of the supplementary file. We can observe that:
TABLE VI: Serial MSFC steady-state availability over RTIs

TABLE VII: Serial MSFC MTTF over RTIs
* The serial MSFC can achieve the maximum steady-state availability of 0.999997978240037 at \((\alpha_{1}^{\text{s}},\alpha_{2}^{\text{s}},\alpha_{1}^{\text{m}})=(6,15,30)\). As the RTIs increase, the steady-state availability of the serial MSFC first increases and then decreases, because the available time of the serial MSFC system increases with the RTIs when the RTIs are below the optimal point, while the probability of the serial MSFC system entering unavailable states increases with the RTIs when they exceed the optimal point. At the same time, we can also observe that the MTTF of the serial MSFC reaches its maximum value of 167191.136707818 at \((\alpha_{1}^{\text{s}},\alpha_{2}^{\text{s}},\alpha_{1}^{\text{m}})=(0,0,0)\) and decreases as the RTIs increase.
* As the RTIs of the parallel components increase, the steady-state availability of the parallel MSFC increases, while as the RTIs of the serial components increase, the steady-state availability of the parallel MSFC first increases and then decreases. The MTTF of the parallel MSFC decreases as the RTIs increase and achieves its maximum value of 172180.601132625 at \((\alpha_{1}^{\text{s}},\alpha_{2}^{\text{s}},\alpha_{1}^{\text{m}})=(0,0,0)\). As the RTIs of the parallel components increase, the holding time during which the parallel MSFC system is available increases. Therefore, the steady-state availability of the parallel MSFC system increases with the RTIs of the parallel components. However, the probability of such a component failing, and of other components aging, before rejuvenation also increases, leading to a decrease in the MTTF of the parallel MSFC as the RTIs of the parallel components increase.
### Comparison of Our Model and the Model Without Considering Backup Components' Behaviors
In this section, we explore a scenario where backup components are not affected by software aging and failure. In order to capture the system behaviors in this scenario, we simplify the model presented in Section 3 by excluding the behaviors of backup components. Subsequently, we conduct numerical experiments to compare the proposed model with this simplified model. The experimental results are shown in Fig. 13 and Fig. 14. We can observe that the steady-state availability and MTTF under the model without considering backup component behaviors are greater than those under the model considering backup component behaviors. For example, when \(T_{1}^{\text{s}}\) (the fix time of the 1st Primary Metaverse host and its Backup Metaverse host) is 0.225 hours, under the model considering backup component behaviors the steady-state availability of the serial MSFC is 0.9999997946150803 and that of the parallel MSFC is 0.9999998969170332. Under the model without considering the behaviors of backup components, the steady-state availability of the serial MSFC is 0.999998940811332 and that of the parallel MSFC is 0.999999410993508. Therefore, whether or not the behaviors of backup components are considered has a significant impact on the outcomes.
Fig. 13: Comparison of our model with the model without considering backup components in terms of MSFC steady-state availability

Fig. 14: Comparison of our model with the model without considering backup components in terms of MSFC MTTF

### Effect of Cumulative Distribution Function (CDF) Types on MSFC Dependability
In this section, we set \(T_{1}^{\mathrm{n}}\) (the fix time of the 1st Primary Metaverse host and its Backup Metaverse host) to vary from 0.1 hours to 0.35 hours and perform numerical experiments analyzing the effect of CDF types on MSFC dependability. The steady-state availability and MTTF of **serial MSFC** and **parallel MSFC** under different CDF types are shown in Fig. 15 and Fig. 16, respectively. In these figures, 'S-F_HYPO_R_EXP' and 'P-F_HYPO_R_EXP' denote that all failure times follow the hypoexponential distribution and all recovery times follow the exponential distribution for serial and parallel MSFC, respectively. 'S-F_HYPO_R_Peter' and 'P-F_HYPO_R_Peter' denote that all failure times follow the hypoexponential distribution and all recovery times follow the deterministic distribution for serial and parallel MSFC, respectively. 'S-F_EXP_R_EXP' and 'P-F_EXP_R_EXP' denote that all failure times and recovery times follow the exponential distribution for serial and parallel MSFC, respectively. 'S-F_EXP_R_Peter' and 'P-F_EXP_R_Peter' denote that all failure times follow the exponential distribution and all recovery times follow the deterministic distribution for serial MSFC and parallel MSFC, respectively. We can observe that the CDF type of the failure time is an important factor for improving MSFC dependability.
## 5 Conclusion
In this paper, we develop a hierarchical model of an MSFC system consisting of \(n\) MSFs. The model comprises \(n\) SMP sub-models and an RBD sub-model. Each SMP sub-model describes the behaviors of components in a Primary Metaverse host and its Backup Metaverse host, and the RBD sub-model describes the composition of the \(n\) SMP sub-models. Then, we derive closed-form formulas for calculating the steady-state availability and reliability of an MSFC comprising an arbitrary number of MSFs. Finally, we evaluate the effect of system parameters, backup components' behaviors and CDF types on MSFC dependability, providing guidance for dependability optimization.
This paper assumes that an active component has a corresponding backup component. Nevertheless, it is possible that an active component has multiple corresponding backup components. In future work, we will explore the influence of the number of backup components on MSFC dependability.
|
2308.10890
|
Dust collapse in asymptotic safety: a path to regular black holes
|
Regular black hole spacetimes are obtained from an effective Lagrangian for
Quantum Einstein Gravity. The interior matter is modeled as a dust fluid, which
interacts with the geometry through a multiplicative coupling function denoted
as $\chi$. The specific functional form of $\chi$ is deduced from
Asymptotically Safe gravity, under the key assumption that the Reuter fixed
point remains minimally affected by the presence of matter. As a consequence
the gravitational coupling vanishes at high energies. The static exterior
geometry of the black hole is entirely determined by the junction conditions at
the boundary surface. Consequently, the resulting global spacetime geometry
remains devoid of singularities at all times. This result offers a novel
perspective on regular black holes in Asymptotically Safe gravity.
|
Alfio Bonanno, Daniele Malafarina, Antonio Panassiti
|
2023-08-21T17:44:26Z
|
http://arxiv.org/abs/2308.10890v1
|
# Dust collapse in asymptotic safety: a path to regular black holes
###### Abstract
Regular black hole spacetimes are obtained from an effective Lagrangian for Quantum Einstein Gravity. The interior matter is modeled as a dust fluid, which interacts with the geometry through a multiplicative coupling function denoted as \(\chi\). The specific functional form of \(\chi\) is deduced from Asymptotically Safe gravity, under the key assumption that the Reuter fixed point remains minimally affected by the presence of matter. As a consequence the gravitational coupling vanishes at high energies. The static exterior geometry of the black hole is entirely determined by the junction conditions at the boundary surface. Consequently, the resulting global spacetime geometry remains devoid of singularities at all times. This result offers a novel perspective on regular black holes in Asymptotically Safe gravity.
In the realm of general relativity, black holes (BH) are fascinating objects characterized by spacetime singularities concealed within an event horizon [1]. The occurrence of singularities makes a compelling case for the study of models beyond general relativity, where spacetime remains geodesically complete. One prominent approach to achieve this involves replacing the singularity with a regular patch of de Sitter space [2], an old concept that has garnered renewed attention in recent times. Much of the existing research in the literature derives regular black hole geometries through modifications of the static Misner-Sharp mass with the aim of rapid convergence to zero at small distances. This is the case for static regular BH metrics like the Poisson-Israel model [3], the Asymptotically Safe (AS) model [4], the Dymnikova regular black hole model [5; 6], or the Hayward metric [7], to name a few (see [8] for an extended review). However, endeavors to obtain these solutions from an underlying theory have encountered several challenges, and a consensus on embedding regular black hole solutions within a more general gravity theory remains elusive [9; 10].
An alternative route to obtaining modifications of the classical BH solutions is by matching a nonsingular homogeneous interior model, describing collapsing matter, to a static exterior black hole solution. This method has received considerable attention in recent years. The minisuperspace approximation in Loop Quantum Cosmology leads to an effective Friedman equation with repulsive (i.e., negative) gravity at high densities, resulting in a bouncing interior model [11; 12; 13; 14]. Conversely, assuming a Renormalization Group (RG) improved regular exterior allows for obtaining a singularity-free collapsing dust model as the interior solution [15]. Other methods obtain similar results relying on different approaches to quantization (see for example [16; 17; 18]). However, similarly to the above mentioned static solutions, deducing these models from an effective Lagrangian remains challenging. On the other hand numerical simulations of the formation of regular BH are often limited to a 2D dilaton gravity model [19; 20] and the generalization to a 4D model is problematic unless a physically motivated Lagrangian formulation is found.
Within this study, we present a resolution to this quandary by extending an initial idea by Markov and Mukanov [21]. Our approach involves formulating gravity's antiscreening behavior in ultraplanckian energy domains [22] through the inclusion of a multiplicative coupling with the matter Lagrangian. The structure of this coupling is guided by the Reuter fixed point of AS gravity [23]. Remarkably, both the energy-momentum and the effective energy-momentum tensors are conserved in this theory, in contrast to what happens in most approaches mentioned above. Notably, under conditions of low energy, our model seamlessly recovers the equations of standard general relativity. Also, over extensive distances, the solution for black holes bears resemblance to the Schwarzschild solution.
To be more specific, let us consider a matter fluid with a proper density \(\epsilon\), characterized by a 4-velocity \(u^{\mu}\) such that \(u_{\mu}u^{\mu}=-1\), and a rest-mass density \(\rho\). The mass continuity equation is expressed as \((\rho u^{\mu})_{;\mu}=0\), and for a non-dissipative fluid, the relative variations of density are related as \(\delta\rho/\rho=\delta\epsilon/(p(\epsilon)+\epsilon)\).
Following the approach in [21], we introduce the action for our system as follows:
\[S=\frac{1}{16\pi G_{N}}\int d^{4}x\sqrt{-g}\left[R+2\chi(\epsilon)\mathcal{L} \right]. \tag{1}\]
Here, \(\mathcal{L}=-\epsilon\) represents the matter Lagrangian, and the function \(\chi=\chi(\epsilon)\) serves as a multiplicative gravity-matter coupling with the important property \(\chi(\epsilon=0)=8\pi G_{N}\). The metric variation of the matter part of the Lagrangian yields
\[\frac{1}{\sqrt{-g}}\,\delta\left(2\,\sqrt{-g}\,\chi\,\epsilon\right)=2\frac{ \partial(\chi\epsilon)}{\partial\epsilon}\delta\epsilon-\chi\,\epsilon\,g_{ \mu\nu}\,\delta g^{\mu\nu}. \tag{2}\]
Note that the variation of \(\rho\) under a change of the metric is given by [24]:
\[\delta\rho=\frac{\rho}{2}(g_{\mu\nu}+u_{\mu}u_{\nu})\delta g^{\mu\nu}. \tag{3}\]
As a result, the total variation of the action (1) leads to the following field equations:
\[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=\frac{\partial(\chi\epsilon)}{\partial \epsilon}T_{\mu\nu}+\frac{\partial\chi}{\partial\epsilon}\epsilon^{2}g_{\mu \nu}\equiv T_{\mu\nu}^{\rm eff}, \tag{4}\]
where
\[8\pi G(\epsilon)=\frac{\partial(\chi\epsilon)}{\partial\epsilon},\quad\Lambda( \epsilon)=-\frac{\partial\chi}{\partial\epsilon}\epsilon^{2}, \tag{5}\]
represent the effective Newton constant and cosmological constant, respectively. Here, \(T_{\mu\nu}=(\epsilon+p(\epsilon))u_{\mu}u_{\nu}+pg_{\mu\nu}\) is conserved. For spherically homogeneous collapse the metric functions in the diagonal line element in co-moving coordinates \(\{t,r,\theta,\phi\}\) are \(g_{00}=-e^{2\nu(r,t)}\), \(g_{11}=e^{2\psi(r,t)}\), and \(g_{22}=C^{2}(r,t)\). Then we can express the field equations (4) as follows
\[\frac{F_{\rm eff}^{\prime}}{C^{2}C^{\prime}}=8\pi G(\epsilon)\epsilon+\Lambda(\epsilon)=\chi(\epsilon)\epsilon\equiv\epsilon_{\rm eff} \tag{6}\]
\[-\frac{\dot{F}_{\rm eff}}{C^{2}\dot{C}}=8\pi G(\epsilon)p-\Lambda(\epsilon)\equiv p_{\rm eff}, \tag{7}\]
and
\[\dot{C}^{\prime}=\dot{C}\nu^{\prime}+C^{\prime}\dot{\psi}. \tag{8}\]
Note that \(\epsilon_{\rm eff}>0\) always, while \(p_{\rm eff}\) can become negative. Dotted quantities represent derivatives with respect to the comoving time (\(t\)), while prime denotes derivatives with respect to the comoving radial coordinate (\(r\)). The function \(F_{\rm eff}\) represents the effective Misner-Sharp mass of the system, which can be defined in analogy with the Schwarzschild mass \(M\) from \(1-F_{\rm eff}/C=g_{\mu\nu}\nabla^{\mu}C\nabla^{\nu}C\)[25], and it is given by:
\[F_{\rm eff}=C(1-C^{\prime 2}e^{-2\psi}+\dot{C}^{2}e^{-2\nu}). \tag{9}\]
Additionally, the Bianchi identity takes the form:
\[\nu^{\prime}=-\frac{p_{\rm eff}^{\prime}}{\epsilon_{\rm eff}+p_{\rm eff}}. \tag{10}\]
In the case of a homogeneous perfect fluid, we have \(\epsilon(t)\) and \(p(t)\), which in turn imply \(\epsilon_{\rm eff}(t)\) and \(p_{\rm eff}(t)\). As a consequence of the Bianchi identity, we can set \(\nu=0\) and integrate equation (8) to obtain \(e^{2\psi}=C^{\prime 2}/(1-Kr^{2})\), where \(K\) is the integration constant related to the curvature of the 3-space. The line element can then be written as:
\[ds^{2}=-dt^{2}+\frac{C^{\prime 2}}{1-Kr^{2}}dr^{2}+C^{2}d\Omega^{2}, \tag{11}\]
where \(d\Omega^{2}\) represents the metric on the unit 2-sphere. By rescaling the area-radius function \(C\) using the adimensional scale factor \(a\) according to \(C=ra\), the energy-momentum conservation equation gives
\[d\epsilon+3(p(\epsilon)+\epsilon)d\ln a=0. \tag{12}\]
Following [26] we rescale also the effective Misner-Sharp mass as \(F_{\rm eff}=r^{3}m_{\rm eff}\). We can then rewrite equations (6) and (7) as follows:
\[\epsilon_{\rm eff}=\frac{3m_{\rm eff}}{a^{3}},\qquad p_{\rm eff}=-\frac{m_{\rm eff,a}}{a^{2}}, \tag{13}\]
where by \(X_{,a}\) we indicate derivatives of \(X\) with respect to \(a\), and equation (9) becomes:
\[m_{\rm eff}=a(\dot{a}^{2}+K). \tag{14}\]
Here, \(m_{\rm eff}(a)=-aV(a)\), where the potential \(V(a)\) reads
\[V(a)=-\frac{a^{2}}{3}\int_{0}^{\epsilon(a)}G(s)ds. \tag{15}\]
In principle, if we know \(G(\epsilon)\) from a fundamental theory, from (5) it is possible to determine \(\chi\) and close the system.
In this work, we assume that the behavior of \(G\) as a function of the energy scale is governed by a renormalization group trajectory close to the ultraviolet (UV) fixed point of the AS program [23; 27; 28; 29; 30; 31; 32; 33]. Specifically, we adopt the approximate running of \(G\) as proposed in [34] in the limit of Quantum Einstein Gravity
\[G(k)=\frac{G_{N}}{1+G_{N}k^{2}/g_{*}}, \tag{16}\]
where \(k\) represents the IR regulator scale, and \(g_{*}=570\pi/833\) is the UV fixed point. To connect \(k\) to \(\epsilon\)
we require a prescription. The analysis in [34] demonstrates that the UV scaling of the physical Newton constant \(G_{N}(q)\), where \(q\) is the external momentum, is in essence the same as predicted by the renormalized coupling \(G(k)\). The only distinction lies in the crossover scale, a non-universal feature of the flow (see particularly Fig.4 in [34]). Hence, identifying the cutoff scale \(k\) with the characteristic energy scale of the system should capture the qualitative features of the physical flow [35] at distances \(d\sim 1/k\), consistent with early applications of the AS scenario. However, it is essential to acknowledge that the key assumption underlying equation (16) is that the presence of matter does not significantly deform the flow and compromise the fixed point [36]. This assumption is crucial in maintaining the integrity of the renormalization group trajectory and ensuring its consistency in the presence of matter.
We now consider the scenario of dust collapse, therefore taking \(p=0\), \(\epsilon\propto a^{-3}\), and \(\rho=\epsilon\). In accordance with the rationale presented in [4], we interpret the variable \(d\) as the proper distance. As a consequence we find the relationship:
\[d\sim\frac{r^{3/2}}{\sqrt{m}}\sim\frac{1}{\sqrt{\epsilon}}, \tag{17}\]
where \(r\) is the radial distance. We thus obtain the following expression for the behavior of \(G(\epsilon)\):
\[G(\epsilon)=\frac{G_{N}}{1+\xi\epsilon}, \tag{18}\]
where we introduce the dimensionful scale \(\xi\), and we include the pure number \(g_{*}\) in the definition of \(\xi\). It is important to note that, in general, we would expect \(\xi\sim 1/m_{\rm pl}^{4}\), but currently, there is no clear method to determine \(\xi\) from first principles, and this parameter should be constrained from observations. Setting \(8\pi G_{N}=1\), we obtain:
\[\chi(\epsilon)=\frac{\log(1+\xi\epsilon)}{\xi\epsilon},\quad\Lambda(\epsilon )=\frac{\log(1+\xi\epsilon)}{\xi}-\frac{\epsilon}{1+\xi\epsilon}. \tag{19}\]
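The expressions in (19), and their classical limit, can be checked symbolically starting from (5) and the running coupling (18); a minimal SymPy sketch (the symbol names are ours):

```python
import sympy as sp

eps, xi, s = sp.symbols('epsilon xi s', positive=True)

G = 1 / (1 + xi * eps)                                    # Eq. (18) with 8*pi*G_N = 1
chi = sp.integrate(G.subs(eps, s), (s, 0, eps)) / eps     # from d(chi*eps)/d(eps) = 8*pi*G, chi*eps -> 0 at eps = 0
Lam = -sp.diff(chi, eps) * eps**2                         # Eq. (5)

print(sp.simplify(chi - sp.log(1 + xi * eps) / (xi * eps)))                    # 0: matches Eq. (19)
print(sp.simplify(Lam - (sp.log(1 + xi * eps) / xi - eps / (1 + xi * eps))))   # 0: matches Eq. (19)
print(sp.limit(chi, xi, 0), sp.limit(Lam, xi, 0))                              # classical limit: 1, 0
```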
Importantly, in the classical limit, achieved for \(\xi\to 0\), we recover \(\chi=1\) and \(\Lambda=0\), as expected. Figure 1 illustrates the potential \(V(a)\) for the running coupling defined in equation (18), as compared to the Oppenheimer-Snyder-Datt (OSD) case. Reducing our formalism to the OSD model [37; 38], which describes non-interacting dust particles with \(p=0\), we have \(T_{\mu\nu}=\epsilon u_{\mu}u_{\nu}\), \(\epsilon=3m_{0}/a^{3}\) and \(m_{\rm eff}\to m_{0}\). Equation (15) yields \(V(a)=-m_{0}/a\) recovering the usual equation for the scale factor of homogeneous dust:
\[\dot{a}=-\sqrt{\frac{m_{0}}{a}-K}. \tag{20}\]
It is worth noting that bound collapse is obtained for \(K>0\), while the marginally bound case has \(K=0\). Interestingly, to achieve singularity resolution at the end of the collapse, the usual energy conditions must be violated. In many models available in the literature, the growth of negative effective pressures leads to repulsive effects that halt the collapse. However, in our case, such a term is due to the running cosmological constant, which generates repulsive effects, allowing the collapse to come to a halt. For \(\xi\neq 0\) and (18), we obtain:
\[\dot{a}=-\sqrt{\frac{\log(1+3m_{0}\xi/a^{3})}{3\xi}a^{2}-K}. \tag{21}\]
The solution to this equation for marginally bound collapse (\(K=0\)) is shown in Figure 2 and compared with the OSD model and the semi-classical collapse model developed in [39]. At large times, the scale factor behaves as:
\[a(t)\sim e^{-t^{2}/4\xi},\quad t\to\infty, \tag{22}\]
indicating that \(a=0\) is never reached at any finite time, and the spacetime is geodesically complete. As \(t\) approaches infinity, the scale factor tends to diminish exponentially, but it never reaches zero, ensuring the avoidance of singularities within any finite time frame.
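The behavior shown in Figure 2 and the late-time law (22) can be reproduced by integrating (21) numerically. The sketch below is a simple forward-Euler integration written in terms of \(\ln a\) (so that \(a\) stays strictly positive); the parameter values are the illustrative ones of Figure 2, and the discretization is not meant to be optimal.

```python
import numpy as np

# Forward-Euler integration of Eq. (21) for K = 0, written for x = ln a to keep a > 0.
m0, xi = 1.0, 0.01       # illustrative values, as in Figure 2
dt, T = 1.0e-4, 2.0
x = 0.0                  # a(0) = 1
for _ in range(int(T / dt)):
    x -= dt * np.sqrt(np.log1p(3.0 * m0 * xi * np.exp(-3.0 * x)) / (3.0 * xi))

print(np.exp(x))                  # a(T): tiny but strictly positive
print(np.exp(-T**2 / (4 * xi)))   # leading late-time estimate from Eq. (22), for comparison
```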
To implement the matching of the collapsing matter cloud described above with a suitable exterior, we employ the formalism developed by Israel in [40], which was further refined by Senovilla and others in [41; 42]. We consider the matching across a comoving boundary \(r=r_{b}\) in the interior, which corresponds to a collapsing boundary \(C_{b}(t)=C(t,r_{b})=r_{b}a(t)\). The induced metric on the matching surface \(\Sigma\) in comoving coordinates can be expressed as:
\[ds_{\Sigma}^{2}=-dt^{2}+r_{b}^{2}a^{2}d\Omega^{2}. \tag{23}\]
For the exterior, we consider a generic static and spherically symmetric line element in \(\{T,R,\theta,\phi\}\) co
Figure 1: The thick line shows the potential \(V(a)\) for dust collapse in the AS model with running \(G\) given by equation (18). For comparison the OSD model with \(8\pi G_{N}=1\) is shown as the dotted line. The parameters are fixed for illustrative purposes to \(m_{0}=1\) and \(\xi=0.001\).
ordinates written as:
\[ds^{2}=-f(R)dT^{2}+\frac{1}{f(R)}dR^{2}+R^{2}d\Omega^{2}, \tag{24}\]
where \(f=1-2M(R)/R\), and we assume a continuous matching between the two geometries. The continuity condition uniquely determines the form of \(M(R)\) in the exterior. Specifically, if the collapsing boundary is parametrized by \(R=R_{b}(T)\), the induced metric on the boundary becomes:
\[ds^{2}_{\Sigma}=-\left[f(R_{b})-f(R_{b})^{-1}\left(\frac{dR_{b}}{dT}\right)^{2 }\right]dT^{2}+R_{b}^{2}d\Omega^{2}. \tag{25}\]
The matching conditions for the metric functions on the boundary surface \(\Sigma\) immediately provide the relation between \(t\) and \(T\) on \(\Sigma\) and the condition \(R_{b}(T(t))=r_{b}a(t)\). The second fundamental form for the interior metric in comoving coordinates is
\[K^{-}_{tt}=0,\quad K^{-}_{\theta\theta}=r_{b}a\sqrt{1-Kr_{b}^{2}}. \tag{26}\]
From the extrinsic curvature on the exterior we obtain
\[K^{+}_{tt}=-\frac{1}{2}\frac{2\tilde{R}_{b}+f_{,R}(R_{b})}{\Delta(R_{b})}, \quad K^{+}_{\theta\theta}=R_{b}\Delta(R_{b}), \tag{27}\]
with \(\Delta(R_{b})=\sqrt{1-2M(R_{b})/R_{b}+\dot{R}_{b}^{2}}\), so that on imposing
\[[K_{tt}]=K^{+}_{tt}-K^{-}_{tt}=0,\quad[K_{\theta\theta}]=K^{+}_{\theta\theta} -K^{-}_{\theta\theta}=0 \tag{28}\]
the functional form of \(M(R)\) can be obtained. Finally, we arrive at the most important result of our investigation:
\[M(R)=\frac{R^{3}}{6\xi}\log\left(1+\frac{6M_{0}\xi}{R^{3}}\right), \tag{29}\]
where the matching implies \(m_{0}r_{b}^{3}=2M_{0}\). This expression describes the Misner-Sharp mass pertaining to the static exterior, originating from an interior undergoing gravitational collapse. The dynamics of this interior are guided by an effective Lagrangian that incorporates the Asymptotically Safe (AS) nature of gravitational interaction at Planckian energy scales.
Importantly, the classical limit is recovered for \(\xi\to 0\) (or equivalently for \(R\to\infty\)), leading to:
\[M(R)=M_{0}-\frac{3M_{0}^{2}\xi}{R^{3}}+\frac{12M_{0}^{3}\xi^{2}}{R^{6}}+O(\xi ^{3}), \tag{30}\]
as expected. In the low-energy limit, the Schwarzschild solution is regained. Notably, in the small \(R\) regime, \(M(R)\) behaves like
\[M(R)=\frac{1}{6\xi}R^{3}\log\left(\frac{6M_{0}\xi}{R^{3}}\right)+\frac{R^{6}} {36M_{0}\xi^{2}}+O\left(R^{7}\right) \tag{31}\]
moreover, as \(R\geq R_{b}=r_{b}a(t)\) and \(a(t)>0\) always, our solution remains everywhere regular, avoiding any singularities. This result is of significant importance as it demonstrates the compatibility of the collapsing matter interior with our AS effective Lagrangian model in producing a regular black hole exterior.
The horizon's position is determined by solving the transcendental equation \(f(R)=0\), which can yield intriguing results. Notably, for any specified value of \(M_{0}\), a critical threshold \(\xi_{\rm cr}\) exists. When \(\xi<\xi_{\rm cr}\), an event horizon becomes evident, accompanied by an inner horizon at smaller values of the radial coordinate. However, for \(\xi>\xi_{\rm cr}\), a scalar remnant emerges, as illustrated in Figure 3 for \(M_{0}=1\).
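The horizon structure can be explored by locating the roots of \(f(R)=1-2M(R)/R\) with \(M(R)\) given by (29). A minimal numerical sketch, using the illustrative values of Figure 3 (the grid resolution and search interval are arbitrary choices of ours):

```python
import numpy as np

def f_metric(R, M0=1.0, xi=0.1):
    """f(R) = 1 - 2 M(R)/R with the Misner-Sharp mass M(R) of Eq. (29)."""
    return 1.0 - R**2 / (3.0 * xi) * np.log1p(6.0 * M0 * xi / R**3)

R = np.linspace(0.05, 4.0, 20000)
for xi in (0.1, 0.45, 0.6):          # below, near, and above the critical value
    vals = f_metric(R, xi=xi)
    idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    print(xi, R[idx])                # approximate horizon radii; an empty array signals a remnant
```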
Figure 2: The thick line shows the scale factor \(a(t)\) for marginally bound (\(K=0\)) dust collapse in the AS model with running gravitational and cosmological constants, solution of equation (21). For comparison the OSD collapse model is shown as the dotted line and the semi-classical collapse leading to a regular black hole developed in [39] is shown as the dashed line. The parameters are fixed for illustrative purposes to \(m_{0}=1\) and \(\xi=0.01\).

Figure 3: The behavior of the metric function \(f(R)\) for different values of the parameter \(\xi\) for \(M_{0}=1\). For \(\xi<\xi_{cr}\) there are an inner and an outer horizon, corresponding to the two solutions of \(f(R)=0\) (dashed curve for \(\xi=0.1\)), and for \(\xi=\xi_{cr}\approx 0.45\) there is only one horizon (thick curve). For \(\xi>\xi_{cr}\) the two horizons disappear and one is left with a scalar remnant (dotted curve for \(\xi=0.6\)).

We expect, for the global solution, that the evolution of the causal structure in the interior matches the formation of the causal structure of the exterior geometry. The condition for the formation of trapped surfaces in the interior may be obtained from
\[1-\frac{F_{\rm eff}}{C}=1-r^{2}(\dot{a}^{2}+K)=0, \tag{32}\]
which implicitly gives the curve \(r_{\rm ah}(t)\) describing the comoving time \(t\) at which the shell \(r\) becomes trapped:
\[r_{\rm ah}(t)=\frac{1}{\sqrt{\dot{a}^{2}+K}}. \tag{33}\]
It is clear that in the OSD case as \(\dot{a}\) diverges the apparent horizon curve tends to zero. On the other hand in the AS case we have that \(\dot{a}\) goes to zero asymptotically and therefore \(r_{\rm ah}\to 1/\sqrt{K}\). In the marginally bound case, for any given \(\xi\) we have that \(r_{\rm ah}\) reaches a minimum value \(r_{\rm min}\) and then grows to infinity. Then there are values of the boundary \(r_{b}>r_{\rm min}\) that lead to \(r_{ah}\) crossing the boundary twice thus creating two horizons in the exterior. Accordingly there is a critical value \(r_{b}=r_{\rm min}\) for which only one horizon exists and for values smaller than \(r_{\rm min}\) no horizon forms throughout collapse. The apparent horizon curve is shown in Figure 4.
Our solution represents a significant alternative to present models of regular black holes. It is built upon the assumption that black hole solutions observed in Nature are sourced by a matter interior whose evolution is non-singular due to the antiscreening of the gravitational constant at small distances [22], according to a specific renormalization group trajectory terminating at the Reuter fixed point of AS gravity. This mechanism is implemented using an effective Lagrangian that incorporates a multiplicative coupling with the matter component. Although in this work, we considered an idealized model of matter consisting of a pressureless fluid, our framework can be generalized to incorporate more realistic equations of state and more accurate RG trajectories, providing a consistent description of the matter component. We intend to address these issues in future investigations.
## Acknowledgement
DM would like to thank Catania Astrophysical Observatory - INAF- for warm hospitality during the preparation of the manuscript. DM acknowledges support from Nazarbayev University Faculty Development Competitive Research Grant No. 11022021FD2926.
|
2306.06586
|
Two novel numerical methods for gradient flows: generalizations of the
Invariant Energy Quadratization method
|
In this paper, we conduct an in-depth investigation of the structural
intricacies inherent to the Invariant Energy Quadratization (IEQ) method as
applied to gradient flows, and we dissect the mechanisms that enable this
method to uphold linearity and the conservation of energy simultaneously.
Building upon this foundation, we propose two methods: Invariant Energy
Convexification and Invariant Energy Functionalization. These approaches can be
perceived as natural extensions of the IEQ method. Employing our novel
approaches, we reformulate the system connected to gradient flow, construct a
semi-discretized numerical scheme, and obtain a commensurate modified energy
dissipation law for both proposed methods. Finally, to underscore their
practical utility, we provide numerical evidence demonstrating these methods'
accuracy, stability, and effectiveness when applied to both Allen-Cahn and
Cahn-Hilliard equations.
|
Yukun Yue
|
2023-06-11T04:37:19Z
|
http://arxiv.org/abs/2306.06586v1
|
# Two novel numerical methods for gradient flows: generalizations of the invariant energy quadratization method
###### Abstract.
In this paper, we conduct an in-depth investigation of the structural intricacies inherent to the Invariant Energy Quadratization (IEQ) method as applied to gradient flows, and we dissect the mechanisms that enable this method to uphold linearity and the conservation of energy simultaneously. Building upon this foundation, we propose two methods: Invariant Energy Convexification and Invariant Energy Functionalization. These approaches can be perceived as natural extensions of the IEQ method. Employing our novel approaches, we reformulate the system connected to gradient flow, construct a semi-discretized numerical scheme, and obtain a commensurate modified energy dissipation law for both proposed methods. Finally, to underscore their practical utility, we provide numerical evidence demonstrating these methods' accuracy, stability, and effectiveness when applied to both Allen-Cahn and Cahn-Hilliard equations.
Y.Y.'s work was supported in part by NSF award DMS-1912854.
## 1. Introduction
Gradient flows are a class of partial differential equations (PDEs) that arise in various scientific and engineering fields, such as fluid dynamics, materials science, and optimization, as demonstrated by [4, 10, 14, 18, 19, 30, 31, 45]. These equations model the evolution of a given quantity under the influence of a driving force derived from an energy functional. The study of gradient flows has attracted considerable attention in recent years due to their wide applicability and inherent mathematical structures. In this paper, we present two innovative numerical methods for gradient flows that extend the recently introduced Invariant Energy Quadratization (IEQ) method. These methods generalize the pre-existing method, leveraging its mathematical structure while preserving its favorable properties, such as energy stability and the ability to construct efficient linear schemes for solving the problems.
To illustrate, let's consider a free energy functional \(E(\phi)=\int_{\Omega}\rho\left(\phi(x)\right)\,dx\), where \(\rho\left(\phi(x)\right)=\left[\frac{1}{2}|\nabla\phi|^{2}+F(\phi)\right]\) serves as the energy density function of \(E(\phi)\). The corresponding gradient flow can then be formulated as
\[\frac{\partial\phi}{\partial t}=\mathcal{G}\mu, \tag{1.1a}\]
\[\mu=\frac{\delta E}{\delta\phi}=-\Delta\phi+\frac{\delta F}{\delta\phi}:=-\Delta\phi+f(\phi), \tag{1.1b}\]
with \(\phi(0)=\phi_{0}\) as its initial condition. In this context, \(\mathcal{G}=-I\), with \(I\) the identity operator, when we are considering a gradient flow in \(L^{2}\), and \(\mathcal{G}=\Delta\), the Laplacian, when we are investigating a gradient flow in \(H^{-1}\)[3, 36].
Gradient flows have a rich mathematical structure and are related to several important concepts in mathematics, such as the Wasserstein distance, optimal transport, and convex optimization. There has been a surge of interest in unearthing its mathematical properties, with notable contributions made by [13, 15, 37, 35]. Concurrently, researchers have been striving to establish various numerical methods to resolve gradient flow problems. These efforts have given rise to techniques such as the convex-splitting method [6, 23, 38, 43], the Invariant Energy Quadratization (IEQ) [7, 25, 26, 46, 48, 52], and the Scalar Auxiliary Variable (SAV) methods [39, 40, 41, 42]. These methods allow for the development of efficient schemes for solving gradient flows, and hold the energy-stable property, assuring the numerical solutions maintain certain physical and mathematical properties of the continuous problem.
The IEQ method, particularly, has gained popularity as a numerical method in recent years. It introduces an auxiliary variable whose square equals the (suitably shifted) energy density function, which is possible whenever the energy density function is bounded from below. It is based on the idea of conserving certain invariants of the continuous problem in the discrete setting. As an illustration, one could define
\[r(\phi)=\sqrt{F(\phi)+A_{1}},\]
with \(A_{1}\) as a constant ensuring \(r\) to be well-defined. Provided that \(F\) is bounded from below, an appropriate constant \(A_{1}\) can always be found. Consequently, system (1.1) can be reformulated as
\[\frac{\partial\phi}{\partial t}=\mathcal{G}\mu, \tag{1.2a}\]
\[\mu=-\Delta\phi+2rP, \tag{1.2b}\]
\[r_{t}=P\phi_{t}, \tag{1.2c}\]
\[P=\frac{\frac{\delta F}{\delta\phi}}{2\sqrt{F(\phi)+A_{1}}} \tag{1.2d}\]
The chain rule can be conveniently applied to the auxiliary variable \(r\) to validate this reformulation. This approach has successfully facilitated the construction of linear numerical schemes for various gradient-flow type problems, including the Cahn-Hilliard equation [47, 49], the Ericksen-Leslie model [12] and Beris-Edwards model for liquid crystals [24, 50, 53], and the sine-Gordon equation [21, 27]. For instance, a linear numerical scheme can be constructed by treating \(r\) implicitly and \(P\) explicitly. Specifically, we have:
\[\frac{\phi^{n+1}-\phi^{n}}{\Delta t}=\mathcal{G}\mu^{n+1}, \tag{1.3a}\]
\[\mu^{n+1}=-\Delta\phi^{n+1}+2r^{n+1}P^{n}, \tag{1.3b}\]
\[r^{n+1}-r^{n}=P^{n}(\phi^{n+1}-\phi^{n}), \tag{1.3c}\]
\[P^{n}=\frac{\frac{\delta F}{\delta\phi}(\phi^{n})}{2\sqrt{F(\phi^{n})+A_{1}}}. \tag{1.3d}\]
The resulting numerical scheme is characterized by a modified energy dissipation law (see, for instance, [53]). One of the notable strengths of the IEQ method is its energy-stable property, which ensures a monotonic decrease of the discrete energy functional along the trajectory of the numerical solution, akin to the behavior observed in the continuous problem. This property plays a vital role in the long-term behavior of the numerical solution and the preservation of inherent physical and mathematical structures.
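To make the structure of scheme (1.3) concrete, the following sketch performs one IEQ step for the one-dimensional Allen–Cahn equation, taking \(\mathcal{G}=-I\), \(F(\phi)=\frac{1}{4}(\phi^{2}-1)^{2}\), \(A_{1}=1\), and a periodic finite-difference Laplacian; these discretization choices are ours for illustration and are not the setup of Section 4.

```python
import numpy as np

# One step of the IEQ scheme (1.3) for the 1D Allen-Cahn equation:
# G = -I, F(phi) = (phi^2 - 1)^2 / 4, A_1 = 1, periodic finite differences.
N, dt = 128, 1.0e-3
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
h = x[1] - x[0]

L = np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
L[0, -1] = L[-1, 0] = 1.0
L /= h**2                                        # discrete Laplacian (periodic)

phi = 0.05 * np.cos(x)
r = np.sqrt(0.25 * (phi**2 - 1.0) ** 2 + 1.0)    # r = sqrt(F + A_1)
P = (phi**3 - phi) / (2.0 * r)                   # (1.3d)

# Eliminate r^{n+1} with (1.3c); the remaining system for phi^{n+1} is linear.
A = np.eye(N) - dt * L + 2.0 * dt * np.diag(P**2)
b = phi + 2.0 * dt * P**2 * phi - 2.0 * dt * r * P
phi_new = np.linalg.solve(A, b)
r_new = r + P * (phi_new - phi)                  # (1.3c)

energy = lambda p, rr: 0.5 * h * (-p @ (L @ p)) + h * np.sum(rr**2)
print(energy(phi_new, r_new) <= energy(phi, r))  # the discrete modified energy should not increase
```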
The power of the IEQ method fundamentally lies in its innovative decomposition of the nonlinear term into a product of two functions, specifying which part should be treated implicitly. This quadratization aligns seamlessly with a common form of the energy term encountered in physical processes, as evidenced by numerous scientific studies. In addition, the quadratization induces a linear term, leading to a linear scheme. However, the energy term in various natural scientific systems need not be confined to a quadratic form. This observation prompts us to ask whether other formulations exist that preserve the advantageous properties of the IEQ method. In other words, we are interested in discovering other natural decompositions of the nonlinear term that can be used to construct energy-stable, linear numerical schemes.
In this paper, we propose two novel strategies for extending the Invariant Energy Quadratization (IEQ) method, namely the Invariant Energy Convexification (IEC) and the Invariant Energy Functionalization (IEF) approaches. Both methods are based on introducing an auxiliary variable originating from using different functions to replace the original energy density function. Moreover, they both induce a modified energy dissipation law within the reformulation of the original gradient-flow type system. Importantly, these two methods retain the advantageous ability to induce efficient linear numerical schemes that rival the efficacy of the IEQ method. A noteworthy contribution of these proposed approaches is their potential to accommodate various functions as auxiliary variables during the transformation process. This flexibility suggests the possibility of identifying an optimal form for implementing the method tailored to the specifics of a given physical system.
The organization of the remainder of this paper is as follows: In Section 2, we introduce the Invariant Energy Convexification (IEC) method alongside a new reformulation of the system as described in equation (1.1). We shall proceed to derive a semi-discrete numerical scheme and verify the stability of these numerical schemes under suitably modified energy. Likewise, in Section 3, we outline the Invariant Energy Functionalization (IEF) method, present a corresponding numerical scheme, and confirm its energy stability. In Section 4, we substantiate the accuracy and efficacy of our proposed methods through a series of numerical experiments, providing compelling evidence of their utility.
## 2. The Invariant Energy Convexification
In this section, we introduce our first numerical method that expands upon the core principles of the IEQ method. As it has been stated in the introduction, the IEQ method has established itself as a potent and effective technique for addressing a broad class of problems. At the heart of the IEQ method is the introduction of an auxiliary variable representing the square root of the energy density function. The incorporation of this auxiliary variable enables the method to more effectively manage the non-linear terms that emerge in various problems. Specifically, the IEQ method handles a portion of the resulting non-linear term implicitly while maintaining the remainder in an explicit form. This blend of implicit and explicit treatment is a distinguishing feature of the IEQ method and contributes to its efficacy in constructing linear numerical schemes. A critical question arising from this approach is whether the decomposition of the non-linear term in the IEQ method is unique or if it can be further generalized. This will be discussed in the following.
### Reformulation with L-smooth convex function
To find a more general class of functions that share the advantages of the IEQ method, we need to figure out the nature of its effectiveness first. We notice that the energy-conserving property mainly relies on the convexity of the auxiliary function used to replace the original energy density function. Meanwhile, the linearity relies on the fact that the derivative of a quadratic function is a linear function. Therefore, it enlightens us to find a class of convex functions to introduce the auxiliary variable and look for a possible linear approximation of its derivative to construct the numerical scheme. To investigate this question, we propose a new formulation designed to imitate the IEQ formulation's structure and broaden its applicability while preserving its essential advantages. As a result, we discover that quadratization is not the sole viable option for variable transformation; a particular class of convex functions can also pave the way to an energy-stable linear scheme.
Specifically, let us presume the energy density function \(F\) is bounded from below. We will take \(c:\mathbb{R}\to\mathbb{R}\) as a smooth convex function that is monotonically increasing on a connected set \(K\), with \(\mathbb{R}^{+}\) contained in \(c(K)\). We further assume that \(c\) is \(L\)-smooth [34], which means that for any \(x,y\) within the domain of \(c\), a constant \(0<L<\infty\) exists such that
\[|\nabla c(x)-\nabla c(y)|\leq L|x-y|.\]
This condition implies that
\[c(y)\leq c(x)+c^{\prime}(x)(y-x)+\frac{L}{2}|y-x|^{2}. \tag{2.1}\]
Now, consider \(r:[0,T]\times\Omega\to\mathbb{R}\) to be a function solving the equation \(c\left(r(t,x)\right)=F\left(\phi(t,x)\right)+A_{1}\) for every \((t,x)\in[0,T]\times\Omega\), where \(A_{1}\) is a constant ensuring that \(F+A_{1}\) is non-negative. Given that \(c\) is invertible with \(\mathbb{R}^{+}\subset c(K)\), this definition is indeed valid. Differentiating both sides of the equation results in
\[c^{\prime}(r)\,\frac{\delta r}{\delta\phi}=\frac{\delta F(\phi)}{\delta\phi}= f(\phi).\]
Therefore, system (1.1) can be rewritten as
\[\phi_{t}=\mathcal{G}\mu, \tag{2.2a}\]
\[\mu=-\Delta\phi+c^{\prime}(r)\frac{\delta r}{\delta\phi}, \tag{2.2b}\]
\[r_{t}=\frac{\delta r}{\delta\phi}\phi_{t}. \tag{2.2c}\]
Taking inner product above with \(\mu,\phi_{t}\) and \(c^{\prime}(r)\) respectively, one can easily obtain a modified energy dissipation law as:
\[\frac{d}{dt}\left[\frac{1}{2}\|\nabla\phi\|^{2}+\int_{\Omega}c(r)\right]=(G\mu,\mu)\leq 0, \tag{2.3}\]
for \(\mathcal{G}=-I\) and \(\mathcal{G}=\Delta\), where \(\|\cdot\|\) is the \(L^{2}\) norm. If \(c^{\prime}(r)\) is a linear function with respect to \(r\), then we can construct a linear numerical scheme simply by treating \(c^{\prime}(r)\) implicitly and \(\frac{\delta r}{\delta\phi}\) explicitly, which is how the IEQ method works. However, when \(c^{\prime}(r)\) is not linear, treating \(c^{\prime}(r)\) implicitly results in a nonlinear equation to be solved at each step, which forfeits a major advantage of this approach. In addition, we want to point out that convexity is not crucial in the continuous case: reformulation (2.2) and the corresponding modified energy dissipation law still hold if \(c\) is simply chosen as a smooth function, with the convexity condition dropped. However, convexity
will play an important role in the way of constructing linear energy-stable schemes in the discrete case. It will be the main problem that we want to solve in the next subsection.
### Numerical scheme
Our newfound approach provides the cornerstone for devising a linear numerical scheme to resolve (2.2). We merely need a linear approximation for \(c^{\prime}(r)\) in the discretized scenario. With this in mind, we propose the following first-order semi-discrete scheme:
\[\frac{\phi^{n+1}-\phi^{n}}{\Delta t}=\mathcal{G}\mu^{n+1}, \tag{2.4a}\]
\[\mu^{n+1}=-\Delta\phi^{n+1}+\left[c^{\prime}(r^{n})+\alpha L(r^{n+1}-r^{n})\right]P^{n}, \tag{2.4b}\]
\[r^{n+1}-r^{n}=P^{n}(\phi^{n+1}-\phi^{n}), \tag{2.4c}\]
where \(P^{n}=\frac{\delta r}{\delta\phi}(\phi^{n})\) and \(\alpha\geq\frac{1}{2}\) is a chosen parameter. The elegance of this scheme lies in its efficient solvability at each time step. Specifically, given that \((\phi^{n},r^{n})\) are known, substituting (2.4c) into (2.4b) enables the replacement of \(r^{n+1}\) by a formula in \(\phi^{n+1}\), hence allowing the computation of \((\phi^{n+1},r^{n+1})\) by first finding \(\phi^{n+1}\) from
\[(I+\Delta t\,\mathcal{G}\Delta)\phi^{n+1}-\alpha L\Delta t\,\mathcal{G}\left( P^{n}\phi^{n+1}P^{n}\right)=\phi^{n}+\Delta t\mathcal{G}\,\left[c^{\prime}(r^{n})- \alpha LP^{n}\phi^{n}\right]P^{n},\]
and subsequently updating \(r^{n+1}\) by (2.4c). Alternatively, the scheme can be executed by amalgamating \(\mu^{n+1}\) in the solving process. This approach determines \((\phi^{n+1},\mu^{n+1},r^{n+1})\) by resolving the linear system
\[\begin{pmatrix}\frac{1}{\Delta t}I&-\mathcal{G}&0\\ \Delta&I&-\alpha L\tilde{P}^{n}\\ -\tilde{P}^{n}&0&I\end{pmatrix}\begin{pmatrix}\phi^{n+1}\\ \mu^{n+1}\\ r^{n+1}\end{pmatrix}=\begin{pmatrix}\frac{1}{\Delta t}\phi^{n}\\ c^{\prime}(r^{n})P^{n}-\alpha Lr^{n}P^{n}\\ r^{n}-P^{n}\phi^{n}\end{pmatrix}, \tag{2.5}\]
at each time step, given that \((\phi^{n},r^{n})\) have been ascertained. Both implementations offer a linear pathway to updating the numerical results, thereby providing us a streamlined alternative to tackling nonlinear gradient-flow type problems, as opposed to implicitly addressing the nonlinear term and having to resolve a nonlinear equation at each juncture.
**Remark 2.1**.: _In the second implementation approach above, we use \(\tilde{P}^{n}\) to denote a functional operator, a multiplication of \(P^{n}\) to \(r^{n+1}\) point-wisely. In the fully-discrete case, assume the space has been discretized by a \(N_{x}\times N_{y}\) grid, we will treat \(\phi^{n+1},\mu^{n+1},r^{n+1}\) as a \(1\times(N_{x}*N_{y})\) vector, respectively. And \(\tilde{P}^{n}\) will be taken as a \((N_{x}*N_{y})\times(N_{x}*N_{y})\) diagonal matrix by expanding the values of \(P^{n}\) at the \(N_{x}*N_{y}\) grid points onto its diagonal elements. The same notation \(\tilde{\cdot}\) will be repeatedly used in the following to denote a point-wise multiplication operator._
Besides sustaining the linearity property, we can also immediately obtain the following modified discretized energy dissipation law from scheme (2.4).
**Theorem 2.2**.: _The numerical scheme (2.4) is energy-stable. Specifically, define_
\[E^{n}=\frac{1}{2}\|\nabla\phi^{n}\|^{2}+\int_{\Omega}c(r^{n}), \tag{2.6}\]
_then_
\[E^{n+1}-E^{n}\leq 0.\]
Proof.: Taking the inner product of (2.4a) with \(\mu^{n+1}\Delta t\), (2.4b) with \(\phi^{n+1}-\phi^{n}\), (2.4c) with \(\left[c^{\prime}(r^{n})+\alpha L(r^{n+1}-r^{n})\right]\), and using the property (2.1) of \(L\)-smooth functions, we obtain
\[(\mathcal{G}\mu^{n+1},\mu^{n+1})\Delta t =(\phi^{n+1}-\phi^{n},\mu^{n+1})\] \[=\left(\phi^{n+1}-\phi^{n},-\Delta\phi^{n+1}+\left[c^{\prime}(r^ {n})+\alpha L(r^{n+1}-r^{n})\right]P^{n}\right)\] \[=\frac{1}{2}\|\nabla\phi^{n+1}\|^{2}-\frac{1}{2}\|\nabla\phi^{n} \|^{2}+\frac{1}{2}\|\nabla\phi^{n+1}-\nabla\phi^{n}\|^{2}\] \[\quad+\int_{\Omega}\left[c^{\prime}(r^{n})(r^{n+1}-r^{n})+\alpha L (r^{n+1}-r^{n})^{2}\right]\,dx\] \[\geq\frac{1}{2}\|\nabla\phi^{n+1}\|^{2}-\frac{1}{2}\|\nabla\phi^ {n}\|^{2}+\frac{1}{2}\|\nabla\phi^{n+1}-\nabla\phi^{n}\|^{2}+\int_{\Omega} \left[c(r^{n+1})-c(r^{n})\right]\,dx\] \[\geq E^{n+1}-E^{n}.\]
Using the fact that \((\mathcal{G}\mu^{n+1},\mu^{n+1})\leq 0\) independent of choice of \(\mu^{n+1}\), we have finished the proof.
**Remark 2.3**.: _From the deduction, we can see that the \(L\)-smoothness of \(c(x)\) plays a key role in preserving the energy dissipation law. Specifically, if we modify \(c(x)\) to be a concave function, then simply taking \(L=0\) is enough to ensure that the inequalities appearing in the proof hold, and this will also result in a linear energy-stable numerical scheme._
Following from this theorem, we can immediately obtain the following estimate holds for \(\mu^{n+1}\):
**Corollary 2.4**.: _Assume \(E^{0}\) is bounded. For fixed \(N>0\), we have_
\[0\leq-\sum_{n=0}^{N}(\mathcal{G}\mu^{n},\mu^{n})\Delta t\leq-E^{N}+E^{0}\leq E ^{0},\]
_and so it is bounded. Specifically, if \(\mathcal{G}=\Delta\), we have_
\[0\leq\sum_{n=0}^{N}\|\nabla\mu^{n}\|^{2}\Delta t\leq E^{0},\]
_if \(\mathcal{G}=-I\), we have_
\[0\leq\sum_{n=0}^{N}\|\mu^{n}\|^{2}\Delta t\leq E^{0}.\]
### An example of the IEC scheme
Having laid the groundwork for the standard formulation of the IEC approach and its corresponding numerical scheme's general form, we now face the necessity of an exact form of the chosen L-smooth convex function for concrete computational applications. As we conclude this section, we propose a viable candidate for such a function, demonstrating that it is indeed possible to construct an IEC scheme using a function that satisfies these criteria. Our goal is to identify an option that possesses the properties necessary to induce a valid IEC approach and is also conveniently calculable, thus facilitating our numerical implementation.
Before delving into that, it is worth mentioning that the IEQ approach can be considered a specific instance of the IEC formulation. Indeed, the quadratic function is an \(L\)-smooth convex function with \(L=2\). Thus, by selecting \(\alpha=1\) in (2.4), the resulting numerical scheme aligns perfectly with the scheme induced by the IEQ formulation. This observation serves to justify our perspective that the IEC method is a natural generalization of the IEQ method.
As an alternative to the quadratic function, we turn our attention to the Softplus function [44]. Defining \(c(r)=\ln(1+e^{r})\), we quickly find that the first-order derivative, \(c^{\prime}(r)=\frac{e^{r}}{1+e^{r}}\), and the second-order derivative, \(c^{\prime\prime}(r)=\frac{e^{r}}{(1+e^{r})^{2}}\), are uniformly bounded. Consequently, \(c(r)=\ln(1+e^{r})\) establishes itself as a non-negative, monotonically increasing, \(L\)-smooth convex function with \(L=\frac{1}{4}\). Taking \(c(r)=F(\phi)+A_{1}\), this leads to
\[r(\phi)=\ln(e^{F(\phi)+A_{1}}-1), \tag{2.7}\]
and subsequently to \(P^{n}=P(\phi^{n})\) where
\[P(\phi)=\frac{\delta r}{\delta\phi}(\phi)=\frac{e^{F(\phi)+A_{1}}}{e^{F(\phi) +A_{1}}-1}f(\phi). \tag{2.8}\]
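As a quick sanity check on this choice, one can verify symbolically that Softplus is convex with \(c^{\prime\prime}\leq\frac{1}{4}\) (so \(L=\frac{1}{4}\)) and that (2.7) indeed inverts \(c\); a small SymPy sketch:

```python
import sympy as sp

r = sp.Symbol('r', real=True)
F, A1 = sp.symbols('F A_1', positive=True)    # F stands for F(phi)

c = sp.log(1 + sp.exp(r))                     # Softplus
c2 = sp.diff(c, r, 2)                         # e^r / (1 + e^r)^2 > 0: c is convex
crit = sp.solve(sp.diff(c2, r), r)
print(crit, sp.simplify(c2.subs(r, crit[0]))) # single critical point r = 0, value 1/4, hence L = 1/4

r_of_phi = sp.log(sp.exp(F + A1) - 1)         # Eq. (2.7)
print(sp.simplify(c.subs(r, r_of_phi) - (F + A1)))   # 0: c(r(phi)) = F(phi) + A_1
```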
These equations provide an illustrative example of (2.4), demonstrating the applicability of the Softplus function in this context. We summarize this procedure in Algorithm 1 below.
```
1:procedure
2: Set the step size \(\Delta t\)
3: Set the total number of iterations \(N\)
4: Set the value for \(\alpha\)
5: Set the initial value \(\phi^{0}=\phi_{0}\)
6: Compute \(r^{0}=r(\phi^{0})\) using equation (2.7)
7: Compute \(P^{0}=P(\phi^{0})\) using equation (2.8)
8:for\(n=0,1,2,...,N-1\)do
9: Construct the matrix \(A=\begin{pmatrix}\frac{1}{\Delta t}I&-\mathcal{G}&0\\ \Delta&I&-\frac{\alpha}{4}\bar{P}^{n}\\ -\bar{P}^{n}&0&I\end{pmatrix}\) using \(P^{n}\)
10: Construct the right-hand side \(b=\begin{pmatrix}\frac{1}{\Delta t}\phi^{n}\\ \frac{e^{r^{n}}}{1+e^{r^{n}}}P^{n}-\frac{\alpha}{4}r^{n}P^{n}\\ r^{n}-P^{n}\phi^{n}\end{pmatrix}\)
11: Update \(\phi^{n+1}\), \(\mu^{n+1}\), \(r^{n+1}\) by solving the linear system \(A\begin{pmatrix}\phi^{n+1}\\ \mu^{n+1}\\ r^{n+1}\end{pmatrix}=b\)
12: Update \(P^{n+1}=P(\phi^{n+1})\) using equation (2.8)
13:endfor
14:endprocedure
```
**Algorithm 1** IEC scheme induced by Softplus function
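A compact realization of Algorithm 1 for the illustrative one-dimensional Allen–Cahn setup used earlier (\(\mathcal{G}=-I\), \(F(\phi)=\frac{1}{4}(\phi^{2}-1)^{2}\), \(A_{1}=1\)), obtained by eliminating \(r^{n+1}\) so that each step is a single linear solve; the spatial discretization is our own illustrative choice.

```python
import numpy as np

# One step of the IEC scheme (2.4) with the Softplus choice of Algorithm 1:
# G = -I, F(phi) = (phi^2 - 1)^2 / 4, A_1 = 1, alpha = 1, L = 1/4, periodic differences.
N, dt, alpha = 128, 1.0e-3, 1.0
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
h = x[1] - x[0]
Lap = np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
Lap[0, -1] = Lap[-1, 0] = 1.0
Lap /= h**2

F = lambda p: 0.25 * (p**2 - 1.0) ** 2
r_of = lambda p: np.log(np.exp(F(p) + 1.0) - 1.0)                               # Eq. (2.7)
P_of = lambda p: np.exp(F(p) + 1.0) / (np.exp(F(p) + 1.0) - 1.0) * (p**3 - p)   # Eq. (2.8)
cprime = lambda rr: np.exp(rr) / (1.0 + np.exp(rr))                             # Softplus derivative

phi = 0.05 * np.cos(x)
r, P = r_of(phi), P_of(phi)

# Eliminate r^{n+1} with (2.4c); solve the linear system for phi^{n+1} (here L = 1/4).
A = np.eye(N) - dt * Lap + 0.25 * alpha * dt * np.diag(P**2)
b = phi - dt * cprime(r) * P + 0.25 * alpha * dt * P**2 * phi
phi_new = np.linalg.solve(A, b)
r_new = r + P * (phi_new - phi)

energy = lambda p, rr: 0.5 * h * (-p @ (Lap @ p)) + h * np.sum(np.log(1.0 + np.exp(rr)))
print(energy(phi_new, r_new) <= energy(phi, r))   # the discrete analogue of (2.6) should not increase
```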
## 3. Invariant Energy Functionalization
As previously established, the two main benefits of the IEQ method are its linearity and energy-dissipation preservation. The conservation of energy, particularly in a discrete case, is primarily ensured through the convexity of the auxiliary function, which replaces the original energy density function. The IEC formulation can be seen as an evolution of this concept, placing the preservation of convexity at its core. In contrast, the linear approximation employed in (2.4) serves as a mechanism to design a linear scheme following the assurance of convexity. It reveals a fundamental tenet of the IEC scheme: the priority of maintaining convexity over linearity.
Nevertheless, it is the linearity that brings efficiency to the IEQ method. If we shift our focus towards preserving linearity, it becomes evident that this attribute is primarily derived from the auxiliary variable's linearity with respect to the auxiliary function's derivative. This understanding guides us to consider decomposing the auxiliary function \(c(r)\) into a product of the auxiliary variable \(r\) and \(\frac{c(r)}{r}\). Here we can drop the convexity assumption of \(c(r)\) and simply take it as a smooth function. If we consider \(\frac{c(r)}{r}\) as an independent auxiliary variable, denoted as \(g(r)\), then the derivative of \(rg(r)\) ends up being linear concerning the auxiliary variable \(r\). This idea provides an alternative means of replacing the original energy density function to achieve a linear scheme but emphasizes preserving linearity more than convexity. We introduce this method in the following section, referring to it as the "Invariant Energy Functionalization" method. This name stems from integrating the auxiliary variable \(r\) with a specific function \(g\) that fulfills some conditions.
### Model reformulation
Let us consider \(g\) to be a smooth function on \(\mathbb{R}\). Analogous to the hypothesis in the IEC formulation, we assume that \(s(r):=rg(r)\) is invertible over a connected set \(K\) with \(\mathbb{R}^{+}\subset s(K)\). We define \(r:[0,T]\times\Omega\to K\) to be a function that resolves the equation \(r(t,x)g\left(r(t,x)\right)=F\left(\phi(t,x)\right)+A_{1}\) for all \((t,x)\in[0,T]\times\Omega\), where \(A_{1}\) is a constant employed to ensure the positivity of \(F(\phi)+A_{1}\). Utilizing the chain rule, we can then derive the following:
\[\left[rg^{\prime}(r)+g(r)\right]\frac{\delta r}{\delta\phi}=f(\phi).\]
Thus, we can rewrite gradient-flow system (1.1) as
\[\phi_{t}=\mathcal{G}\mu, \tag{3.1a}\]
\[\mu=-\Delta\phi+\left[rg^{\prime}(r)+g(r)\right]\frac{\delta r}{\delta\phi}, \tag{3.1b}\]
\[r_{t}=\frac{\delta r}{\delta\phi}\phi_{t}, \tag{3.1c}\]
\[g_{t}=g^{\prime}(r)\,r_{t}. \tag{3.1d}\]
Given the equations above, we can deduce a modified energy dissipation law by computing the inner product of (3.1) with \(\mu\), \(\phi_{t}\), \([rg^{\prime}(r)+g(r)]\), and \(r\) respectively, and then summing the results. This leads us to
\[\frac{d}{dt}\left[\frac{1}{2}\|\nabla\phi\|^{2}+\int_{\Omega}rg(r)\right]=(\mathcal{G}\mu,\mu)\leq 0, \tag{3.2}\]
which holds for both \(\mathcal{G}=-I\) and \(\mathcal{G}=\Delta\). It is worth pointing out that one of the significant advantages of this formulation is that the auxiliary variable \(r\) appears linearly in the reformulation of \(f(\phi)\). This linear structure facilitates the development and implementation of a linear scheme to solve the equation, a topic we discuss further in the next subsection.
### Numerical scheme
Motivated by the formulation proposed above, a natural next step is introducing an additional variable to represent \(g\) within the corresponding numerical scheme. In the continuous formulation, \(g=g(r)\) is a dependent function of \(r\). Thus, once \(r\) is established, the value of \(g\) is subsequently determined. Nonetheless, to preserve the linearity of the numerical scheme, we propose the introduction of a discrete variable for \(g\), and update its value through an explicit discretization. This idea leads to the following Invariant Energy Functionalization (IEF) numerical scheme:
\[\frac{\phi^{n+1}-\phi^{n}}{\Delta t}=\mathcal{G}\mu^{n+1}, \tag{3.3a}\]
\[\mu^{n+1}=-\Delta\phi^{n+1}+\left[r^{n+1}g^{\prime}(r^{n})+g^{n+1}\right]P^{n}, \tag{3.3b}\]
\[r^{n+1}-r^{n}=P^{n}(\phi^{n+1}-\phi^{n}), \tag{3.3c}\]
\[g^{n+1}-g^{n}=g^{\prime}(r^{n})(r^{n+1}-r^{n}), \tag{3.3d}\]
with \(P^{n}=\frac{\delta r}{\delta\phi}(\phi^{n})\). In addition to the regularity and the invertibility conditions that should be satisfied by \(g\), we need to assume the derivative of \(g\) to be non-negative, namely,
\[g^{\prime}(r)\geq 0, \tag{3.4}\]
for any \(r\in\mathbb{R}\). This is necessary to ensure the modified energy stability in discrete cases to hold, as stated below in Theorem 3.1. Before that, we will discuss the procedure to implement this method first.
In a manner analogous to that of the IEC scheme, (3.3c) and (3.3d) express \(r^{n+1}\) and \(g^{n+1}\) in terms of \(\phi^{n+1}\). Consequently, one can substitute for \(r^{n+1}\) and \(g^{n+1}\) in (3.3b), resulting in a linear scheme for \(\phi^{n+1}\) when considered in conjunction with (3.3a). Alternatively, an update can be performed on the tuple \((\phi^{n+1},\mu^{n+1},r^{n+1},g^{n+1})\) collectively by solving (3.3) as a unified linear system. More precisely, at each iteration, the following linear system can be solved:
\[\begin{pmatrix}\frac{1}{\Delta t}I&-\mathcal{G}&0&0\\ \Delta&I&-g^{\prime}(r^{n})P^{n}&-\tilde{P}^{n}\\ -\tilde{P}^{n}&0&I&0\\ 0&0&-\widetilde{g^{\prime}(r^{n})}&I\end{pmatrix}\begin{pmatrix}\phi^{n+1}\\ \mu^{n+1}\\ r^{n+1}\\ g^{n+1}\end{pmatrix}=\begin{pmatrix}\frac{1}{\Delta t}\phi^{n}\\ 0\\ r^{n}-P^{n}\phi^{n}\\ g^{n}-g^{\prime}(r^{n})r^{n}\end{pmatrix}, \tag{3.5}\]
given that \((\phi^{n},\mu^{n},r^{n},g^{n})\) is known. Here \(\widetilde{g^{\prime}(r^{n})}\) should also be interpreted as a multiplication operator, in the same way as \(\tilde{P}^{n}\).
Now we turn to prove a modified energy dissipation law for the IEF scheme which is a discretized version of (3.2).
**Theorem 3.1**.: _The numerical scheme (3.3) is energy-stable. Specifically, define_
\[\hat{E}^{n}=\frac{1}{2}\|\nabla\phi^{n}\|^{2}+\int_{\Omega}g^{n}r^{n}, \tag{3.6}\]
_then_
\[\hat{E}^{n+1}-\hat{E}^{n}\leq 0.\]
Proof.: Taking the inner product of (3.3a) with \(\mu^{n+1}\Delta t\), substituting (3.3b) for \(\mu^{n+1}\), and then applying (3.3c) and (3.3d), we obtain,
\[(\mathcal{G}\mu^{n+1},\mu^{n+1})\Delta t =(\phi^{n+1}-\phi^{n},\mu^{n+1})\] \[=\left(\phi^{n+1}-\phi^{n},-\Delta\phi^{n+1}+\left[r^{n+1}g^{ \prime}(r^{n})+g^{n+1}\right]P^{n}\right)\] \[=\frac{1}{2}\|\nabla\phi^{n+1}\|^{2}-\frac{1}{2}\|\nabla\phi^{n} \|^{2}+\frac{1}{2}\|\nabla\phi^{n+1}-\nabla\phi^{n}\|^{2}\] \[\quad+\int_{\Omega}(g^{n+1}-g^{n})\,r^{n+1}\,dx+\int_{\Omega}g^{ n+1}(r^{n+1}-r^{n})\,dx\] \[=\frac{1}{2}\|\nabla\phi^{n+1}\|^{2}-\frac{1}{2}\|\nabla\phi^{n} \|^{2}+\frac{1}{2}\|\nabla\phi^{n+1}-\nabla\phi^{n}\|^{2}\] \[\quad+\int_{\Omega}g^{n+1}r^{n+1}\,dx-\int_{\Omega}g^{n}r^{n}\, dx+\int_{\Omega}(g^{n+1}-g^{n})\,(r^{n+1}-r^{n})\,dx\] \[=\frac{1}{2}\|\nabla\phi^{n+1}\|^{2}-\frac{1}{2}\|\nabla\phi^{n} \|^{2}+\frac{1}{2}\|\nabla\phi^{n+1}-\nabla\phi^{n}\|^{2}\] \[\quad+\int_{\Omega}g^{n+1}r^{n+1}\,dx-\int_{\Omega}g^{n}r^{n}\, dx+\int_{\Omega}g^{\prime}(r^{n})(r^{n+1}-r^{n})^{2}\,dx\] \[\geq\hat{E}^{n+1}-\hat{E}^{n},\]
where we have used the assumption (3.4) to see that \(g^{\prime}(r^{n})\geq 0\). Using the fact that \((\mathcal{G}\mu^{n+1},\mu^{n+1})\leq 0\) regardless of the choice of \(\mu^{n+1}\), we have finished the proof.
Similarly to Corollary 2.4, we can also obtain an estimate for \(\mu^{n+1}\) for the IEF formulation.
**Corollary 3.2**.: _Assume \(\hat{E}^{0}\) is bounded. For fixed \(N>0\), we have_
\[0\leq-\sum_{n=0}^{N}(\mathcal{G}\mu^{n},\mu^{n})\Delta t\leq-\hat{E}^{N}+\hat{ E}^{0}\leq\hat{E}^{0},\]
_and so it is bounded. Specifically, if \(\mathcal{G}=\Delta\), we have_
\[0\leq\sum_{n=0}^{N}\|\nabla\mu^{n}\|^{2}\Delta t\leq\hat{E}^{0},\]
_if \(\mathcal{G}=-I\), we have_
\[0\leq\sum_{n=0}^{N}\|\mu^{n}\|^{2}\Delta t\leq\hat{E}^{0}.\]
### An example of the IEF formulation
As with the IEC scheme, in this part we provide a detailed example to illustrate the practicality of the IEF method. A suitable choice is to take \(g(r)=r^{2k+1}\), for \(k=0,1,2,\cdots\). It is easy to verify that \(rg(r)=r^{2k+2}\) is invertible over \([0,\infty)\) and that assumption (3.4) is satisfied, since \(g^{\prime}(r)=(2k+1)r^{2k}\geq 0\). This results in
\[r(\phi)=(F(\phi)+A_{1})^{\frac{1}{2k+2}}\,. \tag{3.7}\]
Therefore, the numerical scheme (3.3) can be implemented as
\[\frac{\phi^{n+1}-\phi^{n}}{\Delta t}=\mathcal{G}\mu^{n+1}, \tag{3.8a}\]
\[\mu^{n+1}=-\Delta\phi^{n+1}+\left[(2k+1)(r^{n})^{2k}r^{n+1}+g^{n+1}\right]P^{n}, \tag{3.8b}\]
\[r^{n+1}-r^{n}=P^{n}(\phi^{n+1}-\phi^{n}), \tag{3.8c}\]
\[g^{n+1}-g^{n}=(2k+1)(r^{n})^{2k}(r^{n+1}-r^{n}), \tag{3.8d}\]
where \(P^{n}=P(\phi^{n})\) and
\[P(\phi)=\frac{\delta r}{\delta\phi}(\phi)=\frac{f(\phi)}{(2k+2)(F(\phi)+A_{1})^ {\frac{2k+1}{2k+2}}}. \tag{3.9}\]
This procedure can be summarized as the following Algorithm 2.
```
1:procedure
2: Set the step size \(\Delta t\)
3: Set the total number of iterations \(N\)
4: Set the initial value \(\phi^{0}=\phi_{0}\)
5: Compute \(r^{0}=r(\phi^{0})\) using equation (3.7)
6: Compute \(g^{0}=(r^{0})^{2k+1}\)
7: Compute \(P^{0}=P(\phi^{0})\) using equation (3.9)
8:for\(n=0,1,2,...,N-1\)do
9: Construct the matrix \(A=\begin{pmatrix}\frac{1}{\Delta t}I&-\mathcal{G}&0&0\\ \Delta&I&-(2k+1)\widetilde{(r^{n})^{2k}P^{n}}&-P^{n}\\ -\tilde{P}^{n}&0&I&0\\ 0&0&-(2k+1)\widetilde{(r^{n})^{2k}}&I\end{pmatrix}\) using \(P^{n}\)
10: Construct the right-hand side \(b=\begin{pmatrix}\frac{1}{\Delta t}\phi^{n}\\ 0\\ r^{n}-P^{n}\phi^{n}\\ g^{n}-(2k+1)(r^{n})^{2k}r^{n}\end{pmatrix}\)
11: Update \(\phi^{n+1}\), \(r^{n+1},g^{n+1}\) by solving the linear system \(A\begin{pmatrix}\phi^{n+1}\\ \mu^{n+1}\\ r^{n+1}\\ g^{n+1}\end{pmatrix}=b\)
12: Update \(P^{n+1}=P(\phi^{n+1})\) using equation (3.9)
13:endfor
14:endprocedure
```
**Algorithm 2** IEF scheme induced by monomial functions
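Analogously, a minimal Python sketch of Algorithm 2 for the Cahn-Hilliard case (\(\mathcal{G}=M\Delta\)), again on a one-dimensional periodic grid and with the same quartic \(F\); the monomial exponent, grid size, time step, and remaining parameter values below are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative parameters (assumed values)
N, Lx = 64, 2 * np.pi
dx = Lx / N
dt, nsteps = 1e-3, 100
M, eps, A1, k = 1.0, 0.4, 1.0, 1              # g(r) = r^(2k+1) = r^3 here

x = dx * np.arange(N)
phi = 0.1 * np.cos(x)

Lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N), format="lil")
Lap[0, -1] = Lap[-1, 0] = 1.0
Lap = Lap.tocsr() / dx**2
I = sp.identity(N, format="csr")
G = M * Lap                                   # H^{-1} gradient flow (Cahn-Hilliard)

F = lambda p: 0.25 * (p**2 - 1.0) ** 2
f = lambda p: p**3 - p
r_of = lambda p: (F(p) + A1) ** (1.0 / (2 * k + 2))                                  # eq. (3.7)
P_of = lambda p: f(p) / ((2 * k + 2) * (F(p) + A1) ** ((2 * k + 1) / (2 * k + 2)))   # eq. (3.9)

r = r_of(phi)
g = r ** (2 * k + 1)
for n in range(nsteps):
    P = P_of(phi)
    Pd = sp.diags(P)
    Rd = sp.diags((2 * k + 1) * r ** (2 * k))       # operator for (2k+1)(r^n)^{2k}
    RPd = sp.diags((2 * k + 1) * r ** (2 * k) * P)  # operator for (2k+1)(r^n)^{2k} P^n
    # Block system of Algorithm 2 for (phi^{n+1}, mu^{n+1}, r^{n+1}, g^{n+1})
    A = sp.bmat([[I / dt, -G, None, None],
                 [eps**2 * Lap, I, -RPd, -Pd],
                 [-Pd, None, I, None],
                 [None, None, -Rd, I]], format="csc")
    b = np.concatenate([phi / dt,
                        np.zeros(N),
                        r - P * phi,
                        g - (2 * k + 1) * r ** (2 * k) * r])
    sol = spla.spsolve(A, b)
    phi, r, g = sol[:N], sol[2 * N:3 * N], sol[3 * N:]

# Discrete modified energy (3.6): (eps^2/2)*||grad phi||^2 + sum of g*r
grad = (np.roll(phi, -1) - phi) / dx
E_mod = 0.5 * eps**2 * np.sum(grad**2) * dx + np.sum(g * r) * dx
print("modified energy after", nsteps, "steps:", E_mod)
```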
It merits attention that by simply assigning a value of \(k=0\), which consequently yields \(g(r)=r\), we can reconstruct the IEQ method. It is because, in this scenario, the computation \(g(r)r=r^{2}\) holds, and the update rule given by (3.8d) is equivalent to (3.8c). This enforces \(g^{n+1}=r^{n+1}\) for each iterative step \(n=0,1,2,\ldots,N-1\). Such a formulation endorses our claim that the IEF is genuinely an extension of the IEQ method.
When choosing a suitable function \(g\) for the IEF scheme, our selection need not be confined to monomial functions, provided the conditions above are satisfied. However, finding functions other than monomials that yield an explicit formula for \(r\) in terms of \(F(\phi)\) is challenging. In such instances, one may need to employ root-finding techniques to determine the value of \(P^{n}\) once \(\phi^{n}\) is known. Owing to this inherent challenge, the method effectively acts as an "Invariant Energy Monomial" method in the examples provided here and in the subsequent numerical experiments. Even so, should an efficient technique be found for computing the root of the equation \(g(r)r=F(\phi)+A_{1}\) for a specific form of \(g\), constructing an IEF scheme using our approach would not pose a significant challenge. This potential advancement forms an intriguing direction for future exploration of this method.
## 4. Numerical Experiments
In this section, we conduct a series of numerical experiments, focusing on the Allen-Cahn [2] and Cahn-Hilliard equations [9, 10] under the application of periodic boundary conditions in a two-dimensional domain. The equations above are commonplace test cases for gradient flow-targeted numerical algorithms, as illustrated in [8, 17, 22, 40, 41, 51]. Moreover, these equations are particular manifestations of general gradient flows (1.1) and can be formulated using the free energy:
\[E(\phi)=\int_{\Omega}\left(\frac{\varepsilon^{2}}{2}|\nabla\phi|^{2}+F(\phi) \right)\,dx,\]
where
\[F(\phi)=\frac{1}{4}\left(\phi^{2}-1\right)^{2}. \tag{4.1}\]
Here \(\varepsilon\) is a parameter that introduces stiffness into the PDE system when \(\varepsilon\ll 1\) [47].
Employing the variational approach to \(E(\phi)\) in \(L^{2}\), the Allen-Cahn equation materializes as:
\[\phi_{t}=-M\mu, \tag{4.2a}\]
\[\mu=-\varepsilon^{2}\Delta\phi+f(\phi). \tag{4.2b}\]
Here \(M\) is the mobility constant, and \(f(\phi)=\frac{\delta F}{\delta\phi}=\phi(\phi^{2}-1)\)[11].
In a similar manner, applying the variational approach to \(E(\phi)\) within the \(H^{-1}\) space yields the Cahn-Hilliard equation:
\[\phi_{t}=M\Delta\mu, \tag{4.3a}\]
\[\mu=-\varepsilon^{2}\Delta\phi+f(\phi). \tag{4.3b}\]
We choose a computational domain defined by a square \(\Omega=[0,2\pi]\times[0,2\pi]\) for the subsequent experiments. We will implement a standard finite difference method for this problem with periodic boundary conditions to facilitate full discretization. Unless otherwise stated, the domain will be discretized utilizing a \(40\times 40\) grid, and we will set the parameters \(M=0.6\) and \(\varepsilon=0.4\).
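Before turning to results, note that the two-dimensional periodic Laplacian on the \(40\times 40\) grid can be assembled from the one-dimensional stencil via Kronecker products. The sketch below is one possible construction with second-order central differences; it is not necessarily the exact implementation used for the reported experiments.

```python
import numpy as np
import scipy.sparse as sp

def periodic_laplacian_2d(n, length):
    """Second-order finite-difference Laplacian on an n-by-n periodic grid."""
    h = length / n
    D = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="lil")
    D[0, -1] = D[-1, 0] = 1.0                 # periodic wrap-around
    D = D.tocsr() / h**2
    I = sp.identity(n, format="csr")
    return sp.kron(I, D) + sp.kron(D, I)      # acts on fields flattened with reshape(n * n)

Lap2d = periodic_laplacian_2d(40, 2 * np.pi)  # the 40 x 40 grid used in the experiments
print(Lap2d.shape)                            # (1600, 1600)
```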
### Accuracy test
In this section, we start our numerical experiments by assessing the convergence rates of the Invariant Energy Convexification (IEC) scheme in its application to the Allen-Cahn equation, as given by (4.2). For these experiments, we adopt
\[\phi(x,y,t)=\sin(x)\cos(y)\cos(t) \tag{4.4}\]
as the exact solution and include an appropriate right-hand side force field term to ensure that the designated solution complies with the system (4.2). We draw attention to the fact that the function \(\phi\) employed here aligns with the choice made for the accuracy test in [47].
In the context of the IEC scheme (2.4), we examine three distinct choices for the function \(c(r)\), namely, \(\ln(1+e^{r}),\ln(r)^{2},r^{2}\). Each of these functions is L-smooth and convex, and an elementary analysis suggests that \(L=2\) serves as an appropriate choice for all three. For consistency, we set \(\alpha=0.5\) in these tests.
We then measure the \(L^{2}\) errors for the variable \(\phi\), contrasting our numerical solution with the exact solution at \(T_{end}=1\) under varying time step sizes. As visualized in Figure 1(a), each of the three choices of convex functions demonstrates first-order accuracy: the error decreases linearly with the time step size. More specific error data for varying time step sizes can be found in Table 1.
A noteworthy observation emerges when keeping the parameters \(\alpha\) and \(L\) constant in the numerical scheme: errors vary depending on the choice of the corresponding convex function. Specifically, \(c(r)=\ln(r)^{2}\) appears to yield the best performance, while \(c(r)=r^{2}\) lags behind. This finding suggests that, for a particular problem, a certain non-quadratic convex function may be better suited to the construction of the numerical scheme than the quadratic one, yielding better precision.
We further seek to explore the impact of alterations in the values of \(\alpha\) on the accuracy of the IEC scheme. With this objective in mind, we apply the IEC schemes with \(c(r)=\ln(1+e^{r})\) and \(c(r)=\ln(r)^{2}\) using a range of values for \(\alpha\). We continue to assess the \(L^{2}\) error between the numerical solution and the exact solution at \(T_{end}=1\). The outcomes of this exploration are depicted in Figure 2. For this experiment, we select values for \(\alpha\) that range from its theoretical lower bound of \(0.5\) (the value necessary to ensure the energy stability of the numerical scheme) up to \(16\). When the time step size is large, the performance of the numerical schemes with different values of \(\alpha\) differs noticeably, whereas this difference becomes negligible when the time step size is small. Another intriguing observation is that when the time step size is relatively large, an increase in the value of \(\alpha\) initially enhances the accuracy of the corresponding numerical scheme but then reduces precision. When constructing the numerical scheme using \(c(r)=\ln(1+e^{r})\), we find that optimal accuracy is achieved for values of \(\alpha\) within the interval \([4,8]\). In contrast, for \(c(r)=\ln(r)^{2}\), the scheme exhibits optimal performance when \(\alpha\) lies within the range \([2,4]\). These findings suggest that different convex functions may produce distinct optimal values of \(\alpha\). As such, when a relatively large time step is used, careful consideration of the value of \(\alpha\) is necessary when implementing the IEC scheme.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline FunctionTime step & \(\delta t=1.00\times 10^{-1}\) & \(\delta t=5.00\times 10^{-2}\) & \(\delta t=2.50\times 10^{-2}\) & \(\delta t=1.25\times 10^{-2}\) & \(\delta t=6.25\times 10^{-3}\) & \(\delta t=3.12\times 10^{-3}\) \\ \hline \(c(r)=\ln(1+e^{r})\) & \(9.42\times 10^{-2}\) & \(4.76\times 10^{-2}\) & \(2.37\times 10^{-2}\) & \(1.16\times 10^{-2}\) & \(5.53\times 10^{-3}\) & \(2.48\times 10^{-3}\) \\ \hline \(c(r)=\ln(r)^{2}\) & \(6.36\times 10^{-2}\) & \(3.23\times 10^{-2}\) & \(1.61\times 10^{-2}\) & \(7.81\times 10^{-3}\) & \(3.64\times 10^{-3}\) & \(1.54\times 10^{-3}\) \\ \hline \(c(r)=r^{2}\) & \(1.14\times 10^{-1}\) & \(5.73\times 10^{-2}\) & \(2.85\times 10^{-2}\) & \(1.40\times 10^{-2}\) & \(6.72\times 10^{-3}\) & \(3.07\times 10^{-3}\) \\ \hline \end{tabular}
\end{table}
Table 1. The \(L^{2}\) numerical errors at \(T_{end}=1\) for \(\phi\) with three different convex functions at different time step size
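As a quick consistency check on the reported first-order rate, the observed order can be estimated directly from the Table 1 errors, since the time step is halved between adjacent columns. A minimal computation for the Softplus row:

```python
import numpy as np

# L^2 errors for c(r) = ln(1 + e^r) from Table 1 (time step halved between entries)
errs = np.array([9.42e-2, 4.76e-2, 2.37e-2, 1.16e-2, 5.53e-3, 2.48e-3])
orders = np.log2(errs[:-1] / errs[1:])
print(orders)   # each value is close to 1, i.e. first-order convergence
```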
We now discuss the experimental findings of the Invariant Energy Functionalization (IEF) scheme. We employ the scheme constructed using a monomial as the auxiliary function \(g\), a concept introduced in Section 3.3. Our evaluation metric remains the \(L^{2}\) error for \(\phi\), calculated between the numerical and exact solutions at \(T_{end}=1\). The errors are illustrated in Figure 1(b), and the precise data is provided in Table 2. These results reveal that the highest level of algorithmic precision is achieved when \(g(r)=r\), corresponding to the Invariant Energy Quadratization (IEQ) method. Nevertheless, it is worth noting that the discrepancies between different values of \(k\) are tiny. Furthermore, regardless of the choice of \(k\), all options yield first-order accuracy, implying that the norm of the errors diminishes linearly with the time step size. This outcome further validates the robustness and utility of the IEF scheme in this context.
### Energy stability
In our discussion thus far, we have highlighted a significant advantage shared by the IEC and the IEF schemes, which they inherit from the Invariant Energy Quadratization (IEQ) method: the preservation of the energy dissipation law.
Figure 1. The \(L^{2}\) numerical errors at \(T_{end}=1\) for \(\phi\) under different formulation. (a) Illustrates the numerical errors of the IEC scheme using different choices of convex functions. (b) Demonstrates the numerical errors of the IEF scheme, where varying monomials are utilized as the multiplying functions.
Figure 2. The \(L^{2}\) numerical errors at \(T_{end}=1\) for \(\phi\) with different values of \(\alpha\) under the IEC formulation. (a) Scheme constructed with \(c(r)=\ln{(1+e^{r})}\). (b) Scheme constructed with \(c(r)=\ln(r)^{2}\).
In this section, we set out to empirically validate this property through a series of numerical experiments. We will examine an example with the initial condition given by:
\[\phi(x,y,0)=\sin(x)\cos(y), \tag{4.5}\]
and scrutinize the evolution of energy over the time interval \([0,5]\) when employing both the IEC and the IEF schemes to solve both the Allen-Cahn and Cahn-Hilliard equations.
We commence with the IEC scheme. We maintain a consistent choice of \(c(r)\) as the Softplus function for these experiments, thereby employing Algorithm 1 for our implementations. As depicted in Figure 3, the modified energy exhibits a monotonic decrease independently of the time step size for both the Allen-Cahn and Cahn-Hilliard equations. The definition of the modified energy, as referenced here, aligns with equation (2.6). This outcome confirms the preservation of the energy dissipation property within the IEC scheme.
Another research question is the difference between the modified and original energies. Specifically, the original energy function \(F(\phi)\), defined by equation (4.1), is a function of the variable \(\phi\), whereas its modified counterpart, \(c(r)\), is a function of \(r\) in the IEC formulation. An intriguing question is whether \(c(r)\) would converge to \(F(\phi)\) in the limit as the time step size approaches zero. In more technical terms, we aim to compute the quantity \(|\int_{\Omega}(c(r)-F(\phi))|\) at \(T_{end}=5\) for various time step sizes and observe if this quantity approaches zero as the time step size diminishes.
To investigate this, we employ the same example with the initial condition specified by equation (4.5), persisting with the use of the Softplus function for constructing our IEC scheme. As demonstrated in Figure 4(a), the difference between the original energy and the modified energy indeed appears to approach zero as the time step size shrinks, when tested on the Allen-Cahn equation, with a first-order convergence rate. Furthermore, Figure 4(b) validates the same behavior for the Cahn-Hilliard equation. This result offers additional evidence in favor of implementing the change of variable procedure, underscoring that the auxiliary variable computed by the numerical scheme converges to the original function it aims to represent when the time step size is sufficiently small.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline \(g(r)=r^{k}\)Time step & \(\delta t=1.00\times 10^{-1}\) & \(\delta t=5.00\times 10^{-2}\) & \(\delta t=2.50\times 10^{-2}\) & \(\delta t=1.25\times 10^{-2}\) & \(\delta t=6.25\times 10^{-3}\) & \(\delta t=3.12\times 10^{-3}\) \\ \hline \(k=0\) & \(1.15\times 10^{-1}\) & \(5.77\times 10^{-2}\) & \(2.86\times 10^{-2}\) & \(1.39\times 10^{-2}\) & \(6.48\times 10^{-3}\) & \(2.79\times 10^{-3}\) \\ \hline \(k=1\) & \(1.14\times 10^{-1}\) & \(5.70\times 10^{-2}\) & \(2.82\times 10^{-2}\) & \(1.37\times 10^{-2}\) & \(6.39\times 10^{-3}\) & \(2.74\times 10^{-3}\) \\ \hline \(k=3\) & \(1.14\times 10^{-1}\) & \(5.72\times 10^{-2}\) & \(2.83\times 10^{-2}\) & \(1.37\times 10^{-2}\) & \(6.42\times 10^{-3}\) & \(2.76\times 10^{-3}\) \\ \hline \(k=5\) & \(1.14\times 10^{-1}\) & \(5.74\times 10^{-2}\) & \(2.84\times 10^{-2}\) & \(1.38\times 10^{-2}\) & \(6.44\times 10^{-3}\) & \(2.77\times 10^{-3}\) \\ \hline \(k=7\) & \(1.15\times 10^{-1}\) & \(5.75\times 10^{-2}\) & \(2.84\times 10^{-2}\) & \(1.38\times 10^{-2}\) & \(6.45\times 10^{-3}\) & \(2.77\times 10^{-3}\) \\ \hline \end{tabular}
\end{table}
Table 2. The \(L^{2}\) numerical errors at \(T_{end}=1\) for \(g(r)=r^{k}\) with different values of \(k\)
Figure 3. Evolution of the modified energy over time under the IEC scheme with \(c(r)=\ln(1+e^{r})\): (a) Allen-Cahn equation; (b) Cahn-Hilliard equation.
We next turn our attention to comparable experiments performed using the IEF scheme. For brevity, we confine our discussion to applying the IEF scheme to the Cahn-Hilliard equation. Employing the same initial condition given by equation (4.5), we choose \(g(r)=r^{7}\) to facilitate our investigations. The modified energy in this context is defined as per equation (3.6). Figure 5(a) delineates the evolution of this modified energy over time for varying time step sizes. The modified energy exhibits a monotonic decrease, in alignment with our theoretical predictions outlined in Theorem 3.1.
For our second experiment using the IEF scheme, we diverge slightly from the approach used for the IEC scheme, where we computed the discrepancy between the modified and original energies. Instead, we focus on the disparity between the auxiliary variables and their corresponding values as functions of \(\phi\) for differing time step sizes. As illustrated in Figure 5(b), the \(L^{2}\) norms of \(g-g\left(r(\phi)\right)\) and \(r-r(\phi)\) at \(T_{end}=5\) show a linear dependency on the time step size, again demonstrating first-order accuracy. The results of these two experiments attest to the robustness and accuracy of the IEF scheme.
Figure 4. Graphs depicting \(\left|\int_{\Omega}(c(r)-F(\phi))\right|\) at \(T_{end}=5\) for varying time step sizes under the IEC scheme, constructed using the Softplus function: (a) Allen-Cahn equation; (b) Cahn-Hilliard equation.
Figure 5. Numerical results of the IEF scheme applied to the Cahn-Hilliard equation
### Coarsening effect
The coarsening effect [16, 29] refers to the phenomenon where small-scale structures or features in the system tend to merge and form larger-scale structures over time. This effect is related to the system's dynamics of phase separation or pattern formation. Here we present some experimental results obtained by applying the IEC and the IEF schemes to the Cahn-Hilliard equation to simulate such an effect. We first consider a benchmark problem for the coarsening effect, see, for example, [11, 33, 47]. The initial condition is set to be
\[\phi(x,y,0)=-\sum_{i=1}^{2}\tanh\left(\frac{\sqrt{(x-x_{i})^{2}+(y-y_{i})^{2}} -r_{i}}{1.2\varepsilon}\right)+1, \tag{4.6}\]
where we choose \((x_{1},y_{1},r_{1})=(\pi-0.7,\pi-0.6,)\) and \((x_{2},y_{2},r_{2})=(\pi+1.65,\pi+1.6,0.8)\). We apply the IEC scheme formulated with the Softplus function, namely Algorithm 1, to solve this problem with the time step size set to \(0.001\). As seen from the snapshots presented in Figure 6, we observe the coarsening effect: the small circle is gradually absorbed by the large circle over the time span \([0,3]\), with full absorption occurring at around \(t=2.00\,s\).
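A small sketch of the benchmark initial condition (4.6) follows. The radius of the first circle is missing from the text above, so the value used below is an assumed placeholder; the remaining numbers follow the text.

```python
import numpy as np

def two_bubbles(X, Y, centers, eps=0.4):
    """Initial condition (4.6): superposition of two tanh-profile circles."""
    phi = np.ones_like(X)
    for xc, yc, rc in centers:
        phi -= np.tanh((np.sqrt((X - xc) ** 2 + (Y - yc) ** 2) - rc) / (1.2 * eps))
    return phi

n = 40
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
r1 = 1.4   # assumed value; the first radius is not given in the source text
phi0 = two_bubbles(X, Y, [(np.pi - 0.7, np.pi - 0.6, r1),
                          (np.pi + 1.65, np.pi + 1.6, 0.8)])
```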
Our last exploration entails a new example, which we introduce with the following initial condition:
\[\phi(x,y,0)=0.25+0.4\text{rand}(x,y). \tag{4.7}\]
This numerical experiment reproduces a similar one conducted in [32]. We employ the IEF scheme with the function \(g(r)\) set to \(r^{7}\) for our investigations. The time step size adopted is \(0.0001\). The dynamic patterns of the mixing of the phase field, starting from a random initial value, can be discerned in Figure 7. The observed results, embodying the interplay of the phase transitions, agree qualitatively with the findings reported in [32].
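The random initial state (4.7) can be generated as below; the fixed seed is an added assumption, included only for reproducibility.

```python
import numpy as np

rng = np.random.default_rng(0)              # assumed seed, for reproducibility
n = 40
phi0 = 0.25 + 0.4 * rng.random((n, n))      # initial condition (4.7)
```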
Figure 6. Snapshots of \(\phi\) taken within the time span \([0,3]\), solving the Cahn-Hilliard equation with initial condition (4.6) by the IEC scheme formulated by the Softplus function with time step size set to be \(0.001\).
## 5. Extension to the SAV scheme
In the forthcoming discussion, we present the extensions of the frameworks of the IEC and the IEF scheme onto the Scalar Auxiliary Variable method (SAV). These schemes will share similar properties as our generalization of the IEQ method in the sense that they will also lead to energy-stable linear numerical schemes.
### A brief review of the SAV method
The Scalar Auxiliary Variable (SAV) method, initially presented in [40, 41] and subsequently refined [1, 5, 20, 28, 39, 42], is a robust tool analogous to the IEQ method. It is widely employed to construct efficient and accurate time discretization schemes for a broad spectrum of gradient flows. In terms of formulation, it bears a notable resemblance to the IEQ method. Both introduce an auxiliary variable to replace the original energy and then construct an energy-stable linear scheme based on this reformulation. A distinct point of difference emerges in the scope of the auxiliary variables utilized by the two methodologies. While the IEQ approach employs a variable over both time and space (represented as \(r=\sqrt{F(\phi)+A_{1}}\), where \(F\) is the energy density function outlined in (1.1)), the SAV method utilizes a variable defined solely over time, \(r=\sqrt{E_{1}(\phi)+A_{2}}\), the square root of the shifted functional \(E_{1}(\phi):=\int_{\Omega}F(\phi)\,dx\). \(A_{2}\) here is a constant to ensure the auxiliary variable is well-defined given that \(E_{1}\) is bounded from below, which is necessary for the free energy term to be physically sound. Specifically, as initially outlined in [41], a first-order SAV scheme can be written as in (5.1) below.
Figure 7. Snapshots of \(\phi\) taken within the time span \([0,1.1]\), solving the Cahn-Hilliard equation with initial condition (4.7) by the IEF scheme formulated by the \(g(r)=r^{7}\) with time step size set to be \(0.0001\).
\[\frac{\phi^{n+1}-\phi^{n}}{\Delta t}=\mathcal{G}\mu^{n+1}, \tag{5.1a}\]
\[\mu^{n+1}=-\Delta\phi^{n+1}+\frac{r^{n+1}}{\sqrt{E_{1}(\phi^{n})+A_{2}}}F^{n}, \tag{5.1b}\]
\[r^{n+1}-r^{n}=\frac{1}{2\sqrt{E_{1}(\phi^{n})+A_{2}}}\int_{\Omega}F^{n}(\phi^{n+1}-\phi^{n})\,dx. \tag{5.1c}\]
Here \(F^{n}\) denotes \(\frac{\delta F}{\delta\phi}(\phi^{n})=f(\phi^{n})\). Various numerical experiments have shown the efficiency and practicability of this method. In addition, as stated in [40, 41], the SAV methodology inherits all the advantages of the IEQ method and additionally enjoys the benefits of a simpler implementation and the requirement for only \(E_{1}(\phi)\) to be bounded from below (instead of the need for \(F(\phi)\) also to be bounded from below). Given the commonality in the core construction principles of these methods, it is feasible to transpose our ideas onto the SAV method, enabling further generalizations. We present this in the following.
### The C-SAV method
In the following, we propose a variant of the standard SAV method, the Convex Scalar Auxiliary Variable method, abbreviated as C-SAV. Guided by the parallel adaptations we made for the IEC method, a natural proposition is to supplant \(r^{2}\) with an alternative auxiliary function. In this context, we adhere to the same assumptions put forth for the IEC scheme with respect to the auxiliary function \(c\); namely, \(c:\mathbb{R}\rightarrow\mathbb{R}\) is taken to be a smooth, monotonically increasing, L-smooth convex function with \(\mathbb{R}^{+}\) included in its range. Then, by introducing a scalar auxiliary variable \(r(t)\) such that
\[c\left(r(t)\right)=E_{1}(\phi)+A_{2},\]
system (1.1) can be reformulated as
\[\frac{\partial\phi}{\partial t}=\mathcal{G}\mu, \tag{5.2a}\]
\[\mu=-\Delta\phi+\frac{c^{\prime}\left(r(t)\right)}{c^{\prime}\left(c^{-1}\left(E_{1}(\phi)+A_{2}\right)\right)}\frac{\delta F}{\delta\phi}, \tag{5.2b}\]
\[r_{t}=\frac{1}{c^{\prime}\left(c^{-1}\left(E_{1}(\phi)+A_{2}\right)\right)}\int_{\Omega}\frac{\delta F}{\delta\phi}\phi_{t}. \tag{5.2c}\]
From this, we propose the following C-SAV scheme, obtained by applying the same conceptual framework as the IEC scheme to the SAV method:
\[\frac{\phi^{n+1}-\phi^{n}}{\Delta t}=\mathcal{G}\mu^{n+1}, \tag{5.3a}\]
\[\mu^{n+1}=-\Delta\phi^{n+1}+\frac{c^{\prime}(r^{n})+\alpha L(r^{n+1}-r^{n})}{c^{\prime}\left(r(\phi^{n})\right)}F^{n}, \tag{5.3b}\]
\[r^{n+1}-r^{n}=\frac{1}{c^{\prime}\left(r\left(\phi^{n}\right)\right)}\int_{\Omega}F^{n}(\phi^{n+1}-\phi^{n})\,dx. \tag{5.3c}\]
By defining the modified energy as \(E_{CS}^{n}=\frac{1}{2}\|\nabla\phi^{n}\|^{2}+c(r^{n})\), we can deduce, through arguments parallel to Theorem 2.2, that this discrete energy decreases monotonically. We can similarly extend the IEF formulation to the SAV methodology, thereby constructing linear, energy-stable numerical schemes. We omit the details for the sake of brevity, but we encourage interested readers to explore this subject further.
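To make the C-SAV update concrete, the following is a minimal sketch of one step of (5.3) for the Allen-Cahn case on a one-dimensional periodic grid, again with the Softplus choice \(c(r)=\ln(1+e^{r})\) (so \(L=\frac{1}{4}\)) and with \(F^{n}\) interpreted as \(f(\phi^{n})\); all concrete parameter values are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative parameters (assumed values)
N, Lx = 64, 2 * np.pi
dx = Lx / N
dt, M, eps, alpha, A2, Lsm = 1e-2, 1.0, 0.4, 0.5, 1.0, 0.25

x = dx * np.arange(N)
phi = 0.1 * np.cos(x)

Lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N), format="lil")
Lap[0, -1] = Lap[-1, 0] = 1.0
Lap = Lap.tocsr() / dx**2
I = sp.identity(N, format="csr")
G = -M * I

F = lambda p: 0.25 * (p**2 - 1.0) ** 2
f = lambda p: p**3 - p
E1 = lambda p: np.sum(F(p)) * dx                       # E_1(phi) = int F(phi) dx
r_of = lambda p: np.log(np.expm1(E1(p) + A2))          # scalar r with c(r) = E_1 + A_2
cprime = lambda rr: 1.0 / (1.0 + np.exp(-rr))          # c'(r) for the Softplus function

r = r_of(phi)             # scalar auxiliary variable r^n
fn = f(phi)               # F^n, interpreted as f(phi^n)
cp_n = cprime(r_of(phi))  # c'(r(phi^n)); equals cprime(r) at this step, but in general
                          # r evolves by (5.3c) while r(phi^n) is recomputed from phi^n

# Bordered linear system of (5.3) for (phi^{n+1}, mu^{n+1}, r^{n+1})
col = sp.csr_matrix((-alpha * Lsm / cp_n) * fn.reshape(-1, 1))
row = sp.csr_matrix((-dx / cp_n) * fn.reshape(1, -1))
A = sp.bmat([[I / dt, -G, None],
             [eps**2 * Lap, I, col],
             [row, None, sp.csr_matrix([[1.0]])]], format="csc")
b = np.concatenate([phi / dt,
                    ((cprime(r) - alpha * Lsm * r) / cp_n) * fn,
                    [r - (dx / cp_n) * np.dot(fn, phi)]])
sol = spla.spsolve(A, b)
phi_new, r_new = sol[:N], sol[-1]
print("one C-SAV step done; updated scalar r =", r_new)
```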
## 6. Concluding remarks
In this work, we have introduced two novel numerical methodologies: the Invariant Energy Convexification (IEC) and the Invariant Energy Functionalization (IEF) methods. These methods generalize the well-known Invariant Energy Quadratization (IEQ) method. The inherent ability of these proposed techniques to construct linear energy-stable numerical schemes, a crucial feature inherited from the IEQ method, underscores their significance.
The numerical experiments on the Allen-Cahn and Cahn-Hilliard equations substantiated, in practical implementations, the theoretical properties deduced for both methods. The results demonstrated that both methods are robust and provide energy stability. Moreover, if an appropriate function is meticulously selected and the parameters in the method are carefully chosen, the IEC method has the potential to outperform the standard IEQ method.
As we conclude this manuscript, we wish to underscore several notable points and delineate potential avenues for future research related to this work:
1. In the extant version of the IEC method, the selection of uniformly L-smooth functions, such as the Softplus function, is imperative for designing the corresponding numerical scheme in theory. However, in our practical implementations, we observed that numerous functions that are only locally L-smooth can also facilitate the development of numerical schemes with commendable performance. This could be attributed to the energy bound on the auxiliary variable, which may consequently result in a bound on the function's second derivative. A theoretical justification of this observation remains to be established.
2. The accuracy of the numerical solutions is influenced by the choice of the auxiliary function \(c(r)\) and the parameter \(\alpha\), especially when the time discretization is coarse, as is evident from our discussion in Section 4. Therefore, deriving a theoretical criterion for the optimal formulation for specific problems would be challenging but immensely useful.
3. Our numerical results reveal that both methods converge to the exact solution with the desired rate. However, contrary to the IEQ method, the convergence analysis of the IEC and the IEF methods may present formidable challenges due to the non-quadratic structure of their modified energies, necessitating analytical tools tailored to the auxiliary functions used in their formulation. This facet of our work warrants further investigation.
4. It is worth noting that the challenges and problems mentioned above are also relevant when considering extending these methods to the Scalar Auxiliary Variable (SAV) method.
|
2308.01745
|
Energy spectrum of valence band in HgTe quantum wells on the way from a
two to the three dimensional topological insulator
|
The magnetic field and temperature dependences and the Hall effect have been
measured in order to determine the energy spectrum of the valence band in HgTe
quantum wells with the width (20-200) nm. The comparison of hole densities
determined from the period of Shubnikov-de Haas oscillations and the Hall effect
shows that states at the top of the valence band are doubly degenerate in the entire
quantum well width range. The cyclotron mass determined from the
temperature dependence of SdH oscillations increases monotonically from
0.2 to 0.3 of the free electron mass with increasing hole density from 2e11 to
6e11 cm^-2. The determined dependence has been compared to the theoretical one
calculated within the four band kp model. The experimental dependence was found
to be strongly inconsistent with these predictions. It has been shown that the
inclusion of additional factors (electric field, strain) does not remove the
contradiction between experiment and theory. Consequently it is doubtful that
the mentioned kp calculations adequately describe the valence band for any
width of quantum well.
|
G. M. Minkov, O. E. Rut, A. A. Sherstobitov, S. A. Dvoretski, N. N. Mikhailov, V. Ya. Aleshkin
|
2023-08-03T13:08:40Z
|
http://arxiv.org/abs/2308.01745v1
|
Energy Spectrum of the Valence Band in HgTe Quantum Wells on the Way from a Two- to Three-Dimensional Topological Insulator
###### Abstract
The magnetic field and temperature dependences and the Hall effect have been measured in order to determine the energy spectrum of the valence band in HgTe quantum wells with the width \(d_{\rm QW}\) = 20-200 nm. The comparison of hole densities determined from the period of Shubnikov-de Haas oscillations and the Hall effect shows that states at the top of the valence band are doubly degenerate in the entire \(d_{\rm QW}\) range, and the cyclotron mass \(m_{h}\) determined from the temperature dependence of the amplitude of Shubnikov-de Haas oscillation increases monotonically from 0.2\(m_{0}\) to 0.3\(m_{0}\) (\(m_{0}\) is the mass of the free electron) with increasing hole density \(p\) from 2 \(\times\) 10\({}^{11}\) to 6 \(\times\) 10\({}^{11}\) cm\({}^{-2}\). The determined dependence has been compared to theoretical dependences \(m_{h}\)[\(p\), \(d_{\rm QW}\) ] calculated within the four-band \({\bf k}P\) model. These calculations predict an approximate stepwise increase in \(m_{h}\) owing to the pairwise merging of side extrema with increasing hole density, which should be observed at \(p\) = (4-4.5) \(\times\) 10\({}^{11}\) and 4 \(\times\) 10\({}^{10}\) cm\({}^{-2}\) for \(d_{\rm QW}\) = 20 and 200 nm, respectively. The experimental dependences are strongly inconsistent with this prediction. It has been shown that the inclusion of additional factors (electric field in the quantum well, strain) does not remove the contradiction between the experiment and theory. Consequently, it is doubtful that the mentioned \({\bf k}P\) calculations adequately describe the valence band at all \(d_{\rm QW}\) values.
## 1 Introduction
Structure with HgTe quantum wells (QWs) attract great attention for several reasons. First, the QW is formed from a gapless semiconductor, whereas HgCdTe barriers are formed from a semiconductor with normal band ordering.1 Second, the band structures of the parent materials HgTe and HgCdTe are studied in detail and their parameters are well known. Third, the multiband \({\bf k}P\) method for the calculation of the energy spectrum \(E(k)\) in HgTe quantum wells is well developed (see, e.g., [1, 2, 3, 4] and references therein). These calculations show that various energy spectra occur depending on the width of the quantum well \(d_{\rm QW}\) from the spectrum similar to the spectrum of a narrow-gap semiconductor at \(d_{\rm QW}\) < 6.3 nm to a semimetallic spectrum at \(d_{\rm QW}\) > \(\approx\)15 nm. Fourth, the theory predicts that the HgTe QW with \(d_{\rm QW}\) > 6.5 nm is a two-dimensional topological insulator, where one-dimensional edge states are formed in addition to two-dimensional states. The HgTe QW with \(d_{\rm QW}\) > 60-80 nm is a three-dimensional topological insulator, where two-dimensional single-spin surface states are formed with the characteristic localization length in the \(z\) direction, which is perpendicular to the plane of the QW, much larger than \(d_{\rm QW}\). Fifth, the technology of growth of HgCdTe/HgTe/HgCdTe structures is well developed [5, 6].
Footnote 1: In normal band ordering at the \(\Gamma\) point in II–VI semiconductors, the doubly degenerate \(\Gamma_{6}\) term forms the conduction band, the quadruply degenerate \(\Gamma_{8}\) term constitutes the valence band consisting of the bands of heavy and light holes, and the doubly degenerate \(\Gamma_{7}\) term forms the spin–orbit split valence band.
All these circumstances seem to allow detailed understanding of (transport, optical, etc.) properties of HgCdTe/HgTe/HgCdTe structures.
Theoretical calculations predict that the conduction band is quite simple, nearly isotropic and nonparabolic. Its spectrum in structures with \(d_{\rm QW}\) = 4-80 nm is thoroughly studied by the optical and magnetotransport methods, as well as by the photoelectric method in a wide photon energy range beginning with the terahertz band [8, 9, 10, 11, 12]. It was shown that the energy spectrum is reasonably described in general within the four-band \(\mathbf{k}P\) model, and some discrepancies between theory and experiment were discussed in [13].
The energy spectrum of the valence band is much more complex. The theory predicts that the top of the valence band at \(d_{\rm QW}\) < 7-7.5 nm is located at \(\mathbf{k}\) = 0 and has the curvature (mass) close to the curvature of the conduction band. The top of the valence band at \(d_{\rm QW}\) > 7-7.5 nm is formed by the four side extrema so that the states at the top of the valence band in symmetric quantum wells have a degree of degeneracy of \(K\) = 8 (2 owing to "spin" multiplied by 4, which is the number of side extrema). The anisotropy of these states near extrema is small and the cyclotron mass of holes given by the formula \(m_{h}=(\hbar^{2}/2\pi)\,dS/dE\), where \(S\) is the area of the constant-energy cross section at the energy \(E\), is (0.2-0.3)\(m_{0}\) at \(p<\) (5-9)\(\times 10^{11}\) cm\({}^{-2}\).
The interface inversion asymmetry in HgCdTe/HgTe/HgCdTe structures, which is described by a single parameter \(g4\) within the four-band \(\mathbf{k}P\) model [14], leads to the "spin" splitting of states at the top of the valence band, so that the degree of degeneracy decreases to 4. Figure 1 presents constant-energy contours of the upper split state calculated taking into account the interface inversion asymmetry in the 8.3-nm-wide quantum well with the parameter \(g4\) = 0.8. This figure demonstrates that constant-energy contours of two extrema presented in the figure are merged with a decrease in the energy (i.e., with an increase in the hole density) to 18-18.5 meV, which should lead to the doubling of the cyclotron mass \(m_{h}\) caused by the doubling of \(dS/dE\).
The energy spectrum of the valence band is much less studied experimentally [15, 16, 17]. It was shown that the extremum of the valence band at \(d_{\rm QW}\)= 5-7 nm, when it is located at \(\mathbf{k}\) = 0, is strongly split owing to the interface inversion asymmetry [15, 17].
At \(d_{\rm QW}\) > 7-7.5 nm, when the top of the valence band is formed by four side extrema, the energy spectrum is experimentally studied much less. In [18], we show that the effective mass in the QW with \(d_{\rm QW}\) = 8-20 nm at \(p\) = (2-5) \(\times\)\(10^{11}\) cm\({}^{-2}\) is close to the theoretical value, but the degree of degeneracy is \(K\) = 2 rather than 4, as should be the case taking into account the interface inversion asymmetry in the symmetric quantum well. It was shown that the additional asymmetry (e.g., different widths of heterointerfaces or different parameters \(g4L\) and \(g4R\) characterizing the contribution from the interface inversion asymmetry on the left and right walls, respectively) results in halving of the degree of degeneracy \(K\) and, thereby, makes it possible to remove this discrepancy with theory.
Experimental studies of the cyclotron mass of holes in the QW with \(d_{\rm QW}\) = 8-20 nm in the hole density range of \(p\) = (2-5) \(\times\)\(10^{11}\) cm\({}^{-2}\) showed that the effective mass is close to the theoretical value [17]. The results from [17] are presented in Fig. 2.
Figure 1: (Color online) Constant-energy contours of the upper split state calculated taking into account the interface inversion asymmetry. Constant-energy contours of the lower split state are similar and located approximately 6 meV below in energy. The calculations were performed for the 8.3-nm-wide quantum well on the (013) substrate at the parameter \(g4\) = 0.8. One half of the picture is shown; the second half is a mirror image. The energy is measured from \(E(k=0)\) with a step of 1 meV. Some energies are indicated near the corresponding contours. The dashed line indicates the direction passing through the maximum of \(E(k)\).
Figure 2: (Color online) Cyclotron mass of holes versus the hole density in structures with \(d_{\rm QW}\) = 8.3–20 nm (from [18]). The black and red solid lines are calculated for \(d_{\rm QW}\) = 8.3 and 20 nm at \(g4L\) = 0.8 and \(g4R\) = 1, respectively. The dashed rectangle marks the \(m_{h}/m_{0}\) region including all experimental results.
Any detailed experimental results for \(d_{\rm QW}\) > 20 nm are absent.
In this work, the effective mass of holes and the degree of degeneracy of the top of the valence band in the QW with \(d_{\rm QW}\) = 20-200 nm in the hole density range of \((2\)-\(6)\times 10^{11}\) cm\({}^{-2}\) are studied experimentally.
## 2 Experimental Results and Discussion
The studied Hg\({}_{1-x}\)Cd\({}_{x}\)Te/HgTe/Hg\({}_{1-x}\)Cd\({}_{x}\)Te ( \(x\) = 0.6-0.7) structures with quantum wells of the widths \(d_{\rm QW}\) = 22, 32, 46, 80, 88, 120, 200 nm were grown by molecular beam epitaxy on the (013) semi-insulating GaAs substrate (in addition, one structure with \(d_{\rm QW}\) = 80 nm was grown on the (100) substrate). The measurements were carried out with Hall bars with a channel width of 0.5 mm and potential contacts separated by 0.5 mm. An aluminum gate was deposited after the deposition of a gate dielectric (parylene) on the surface of the bars. The dc measurements were performed in the temperature range of 1.3-4 K in magnetic fields up to 5 T.
Experimental results and their processing are identical for all studied structures. We describe them in detail for structure 180824 with \(d_{\rm QW}\) = 32 nm.
The magnetic field dependences of the longitudinal resistance \(R_{xx}\) and the Hall coefficient \(R_{\rm H}\) presented in Fig. 3a show that transport involves at least two types of carriers: electrons, which determine the magnetic field dependences of \(R_{xx}\) and \(R_{\rm H}\) in low magnetic fields \(B\) < 0.3-0.5 T, and holes, which determine the magnetic field dependences of \(R_{xx}\) and \(R_{\rm H}\) at magnetic fields \(B\) > 0.5 T (similar dependences are observed at all gate voltages \(V_{\rm g}\) < 0). At gate voltages \(V_{\rm g}\) > 0, the hole contribution to the conductivity vanishes and \(R_{xx}\) and \(R_{\rm H}\) are determined only by conduction electrons. This occurs because the HgTe quantum well with \(d_{\rm QW}\) > 14-15 nm is a semimetal; i.e., the bottom of the conduction band, which is located at the center of the Brillouin zone at **k** = 0, is below the side extrema in the valence band (see the inset of Fig. 3a). Dependences of the hole and electron densities on \(V_{\rm g}\) are presented in Fig. 3b. The electron density at \(V_{\rm g}\) < 0 was determined from the magnetic field dependences of \(R_{xx}\) and \(R_{\rm H}\) in the magnetic field range of \(B\) = 0.03-0.6 T in the two-carrier conduction model, whereas
Figure 3: (Color online) (a) Magnetic field dependences of \(R_{xx}\) and \(R_{\rm H}\). The inset shows the sketch of \(E_{\rm c}(k)\) and \(E_{\rm v}(k)\) at \(d_{\rm QW}\) > 14 nm. (b) Hole, \(p\), and electron, \(n\), densities and the charge number of the quantum well \(Q/e\) = \(p\) - \(n\) versus the gate voltage \(V_{\rm g}\) determined in the two-carrier conduction model at the gate voltage \(V_{\rm g}\) < 0 and the density \(p\) determined from the period of Shubnikov–de Haas oscillations under the assumption of double degeneracy of Landau levels. The inset is the mobility of holes versus the hole density.
the electron density at \(V_{\rm g}\geq 0\), when \(R_{\rm H}<0\) and hardly depends on the magnetic field, was determined from the Hall effect at \(B\) = 0.03 T as \(n\) = \((eR_{\rm H})^{-1}\), where \(e\) is the electron charge.
The hole density was determined both from the magnetic field dependences of \(R_{xx}\) and \(R_{\rm H}\) in the range of \(B\) = 0.05-1 T within the two-carrier conduction model and from the frequency \(F\) of Shubnikov-de Haas oscillations as \(p_{\rm SdH}\) = \((e/\hbar)FK\) (Fig. 4a). Figure 3b demonstrates that \(p_{\rm SdH}\) at \(K\) = 2 coincides within the experimental error with the Hall hole density.2
Footnote 2: This work is focused on the study of the spectrum of the valence band. For this reason, we do not discuss the behavior of \(R_{xx}\) and \(R_{\rm H}\) in the electron region. The behavior of this structure in the electron region was analyzed in detail in [18].
Figure 3b also shows that the charge of the quantum well depends linearly on \(V_{\rm g}\) in the entire \(V_{\rm g}\) range, which indicates the absence of missed conduction channels. This conclusion is confirmed by the fact that the slope of the gate-voltage dependence of \(Q/(eV_{\rm g})\) coincides within the error with the gate-voltage dependence of \(C/S_{\rm g}\), where \(C\) is the capacitance between the two-dimensional gas and the gate and \(S_{\rm g}\) is the area of the gate.
For example, we consider oscillations of \(R_{xx}(B)\) in the hole region at \(V_{\rm g}\)= -4 V shown in Fig. 3a. The Fourier spectrum of the oscillatory part \(\delta R_{xx}\) = \((R_{xx}-R^{\rm mon})/R^{\rm mon}\) of \(R_{xx}\), where \(R^{\rm mon}\) is the monotonic part of the magnetoresistance, at \(V_{\rm g}\) = -4 V is shown in Fig. 4a. The low- and high-frequency components of the spectrum correspond to the contributions from electrons and holes to oscillations of
\(R_{xx}(B)\), respectively. This immediately follows from the temperature dependence of the amplitudes of these components (Fig. 4a). As the temperature increases from 1.32 to 2.4 K, the amplitude of the low-frequency component decreases only by 20% (this is due to a small effective mass of electrons), whereas the amplitude of the high-frequency component decreases by a factor of about 5.
The first conclusion following from Figs. 3b and 4a is that the degree of degeneracy of Landau levels in the valence band is 2.
Figure 4: (Color online) (a) Fourier spectrum of oscillations of \(R_{xx}\) presented in Fig. 3a; the long-dashed line shows the filter for the separation of contribution from holes to oscillations of \(R_{xx}\). (b) Shubnikov–de Haas oscillations of holes found after the filtration of the Fourier spectrum, as shown in panel (a). (c) (Circles) Temperature dependence of the amplitude of Shubnikov–de Haas oscillations in a field of 1.5 T and (line) the Lifshitz–Kosevich formula with \(m_{h}/m_{0}\) = 0.22. (d) Magnetic field dependence of \(m_{h}/m_{0}\).
This follows from the fact that the hole density determined from the period of high-frequency oscillations under the assumption that Landau levels are doubly degenerate coincides within the error with the Hall hole density.
To determine the effective mass of holes from the temperature dependence of the amplitude of oscillations, these oscillations were reconstructed by the inverse Fourier transform of the filtered (as shown by the dashed line in Fig. 4a) Fourier spectrum (Fig. 4b). The amplitudes of oscillations in a magnetic field of 1.5 T at several temperatures are presented by circles in Fig. 4c, where the line corresponds to the Lifshitz-Kosevich formula ensuring the best reproduction of the experimental results, which is achieved at \(m_{h}/m_{0}=0.22\). To estimate the error, the ratio \(m_{h}/m_{0}\) was determined at different magnetic fields. These results are presented in Fig. 4d. Thus, \(m_{h}/m_{0}=0.22\pm 0.03\) at \(p=4.05\times 10^{11}\) cm\({}^{-2}\). Such measurements and their analysis were performed in the entire accessible hole density range; the corresponding results are shown in Fig. 5 together with the calculated dependence \(m_{h}(p)/m_{0}\).
It is seen that the effective hole mass \(m_{h}/m_{0}\) at hole densities below \(3\times 10^{11}\) cm\({}^{-2}\) is in good agreement both with the results for \(d_{\rm QW}=8\)-20 nm (Fig. 2) and with the theoretical dependence. However, a sharp increase in \(m_{h}/m_{0}\) caused by the pairwise merging of side extrema is not observed.
Dependences of \(m_{h}/m_{0}\) on the hole density calculated with several \(d_{\rm QW}\) values in the range of 20-200 nm are presented in Fig. 6 together with experimental results for \(m_{h}/m_{0}\) obtained only in structures with \(d_{\rm QW}=200\) nm. The experimental \(m_{h}/m_{0}\) values in structures with \(d_{\rm QW}=22\), 32, 46, 60, 88, and 120 nm lie in the dashed rectangle (experimental values are not presented because they are too numerous and it will be very difficult to understand to which structures different symbols belong).
It is seen that the hole density at which a jump in \(m_{h}/m_{0}\) occurs because of the pairwise merging of side extrema should decrease strongly with increasing \(d_{\rm QW}\) and this jump at \(d_{\rm QW}=200\) nm should be observed at \(p=0.4\times 10^{11}\) cm\({}^{-2}\). However, experimental \(m_{h}/m_{0}\) values at all \(d_{\rm QW}\) values are close to each other and increase smoothly from \(0.2\pm 0.03\) at \(p=2\times 10^{11}\) cm\({}^{-2}\) to \(0.3\pm 0.03\) at \(p=5\times 10^{11}\) cm\({}^{-2}\).
It could be thought that \(m_{h}/m_{0}\) strongly depends on the orientation of the QW. We tested this assumption for the QW with \(d_{\rm QW}=80\) nm, which was deposited on two substrates with the (013) and (100) orientations. Experimental \(m_{h}/m_{0}\) values at different hole densities presented in the inset of Fig. 6 show that \(m_{h}(p)/m_{0}\) is independent of the orientation within the experimental error.
Thus, the degree of degeneracy of states near the top of the valence band is 2 in the entire range \(d_{\rm QW}\) = 8-200 nm at hole densities of (1.5-5.5) \(\times\)10\({}^{11}\) cm\({}^{-2}\).
Figure 5: (Circles) Hole mass versus the hole density at \(d_{\rm QW}=32\) nm and (line) the calculated dependence. The arrow indicates the hole density at which side extrema should be pairwise merged, leading to a sharp increase in \(m_{h}/m_{0}\). The dashed rectangle same as in Fig. 2 indicates the \(m_{h}/m_{0}\) region including all experimental results at \(d_{\rm QW}=8\)–20 nm [18].
Figure 6: (Color online) (Lines) Calculated density dependences of the hole mass \(m_{h}(p)/m_{0}\) at the indicated \(d_{\rm QW}\) values and (circles) experimental values for the largest quantum well with the width \(d_{\rm QW}=200\) nm. The dashed rectangle same as in Figs. 2 and 5 indicates the \(m_{h}/m_{0}\) region including all experimental results at \(d_{\rm QW}=8\)–20 nm (Fig. 2) and at \(d_{\rm QW}=22\), 32, 46, 60, 88, and 120 nm. The inset shows the experimental \(m_{h}/m_{0}\) values for two structures with \(d_{\rm QW}=80\) nm on the (013) and (100) substrates.
The effective mass of holes \(m_{h}/m_{0}\) at all \(d_{\rm QW}\) values increases monotonically with the hole density from 0.2 \(\pm\) 0.03 to 0.3 \(\pm\) 0.03. This behavior drastically differs from the theoretical spectrum calculated within the four-band \({\bf k}P\) model, which predicts a stepwise (by a factor of about 2) increase in \(m_{h}/m_{0}\) at the hole density of (4-4.5) \(\times\) 10\({}^{11}\) cm\({}^{-2}\) in the 20-nm-wide QW and at the hole density of 0.4 \(\times\) 10\({}^{11}\) cm\({}^{-2}\) in the 200-nm-wide QW.
What is the reason for such a discrepancy?
**1.** The hole density in the experiment was changed by varying the gate voltage, i.e., in the presence of the electric field \(E_{z}\) in the quantum well, whereas the calculation was performed for the "empty" spectrum. More accurate self-consistent calculations require the \(z\) distribution of the charge of holes; i.e., it is necessary to know wavefunctions at energies below the Fermi energy at all \(k_{x}\) and \(k_{y}\) values. This problem seems too difficult and, to estimate the effect of the electric field in the quantum well, we consider \(E_{z}\) = const. The dependences of \(m_{h}/m_{0}\) on the hole density calculated at \(E_{z}\) = 5 \(\times\) 10\({}^{2}\) V/cm presented in Fig. 7 show that doubly degenerate states at the top of the valence band are split and the mass jump is shifted toward lower hole densities in one branch and toward higher hole densities in the other branch. In a field of 5 \(\times\) 10\({}^{2}\) V/cm at \(p\) \(<\) 2.5 \(\times\) 10\({}^{10}\) cm\({}^{-2}\), only one upper state is filled, so that the degree of degeneracy in this range should be 1 and \(m_{h}/m_{0}\)\(\approx\) 0.17. At 2.5 \(\times\) 10\({}^{10}\)\(<\)\(p\)\(<\) 5 \(\times\) 10\({}^{10}\) cm\({}^{-2}\), both the upper and lower states are filled (marked as upper and lower in Fig. 7), so that the degree of degeneracy in this range should be 2 and \(m_{h}/m_{0}\)\(\approx\) 0.17-0.18. At 5 \(\times\) 10\({}^{10}\)\(<\)\(p\)\(<\) 1.2 \(\times\) 10\({}^{11}\) cm\({}^{-2}\), the upper state, where the effective hole mass becomes \(m_{h}/m_{0}\)\(\approx\) 0.36, and the lower state with the effective hole mass \(m_{h}/m_{0}\)\(\approx\) 0.18 are filled. At \(p\) \(>\) 2 \(\times\)10\({}^{11}\) cm\({}^{-2}\), both states with close effective hole masses \(m_{h}/m_{0}\)\(\approx\) 0.35-0.38 are filled. Thus, the theory predicts that both the degree of degeneracy and the effective hole mass in the structure with \(d_{\rm QW}\) = 80 nm in the presence of the electric field \(E_{z}\) should change with an increase in the hole density. However, at \(p\) \(>\) 2 \(\times\) 10\({}^{11}\) cm\({}^{-2}\), as well as in the absence of the field, the degree of degeneracy should be 2 and \(m_{h}/m_{0}\)\(\approx\) 0.4-0.45.
This behavior of \(m_{h}/m_{0}\) is not observed: \(m_{h}/m_{0}\) remains in the interval of 0.2-0.3 in the entire hole density range. Thus, discrepancy between the theory and experiment cannot be explained by the fact that the calculations presented in Fig. 6 are not self-consistent.
**2.** All calculations were performed under the assumption that deformation in the quantum well remains the same as in narrow wells. However, it can be partially removed in wide wells. To estimate the effect of this factor, we calculated the dependence \(m_{h}(p)/m_{0}\) at two values of deformation-induced addition \(Hp\) to the Hamiltonian: \(Hp\) corresponding to the complete deformation (narrow wells) and 0.5\(Hp\) (Fig. 8).
It is seen that the hole density at which the jump in \(m_{h}/m_{0}\) should be observed hardly depends on deformation.
Thus, the reasons for the drastic discrepancy between the experimental and theoretical dependences
Figure 8: (Color online) Density dependences of the hole mass \(m_{h}(p)/m_{0}\) at two additions \(Hp\) and 0.5\(Hp\) to the Hamiltonian describing the contribution from deformation.
Figure 7: (Color online) Hole mass \(m_{h}/m_{0}\) versus the hole density in the upper and lower branches of the spectrum split by the electric field \(E_{z}\). The inset shows the hole mass \(m_{h}/m_{0}\) versus the energy measured from the top of the valence band at different electric field \(E_{z}\).
remain unclear. Consequently, it is doubtful that the \(\mathbf{k}P\) calculations adequately describe the valence band at all \(d_{\mathrm{QW}}\) values.
Direct experimental evidence of the existence of four fairly high side extrema is absent. The possibility of achieving agreement between the experiment and theory at \(d_{\mathrm{QW}}<20\) nm and \(p<4\times 10^{11}\) cm\({}^{-2}\) is not such evidence.
To summarize, the reported study of the energy spectrum of the top of the valence band in HgTe quantum wells with the widths \(d_{\mathrm{QW}}=8\)-\(200\) nm has shown that states at the top of the valence band at the hole densities \(p<6\times 10^{11}\) cm\({}^{-2}\) are doubly degenerate and the cyclotron mass of holes \(m_{h}\) increases monotonically from \(0.2m_{0}\) to \(0.3m_{0}\) with increasing hole density from \(1.5\times 10^{11}\) to \(5.5\times 10^{11}\) cm\({}^{-2}\). The dependences \(m_{h}(p)/m_{0}\) calculated within the four-band \(\mathbf{k}P\) model significantly differ from the corresponding experimental dependences. The estimates of the effects of the electric field \(E_{z}\) in the quantum well and of deformation cannot explain the discrepancy between the experimental and theoretical results. Reasons for this discrepancy remain unclear.
## Funding
This work was supported by the Ministry of Science and Higher Education of the Russian Federation, project no. 075-15-2020-797 (13.1902.21.0024).
|
2301.12554
|
Improving the Accuracy-Robustness Trade-Off of Classifiers via Adaptive
Smoothing
|
While prior research has proposed a plethora of methods that build neural
classifiers robust against adversarial attacks, practitioners are still
reluctant to adopt them due to their unacceptably severe clean accuracy
penalties. This paper significantly alleviates this accuracy-robustness
trade-off by mixing the output probabilities of a standard classifier and a
robust classifier, where the standard network is optimized for clean accuracy
and is not robust in general. We show that the robust base classifier's
confidence difference for correct and incorrect examples is the key to this
improvement. In addition to providing intuitions and empirical evidence, we
theoretically certify the robustness of the mixed classifier under realistic
assumptions. Furthermore, we adapt an adversarial input detector into a mixing
network that adaptively adjusts the mixture of the two base models, further
reducing the accuracy penalty of achieving robustness. The proposed flexible
method, termed "adaptive smoothing", can work in conjunction with existing or
even future methods that improve clean accuracy, robustness, or adversary
detection. Our empirical evaluation considers strong attack methods, including
AutoAttack and adaptive attack. On the CIFAR-100 dataset, our method achieves
an 85.21% clean accuracy while maintaining a 38.72% $\ell_\infty$-AutoAttacked
($\epsilon = 8/255$) accuracy, becoming the second most robust method on the
RobustBench CIFAR-100 benchmark as of submission, while improving the clean
accuracy by ten percentage points compared with all listed models. The code
that implements our method is available at
https://github.com/Bai-YT/AdaptiveSmoothing.
|
Yatong Bai, Brendon G. Anderson, Aerin Kim, Somayeh Sojoudi
|
2023-01-29T22:05:28Z
|
http://arxiv.org/abs/2301.12554v5
|
# Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing
Yatong Bai\({}^{1}\), Brendon G. Anderson\({}^{1}\), Aerin Kim\({}^{2}\), Somayeh Sojoudi\({}^{1}\)
\({}^{1}\)University of California, Berkeley \({}^{2}\)Scale AI {yatong_bai, bganderson, sojoudi}@berkeley.edu, [email protected]
###### Abstract
While it is shown in the literature that simultaneously accurate and robust classifiers exist for common datasets, previous methods that improve the adversarial robustness of classifiers often manifest an accuracy-robustness trade-off. We build upon recent advancements in data-driven "locally biased smoothing" to develop classifiers that treat benign and adversarial test data differently. Specifically, we tailor the smoothing operation to the usage of a robust neural network as the source of robustness. We then extend the smoothing procedure to the multi-class setting and adapt an adversarial input detector into a policy network. The policy adaptively adjusts the mixture of the robust base classifier and a standard network, where the standard network is optimized for clean accuracy and is not robust in general. We provide theoretical analyses to motivate the use of the adaptive smoothing procedure, certify the robustness of the smoothed classifier under realistic assumptions, and justify the introduction of the policy network. We use various attack methods, including AutoAttack and adaptive attack, to empirically verify that the smoothed model noticeably improves the accuracy-robustness trade-off. On the CIFAR-100 dataset, our method simultaneously achieves an 80.09% clean accuracy and a 32.94% AutoAttacked accuracy. The code that implements adaptive smoothing is available at [https://github.com/Bai-YT/AdaptiveSmoothing](https://github.com/Bai-YT/AdaptiveSmoothing).
## 1 Introduction
The vulnerability of neural networks to adversarial attacks has been observed in various applications, such as computer vision [25, 44] and control systems [31]. In response, "adversarial training" [12, 13, 25, 36, 62] has been studied to alleviate the susceptibility. Adversarial training builds robust neural networks by training on adversarial examples.
A parallel line of work focuses on certified robustness. There are a number of techniques that provide robustness certifications to existing neural networks [5, 6, 41]. Among these methods, "randomized smoothing" seeks to achieve certified robustness at test time [19, 38, 49]. The recent work [7] has shown that a locally biased smoothing method provides an improvement over the traditional data-blind randomized smoothing. However, [7] only focuses on binary classification problems, significantly limiting the applications. Moreover, the method has a fixed balance parameter between clean accuracy (accuracy on clean data without attack) and adversarial robustness, and an accuracy-robustness trade-off thus limits its performance.
While some works have shown that there exists a fundamental trade-off between accuracy and robustness [57, 61], recent research has argued that it should be possible to simultaneously achieve robustness and accuracy on benchmark datasets [59]. To this end, variants of adversarial training that improve the accuracy-robustness trade-off have been proposed, including TRADE [61], Interpolated Adversarial Training (IAT) [37], and many others [11, 50, 56, 58, 60]. However, even with these improvements, a loss in clean accuracy is often an inevitable price of achieving robustness. Moreover, standard non-robust models often take advantage of pre-training on larger datasets, obtaining enormous performance gains, whereas the effect of pre-training on robust classifiers is less understood and may be less prominent [18, 23].
This work makes a theoretically disciplined step towards performing robust classification without sacrificing clean accuracy, with the contributions summarized below.
* In Section 3, under the observation that the performance of the \(K\)-nearest-neighbor (\(K\)-NN) classifier, a crucial component of locally biased smoothing, becomes a bottleneck of the overall performance, we replace the \(K\)-NN classifier with a robust neural network that can be obtained via various existing methods, and modify the smoothing formulation accordingly. The resulting formulation (4) is a convex combination of the outputs of a standard neural network and a robust neural network. When the robust neural network has a certified Lipschitz constant, the combined classifier also has a closed-form certified robust radius.
* In Section 4, we propose a data-aware adaptive smoothing procedure that adaptively adjusts the mixture of a standard model and a robust model with the help of a policy network. This procedure uses a type of adversary detector as a policy network that adjusts the convex combination of the two networks, improving the accuracy-robustness trade-off. We then empirically verify the robustness of the proposed method using gray-box and white-box projected gradient descent (PGD) attack, AutoAttack, and adaptive attack, demonstrating that the policy network is robust against the types of adversaries it is trained with. When the policy is trained with examples generated by a carefully-constructed adaptive AutoAttack, the composite model sacrifices little robustness but significantly enhances the clean accuracy, demonstrating a significantly improved accuracy-robustness trade-off.
Note that we do not make any assumptions about how the standard and robust base models are obtained, nor does the method make assumptions on the type and budget of the adversarial attack. Thus, adaptive smoothing can take advantage of pre-trained weights via the standard base classifier and benefit from ever-improving robust training methods via the robust base classifier.
## 2 Background and related works
### Notations
The symbol \(\left\|\cdot\right\|_{p}\) denotes the \(\ell_{p}\) norm of a vector, while \(\left\|\cdot\right\|_{p*}\) denotes its dual norm. The matrix \(I_{d}\) denotes the identity matrix in \(\mathbb{R}^{d\times d}\). For a scalar \(a\), \(\mathrm{sgn}(a)\in\{-1,0,1\}\) denotes its sign. For a natural number \(c\), \([c]=\{1,2,\ldots,c\}\). For an event \(A\), the indicator function \(\mathbb{I}(A)\) evaluates to \(1\) if \(A\) takes place and \(0\) otherwise. The notation \(\mathbb{P}_{X\sim\mathcal{S}}[A(X)]\) denotes the probability for an event \(A(X)\) to occur, where \(X\) is a random variable drawn from the distribution \(\mathcal{S}\).
Consider a model \(g:\mathbb{R}^{d}\rightarrow\mathbb{R}^{c}\), whose components are \(g_{i}:\mathbb{R}^{d}\rightarrow\mathbb{R},\ i\in[c]\), where \(d\) is the dimension of the input and \(c\) is the number of classes. A classifier \(f:\mathbb{R}^{d}\rightarrow[c]\) can be obtained via \(f(x)\in\mathrm{arg\,max}_{i\in[c]}\,g_{i}(x)\). In this paper, we assume that \(g(\cdot)\) does not have the desired level of robustness, and refer to it as a "standard classifier" (as opposed to a "robust classifier" which we denote as \(h(\cdot)\)). We use \(\mathcal{D}\) to denote the set of all validation input-label pairs \((x_{i},y_{i})\).
In this work, we consider \(\ell_{p}\)-norm-bounded attacks on differentiable neural networks. A classifier \(f(\cdot)\) is considered robust against adversarial perturbations at an input data \(x\in\mathbb{R}^{d}\) if it assigns the same label to all perturbed inputs \(x+\delta\) such that \(\|\delta\|_{p}\leq\epsilon\), where \(\epsilon\geq 0\) is the attack radius.
### Related adversarial attacks and defenses
While the fast gradient sign method (FGSM) and PGD attacks based on differentiating the cross-entropy loss have been considered the most classic and straightforward attacks [25, 42], they have been shown to be too weak, as defenses that are only designed against the FGSM and PGD attacks are often easily circumvented [9, 10, 16, 48]. To this end, various attack methods based on alternative loss functions, Expectation Over Transformation, and black-box perturbations have been proposed. Such efforts include MultiTargeted attack loss [28], AutoAttack [22], adaptive attack [55], minimal distortion attack [21], and many others, even considering attacking test-time defenses [20].
On the defense side, while adversarial training [42] and TRADE [61] have seen enormous success, such methods are often limited by a significantly larger amount of required training data [52]. Initiatives that construct more effective training data via data augmentation [26, 27, 51] and generative models [53] have successfully produced more robust models. Improved versions of adversarial training [32, 54] have also been proposed.
Previous research has developed models that improve robustness by dynamically changing at test time. Specifically, "Input-Adaptive Inference (IAI)" improves the accuracy-robustness trade-off by appending side branches to a single network, allowing for early-exit predictions [30]. Other initiatives that aim to enhance the accuracy-robustness trade-off include using the SCORE attack during training [46] and applying adversarial training for regularization [63]. Moreover, ensemble-based defenses, such as random ensemble [39] and diverse
ensemble [2, 47], have been proposed. In comparison, this work considers two separate classifiers and uses their synergy to improve the accuracy-robustness trade-off, achieving much higher performances.
### Locally biased smoothing
Randomized smoothing, popularized by [19], achieves robustness at test time by replacing \(f(x)\) with a smoothed classifier, given by \(\widetilde{f}(x)\in\operatorname*{arg\,max}_{i\in[c]}\mathbb{P}_{\delta \sim\mathcal{S}}\big{[}f(x+\delta)=i\big{]}\), where \(\mathcal{S}\) is a smoothing distribution. A common choice for \(\mathcal{S}\) is a Gaussian distribution.
The authors of [7] have recently argued that data-invariant randomized smoothing does not always achieve robustness. They have shown that in the binary classification setting, randomized smoothing with an unbiased distribution is suboptimal, and an optimal smoothing procedure shifts the input point in the direction of its true class. Since the true class is generally unavailable, a "direction oracle" is used as a surrogate. This "locally biased smoothing" method is no longer randomized and outperforms traditional data-blind randomized smoothing. The locally biased smoothed classifier \(g^{\gamma}(\cdot)\) is obtained via the deterministic calculation
\[g^{\gamma}(x)=g(x)+\gamma h(x)\|\nabla g(x)\|_{p^{*}},\]
where \(h(x)\in\{-1,1\}\) is the direction oracle and \(\gamma\geq 0\) is a trade-off parameter. Since locally biased smoothing aims to improve robustness, the direction oracle should come from an inherently robust classifier (which is often less accurate). In [7], this direction oracle is chosen to be a one-nearest-neighbor classifier. Intuitively, when \(\|\nabla g(x)\|_{p^{*}}\) is large, \(g(x)\) is more susceptible to adversarial attacks because perturbing the input by the same amount induces a larger output change. Thus, when \(\|\nabla g(x)\|_{p^{*}}\) is large, locally biased smoothing trusts the direction oracle more.
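To make this rule concrete, the sketch below implements the binary locally biased smoothing update under the assumption that \(g\) is a differentiable scalar-margin model and `h_oracle` returns the \(\pm 1\) direction oracle (in [7], a one-nearest-neighbor classifier); the function and argument names are illustrative rather than taken from [7].

```python
import torch

def locally_biased_smoothing(g, h_oracle, x, gamma, p_dual=1.0):
    """Minimal sketch of g^gamma(x) = g(x) + gamma * h(x) * ||grad g(x)||_{p*}.

    g        : differentiable model returning one scalar margin per sample
    h_oracle : direction oracle returning a tensor of -1 / +1 per sample
    gamma    : trade-off parameter (gamma = 0 recovers g)
    p_dual   : order of the dual norm applied to the input gradient
    """
    x = x.clone().requires_grad_(True)
    margins = g(x)                                   # shape (batch,)
    grads, = torch.autograd.grad(margins.sum(), x)   # per-sample input gradients
    grad_norm = grads.flatten(1).norm(p=p_dual, dim=1)
    return margins + gamma * h_oracle(x) * grad_norm
```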
### Adversarial input detectors
It has been shown that adversarial inputs can be detected via various methods. For example, [43] proposes to append an additional detection branch to an existing neural network, and uses adaptive adversarial data to train the detector in a supervised fashion. However, [15] has shown that it is possible to bypass this detection method. They constructed adversarial examples via the C&W attacks [16] and simultaneously targeted the classification branch and the detection branch by treating the two branches as an "augmented classifier". According to [15], the detector is effective against the types of attack that it is trained with, but not necessarily the attack types that are absent in the training data. It is thus reasonable to expect the detector to be able to detect a wide range of attacks if it is trained using sufficiently diverse types of attacks (including those targeting the detector itself). While exhaustively covering the entire adversarial input space is intractable, and it is unclear to what degree one needs to diversify the attack types in practice, our experiments show that our modified architecture based on [43] can recognize the state-of-the-art AutoAttack adversaries with a high success rate.
To mitigate the above challenges faced by detectors obtained via supervised training, unsupervised detectors have been proposed [3, 4]. Other detection methods include [1, 17]. Unfortunately, universally effective detectors have not been discovered yet, and therefore this paper focuses on transferring the properties of the existing detector towards better overall robustness. Future advancements in the field of adversary detection can further enhance the performance of our method.
## 3 Using a robust neural network as the smoothing oracle
Since locally-biased smoothing was designed for binary classification problems, we first extend it to the multi-class setting. To achieve this, we treat the output of each class \(g_{i}(x)\) independently, giving rise to:
\[g_{\text{smoothed}1,i}^{\gamma}(x)=g_{i}(x)+\gamma h_{i}(x)\|\nabla g_{i}(x )\|_{p^{*}},\ \ i\in[c]. \tag{1}\]
Note that if \(\|\nabla g_{i}(x)\|_{p^{*}}\) is large for some \(i\), then \(g_{\text{smoothed}1,i}^{\gamma}(x)\) can be large even if both \(g_{i}(x)\) and \(h_{i}(x)\) are small, potentially leading to incorrect predictions. To remove the effect of the magnitude difference across the classes, we propose a normalized formulation as follows:
\[g_{\text{smoothed}2,i}^{\gamma}(x)=\frac{g_{i}(x)+\gamma h_{i}(x)\|\nabla g _{i}(x)\|_{p*}}{1+\gamma\|\nabla g_{i}(x)\|_{p*}},\ \ i\in[c]. \tag{2}\]
The parameter \(\gamma\) adjusts the trade-off between clean accuracy and robustness. When \(\gamma=0\), it holds that \(g_{\text{smoothed}2,i}^{\gamma}(x)\equiv g_{i}(x)\) for all \(i\). When \(\gamma\rightarrow\infty\), it holds that \(g_{\text{smoothed}2,i}^{\gamma}(x)\to h_{i}(x)\) for all \(x\) and all \(i\).
With the smoothing procedure generalized to the multi-class setting, we are now ready to discuss the choice of the robust oracle \(h_{i}(\cdot)\). While \(K\)-NN classifiers are relatively robust and can be used as the direction oracle, the representation power of \(K\)-NN classifiers is too
weak. On the CIFAR-10 image classification problem [35], \(K\)-NN only achieves around \(35\%\) accuracy on clean test data. In contrast, an adversarially trained ResNet can reach a \(50.0\%\) accuracy on adversarial test data [42]. Such a lackluster performance of \(K\)-NN becomes a significant bottleneck of the accuracy-robustness trade-off of the smoothed classifier. To this end, we replace the \(K\)-NN classifier with a robust neural network. The robustness of this network can be achieved via various methods, including adversarial training, TRADE, and traditional randomized smoothing.
Further scrutinizing (2) leads to the question of whether \(\|\nabla g_{i}(x)\|_{p*}\) is the best choice for adjusting the mixture of \(g(\cdot)\) and \(h(\cdot)\). In fact, this gradient magnitude term is a result of the assumption of \(h(x)\in\{-1,1\}\), which is the setting considered in [7]. Here, we no longer have this assumption. Instead, we assume both \(g(\cdot)\) and \(h(\cdot)\) to be differentiable. Thus, we further generalize the formulation as
\[g_{\mathrm{smoothed}3,i}^{\gamma}(x)=\frac{g_{i}(x)+\gamma R_{i}(x)h_{i}(x) }{1+\gamma R_{i}(x)},\ \ i\in[c], \tag{3}\]
where \(R_{i}(x)\) is an extra scalar term that can potentially include \(\nabla g_{i}(x)\) and \(\nabla h_{i}(x)\) to determine the "trustworthiness" of the base classifiers. Here, we empirically compare four options for \(R_{i}(x)\): \(1\), \(\|\nabla g_{i}(x)\|_{p*}\), \(\|\nabla\max_{j}g_{j}(x)\|_{p*}\), and \(\frac{\|\nabla g_{i}(x)\|_{p*}}{\|\nabla h_{i}(x)\|_{p*}}\).
Another design question is whether \(g(\cdot)\) and \(h(\cdot)\) should be the pre-softmax logits or the post-softmax probabilities. Note that since most attack methods are designed based on the logits, incorporating the softmax function into the model may result in gradient masking, an undesired phenomenon that makes it hard to properly evaluate the proposed method. Therefore, we are left with the following two choices that will make the smoothed model compatible with existing gradient-based attacks:
* Use the logits for both \(g(\cdot)\) and \(h(\cdot)\);
* Use the probabilities for both \(g(\cdot)\) and \(h(\cdot)\), and then convert the smoothed probabilities back to logits. The required "inverse-softmax" operator is given simply by the natural logarithm, and does not change the overall prediction.
In Figure 1, we compare the different choices for \(R_{i}(x)\) by visualizing the accuracy-robustness trade-off. Based on this "clean accuracy versus PGD\({}_{10}\)-attacked accuracy" plot (PGD\({}_{T}\) denotes \(T\)-step PGD), we conclude that \(R_{i}(x)=1\) gives the best accuracy-robustness trade-off, and \(g(\cdot)\) and \(h(\cdot)\) should be the probabilities. While Figure 1 only considers one set of base classifiers (a pair of standard and adversarially-trained ResNet18s), we provide additional examples in Appendix B using alternative model architectures, different methods to train robust base classifiers, and various attack budgets. Note that the resulting formulation can then be re-parameterized as
\[g_{\mathrm{CNN},i}^{\alpha}(x)=\log\big{(}(1-\alpha)g_{i}(x)+\alpha h_{i}(x) \big{)},\ \ i\in[c], \tag{4}\]
where \(\alpha=\frac{\gamma}{1+\gamma}\in[0,1]\). Therefore, we select (4) as our formulation of Adaptive Smoothing, which builds a composite classifier \(g_{\mathrm{CNN}}^{\alpha}(\cdot)\) that outputs the natural log of a convex combination of the probabilities of \(g(\cdot)\) and \(h(\cdot)\).
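As a concrete illustration, the following sketch implements formulation (4) for a batch of inputs, assuming both base models expose pre-softmax logits; the small clamp before the logarithm is a numerical-stability assumption not discussed in the text.

```python
import torch
import torch.nn.functional as F

def adaptive_smoothing_logits(g_logits, h_logits, alpha):
    """Sketch of formulation (4): mix the class probabilities of the standard
    model g and the robust model h with weight alpha, then map back to logits
    with a natural log (the "inverse softmax" that preserves the argmax).

    g_logits, h_logits : (batch, num_classes) pre-softmax outputs
    alpha              : scalar in [0, 1] or a (batch, 1) tensor alpha(x)
    """
    p_std = F.softmax(g_logits, dim=-1)   # probabilities of g(.)
    p_rob = F.softmax(h_logits, dim=-1)   # probabilities of h(.)
    mixed = (1.0 - alpha) * p_std + alpha * p_rob
    return torch.log(mixed.clamp_min(1e-12))
```

With a scalar `alpha` this is the fixed-strength model evaluated in Section 5.1; replacing `alpha` by the per-input output of the policy network yields the adaptive classifier of Section 4.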
### Theoretical certified robust radius
Similar to local biased smoothing, adaptive smoothing provides a certified robust radius when the base classifier \(h(\cdot)\) is certifiably robust, with the robust radius depending on the constant \(\alpha\). We present this theoretical result below:
**Definition 1**.: A function \(f\colon\mathbb{R}^{d}\to\mathbb{R}\) is called \(\ell_{p}\)_-Lipschitz continuous_ if there exists \(L\in(0,\infty)\) such that \(|f(x^{\prime})-f(x)|\leq L\|x^{\prime}-x\|_{p}\) for all \(x^{\prime},x\in\mathbb{R}^{d}\). The _Lipschitz constant_ of such \(f\) is defined to be \(\mathrm{Lip}_{p}(f)\coloneqq\inf\{L\in(0,\infty):|f(x^{\prime})-f(x)|\leq L\|x^ {\prime}-x\|_{p}\text{ for all }x^{\prime},x\in\mathbb{R}^{d}\}\).
Let \(\alpha\in(0,1)\). Consider the convex combination classifier introduced in the adaptive smoothing formulation (4). Since we use probabilities for both \(g(\cdot)\) and \(h(\cdot)\), it holds that \(0\leq g_{i}(\cdot)\leq 1\) and \(0\leq h_{i}(\cdot)\leq 1\) for all \(i\).
Figure 1: Comparing the options for \(R_{i}(x)\). “Softmax” represents the formulation that uses the probabilities for \(g(\cdot)\) and \(h(\cdot)\) followed by a natural log on \(g_{\mathrm{smoothed}3,i}^{\gamma}(\cdot)\).
**Assumption 1**.: The classifier \(h(\cdot)\) is robust in the sense that, for all \(i\in\{1,2,\ldots,n\}\), \(h_{i}(\cdot)\) is Lipschitz continuous with Lipschitz constant \(\mathrm{Lip}_{p}(h_{i})\in(0,\infty)\).
Assumption 1 is not restrictive in practice. For example, randomized smoothing with Gaussian smoothing variance \(\sigma^{2}I_{d}\) on the input yields robust classifiers with \(\ell_{2}\)-Lipschitz constant \(\sqrt{\frac{2}{\pi\sigma^{2}}}\).
**Theorem 1**.: _Let \(x\in\mathbb{R}^{d}\) and let \(i,j\in\{1,2,\ldots,n\}\). Then the relation \(\mathrm{sgn}(g^{\alpha}_{\text{CNN},i}(x+\delta)-g^{\alpha}_{\text{CNN},j}(x+ \delta))=\mathrm{sgn}(h_{i}(x)-h_{j}(x))\) holds for all \(\delta\in\mathbb{R}^{d}\) such that_
\[\|\delta\|_{p}\leq r_{p}^{\alpha}(x)\coloneqq\frac{\alpha\left|h_{i}(x)-h_{j} (x)\right|+\alpha-1}{\alpha\left(\mathrm{Lip}_{p}(h_{i})+\mathrm{Lip}_{p}(h_{ j})\right)}.\]
The proof of Theorem 1 is given in Appendix A.1. We remark that the \(\ell_{p}\)-norm that we certify using Theorem 1 may be arbitrary (e.g., \(\ell_{1}\), \(\ell_{2}\), or \(\ell_{\infty}\)), so long as the Lipschitz constant of the robust network \(h(\cdot)\) is computed with respect to the same norm.
Notice that, if \(\alpha\to 1\), then \(r^{\alpha}(x)\to\frac{\left|h_{i}(x)-h_{j}(x)\right|}{\mathrm{Lip}_{p}(h_{i})+ \mathrm{Lip}_{p}(h_{j})}\), which is the standard (global) Lipschitz-based robust radius of \(h(\cdot)\) around \(x\) (see, e.g., [24, 29] for further discussions on Lipschitz-based robustness). On the other hand, if \(\alpha\) is too small in comparison to the relative confidence of \(h(\cdot)\), namely,
\[\alpha\leq\frac{1}{1+\left|h_{i}(x)-h_{j}(x)\right|},\]
then \(r^{\alpha}(x)<0\), and in this case we cannot provide nontrivial certified robustness for \(g^{\alpha}_{\text{CNN}}(\cdot)\). This is rooted in the fact that a tiny \(\alpha\) amounts to placing excess weight on the non-robust classifier \(g(\cdot)\). If \(h(\cdot)\) is \(100\%\) confident in its prediction, then \(\left|h_{i}(x)-h_{j}(x)\right|=1\), and therefore this threshold value of \(\alpha\) becomes \(\frac{1}{2}\), leading to a non-trivial certified radius for \(\alpha>\frac{1}{2}\). However, once we put over \(\frac{1}{2}\) of the weight into \(g(\cdot)\), we no longer certify a nonzero radius around \(x\). Again, this is intuitive since we have made no assumptions on the robustness of \(g(\cdot)\) around \(x\).
The above result clearly generalizes to the even less restrictive scenario of using local Lipschitz constants over a neighborhood \(\mathcal{U}\) of \(x\) as a surrogate for the global Lipschitz constants, so long as the condition \(\delta\in\mathcal{U}\) is also added to the hypotheses.
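The certified radius of Theorem 1 is a closed-form expression, so it can be evaluated directly once the two relevant robust-branch probabilities and the corresponding Lipschitz constants are known; the helper below is a sketch with illustrative names.

```python
def certified_radius(h_i, h_j, lip_i, lip_j, alpha):
    """Theorem 1 radius for the mixed classifier:
    r = (alpha * |h_i - h_j| + alpha - 1) / (alpha * (Lip_i + Lip_j)).
    A non-positive value means no nontrivial certificate at this input.
    """
    gap = abs(h_i - h_j)
    return (alpha * gap + alpha - 1.0) / (alpha * (lip_i + lip_j))
```

For instance, with a fully confident robust branch (gap equal to 1) the radius is positive exactly when \(\alpha>\frac{1}{2}\), matching the threshold discussed above.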
## 4 Adaptive smoothing strength with the policy network
So far, \(\alpha\) has been treated as a fixed hyperparameter. A more intelligent approach is to allow \(\alpha\) to be different for each \(x\) by replacing the constant \(\alpha\) with a function \(\alpha(x)\). Here, we take \(\alpha(x)\) to be deterministic, as stochastic defenses can be much harder to evaluate.
One motivation for adopting the adaptive trade-off parameter \(\alpha(x)\) is that the optimal \(\alpha^{\star}\) can vary when \(x\) changes. For example, when \(x\) is clean and unperturbed, the standard model \(g(\cdot)\) outperforms the robust model \(h(\cdot)\). If \(x\) is an attacked input targeting \(g(\cdot)\), then the robust model \(h(\cdot)\) should be used. However, as shown in Figure 2, if the target of the attack is \(h(\cdot)\), then even though \(h(\cdot)\) is robust, a better choice is to feed \(x\) into \(g(\cdot)\). This is because the loss landscapes of \(g(\cdot)\) and \(h(\cdot)\) differ enough that an adversarial perturbation targeting \(h(\cdot)\) is benign to \(g(\cdot)\).
When the PGD adversary targets a smoothed classifier \(g^{\alpha_{t}}_{\text{CNN}}(\cdot)\), as \(\alpha_{t}\) varies, the optimal strategy also changes. We provide a visualization in Figure 2 based on the CIFAR-10 dataset. Specifically, we put together a composite model \(g^{\alpha_{t}}_{\text{CNN}}(\cdot)\) using a ResNet18 standard classifier \(g(\cdot)\) and a ResNet18 robust classifier \(h(\cdot)\) via (4).1 Then, we attack \(g^{\alpha_{t}}_{\text{CNN}}(\cdot)\) with different values of \(\alpha_{t}\) via PGD\({}_{20}\), save the adversarial instances, and report the accuracy of \(g(\cdot)\) and \(h(\cdot)\) evaluated on these instances. When \(\alpha_{t}\leq\text{Sigmoid}(5.72)=0.9967\), the robust model \(h(\cdot)\) performs better. When \(\alpha_{t}>0.9967\), the standard model \(g(\cdot)\) is more suitable.
Footnote 1: The ResNet classifiers are obtained from [45].
### The existence of \(\alpha(x)\) that achieves the trade-off
The following theorem shows that when \(\alpha\) is a function of the input, there exists an \(\alpha(\cdot)\) that makes the combined classifier correct whenever either \(g(\cdot)\) and \(h(\cdot)\)
Figure 2: Attacked accuracy of the standard classifier \(g(\cdot)\) and the robust classifier \(h(\cdot)\) when the adversary targets different values of \(\alpha_{t}\). For better readability, we use \(\text{Logit}(\alpha_{t})\) as the horizontal axis labels, where \(\text{Logit}(\cdot)\) denotes the inverse function of Sigmoid.
makes the correct prediction, which further implies that the combined classifier matches the clean accuracy of \(g(\cdot)\) and the attacked accuracy of \(h(\cdot)\).
**Theorem 2**.: _Let \(\epsilon>0\), \((x_{1},y_{1}),(x_{2},y_{2})\sim\mathcal{D}\), and \(y_{1}\neq y_{2}\) (i.e., each input corresponds to a unique true label). Assume that \(h_{i}(\cdot)\), \(\|\nabla h_{i}(\cdot)\|_{p*}\), and \(\|\nabla g_{i}(\cdot)\|_{p*}\) are all bounded and that there does not exist \(z\in\mathbb{R}^{d}\) such that \(\|z-x_{1}\|_{p}\leq\epsilon\) and \(\|z-x_{2}\|_{p}\leq\epsilon\). Then, there exists a function \(\alpha(\cdot)\) such that the assembled classifier \(g^{\alpha}_{\text{CNN}}(\cdot)\) satisfies_
\[\mathbb{P}_{(x,y)\sim\mathcal{D},\delta\sim\mathcal{F}}\Big{[} \operatorname*{arg\,max}_{i\in[c]}g^{\alpha}_{\text{CNN},i}(x+\delta)=y\Big{]}\] \[\geq\max\left\{\begin{matrix}\mathbb{P}_{(x,y)\sim\mathcal{D}, \delta\sim\mathcal{F}}\big{[}\operatorname*{arg\,max}_{i\in[c]}g_{i}(x+ \delta)=y\big{]},\\ \mathbb{P}_{(x,y)\sim\mathcal{D},\delta\sim\mathcal{F}}\big{[}\operatorname* {arg\,max}_{i\in[c]}h_{i}(x+\delta)=y\big{]}\end{matrix}\right\},\]
_where \(\mathcal{F}\) is any distribution such that \(\mathbb{P}_{\delta\sim\mathcal{F}}\big{[}\|\delta\|_{p}>\epsilon\big{]}=0\)._
The proof of Theorem 2 is shown in Appendix A.2. Note that the distribution \(\mathcal{F}\) is arbitrary, implying that the test data can be clean data, any type of adversarial data, or some combination of both. As a special case, when the probability density function (PDF) of \(\mathcal{F}\) is a Dirac delta at zero, Theorem 2 implies that the clean accuracy of \(g^{\alpha}_{\text{CNN}}(\cdot)\) is as good as the standard classifier \(g(\cdot)\). Conversely, when the PDF of \(\mathcal{F}\) is a Dirac delta at the worst-case perturbation, the adversarial accuracy of \(g^{\alpha}_{\text{CNN}}(\cdot)\) is not worse than the robust model \(h(\cdot)\), implying that if \(h(\cdot)\) is inherently robust, then \(g^{\alpha}_{\text{CNN}}(\cdot)\) inherits the robustness. One can then conclude that there exists a \(g^{\alpha}_{\text{CNN}}(\cdot)\) that matches the clean accuracy of \(g(\cdot)\) and the robustness of \(h(\cdot)\).
While finding an \(\alpha(\cdot)\) function that perfectly achieves this trade-off is hard, we will use experiments to show that an \(\alpha(\cdot)\) represented by a neural network can retain most of the robustness of \(h(\cdot)\) while vastly boosting the clean accuracy, even on challenging datasets such as CIFAR-100.
### Attacking the adaptive classifier
When the combined model \(g^{\alpha}_{\text{CNN}}(\cdot)\) is under adversarial attack, the policy \(\alpha(\cdot)\) provides an additional gradient flow path. Intuitively, the attack should be able to force \(\alpha\) to be small through this additional gradient path, tricking the policy into favoring the non-robust \(g(\cdot)\). Following the guidelines for constructing adaptive attacks [55], in the experiments, we consider the following types of attacks:
**A Gray-box PGD\({}_{20}\):** In this setting, the adversary has access to the gradients of both \(g(\cdot)\) and \(h(\cdot)\), but is not given the gradient of the policy network. We use untargeted PGD attack with a fixed initialization to generate the attacks.
**B White-box PGD\({}_{20}\):** Since the smoothed classifier is end-to-end differentiable, following [55], we allow the adversary to access the end-to-end gradient, including the gradient of the policy network.
**C White-box AutoAttack:** [22] has proposed to use an ensemble of four automated attack algorithms to form a stronger attack, "AutoAttack". The method considers APGD attacks generated via the untargeted cross-entropy loss and the targeted DLR loss, in addition to the targeted FAB attack and the black-box Square attack [8]. Again, the end-to-end gradient of the smoothed classifier is available to the adversary. AutoAttack requires a much larger computation budget than PGD\({}_{20}\).
**D Adaptive white-box AutoAttack:** Since the policy network is a crucial component of the defense, we adapt AutoAttack to target the policy by adding an APGD loss component that aims to decrease \(\alpha\). We use this additional attack type for evaluation purposes.
We will show that the adaptively smoothed model is robust against the attack that it is trained against. When trained using APGD\({}_{75}\) attack with untargeted and targeted loss functions, our model becomes robust against AutoAttack. Furthermore, a significant improvement in the accuracy-robustness trade-off is achieved.
### Training the policy network
In practice, we use a neural network \(\alpha_{\theta}(\cdot):\mathbb{R}^{d}\rightarrow[0,1]\) to learn an effective policy that adjusts the outputs of \(g(\cdot)\) and \(h(\cdot)\). Here, \(\theta\) represents the trainable parameters of the policy, and we refer to \(\alpha_{\theta}(\cdot)\) as the "policy network". The output range constraint is enforced by applying a Sigmoid function to the policy network. Note that when training the policy network \(\alpha_{\theta}(\cdot)\), the base classifiers \(g(\cdot)\) and \(h(\cdot)\) are frozen to avoid unnecessary feature distortions.
Since the policy network should treat clean and attacked inputs differently, its task is closely related to the adversary detection problem. To this end, we adapt the detection architecture introduced in [43] for our policy network. While [15] has argued that simultaneously attacking the base classifier and the adversary detector can bring the detection rate of the detection method proposed in [43] to near zero, we make a few key modifications:
* Our policy \(\alpha_{\theta}(\cdot)\) takes advantage of the two available models \(g(\cdot)\) and \(h(\cdot)\) by using the intermediate features of both networks via concatenation.
* Instead of using the output of \(\alpha_{\theta}(\cdot)\) directly for attack identification, we use it more delicately. Since Figure 1 shows that even a constant \(\alpha\) can improve the accuracy-robustness trade-off, our method does not excessively rely on the performance of the policy network \(\alpha_{\theta}(\cdot)\).
* We include stronger adaptive adversaries during training to generate more diverse training examples.
The modified architecture is shown in Figure 3. In Section 5.2, we provide empirical results demonstrating that the above modifications help the overall composite network defend against strong attacks. For the policy network, we choose a ResNet18-like structure, which is known to perform well for a wide range of computer vision applications and is often considered the go-to architecture.
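The sketch below gives one possible reading of the architecture in Figure 3: intermediate feature maps from the two base models are concatenated, reduced by a 1x1 convolution with batch normalization, and pooled into a single Sigmoid-bounded output. It is an illustrative stand-in rather than the authors' exact network, and it assumes the two feature maps share the same spatial size.

```python
import torch
import torch.nn as nn

class PolicyNetwork(nn.Module):
    """Illustrative policy alpha_theta(x) in (0, 1) built on concatenated
    intermediate features of the standard and robust base classifiers."""
    def __init__(self, std_channels, rob_channels, hidden=128):
        super().__init__()
        self.reduce = nn.Sequential(                      # Conv1x1 reduces the
            nn.Conv2d(std_channels + rob_channels,        # concatenated channels
                      hidden, kernel_size=1),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feat_std, feat_rob):
        z = torch.cat([feat_std, feat_rob], dim=1)        # channel-wise concat
        return torch.sigmoid(self.head(self.reduce(z)))
```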
Consider the following two loss functions for training the policy \(\alpha_{\theta}(\cdot)\):
* **Multi-class cross-entropy:** We minimize the multi-class cross-entropy loss of the combined classifier, which is the ultimate goal of the policy network: \[\min_{\theta}\mathbb{E}_{\begin{subarray}{c}(x,y)\sim\mathcal{D}\\ \delta\sim\mathcal{F}\end{subarray}}\Big{[}\ell_{\text{CE}}\big{(}g_{\text{ CNN}}^{\theta}(x+\delta),y\big{)}\Big{]},\] (5) where \(\ell_{\text{CE}}\) is the cross-entropy loss for logits and \(y\in[c]\) is the label corresponding to \(x\). The base classifiers \(g(\cdot)\) and \(h(\cdot)\) are not updated. Again, \(\delta\) denotes the perturbation and the distribution \(\mathcal{F}\) is arbitrary. In our experiments, to avoid overfitting to a particular attack radius, \(\mathcal{F}\) is selected to be formed by perturbations with randomized radii.
* **Binary cross-entropy:** The optimal \(\alpha^{\star}\) that minimizes \(\ell_{\text{CE}}\) in (5) can be estimated for each training point. Specifically, depending on whether the input is attacked and how it is attacked, either \(g(\cdot)\) or \(h(\cdot)\) should be prioritized. Thus, we treat the task as a binary classification problem and solve the optimization problem \[\min_{\theta}\mathbb{E}_{\begin{subarray}{c}(x,y)\sim\mathcal{D}\\ \delta\sim\mathcal{F}\end{subarray}}\Big{[}\ell_{\text{BCE}}\big{(}\alpha_{ \theta}(x+\delta),\widetilde{\alpha}\big{)}\Big{]},\] where \(\ell_{\text{BCE}}\) is the binary cross-entropy loss for probabilities and \(\widetilde{\alpha}\in\{0,1\}\) is the "pseudo label" for the output of the policy.
Using only the multi-class loss suffers from a distributional mismatch between the training set and the test set. The robust classifier \(h(\cdot)\) may achieve a low loss on adversarial training data but a high loss on adversarial test data. For example, with the CIFAR-10 dataset and our ResNet18 robust classifier, the PGD\({}_{10}\) adversarial training accuracy is 93.01% while the PGD\({}_{10}\) test accuracy is 45.55%. As a result, approximating (5) with empirical risk minimization on the training set does not effectively optimize the true risk. When the adversary attacks a test input \(x\) targeting \(h(\cdot)\), the standard prediction \(g(x)\) yields a lower loss than \(h(x)\). However, if \(x\) is an attacked example in the training set, then the losses of \(g(x)\) and \(h(x)\) are similar, and the policy network does not receive a strong incentive to choose \(g(\cdot)\) when it detects an attack targeting \(h(\cdot)\).
Figure 3: The overall architecture of the adaptively smoothed classifier introduced in Section 4 when applied to a pair of ResNet18 classifiers. “RNB” is an abbreviation of ResNetBlock and “BN” represents the 2D batch normalization layer. The Conv1x1 layer serves the role of reducing the number of features and improving efficiency.
The binary loss, however, does not capture the potentially different sensitivity of each input. Certain inputs can be more vulnerable against adversarial attacks, and ensuring the correctness of the policy on these inputs is more crucial.
To this end, we propose a composite loss function that combines the above two components, providing incentives for the policy to select the standard classifier \(g(\cdot)\) when appropriate, while forcing the policy to remain conservative. The composite loss for a data-label pair \((x,y)\) is given by
\[\ell_{\text{composite}}\big(\theta,(x,y,\widetilde{\alpha})\big)=c_{1}\cdot\ell_{\text{CE}}\big(g^{\theta}_{\text{CNN}}(x+\delta),y\big)+c_{2}\cdot\ell_{\text{BCE}}\big(\alpha_{\theta}(x+\delta),\widetilde{\alpha}\big)+c_{3}\cdot\ell_{\text{CE}}\big(g^{\theta}_{\text{CNN}}(x+\delta),y\big)\cdot\ell_{\text{BCE}}\big(\alpha_{\theta}(x+\delta),\widetilde{\alpha}\big), \tag{6}\]
where the hyperparameters \(c_{1},c_{2}\), and \(c_{3}\) control the weights of the loss components.
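A minimal sketch of the composite objective (6) is shown below, assuming the smoothed logits and the policy output for the (possibly perturbed) input are already computed; the defaults follow the weights \(c_{1}=0.5\), \(c_{2}=1\), \(c_{3}=0.1\) used in the experiments of Section 5.

```python
import torch.nn.functional as F

def composite_loss(mixed_logits, alpha_pred, y, alpha_target,
                   c1=0.5, c2=1.0, c3=0.1):
    """Loss (6) for training the policy network (base classifiers frozen).

    mixed_logits : output of the smoothed classifier on x + delta
    alpha_pred   : policy output alpha_theta(x + delta), values in (0, 1)
    y            : integer class labels
    alpha_target : binary pseudo labels for the policy (0: favor g, 1: favor h)
    """
    ce = F.cross_entropy(mixed_logits, y, reduction="none")
    bce = F.binary_cross_entropy(alpha_pred, alpha_target, reduction="none")
    return (c1 * ce + c2 * bce + c3 * ce * bce).mean()
```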
## 5 Numerical experiments
In this section, we use experiments on the CIFAR-10 and the CIFAR-100 datasets to validate the proposed method. Due to the lower difficulty of CIFAR-10, recent progress in learning robust models has made the accuracy-robustness trade-off less noticeable for this dataset [26, 27, 51]. On more challenging tasks, such as CIFAR-100, this trade-off is still highly noticeable, and the advantages of adaptive smoothing are more prominent for these tasks. Nonetheless, due to the popularity of CIFAR-10 in the field of adversarial robustness, we still use small models trained on this dataset to perform ablation analyses and proof-of-concept demonstrations. On the more suitable CIFAR-100 dataset, we use state-of-the-art classifiers as the base models \(g(\cdot)\) and \(h(\cdot)\), where \(g(\cdot)\) takes advantage of accuracy-optimized pre-training and \(h(\cdot)\) exploits recent robust training methods. We then apply adaptive smoothing to these high-performance models and demonstrate that our method trains simultaneously accurate and robust models, reconciling the accuracy-robustness trade-off to an unprecedented level.
### Robust neural network smoothing with a fixed strength
We first use the CIFAR-10 dataset to evaluate the performance of the composite models \(g^{\alpha}_{\text{CNN}}(\cdot)\) with different fixed values of \(\alpha\). Specifically, we use a ResNet18 model trained on clean data as the standard model \(g(\cdot)\) and use another ResNet18 trained on PGD\({}_{20}\) data as the robust model \(h(\cdot)\). We consider PGD\({}_{20}\) attacks that target \(g(\cdot)\) and \(h(\cdot)\), in addition to the adaptive PGD\({}_{20}\) attacks generated using the end-to-end gradient of \(g^{\alpha}_{\text{CNN}}(\cdot)\).
The test accuracy of each composite model is presented in Figure 4. As \(\alpha\) increases, the clean accuracy of \(g^{\alpha}_{\text{CNN}}(\cdot)\) converges from the clean accuracy of \(g(\cdot)\) to the clean accuracy of \(h(\cdot)\). In terms of the attacked performance, when the attack targets \(g(\cdot)\), the attacked accuracy increases with \(\alpha\). When the attack targets \(h(\cdot)\), the attacked accuracy decreases with \(\alpha\), showing that the attack becomes more benign to the composite model when it emphasizes \(g(\cdot)\) more. When the adaptive attack targets \(g^{\alpha}_{\text{CNN}}(\cdot)\), the attacked accuracy increases with \(\alpha\).
### Robust neural network smoothing with adaptive strength
Next, we evaluate the performance of the adaptive composite model \(g^{\theta}_{\text{CNN}}(\cdot)\) using the CIFAR-10 and the CIFAR-100 datasets. We consider \(\ell_{\infty}\) attacks and use different robust neural networks for \(h(\cdot)\). In all experiments, the hyperparameters for the composite loss function (6) are \(c_{1}=0.5\), \(c_{2}=1\), and \(c_{3}=0.1\). The AdamW optimizer [33] is used for optimization.
The training inputs for the policy \(\alpha_{\theta}(\cdot)\) include the clean data and the corresponding types of attacked data. For each dataset, we train three policy networks using adversarial examples generated with the attack settings A, B, and C presented in Section 4.2, respectively. To alleviate overfitting, we randomize the attack radius and the number of steps. Moreover, we add a
Figure 4: The performance of the smoothed model \(g^{\alpha}_{\text{CNN}}(\cdot)\). “STD attack”, “ADV attack”, and “Adaptive attack” refer to the PGD\({}_{10}\) attack generated using the gradient of \(g(\cdot)\), \(h(\cdot)\), and \(g^{\alpha}_{\text{CNN}}(\cdot)\) respectively, with \(\epsilon\) set to \(\frac{8}{255}\).
randomly-weighted binary cross-entropy loss component that targets the policy (this loss tries to trick the policy to favor \(g(\cdot)\)). For the setting C (AutoAttack), the training data only include targeted and untargeted APGD attacks. The other two AutoAttack components, FAB and Square, are excluded during training in the interest of efficiency but are included for evaluation. Tables 1 and 2 present the test accuracy of \(g^{\theta}_{\text{CNN}}(\cdot)\) for each setting, where each column represents the performance of one adaptively smoothed model.
The empirical results show that the combined classifier can defend against the attacks it is trained on. Specifically, for the attack setting A (gray-box PGD), \(g^{\theta}_{\text{CNN}}(\cdot)\) is able to achieve the same level of PGD\({}_{20}\)-attacked accuracy as \(h(\cdot)\) while retaining a similar level of clean accuracy as \(g(\cdot)\). For the setting B (white-box PGD), the attack is allowed to follow the gradient path provided by \(\alpha(\cdot)\) and deliberately evade the part of the adversarial input space recognized by \(\alpha_{\theta}(\cdot)\). While the training task becomes more challenging, the improvement in the accuracy-robustness trade-off is still substantial. Furthermore, the composite model can generalize to examples generated via the stronger AutoAttack.
For the setting C (AutoAttack), the difficulty of the training problem further escalates. While the performance of \(g^{\theta}_{\text{CNN}}(\cdot)\) on clean data slightly decreases, the policy network can offer a more vigorous defense against AutoAttack data, still improving the accuracy-robustness trade-off. Note that the improvement is more significant on the CIFAR-100 dataset, where \(g^{\theta}_{\text{CNN}}(\cdot)\) correctly classifies 1173 additional clean images compared with \(h(\cdot)\) (cutting the error rate by a third) while making only 404 additional incorrect predictions on AutoAttacked inputs (increasing the error rate by merely 6.4 relative percent). Since the attacked accuracy of the non-robust base classifier \(g(\cdot)\) on the CIFAR-100 dataset is zero, the observation that \(g^{\theta}_{\text{CNN}}(\cdot)\) preserves \(\frac{32.94}{36.98}\approx 89\%\) of the AutoAttacked accuracy of \(h(\cdot)\) implies that among all AutoAttacked inputs that are correctly predicted by \(h(\cdot)\), the policy helps \(g^{\theta}_{\text{CNN}}(\cdot)\) identify 89% of them.
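The relative figures quoted above follow from the 10,000-image CIFAR-100 test set; a quick arithmetic check using the Table 2 values:

```python
n_test = 10_000                                          # CIFAR-100 test set size
extra_clean = round((0.8090 - 0.6917) * n_test)          # 1173 additional clean hits vs. h(.)
extra_aa_errors = round((0.3698 - 0.3294) * n_test)      # 404 additional AutoAttack errors
rel_aa_error_increase = extra_aa_errors / ((1 - 0.3698) * n_test)   # about 6.4 %
retained_robustness = 0.3294 / 0.3698                    # about 89 % of h(.)'s AA accuracy
```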
The above results show that \(\alpha_{\theta}(\cdot)\) is capable of approximating a robust and high-performance policy when trained with sufficiently diverse attacked data. The fact that \(g^{\theta}_{\text{CNN}}(\cdot)\) combines the clean accuracy of \(g(\cdot)\) and the robustness of \(h(\cdot)\) highlights that our method significantly improves the accuracy-robustness trade-off. If a different type of attack needs to be considered, the training set for \(\alpha_{\theta}(\cdot)\) can be further augmented with the corresponding adversarial data.
## 6 Conclusions
This paper proposes "adaptive smoothing", a flexible framework that leverages the mixture of the outputs of an accurate classifier and a robust model to mitigate the accuracy-robustness trade-off of neural networks. We mathematically prove that the smoothed model can inherit the certified robustness of the robust base model under realistic assumptions. We then adapt an adversarial input detector into a deterministic policy network, further improving the accuracy-robustness trade-off. Solid empirical results show that our method can simultaneously benefit from the high accuracy of modern pre-trained standard (non-robust) models and
**CIFAR-10 base classifier performances**

| Model | Architecture | Clean | PGD\({}_{20}\) | AutoAtt. |
| --- | --- | --- | --- | --- |
| \(g(\cdot)\) (accurate) | ResNet-18\({}^{\dagger}\) | 95.28 % | 0.12 % | 0.00 % |
| \(h(\cdot)\) (robust) | WRN-34\({}^{\ddagger}\) | 84.92 % | 57.16 % | 53.09 % |

**CIFAR-10 \(g^{\theta}_{\text{CNN}}(\cdot)\) performance**

| Eval Setting | Training **A** | Training **B** | Training **C** |
| --- | --- | --- | --- |
| Clean | 92.05 % | 92.07 % | 91.51 % |
| **A** (gray-box PGD\({}_{20}\)) | 57.22 % | 57.25 % | 56.30 % |
| **B** (white-box PGD\({}_{20}\)) | 56.63 % | 57.09 % | 56.29 % |
| **C** (white-box AutoAtt.) | 40.04 % | 40.02 % | 42.78 % |
| **D** (adaptive AutoAtt.) | 39.85 % | 39.70 % | 42.66 % |

\(\dagger\): [45] (Vanilla training). \(\ddagger\): [61] (TRADE).

Table 1: CIFAR-10 results of the smoothed models trained with three different settings.
**CIFAR-100 base classifier performances**

| Model | Architecture | Clean | PGD\({}_{20}\) | AutoAtt. |
| --- | --- | --- | --- | --- |
| \(g(\cdot)\) (accurate) | ResNet-152\({}^{\dagger}\) | 91.38 % | 0.14 % | 0.00 % |
| \(h(\cdot)\) (robust) | WRN-70\({}^{\ddagger}\) | 69.17 % | 40.86 % | 36.98 % |

**CIFAR-100 \(g^{\theta}_{\text{CNN}}(\cdot)\) performance**

| Eval Setting | Training **A** | Training **B** | Training **C** |
| --- | --- | --- | --- |
| Clean | 83.99 % | 83.96 % | **80.90 %** |
| **A** (gray-box PGD\({}_{20}\)) | 40.04 % | 39.80 % | 39.26 % |
| **B** (white-box PGD\({}_{20}\)) | 30.59 % | 34.48 % | 38.92 % |
| **C** (white-box AutoAtt.) | 23.54 % | 26.37 % | **32.94 %** |
| **D** (adaptive AutoAtt.) | 23.78 % | 26.17 % | 32.80 % |

\(\dagger\): [45] (BiT). \(\ddagger\): [26].

Table 2: CIFAR-100 results of the smoothed models trained with the three settings. When the training setting C is used, an 80.90% clean accuracy and a 32.94% AutoAttacked accuracy are achieved.
the robustness achieved via state-of-the-art robust classification methods. Because our theoretical study demonstrates the possibility of leveraging the policy network to avoid the accuracy-robustness trade-off entirely, future advancements in adversarial example identification can reconcile this trade-off even more effectively via our framework. Thus, this work paves the way for future research to focus on either accuracy or robustness without sacrificing the other.
|
2302.04666
|
Understand Code Style: Efficient CNN-based Compiler Optimization
Recognition System
|
Compiler optimization level recognition can be applied to vulnerability
discovery and binary analysis. Due to the existence of many different compilation
optimization options, the differences in the contents of binary files are very
complicated. There are thousands of compiler optimization algorithms and
multiple different processor architectures, so it is very difficult to manually
analyze binary files and recognize their compiler optimization levels with rules.
This paper first proposes a CNN-based compiler optimization level recognition
model: BinEye. The system extracts semantic and structural differences and
automatically recognizes the compiler optimization levels. The model is designed
to be very suitable for binary file processing and is easy to understand. We
built a dataset containing 80,028 binary files for the model training and
testing. Our proposed model achieves an accuracy of over 97%. At the same time,
BinEye is a fully CNN-based system and it has a faster forward calculation
speed, at least 8 times faster than the normal RNN-based model. Through our
analysis of the model output, we successfully found the differences in assembly
code caused by different compiler optimization levels. This means that the
model we proposed is interpretable. Based on our model, we propose a method to
analyze the code differences caused by different compiler optimization levels,
which has great guiding significance for analyzing closed source compilers and
binary security analysis.
|
Shouguo Yang, Zhiqiang Shi, Guodong Zhang, Mingxuan Li, Yuan Ma, Limin Sun
|
2023-01-18T03:52:52Z
|
http://arxiv.org/abs/2302.04666v1
|
# Understand Code Style: Efficient CNN-based Compiler Optimization Recognition System
###### Abstract
Compiler optimization level recognition can be applied to vulnerability discovery and binary analysis. Due to the existence of many different compilation optimization options, the differences in the contents of binary files are very complicated. There are thousands of compiler optimization algorithms and multiple different processor architectures, so it is very difficult to manually analyze binary files and recognize their compiler optimization levels with rules. This paper first proposes a CNN-based compiler optimization level recognition model: BinEye. The system extracts semantic and structural differences and automatically recognizes the compiler optimization levels. The model is designed to be very suitable for binary file processing and is easy to understand. We built a dataset containing 80,028 binary files for model training and testing. Our proposed model achieves an accuracy of over 97%. At the same time, BinEye is a fully CNN-based system with a faster forward calculation speed, at least 8 times faster than a typical RNN-based model. Through our analysis of the model output, we successfully found the differences in assembly code caused by different compiler optimization levels. This means that the model we propose is interpretable. Based on our model, we propose a method to analyze the code differences caused by different compiler optimization levels, which has great guiding significance for analyzing closed-source compilers and for binary security analysis.
compilation optimization, software security, CNN, binary analysis, position embedding, model interpretability
## I Introduction
With the scale of IoT deployments growing larger and larger, the security of software systems running on IoT devices is becoming more and more important. IoT software security is increasingly affected by different compiler optimization levels. When different compilation optimization levels are used, the same source code can generate different binary code. The binary code can differ in terms of function control flow, data flow, function inlining style, and so on. These differences can easily lead to security risks. According to previous work, compiler optimization level recognition can be used in areas such as vulnerability discovery [1] and binary similarity detection [2]. Different compiler optimization levels have a significant impact on software security [3]. Compiler optimization levels are an important reference for software security analysis. For example, the compiler optimization algorithm "dead store elimination" causes the well-known security problem CWE-14 [4]. "Undefined behavior" in C/C++, which means compilers are free to decide how to handle code optimization, can also create security vulnerabilities in some cases, such as CVE-2016-9843 and CVE-2016-9840.
Today's common compilers, such as GCC, have thousands of compiler optimization algorithms built in, each of which corresponds to an option at compile time. For ease of use, compilers such as GCC integrate the numerous compiler optimization options into five broad levels: -O0, -O1, -O2, -O3, and -Os. The smallest optimization-associated unit generated by compilers is the object file. Object files compiled with different compilation optimization levels may be linked into the same executable binary file, so the compiler optimization level of an executable cannot be identified directly; the target of our system is therefore the compiler-generated object file. However, the variety of compiler optimization options causes only subtle differences in the binary code of an object file, which are very time-consuming for a human to observe and require rich experience in assembly language. We therefore propose an end-to-end deep learning model to recognize the compiler optimization level of an object file. To keep the approach easy to understand, we build the model without any compiler-specific knowledge or manual analysis of the binary file, which makes it easier to handle security problems caused by compiler optimization.
Motivated by Facebook's success in applying CNNs to machine translation in the field of language processing [9] and by the application of CNNs to malware detection [8], the performance of CNNs in language processing and binary security analysis is not inferior [9]. In addition, compared with RNNs, CNNs can make full use of GPUs for parallel computing and allow more efficient gradient calculation and training. Our model is therefore fully based on CNN technology and can identify compiler optimization levels directly from the raw data in the executable binary. According to previous work [7], binaries compiled at optimization level -O2 or -O3 are not very different. Since the compiler optimization levels -O2 and -O3 are almost identical for most source code, we define a four-category task: identifying the four compiler optimization levels -O0, -O1, -O2/-O3, and -Os from the binary file. We perform extensive experiments with our model on the four-category task. The experimental results show that the accuracy of our model on the four-category task can reach 97%. Compared with the
model proposed by Chen [11], which targets binary files on the X64 platform, our model targets binary files on the ARM platform. Since our model directly takes the raw binary bytes as input, it does not require any prior knowledge or complex feature engineering. It can be trained and converges more quickly. The model is smaller, and its forward calculation speed is also faster. Through the output of the model, we found several assembly code differences caused by different levels of compiler optimization, and we summarized a method for code analysis with the model. Due to the characteristics of RISC, the model can easily be extended to other compilers and instruction sets on other RISC platforms.
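The introduction describes a fully CNN-based classifier that consumes raw object-file bytes and outputs one of the four optimization levels; since the exact architecture (including the position embedding mentioned in the keywords) is not detailed at this point, the following is only an illustrative byte-level CNN sketch with assumed layer sizes, not the actual BinEye network.

```python
import torch
import torch.nn as nn

class ByteLevelCNN(nn.Module):
    """Illustrative four-way classifier (-O0 / -O1 / -O2,-O3 / -Os) over raw
    object-file bytes; an assumption-laden stand-in, not the exact BinEye model."""
    def __init__(self, max_len=4096, embed_dim=64, num_classes=4):
        super().__init__()
        self.byte_embed = nn.Embedding(256, embed_dim)     # one vector per byte value
        self.pos_embed = nn.Embedding(max_len, embed_dim)  # learned position embedding
        self.conv = nn.Sequential(
            nn.Conv1d(embed_dim, 128, kernel_size=8, stride=4),
            nn.ReLU(inplace=True),
            nn.Conv1d(128, 128, kernel_size=4, stride=2),
            nn.ReLU(inplace=True),
            nn.AdaptiveMaxPool1d(1),
        )
        self.fc = nn.Linear(128, num_classes)

    def forward(self, byte_ids):                    # (batch, max_len) ints in [0, 255]
        pos = torch.arange(byte_ids.size(1), device=byte_ids.device)
        x = self.byte_embed(byte_ids) + self.pos_embed(pos)   # (B, L, D)
        x = self.conv(x.transpose(1, 2)).squeeze(-1)          # (B, 128)
        return self.fc(x)
```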
The contributions of this paper are summarized as follows:
* We propose **BinEye**, a model for recognizing compiler optimization levels that requires no prior knowledge or feature engineering: the object file is fed directly to the model as input. Being based on CNNs, the model supports fast parallel training and testing and very fast forward computation.
* We built a dataset consisting of **80028** object files totaling 691MB. Experiments on this dataset show that our model achieves high recognition accuracy.
* We propose a method that uses the outputs of our model to quickly and clearly analyze the differences introduced by different compilation optimization levels.
## II Related Work
Prior to this work, there has been considerable research on software security with respect to compiler optimization levels. D'Silva et al. [3] proposed a broad research programme whose goal is to identify, understand, and mitigate the impact of security errors introduced by different compiler optimizations. Wang et al. [1] proposed a novel model that views unstable code in terms of the compiler optimizations that leverage undefined behavior; optimizations that introduce unstable code may cause security problems. Using their model, they uncovered 160 new bugs that were confirmed and fixed by developers. These results show that different compiler optimization levels can cause security vulnerabilities.
Because many programming languages and CPU architectures exist in the real world, rule-based approaches to such security problems become complicated: they demand strong professional expertise and take a long time to analyze and formulate rules. Moreover, according to Chen et al. [5], static analysis of binary executables involves much manual, repetitive effort, such as adjusting load offsets and handling disassembly errors, and the difficulty of disassembly limits the semantic information that can be recovered. We therefore try to capture semantic and structural patterns directly from the binary code. With the successful application of deep learning to image processing, language processing, and other fields, it has also been applied to security, since neural networks can automatically learn and memorize complex patterns in programs. For example, the CNN-based malware recognition model proposed by Edward et al. [8] automatically learns characteristics of binary code to distinguish malware from benign software. Shihab [10] proposed an efficient and scalable technique for computer network security that uses a multi-layer neural network for a decryption scheme and public-key creation. Xu et al. [13] used graph-encoded neural networks for vulnerability function matching, also with good results. Shin [14] applied artificial neural networks to function recognition, a crucial first step in binary analysis, and showed that deep learning identifies functions in binaries better than other machine-learning-based methods. The RNN-based model of Chen et al. [11] has also been applied to compiler optimization level recognition, but it requires strong expertise for feature engineering and complex data preprocessing, and it focuses only on the X86 architecture. Their feature extraction works on assembly code and therefore requires disassembly which, as mentioned before, introduces many errors, so their method is not accurate enough.
Inspired by the work above, we apply a new CNN-based neural network to compiler optimization level recognition and achieve good results. Compared with the previous work of Chen [11], we improve the accuracy of the four-class task and simplify it: the compiler optimization level of an object file is identified directly, without feature extraction or complex preprocessing. The data fed into our model requires no disassembly, a process that can introduce flaws. Our model also greatly outperforms the RNN-based model in terms of model size, forward computation speed, and related metrics.
## III Detailed Design
The overall system architecture is shown in Fig. 1. The first part is data generation: we compile source code at five different compiler optimization levels with the cross-compilation toolchain arm-linux-gnueabi-gcc and obtain four kinds of object files, -O0, -O1, -O2/-O3, and -Os. We then extract the code segment data from the object files as the input to **BinEye**. The code segment binary data is fed into the model, which performs the compiler optimization level recognition task. Using the intermediate outputs of the model, we can analyze the assembly code, as discussed in Section V. Once the compiler optimization level is known, it can support other tasks such as binary analysis and vulnerability discovery, which are not discussed in this paper.

Fig. 1: System Architecture.
### _Data Generation and Extraction_
We use the buildroot [16] cross-compilation toolchain to compile 463 open-source components at five compiler optimization levels -O0, -O1, -O2, -O3, -Os, generating a total of 80028 object files (see Table I for details). The buildroot configuration command is **"make menuconfig"**, and the component source code is downloaded with "make source". To generate object files at a given optimization level, we use shell commands to replace the optimization-level strings ("-O0, -O1, -O2, -O3, -Os") in each component's source package with the target level, return to the root directory of buildroot, and execute the "make" command to compile the source code at the specified level. Since an executable binary is linked from object files and the object file is the smallest compilation unit, this work only considers object files when constructing the dataset. Note that even when the optimization level is specified in this way, there is no guarantee that all source code is compiled with it (the optimization level of some source code is not specified in its makefile). We therefore remove from the compiled object files all files whose MD5 signatures coincide across optimization levels, so as to ensure that the selected binary files carry the correct compilation optimization level.
According to the structure of the ELF file, many sections are not affected by the compiler optimization level, so we focus only on the content of the code segment, which is easy to obtain with the "readelf" tool. Since the ARM architecture is a RISC architecture, the 4-byte-aligned binary data obtained with "readelf" is fed directly into the **BinEye** model without any further processing.
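As an illustration, the following minimal Python sketch extracts the code bytes from an object file. It assumes the third-party `pyelftools` package and uses the `.text` section as the code segment; the paper itself uses the "readelf" tool, so this is only one possible way of obtaining the same data.

```python
from elftools.elf.elffile import ELFFile  # assumed dependency: pyelftools

def extract_code_bytes(path, max_bytes=4096):
    """Return up to max_bytes of raw code-section bytes from an ELF object file."""
    with open(path, "rb") as f:
        elf = ELFFile(f)
        section = elf.get_section_by_name(".text")  # code section of the object file
        if section is None:
            return b""
        data = section.data()[:max_bytes]
        # Pad to a multiple of 4 bytes so the data stays instruction-aligned (ARM/RISC).
        pad = (-len(data)) % 4
        return data + b"\x00" * pad

# Example usage (hypothetical file name):
# code = extract_code_bytes("exec_O2.o")
# print(len(code), "bytes extracted")
```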
### _Model Structure_
The model consists of three main parts: the data representation part, the convolution and pooling part, and the output part. The detailed design is shown in Fig. 2. Since convolution and pooling are standard CNN modules, we only describe the data embedding representation (III-B1 and III-B2) and the special structure of our CNN (III-B3). The individual parts are introduced in the following sections.
#### III-B1 Word Embedding
We extract \(1024\times 4\) bytes from the code segment of the binary object file as the raw input to the neural network. Because the ARM architecture is a RISC architecture in which each instruction occupies a fixed 4 bytes, extracting \(1024\times 4\) bytes is equivalent to extracting 1024 instructions as the network input. This data extraction method also applies to other RISC architectures, such as MIPS and PowerPC. The first layer of the network is the word embedding layer, which encodes the input instruction bytes. We use the embedding layer to represent each byte with a 4-dimensional vector in order to capture and represent the semantic similarity of certain instructions.
To better capture the characteristics of RISC instructions, we embed the input \(x=(x_{1},...,x_{i},...,x_{m})\) into a distributed space as \(w=(w_{1},...,w_{j},...,w_{m})\), where \(w_{j}\in R^{f}\) is a column of an embedding matrix \(D\in R^{f\times V}\). Here \(V\) is the number of possible values of the input data; since each byte takes at most \(2^{8}\) values, \(V=2^{8}\), and \(f\) is the dimension of the vector that represents one input byte.
From previous work [15], we know that word embedding reduces the data dimension compared with a one-hot representation, and that it can express semantic relationships between instructions.
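As a minimal illustration of this byte-level embedding (PyTorch is an assumption here; the paper does not name its framework, and the dimensions follow the description above):

```python
import torch
import torch.nn as nn

# Each byte value (0..255) is mapped to a trainable 4-dimensional vector.
byte_embedding = nn.Embedding(num_embeddings=256, embedding_dim=4)

raw = torch.randint(0, 256, (1, 1024 * 4))   # 1024 ARM instructions = 4096 bytes
embedded = byte_embedding(raw)               # shape: (1, 4096, 4)
```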
#### III-B2 Position Embedding
In order for our model to perceive the ordering of instructions, we embed the absolute positions \((1,...,m)\) of the \(m\) input instructions as \(p=(p_{1},...,p_{m})\), where \(p_{j}\in R^{g}\) is a column of a position embedding matrix \(F\in R^{g\times S}\) (\(S\) is the number of input instructions), i.e., each fixed instruction position is embedded as a g-dimensional vector, so that the model can combine the position embedding with the word embedding. Since the object file has not yet been linked, the code segment offsets start from 0, so absolute and relative addresses are equivalent. Note that the position embedding matrix differs from the embedding matrix of the previous section: it is a hard, constant embedding, meaning the representation of a given position is fixed and not trainable, whereas the word embedding matrix is trainable. For a given position \(n\), with the embedding dimension specified as \(g\), position \(n\) is embedded as the vector \(p_{n}\) given in equation 1.
\[p_{n}[k]=(1-\frac{n}{S+1})-(\frac{k}{g+1})\times(1-2\times\frac{n}{(S+1)}) \tag{1}\]
where \(0\leq k<g\) indexes the \(k\)-th dimension of the position code, and \(S\) represents the total number of instructions; \(S=1024\) in this paper.
The word embedding and the position embedding are combined to represent the input as \(e=(w_{1}+p_{1},...,w_{m}+p_{m})\), which is passed to the next layer. The position embedding gives the model the ability to capture instruction position information, which leads to higher accuracy.
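A small sketch of this fixed position embedding, directly following equation (1); the embedding dimension g=4 and the 1-based position indexing are assumptions for illustration:

```python
import numpy as np

def position_embedding(S=1024, g=4):
    """Constant (non-trainable) position embedding matrix of shape (S, g).
    Row n-1 holds the embedding p_n of position n, as in equation (1)."""
    P = np.zeros((S, g))
    for n in range(1, S + 1):
        for k in range(g):
            P[n - 1, k] = (1 - n / (S + 1)) - (k / (g + 1)) * (1 - 2 * n / (S + 1))
    return P

# P = position_embedding()
# The model adds P[n-1] to the word embedding of the n-th instruction.
```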
#### III-B3 Convolution and Pooling
We set the width of every convolution kernel equal to the output width of the preceding embedding layer and set the lengths to \(Conv\_K_{1},Conv\_K_{2},Conv\_K_{3},Conv\_K_{4}\) (four different kinds of convolution kernels), with no padding after convolution. Embedding the \(m\times n\) input data (n is the width of the input matrix) yields an \(m\times n\times h\) tensor (h is the dimension of the distributed representation of each byte). After convolving with a kernel of length \(k\) over a sequence of length \(m\), we obtain an \((m-k+1)\)-dimensional vector rather than a matrix, i.e., a tensor of shape \((m-k+1)\times 1\times 1\times num\_filters\) (\(num\_filters\) is the number of convolution kernels per kernel size). Max pooling then takes the maximum of each channel, giving a \(1\times 1\times num\_filters\) tensor. Finally, the outputs of all convolution operations are concatenated and reshaped into a vector of size \(num\_filters\times filter\_sizes\) (\(filter\_sizes\) is the number of kernel sizes), which is the input to the following fully connected layer.
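A minimal PyTorch sketch of this convolution-and-pooling structure is given below; the framework, layer names, and hyperparameter values (embedding width 4, num_filters = 32) are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class ConvPoolBlock(nn.Module):
    """Convolutions of several lengths over the embedded byte sequence,
    each followed by global max pooling, then concatenation."""
    def __init__(self, embed_dim=4, kernel_sizes=(2, 3, 4, 5), num_filters=32):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, kernel_size=k) for k in kernel_sizes]
        )

    def forward(self, x):          # x: (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)      # -> (batch, embed_dim, seq_len) for Conv1d
        pooled = [conv(x).max(dim=2).values for conv in self.convs]  # global max pool
        return torch.cat(pooled, dim=1)   # (batch, num_filters * len(kernel_sizes))

# block = ConvPoolBlock()
# features = block(torch.randn(8, 4096, 4))   # 8 samples of 4096 embedded bytes
```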
#### III-B4 Output
The last part of the model is the fully connected layer, which enhances the expressive power of the model. The output of the fully connected layer is passed through the softmax function to obtain the classification probability of each class, as defined in equation 2.
\[S_{i}=\frac{e^{V_{i}}}{\sum_{j=1}^{C}e^{V_{j}}} \tag{2}\]
where \(V_{i}\) is the \(i\)-th entry of the dense-layer output vector and \(C\) is the total number of classes; since there are four classes in total (-O0, -O1, -O2/-O3, -Os), \(C=4\). The output of the model is a 4-dimensional vector whose \(i\)-th dimension represents the probability of the \(i\)-th class, and we apply the argmax function to this vector to obtain the predicted class.

Fig. 2: BinEye Model Structure.
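For completeness, a small sketch of this output stage (again illustrative; the feature size of 128 is an assumption):

```python
import torch
import torch.nn as nn

num_classes = 4                      # -O0, -O1, -O2/-O3, -Os
dense = nn.Linear(128, num_classes)  # 128 is an assumed pooled-feature size

features = torch.randn(8, 128)       # dummy batch of pooled features
probs = torch.softmax(dense(features), dim=1)   # equation (2)
pred = probs.argmax(dim=1)           # predicted optimization-level class per sample
```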
## IV Experiment
This section describes the training environment of the model, the training and test results, and an analysis of the experimental results.
### _Model Performance_
Our model **BinEye** is fully based on CNNs and equipped with position embedding. Thanks to the weight-sharing property of CNNs, the model has only 30,639 parameters in total, and the forward computation for 457 samples takes only 7 seconds, which is much faster than the RNN models. Our model achieves an accuracy of 97.24%, higher than the RNN models.
The order of program statements is sometimes very important for the control flow, so the model must be able to capture sequence order. In this paper, position embedding numbers each statement position and associates each number with a vector; combining the position vector with the instruction embedding vector introduces position information for every instruction, and we evaluate the resulting model on the task of identifying the different compiler optimizations. To better capture the semantic differences of instruction combinations of different lengths, we tried convolution kernels of different lengths and different combinations of them and finally obtained a well-performing combination: four types of convolution kernels that differ only in length, namely 2, 3, 4, and 5. Our model took a few hours to train on the training dataset and was then evaluated on the test dataset. According to Table II, the recognition accuracy of **BinEye** with kernel sizes 2, 3, 4, 5 is above 95% for each of the compiler optimization levels -O0, -O1, -Os, and -O2/-O3, and the overall accuracy over the four classes reaches 97.24%.
### _Comparison With RNN Models_
For comparison, we built several RNN-based neural networks. Each uses the same word embedding representation, followed by an RNN layer that is one of **LSTM, GRU, CNN+LSTM, CNN+GRU**, and a dense output layer. We fed the same dataset to these RNN models, which took a long time to train and test on the same machine as the **BinEye** model. We recorded the test accuracy, the model size (number of parameters), and the forward computation speed (number of calculations per second) of the four RNN networks for comparison with our CNN-based model. The comparison in Table II shows that our model is far superior in both accuracy and computation speed.
### _Application on Object Files_
During training and testing we only considered a 4K-sized block of the code section, i.e., a part of the code segment of each object file, so the final accuracy of 97.24% may not fully represent BinEye's ability to recognize the optimization level of a whole object file. We therefore reloaded the trained model and tested its ability to recognize the optimization level of whole object files from our dataset, obtaining a final accuracy of 97.49%. For a single object file, we split it into several 4K-sized binary code blocks, used BinEye to recognize the optimization level of every block, and took the mode of the per-block results as the final result for the object file. We counted the TPR and FPR under different thresholds and obtained the ROC curve shown in Fig. 3.

Fig. 3: ROC of Compiler Optimization Level Classification.
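A minimal sketch of this whole-file voting scheme; the helper name `predict_block` is an assumption standing in for a forward pass of the trained model:

```python
from collections import Counter

BLOCK = 4096  # 4K code block = 1024 four-byte ARM instructions

def predict_object_file(code_bytes, predict_block):
    """code_bytes: raw code-segment bytes of one object file.
    predict_block: function mapping a 4K byte block to a class label (0..3).
    Returns the mode of the per-block predictions, or None for empty input."""
    labels = []
    for start in range(0, len(code_bytes), BLOCK):
        block = code_bytes[start:start + BLOCK]
        block = block.ljust(BLOCK, b"\x00")   # pad the last block
        labels.append(predict_block(block))
    return Counter(labels).most_common(1)[0][0] if labels else None
```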
## V A Method to Analyze Compilation Optimization
We analyzed the outputs of the convolution and pooling part and found that only a few outputs are non-zero, which means that not all instructions contribute to the compilation optimization level recognition task. We therefore focused on the instruction sequences corresponding to the non-zero outputs. After analyzing dozens of object files, we took two typical files, exec_O0.o and exec_O2.o, compiled from the same source code at different optimization levels, for analysis. The two files can be obtained from github [21].
From the convolution and pooling outputs for exec_O2.o we selected the few largest values, since large values have the greatest impact on the final classification. Figure 4 shows these outputs: the annotation of each point gives the score of the code block at the corresponding address, and the x-axis gives the position of the instruction among the 1024 instructions of the model input. By analyzing the higher-scoring instructions and comparing them with the instruction sequences at the corresponding locations of exec_O0.o, we propose a method to analyze the differences between homologous binary code compiled at different optimization levels. The method steps are as follows (a minimal code sketch is given after the list).
1. Extract the binary data of the code segments and divide it into 4K-sized blocks.
2. Feed the code segment data into the BinEye model, perform the forward computation, and obtain the outputs after convolution and pooling.
3. Obtain the instruction addresses whose output value is greater than 0, and extract the following 4 consecutive instructions at each such address.
4. Summarize the patterns of these consecutive instructions and compare them with code compiled from the same source at other compilation optimization levels.
5. Compare these patterns with those of code at other compiler optimization levels and summarize the differences between the different levels of compilation optimization.
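The following sketch illustrates steps 2-3; the threshold, the 4-byte instruction width, and the score format are assumptions, and the model interface is hypothetical:

```python
def high_score_addresses(scores, threshold=0.0, instr_bytes=4):
    """scores: list of (position, value) pairs from the convolution-and-pooling
    outputs of one 4K block. Returns, for every position whose score exceeds the
    threshold, its byte address together with the next 4 instruction addresses."""
    hits = []
    for pos, value in scores:
        if value > threshold:
            addr = pos * instr_bytes
            window = [addr + i * instr_bytes for i in range(4)]  # 4 consecutive instructions
            hits.append((addr, value, window))
    return hits

# Example (scores taken from Fig. 4 of the paper):
# hits = high_score_addresses([(0x484 // 4, 6.33), (0x958 // 4, 6.56)])
```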
Using this method, we found three types of differences between the two compilation optimization levels, marked with different shapes in Figure 4: differences in function headers, special instruction sequences, and register usage. We cover them in the following sections.
### _Difference in Function Header_
According to Fig. 4, we first analyzed the instruction sequence corresponding to **(0x484, 6.33)** and compared it with the code of the corresponding function header in the -O0 object file exec_O0.o. This reveals a clear difference between the -O0 and -O2 optimization levels: code compiled at -O0 always saves the R11 register on the stack in the function header and restores it when the function exits, whereas in code compiled at -O2, one or more of the registers R4-R10 are saved in the function header by the STMFD instruction. A sample of this difference is shown in Fig. 5 (-O2 uses the instruction _STMFD SP!_,{_R4,R5,LR_}, while -O0 uses _STMFD SP!_,{_R11,LR_}). After analyzing a large number of function-header differences between the two object files (such as (0xd3c, 6.50) and (0x0, 5.18)), we consider this a good basis for distinguishing compiler optimization levels. It is impressive that BinEye recognizes these nuances of register usage in the function header.
### _Special Instruction Sequence_
We analyzed the instruction sequence of length 4 corresponding to **(0x958, 6.56)** in Fig. 4 and the corresponding instruction sequence in the -O0 object file, and then compared it with similar instruction combinations elsewhere in the file (such as (0x498, 6.18)). We found a very interesting pattern: the instruction combinations "CMP BEQ" or "CMP CMPEQ BEQ" often appear together at the -O0 optimization level, whereas at the -O2 optimization level combinations such as "CMP BNE" appear instead.
Fig. 4: Convolution and Pooling output
Fig. 5: instructions in 0x484 of -O2 and corresponding instructions of -O0 for function header analysis
Fig. 6: instructions in 0x958 of -O2 and corresponding instructions of -O0 for special instruction sequence analysis
### _Register Usage_
Register usage is an important part of the ARM Architecture Procedure Call Standard [20]. We analyzed the instruction sequence of length 4 corresponding to **(0xf24, 4.42)** in Figure 4 and the corresponding instruction sequence in the -O0 object file. We found that code at optimization level -O2 uses registers as much as possible to store local variables, whereas at -O0 the code always pushes local variables onto the function stack and loads them when needed. As shown in Figure 7, the -O0 code uses "LDR R3, [R11,#var_3C]" to load a local variable and stores the immediate \(2\) into the memory address corresponding to "parameter+fdtable", while in the -O2 code register R4 holds the local variable.
By analyzing the outputs of BinEye's convolution and pooling part and comparing the instruction sequences of the corresponding parts at optimization levels -O0 and -O2, we did find unique patterns that separate the compilation optimization levels. In other words, by continuously learning from the input binary code, BinEye memorizes special instruction patterns related to compilation optimization, which allows it to recognize the optimization levels of different object files.
## VI Conclusion
This paper proposes **BinEye**, an efficient deep learning model that identifies the compilation optimization level of an object file without any prior knowledge. The model is based entirely on CNNs and therefore has few parameters, fast forward computation, and fast training. We first carried out extensive preparation, compiling hundreds of open-source components at five compilation optimization levels and generating a large dataset for model training and testing. Guided by the classification accuracy, we continuously adjusted the parameters and modified the network structure; the final accuracy of the classification task is about 97.24%, which achieves an ideal effect. By analyzing the model's outputs, we found that it memorizes special instruction patterns that distinguish compilation optimization levels, and with the help of BinEye we successfully analyzed and verified several differences in assembly code across compilation optimization levels.
## VII Acknowledgement
The research was supported by National Natural Science Foundation of China (No.U1636120), Strategic Priority Research Program of Chinese Academy of Sciences (No. XDC02020100) and Key Program of National Natural Science Foundation of China (No.U1766215).
|
2308.11556
|
Odd unimodal sequences
|
In this paper we study odd unimodal and odd strongly unimodal sequences. We
use $q$-series methods to find several fundamental generating functions.
Employing the Euler--Maclaurin summation formula we obtain the asymptotic main
term for both types of sequences. We also find families of congruences modulo
$4$ for the number of odd strongly unimodal sequences.
|
Kathrin Bringmann, Jeremy Lovejoy
|
2023-08-22T16:37:23Z
|
http://arxiv.org/abs/2308.11556v1
|
# Odd unimodal sequences
###### Abstract.
In this paper we study odd unimodal and odd strongly unimodal sequences. We use \(q\)-series methods to find several fundamental generating functions. Employing the Euler-Maclaurin summation formula we obtain the asymptotic main term for both types of sequences. We also find families of congruences modulo \(4\) for the number of odd strongly unimodal sequences.
Key words and phrases: asymptotics, Euler-Maclaurin summation formula, mock theta functions, modular forms, partitions, Tauberian theorems, unimodal sequences.
2020 Mathematics Subject Classification: 11F03, 11F37, 11P82, 11P83, 33D15.
## 1. Introduction and statement of results
A sequence is _unimodal_ if it is weakly increasing up to a point and then weakly decreasing thereafter. Let \(u(n)\) denote the number of unimodal sequences of natural numbers having the form
\[a_{1}\leq\cdots\leq a_{r}\leq\overline{c}\geq b_{1}\geq\cdots\geq b_{s}, \tag{1.1}\]
with
\[n=c+\sum_{j=1}^{r}a_{j}+\sum_{j=1}^{s}b_{j}.\]
The distinguished point \(c\) is called the _peak_ of the sequence and the sum of the entries \(n\) is called the _weight_. For example, we have \(u(4)=12\), the twelve unimodal sequences of weight \(4\) being
\[\begin{split}\left(\overline{4}\right),\left(1,\overline{3} \right),\left(\overline{3},1\right),\left(2,\overline{2}\right),\left(1,1, \overline{2}\right),\left(1,\overline{2},1\right),\left(\overline{2},2\right), \left(\overline{2},1,1\right),\\ \left(1,1,1,\overline{1}\right),\left(1,1,\overline{1},1\right), \left(1,\overline{1},1,1\right),\left(\overline{1},1,1,1\right).\end{split} \tag{1.2}\]
A unimodal sequence is _strongly unimodal_ if the inequalities in (1.1) are strict. Let \(u^{*}(n)\) denote the number of strongly unimodal sequences of natural numbers with weight \(n\). For example, we have \(u^{*}(4)=4\), the four strongly unimodal sequences of weight \(4\) being1
Footnote 1: For strongly unimodal sequences we drop the overline notation since the peak cannot repeat.
\[(4),(1,3),(3,1),(1,2,1). \tag{1.3}\]
Unimodal sequences and strongly unimodal sequences have been the subject of a considerable amount of research, especially over the last two decades. The generating functions for these sequences are related to number-theoretic objects like mock theta functions, false theta functions, and quantum modular forms, whose theories can then be applied to deduce many interesting results (see Subsections 14.4, 15.7, and 21.4 of [11]; for some more recent work, see [13, 16]).
In this paper we initiate the study of odd unimodal sequences and odd strongly unimodal sequences, wherein all numbers must be odd.2 Let \(\operatorname{ou}(n)\) and \(\operatorname{ou}^{*}(n)\) denote the number of odd unimodal and odd strongly unimodal sequences of weight \(n\). Continuing the example in (1.2) and (1.3), we have \(\operatorname{ou}(4)=6\) and \(\operatorname{ou}^{*}(4)=2\).
Footnote 2: These were briefly treated in [4], where they were called convex and strictly convex compositions with odd parts.
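Explicitly, the six odd unimodal sequences of weight \(4\) are
\[\left(1,\overline{3}\right),\left(\overline{3},1\right),\left(1,1,1,\overline{1}\right),\left(1,1,\overline{1},1\right),\left(1,\overline{1},1,1\right),\left(\overline{1},1,1,1\right),\]
and the two odd strongly unimodal sequences of weight \(4\) are \((1,3)\) and \((3,1)\).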
We begin by computing the generating functions upon which all of our work is based. We use two-variable generating functions, where the second variable tracks the rank of the unimodal sequence.
In the notation of (1.1), the rank is defined to be \(r-s\), or the number of parts of the sequence to the left of the peak minus the number to the right. From the perspective of modular forms, this second variable is the "Jacobi variable". We use the standard \(q\)-hypergeometric notation,
\[(a;q)_{n}:=\prod_{j=0}^{n-1}\left(1-aq^{j}\right),\qquad(a_{1},\ldots,a_{\ell};q )_{n}:=(a_{1};q)_{n}\cdots(a_{\ell};q)_{n} \tag{1.4}\]
valid for \(\ell\in\mathbb{N}\), \(a,a_{1},\ldots,a_{\ell}\in\mathbb{C}\) and \(n\in\mathbb{N}_{0}\cup\{\infty\}\). Let \(\operatorname{ou}(m,n)\) denote the number of odd unimodal sequences of weight \(n\) with rank \(m\).
**Theorem 1.1**.: _We have_
\[\sum_{\begin{subarray}{c}n\geq 0\\ m\in\mathbb{Z}\end{subarray}}\operatorname{ou}(m,n)\zeta^{m}q^{n}=\sum_{n \geq 0}\frac{q^{2n+1}}{\left(\zeta q,\zeta^{-1}q;q^{2}\right)_{n+1}} \tag{1.5}\] \[=\sum_{n\geq 0}(-1)^{n+1}\zeta^{3n+1}q^{3n^{2}+2n}\left(1+\zeta q ^{2n+1}\right)+\frac{1}{(\zeta q,\zeta^{-1}q;q^{2})_{\infty}}\sum_{n\geq 0}(-1)^ {n}\zeta^{2n+1}q^{n^{2}+n}\] (1.6) \[=\frac{q}{(q^{2};q^{2})_{\infty}}\left(\sum_{n,r\geq 0}-\sum_{n,r<0 }\right)\frac{(-1)^{n+r}q^{n^{2}+3n+4rn+r^{2}+3r}}{1-\zeta q^{2r+1}}. \tag{1.7}\]
Let \(\operatorname{ou}^{*}(m,n)\) denote the number of odd strongly unimodal sequences of weight \(n\) with rank \(m\).
**Theorem 1.2**.: _We have, with \(Q(r,s):=\frac{r^{2}}{4}+\frac{7rs}{2}+\frac{s^{2}}{4}+\frac{3r}{2}+\frac{5s}{2} +1\),_
\[\sum_{\begin{subarray}{c}n\geq 0\\ m\in\mathbb{Z}\end{subarray}}\operatorname{ou}^{*}(m,n)\zeta^{m}q^{n} =\sum_{n\geq 0}\left(-\zeta q,-\zeta^{-1}q;q^{2}\right)_{n}q^{2n+1} \tag{1.8}\] \[=-\frac{1}{(q^{2};q^{2})_{\infty}}\sum_{n\in\mathbb{Z}}\frac{(-1) ^{n}q^{3n^{2}+3n+1}}{1+\zeta q^{2n+1}}+\frac{1}{(q^{2};q^{2})_{\infty}}\sum_{ n\in\mathbb{Z}}\frac{\zeta^{-n}q^{n^{2}+2n+1}}{1+\zeta q^{2n+1}}\] (1.9) \[=\frac{q}{(q^{2};q^{2})_{\infty}}\left(\sum_{n,r\geq 0}-\sum_{n,r <0}\right)(-1)^{n}\zeta^{r}q^{3n^{2}+3n+4nr+r^{2}+2r}\] (1.10) \[=\frac{\left(-\zeta q,-\zeta^{-1}q;q^{2}\right)_{\infty}}{(q^{2}; q^{2})_{\infty}^{2}}\left(\sum_{\begin{subarray}{c}r,s\geq 0\\ r\equiv s\pmod{2}\end{subarray}}-\sum_{\begin{subarray}{c}r,s<0\\ r\equiv s\pmod{2}\end{subarray}}\right)\frac{(-1)^{\frac{r-s}{2}}q^{Q(r,s)}}{ 1+\zeta q^{r+s+1}}. \tag{1.11}\]
**Remark**.: _The analogue of Theorem 1.1 for ordinary unimodal sequences is [22, Proposition 2.1]. For strongly unimodal sequences, the corresponding generating functions are dispersed in the literature. The analogues of (1.8) and (1.9) are [11, equation (14.18)] and [12, Lemma 3.1], the analogues of (1.10) and (1.11) are Theorems 4.1 and 1.3 of [19]._
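As a quick numerical illustration (not needed for any of the proofs), the following short Python sketch expands the \(\zeta=1\) specialization of (1.8), namely \(\sum_{n\geq 0}(-q;q^{2})_{n}^{2}q^{2n+1}\), as a \(q\)-series and recovers the initial values of \(\operatorname{ou}^{*}(n)\), in particular \(\operatorname{ou}^{*}(4)=2\).

```python
N = 20                      # truncate the q-expansion at q^N

def mul(a, b):
    """Multiply two truncated power series given as coefficient lists."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j > N:
                    break
                c[i + j] += ai * bj
    return c

ou_star = [0] * (N + 1)
prod = [1] + [0] * N        # (-q; q^2)_0^2 = 1
for n in range(N + 1):
    if 2 * n + 1 <= N:      # add the term (-q; q^2)_n^2 * q^(2n+1)
        for i, c in enumerate(prod):
            if i + 2 * n + 1 <= N:
                ou_star[i + 2 * n + 1] += c
    # update prod to (-q; q^2)_{n+1}^2 by multiplying with (1 + q^(2n+1))^2
    factor = [0] * (N + 1)
    factor[0] = 1
    if 2 * n + 1 <= N:
        factor[2 * n + 1] += 2
    if 4 * n + 2 <= N:
        factor[4 * n + 2] += 1
    prod = mul(prod, factor)

print(ou_star[:11])   # [0, 1, 0, 1, 2, 2, 2, 2, 4, 6, 6]
```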
Our first use of these generating functions is to find asymptotic estimates for \(\operatorname{ou}(n)\) and \(\operatorname{ou}^{*}(n)\).
**Theorem 1.3**.: _We have, as \(n\to\infty\),_
\[\operatorname{ou}(n)\sim\frac{e^{\pi\sqrt{\frac{2n}{3}}}}{2^{\frac{13}{4}}3^{ \frac{1}{4}}n^{\frac{3}{4}}}. \tag{1.12}\]
**Theorem 1.4**.: _We have, as \(n\to\infty\),_
\[\operatorname{ou}^{*}(n)\sim\frac{e^{\pi\sqrt{\frac{n}{3}}}}{2^{\frac{5}{2}}3^{\frac{1}{4}}n^{\frac{3}{4}}}. \tag{1.13}\]
**Remark**.: _By [7, 27], the analogue of (1.12) for the number of ordinary unimodal sequences \(u(n)\) is_
\[u(n)\sim\frac{e^{\pi\sqrt{\frac{4n}{3}}}}{2^{3}3^{\frac{3}{4}}n^{\frac{5}{4}}}.\]
_The analogue of (1.13) for the number of strongly unimodal sequences \(u^{*}(n)\) is_
\[u^{*}(n)\sim\frac{e^{\pi\sqrt{\frac{2n}{3}}}}{2^{\frac{13}{4}}3^{\frac{1}{4}}n ^{\frac{3}{4}}}, \tag{1.14}\]
_due to [26]. Note that the asymptotics in (1.12) and (1.14) agree._
As a second result, we prove congruences modulo \(4\) for the number of odd strongly unimodal sequences of weight \(n\). Here we are motivated by a corresponding result for strongly unimodal sequences, which says that if \(\ell\equiv 7,11,13,17\ (\mathrm{mod}\,24)\) is prime and \((\frac{j}{\ell})=-1\), then
\[u^{*}\left(\ell^{2}n+\ell j-\left(\tfrac{\ell^{2}-1}{24}\right)\right)\equiv 0 \ (\mathrm{mod}\,4).\]
This was conjectured by Bryson, Ono, Pitman, and Rhoades [14] and proved by Chen and Garvan [16]. Our analogue for \(\mathrm{ou}^{*}(n)\) is as follows; see Theorem 6.2 for a more general theorem.
**Theorem 1.5**.: _Let \(\ell\geq 5\) be prime. If \(\ell\equiv 7,13\ (\mathrm{mod}\,24)\) and \((\frac{3j}{\ell})=-1\) or if \(\ell\not\equiv 7,13\ (\mathrm{mod}\,24)\) and \(\ell\nmid j\), then we have:_
1. _If_ \(j\) _is odd, then_ \[\mathrm{ou}^{*}\left(4\ell^{2}n+2j\ell+\left(\tfrac{8\ell^{2}+1}{3}\right) \right)\equiv 0\ (\mathrm{mod}\,4).\]
2. _If_ \(j\) _is even, then_ \[\mathrm{ou}^{*}\left(4\ell^{2}n+2j\ell+\left(\tfrac{2\ell^{2}+1}{3}\right) \right)\equiv 0\ (\mathrm{mod}\,4).\]
The paper is organized as follows: In Section 2, we gather some necessary background on asymptotic methods and indefinite theta functions. In Section 3 we prove Theorems 1.1 and 1.2. Sections 4 and 5 contain proofs of Theorems 1.3 and 1.4. In Section 6, we show Theorem 6.2, which contains the congruences in Theorem 1.5 as a special case. We close in Section 7 with some open problems.
## Acknowledgements
The authors thank Caner Nazaroglu for help with numerical calculations. The first author has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 101001179).
## 2. Preliminaries
### A Tauberian Theorem
Recall the following3 special case \(\alpha=0\) of Theorem 1.1 of [10], which follows from Ingham's Theorem [20].
Footnote 3: The second condition is often dropped in (2.1) which makes the proposition unfortunately incorrect (see [10]).
**Proposition 2.1**.: _Suppose that \(B(q)=\sum_{n\geq 0}b(n)q^{n}\) is a power series with non-negative real coefficients and radius of convergence at least one and that the \(b(n)\) are weakly increasing. Assume that \(\lambda\), \(\beta\), \(\gamma\in\mathbb{R}\) with \(\gamma>0\) exist such that_
\[B\left(e^{-t}\right)\sim\lambda t^{\beta}e^{\frac{\gamma}{t}}\quad\text{as }t\to 0^{+}, \qquad B\left(e^{-z}\right)\ll|z|^{\beta}e^{\frac{\gamma}{|z|}}\quad\text{as }z\to 0, \tag{2.1}\]
_with \(z=x+iy\) (\(x,y\in\mathbb{R},x>0\)) in each region of the form \(|y|\leq\Delta x\) for \(\Delta>0\). Then_
\[b(n)\sim\frac{\lambda\gamma^{\frac{\beta}{2}+\frac{1}{4}}}{2\sqrt{\pi}n^{ \frac{\beta}{2}+\frac{3}{4}}}e^{2\sqrt{\gamma n}}\qquad\text{as }n\to\infty.\]
### The Euler-Maclaurin summation formula
For simplicity we only state the versions of the Euler-Maclaurin summation formula that we use in this paper; see [10] for a more general version for all dimensions. Let \(D_{\theta}:=\{re^{i\alpha}:r\geq 0\text{ and }|\alpha|\leq\theta\}\). A multivariable function \(f\) in \(\ell\) variables is of _sufficient decay_ in \(D\) if there exist \(\varepsilon_{1},\ldots,\varepsilon_{\ell}>0\) such that (we write vectors in bold letters) \(f(\mathbf{x})\ll(x_{1}+1)^{-1-\varepsilon_{1}}\cdots(x_{\ell}+1)^{-1-\varepsilon _{\ell}}\) uniformly as \(|x_{1}|+\ldots+|x_{\ell}|\to\infty\) in \(D\). We first require a one-dimensional version of the Euler-Maclaurin summation formula (see [28]).
**Proposition 2.2**.: _Suppose that \(0\leq\theta<\frac{\pi}{2}\). Let \(f:\mathbb{C}\to\mathbb{C}\) be holomorphic in a domain containing \(D_{\theta}\), so that in particular \(f\) is holomorphic at the origin, and assume that \(f\) and all of its derivatives are of sufficient decay. Then for \(a\in\mathbb{R}\) and \(N\in\mathbb{N}_{0}\), we have, uniformly as \(z\to 0\) in \(D_{\theta}\),_
\[\sum_{m\geq 0}f((m+a)z)=\frac{1}{z}\int_{0}^{\infty}f(w)dw-\sum_{n=0}^{N-1} \frac{B_{n+1}(a)f^{(n)}(0)}{(n+1)!}z^{n}+O_{N}\left(z^{N}\right),\]
_where \(B_{n}(x)\) denotes the \(n\)-th Bernoulli polynomial._
We also require the two-dimensional case of the Euler-Maclaurin summation formula (see [10]).
**Proposition 2.3**.: _Suppose that \(0\leq\theta_{j}<\frac{\pi}{2}\) for \(1\leq j\leq 2\), and that \(f:\mathbb{C}^{2}\to\mathbb{C}\) is holomorphic in a domain containing \(D_{\mathbf{\theta}}:=D_{\theta_{1}}\times D_{\theta_{2}}\). If \(f\) and all of its derivatives are of sufficient decay in \(D_{\mathbf{\theta}}\), then for \(\mathbf{a}\in\mathbb{R}^{2}\) and \(N\in\mathbb{N}_{0}\) we have uniformly, as \(z\to 0\) in \(D_{\mathbf{\theta}}\),_
\[\sum_{\mathbf{m}\in\mathbb{N}_{0}^{2}}f((\mathbf{m}+\mathbf{a})z) =\frac{1}{z^{2}}\int_{0}^{\infty}\int_{0}^{\infty}f(\mathbf{w})dw_{1 }dw_{2}-\frac{1}{z}\sum_{n_{1}=0}^{N-1}\frac{B_{n_{1}+1}(a_{1})}{(n_{1}+1)!}z ^{n_{1}}\int_{0}^{\infty}f^{(n_{1},0)}(0,w_{2})dw_{2}\] \[\qquad-\frac{1}{z}\sum_{n_{2}=0}^{N-1}\frac{B_{n_{2}+1}(a_{2})}{( n_{2}+1)!}z^{n_{2}}\int_{0}^{\infty}f^{(0,n_{2})}(w_{1},0)dw_{1}\] \[\qquad+\sum_{n_{1}+n_{2}<N}\frac{B_{n_{1}+1}(a_{1})B_{n_{2}+1}(a_ {2})f^{(n_{1},n_{2})}(\mathbf{0})}{(n_{1}+1)!(n_{2}+1)!}z^{n_{1}+n_{2}}+O_{N}\left( z^{N}\right).\]
### Indefinite theta functions
In this subsection, we recall results from Zwegers' thesis [29]. Fix a quadratic form \(Q\) of signature \((n,1)\) with associated matrix \(A\), so that \(Q(\mathbf{x})=\frac{1}{2}\mathbf{x}^{T}A\mathbf{x}\). Let \(B(\mathbf{x},\mathbf{y}):=Q(\mathbf{x}+\mathbf{y})-Q(\mathbf{x})-Q(\mathbf{y})\) denote the corresponding bilinear form. The set of vectors \(\mathbf{c}\in\mathbb{R}^{\ell}\) with \(Q(\mathbf{c})<0\) splits into two connected components. Two vectors \(\mathbf{c_{1}}\) and \(\mathbf{c_{2}}\) lie in the same component if and only if \(B(\mathbf{c_{1}},\mathbf{c_{2}})<0\). We fix one of the components and denote it by \(C_{Q}\). Picking any vector \(\mathbf{c_{0}}\in C_{Q}\), we have
\[C_{Q}=\left\{\mathbf{c}\in\mathbb{R}^{\ell}:Q(\mathbf{c})<0,\ B(\mathbf{c},\mathbf{c_{0}})<0 \right\}.\]
The cusps are elements from
\[S_{Q}:=\left\{\mathbf{c}\in\mathbb{Z}^{\ell}:\gcd(c_{1},c_{2},\ldots,c_{\ell})=1, \ Q(\mathbf{c})=0,\ B(\mathbf{c},\mathbf{c_{0}})<0\right\}.\]
Let \(\overline{C}_{Q}:=C_{Q}\cup S_{Q}\) and define, for \(\mathbf{c}\in\overline{C}_{Q}\),
\[R(\mathbf{c}):=\begin{cases}\mathbb{R}^{\ell}&\text{if }\mathbf{c}\in C_{Q},\\ \left\{\mathbf{a}\in\mathbb{R}^{\ell}:B(\mathbf{c},\mathbf{a})\notin\mathbb{Z}\right\}& \text{if }\mathbf{c}\in S_{Q}.\end{cases}\]
Let \(\mathbf{c_{1}},\mathbf{c_{2}}\in\overline{C}_{Q}\). We define the _theta function with characteristic \(\mathbf{a}\in R(\mathbf{c_{1}})\cap R(\mathbf{c_{2}})\)_ and \(\mathbf{b}\in\mathbb{R}^{\ell}\) by
\[\vartheta_{\mathbf{a},\mathbf{b}}(\tau):=\sum_{\mathbf{n}\in\mathbb{Z}^{\ell}+\mathbf{a}} \varrho(\mathbf{n};\tau)e^{2\pi iB(\mathbf{b},\mathbf{n})}q^{Q(\mathbf{n})},\]
where
\[\varrho(\boldsymbol{n};\tau)=\varrho_{Q}^{\boldsymbol{c_{1}},\boldsymbol{c_{2}}}( \boldsymbol{n};\tau):=\varrho^{\boldsymbol{c_{1}}}(\boldsymbol{n};\tau)-\varrho^ {\boldsymbol{c_{2}}}(\boldsymbol{n};\tau)\]
with \((\tau=u+iv)\)
\[\varrho^{\boldsymbol{c}}(\boldsymbol{n};\tau):=\begin{cases}E\left(\frac{B( \boldsymbol{c},\boldsymbol{n})\sqrt{v}}{\sqrt{-Q(\boldsymbol{c})}}\right)& \text{if }\boldsymbol{c}\in C_{Q},\\ \operatorname{sgn}(B(\boldsymbol{c},\boldsymbol{n}))&\text{if }\boldsymbol{c}\in S _{Q}.\end{cases}\]
Here the odd function \(E\) is defined as
\[E(w):=2\int_{0}^{w}e^{-\pi t^{2}}dt\]
with the usual convention that \(\operatorname{sgn}(w):=\frac{w}{|w|}\) for \(w\in\mathbb{R}\setminus\{0\}\) and \(\operatorname{sgn}(0):=0\). Note that
\[E(x)=\operatorname{sgn}(x)\left(1-\beta\left(x^{2}\right)\right),\text{ where } \beta(x):=\int_{x}^{\infty}w^{-\frac{1}{2}}e^{-\pi w}dw. \tag{2.2}\]
This in particular yields that \(E(x)\sim\operatorname{sgn}(x)\) as \(|x|\to\infty\).
The theta function satisfies the following transformation law.
**Theorem 2.4**.: _If \(\boldsymbol{a},\boldsymbol{b}\in R(\boldsymbol{c_{1}})\cap R(\boldsymbol{c_{2}})\), then_
\[\vartheta_{\boldsymbol{a},\boldsymbol{b}}\left(-\frac{1}{\tau}\right)=\frac{1} {\sqrt{-\det(A)}}(-i\tau)^{\frac{\ell}{2}}e^{2\pi iB(\boldsymbol{a}, \boldsymbol{b})}\sum_{\boldsymbol{\ell}\in A^{-1}\mathbb{Z}^{\ell}/\mathbb{Z} ^{\ell}}\vartheta_{\boldsymbol{b}+\boldsymbol{\ell},-\boldsymbol{a}}(\tau).\]
## 3. Generating functions and the proofs of Theorems 1.1 and 1.2
In this section we establish the generating functions in Theorems 1.1 and 1.2. We begin with Theorem 1.1.
Proof of Theorem 1.1.: Equation (1.5) is a straightforward consequence of the fact that \((\zeta q;q^{2})_{n+1}^{-1}\) is the generating function for partitions into odd parts of size at most \(2n+1\), with the exponent of \(\zeta\) counting the number of parts. Namely, in the notation of (1.1), the term \(q^{2n+1}\) generates the peak \(\overline{c}\), the term \((\zeta q;q^{2})_{n+1}^{-1}\) generates the odd parts \((a_{1},\dots,a_{r})\) to the left of the peak, and the term \((\zeta^{-1}q;q^{2})_{n+1}^{-1}\) generates the odd parts \((b_{1},\dots,b_{s})\) to the right of the peak. The exponent of \(\zeta\) is \(r-s\). Equation (1.6) is a result in Ramanujan's lost notebook [6, Entry 6.3.4].
Equation (1.7) requires more work. For its proof, we require so-called Bailey pairs (for background see [25]). A pair of sequences \((\alpha_{n},\beta_{n})_{n\geq 0}\) is called a _Bailey pair relative to \((a,q)\)_ if
\[\beta_{n}=\sum_{k=0}^{n}\frac{\alpha_{k}}{(q;q)_{n-k}(aq;q)_{n+k}}.\]
If \((\alpha_{n},\beta_{n})\) is a Bailey pair relative to \((a,q)\), then by [24, equation (1.5)]
\[\sum_{n\geq 0}q^{n}\beta_{n}=\frac{1}{(aq,q;q)_{\infty}}\sum_{n,r\geq 0}(-a)^{n}q ^{\frac{n(n+1)}{2}+(2n+1)r}\alpha_{r}. \tag{3.1}\]
The following sequences form a Bailey pair relative to \((q^{2},q^{2})\)[23, pp. 727-728]:
\[\alpha_{n}=\frac{(-1)^{n}q^{n^{2}+n}\left(1-q^{4n+2}\right)}{(1-q^{2})\left(1 -\zeta q^{2n+1}\right)\left(1-\zeta^{-1}q^{2n+1}\right)},\quad\beta_{n}=\frac {1}{(\zeta q,\zeta^{-1}q;q^{2})_{n+1}}. \tag{3.2}\]
Inserting (3.2) into (3.1) and using the fact that
\[\frac{1-q^{4r+2}}{\left(1-\zeta q^{2r+1}\right)\left(1-\zeta^{-1}q^{2r+1} \right)}=\frac{1}{1-\zeta q^{2r+1}}+\frac{\zeta^{-1}q^{2r+1}}{1-\zeta^{-1}q^{2 r+1}}, \tag{3.3}\]
we compute
\[\sum_{n\geq 0}\frac{q^{2n+1}}{\left(\zeta q,\zeta^{-1}q;q^{2} \right)_{n+1}}=\frac{q}{\left(q^{2};q^{2}\right)_{\infty}^{2}}\sum_{n,r\geq 0 }\frac{(-1)^{n+r}q^{n^{2}+3n+4nr+r^{2}+3r}\left(1-q^{4r+2}\right)}{\left(1- \zeta q^{2r+1}\right)\left(1-\zeta^{-1}q^{2r+1}\right)}\] \[\qquad\qquad=\frac{q}{\left(q^{2};q^{2}\right)_{\infty}^{2}} \left(\sum_{n,r\geq 0}\frac{(-1)^{n+r}q^{n^{2}+3n+4nr+r^{2}+3r}}{1-\zeta q^{2r+1} }+\zeta^{-1}\sum_{n,r\geq 0}\frac{(-1)^{n+r}q^{n^{2}+3n+4nr+r^{2}+5r+1}}{1- \zeta^{-1}q^{2r+1}}\right)\] \[\qquad\qquad=\frac{q}{\left(q^{2};q^{2}\right)_{\infty}^{2}} \left(\sum_{n,r\geq 0}-\sum_{n,r<0}\right)\frac{(-1)^{n+r}q^{n^{2}+3n+4rn+r^{2}+ 3r}}{1-\zeta q^{2r+1}}.\]
In the last step we let \((n,r)\mapsto(-n-1,-r-1)\) and simplify. This completes the proof.
We now turn to the proof of Theorem 1.2.
Proof of Theorem 1.2.: We use the fact that \((-\zeta q;q^{2})_{n}\) is the generating function for partitions into distinct odd parts of size at most \(2n-1\), with the exponent of \(\zeta\) counting the number of parts.
For (1.9) we require two identities,
\[\sum_{n\geq 0}\frac{q^{2n^{2}+2n+1}}{\left(-\zeta q,-\zeta^{-1}q;q ^{2}\right)_{n+1}} =\frac{1}{\left(q^{2};q^{2}\right)_{\infty}}\sum_{n\in\mathbb{Z} }\frac{(-1)^{n}q^{3n^{2}+3n+1}}{1+\zeta q^{2n+1}}, \tag{3.4}\] \[\sum_{n\in\mathbb{Z}}\frac{(a,b;q)_{n}w^{n}}{(c,d;q)_{n}} =\frac{\left(aw,\frac{d}{a},\frac{c}{b},\frac{dq}{abw};q\right)_ {\infty}}{\left(w,d,\frac{q}{b},\frac{cd}{abw};q\right)_{\infty}}\sum_{n\in \mathbb{Z}}\frac{\left(a,\frac{abw}{d};q\right)_{n}\left(\frac{d}{a}\right)^{n }}{(aw,c;q)_{n}}. \tag{3.5}\]
Equation (3.4) may be found in [1, p. 397], while equation (3.5) is a bilateral transformation of Bailey [18, example 5.20 (i)]. The notation in (1.4) is extended to all integers via
\[(a;q)_{n}=\frac{(a;q)_{\infty}}{(aq^{n};q)_{\infty}}.\]
Note that we have the identity [18, (I.2)]
\[(a;q)_{-n}=\frac{(-1)^{n}q^{\frac{n(n+1)}{2}}}{a^{n}\left(\frac{q}{a};q\right) _{n}}. \tag{3.6}\]
We begin by letting \((a,b,w,q)=(-\zeta q,-\zeta^{-1}q,q^{2},q^{2})\) in (3.5) and then letting \(c,d\to 0\). Simplifying and exchanging left- and right-hand sides gives
\[\frac{1}{(q^{2};q^{2})_{\infty}}\sum_{n\in\mathbb{Z}}\frac{\zeta ^{-n}q^{n^{2}+2n+1}}{1+\zeta q^{2n+1}}=\sum_{n\in\mathbb{Z}}\left(-\zeta q,- \zeta^{-1}q;q^{2}\right)_{n}q^{2n+1}\] \[=\sum_{\begin{subarray}{c}n\geq 0\\ m\in\mathbb{Z}\end{subarray}}\operatorname{ou}^{*}(m,n)\zeta^{m}q^{n}+\sum_{n \geq 1}\left(-\zeta q,-\zeta^{-1}q;q^{2}\right)_{-n}q^{-2n+1}\] \[=\sum_{\begin{subarray}{c}n\geq 0\\ m\in\mathbb{Z}\end{subarray}}\operatorname{ou}^{*}(m,n)\zeta^{m}q^{n}+\sum_{n \geq 1}\frac{q^{2n^{2}-2n+1}}{(-\zeta q,-\zeta^{-1}q;q^{2})_{n}}=\sum_{ \begin{subarray}{c}n\geq 0\\ m\in\mathbb{Z}\end{subarray}}\operatorname{ou}^{*}(m,n)\zeta^{m}q^{n}+\sum_{n \geq 0}\frac{q^{2n^{2}+2n+1}}{(-\zeta q,-\zeta^{-1}q;q^{2})_{n+1}}\]
\[=\sum_{\begin{subarray}{c}n\geq 0\\ m\in\mathbb{Z}\end{subarray}}\operatorname{ou}^{*}(m,n)\zeta^{m}q^{n}+\frac{1}{(q ^{2};q^{2})_{\infty}}\sum_{n\in\mathbb{Z}}\frac{(-1)^{n}q^{3n^{2}+3n+1}}{1+ \zeta q^{2n+1}}.\]
Here the final equality uses (3.4) and the antepenultimate equality uses (3.6). Comparing the extremes in this string of equations gives (1.9).
We now turn to (1.10). For this we require the fact that if \((\alpha_{n},\beta_{n})\) is a Bailey pair relative to \((a,q)\), then by [24, Corollary 1.3]
\[\sum_{n\geq 0}(aq;q)_{2n}q^{n}\beta_{n}=\frac{1}{(q;q)_{\infty}}\sum_{n,r\geq 0 }(-a)^{n}q^{\frac{3n(n+1)}{2}+(2n+1)r}\alpha_{r}, \tag{3.7}\]
along with the following Bailey pair relative to \((1,q)\)[3, Lemma 3]:
\[\alpha_{n}=\begin{cases}(-1)^{n}\left(w^{n}q^{\frac{n(n-1)}{2}}+w^{-n}q^{\frac{n(n+1)}{2}}\right)&\text{if }n\geq 1,\\ 1&\text{if }n=0,\end{cases}\qquad\beta_{n}=\frac{\left(w,\frac{q}{w};q\right)_{n}}{(q;q)_{2n}}.\]
Using this Bailey pair with \((w,q)=(-\zeta q,q^{2})\) in (3.7) we compute
\[\sum_{n\geq 0}\left(-\zeta q,-\zeta^{-1}q;q^{2}\right)_{n}q^{2n+1}\]
\[=\frac{q}{(q^{2};q^{2})_{\infty}}\left(\sum_{n,r\geq 0}(-1)^{n}\zeta^{r}q^{3n ^{2}+3n+4nr+r^{2}+2r}+\sum_{\begin{subarray}{c}n\geq 0\\ r\geq 1\end{subarray}}(-1)^{n}\zeta^{-r}q^{3n^{2}+3n+4nr+r^{2}+2r}\right)\] \[=\frac{q}{(q^{2};q^{2})_{\infty}}\left(\sum_{n,r\geq 0}(-1)^{n} \zeta^{r}q^{3n^{2}+3n+4nr+r^{2}+2r}-\sum_{n,r<0}(-1)^{n}\zeta^{r}q^{3n^{2}+3n+ 4nr+r^{2}+2r}\right),\]
where the last line follows upon replacing \((n,r)\) by \((-n-1,-r)\). This gives (1.10).
Finally, we treat (1.11). Again we use Bailey pairs. This time we require the fact that if \((\alpha_{n},\beta_{n})\) is a Bailey pair relative to \((a,q)\), then by [25, Theorem 10.1]
\[\sum_{n\geq 0}(b,c;q)_{n}\left(\frac{aq}{bc}\right)^{n}\beta_{n}=\frac{\left( \frac{aq}{b},\frac{aq}{c};q\right)_{\infty}}{\left(aq,\frac{aq}{bc};q\right)_ {\infty}}\sum_{n\geq 0}\frac{(b,c;q)_{n}\left(\frac{aq}{bc}\right)^{n}}{ \left(\frac{aq}{b},\frac{aq}{c};q\right)_{n}}\alpha_{n}, \tag{3.8}\]
along with the Bailey pair4 relative to \((q,q)\) from [2, equation (5.11)],
Footnote 4: We point out to the reader that the \(A_{n}\) in Andrews’ paper are related to the \(\alpha_{n}\) via \(\alpha_{n}=a^{n}q^{n^{2}}A_{n}\).
\[\alpha_{n}=\frac{q^{2n^{2}+n}\left(1-q^{2n+1}\right)}{1-q}\sum_{j=-n}^{n}(-1)^ {j}q^{-\frac{j(3j+1)}{2}},\qquad\beta_{n}=1.\]
Using this Bailey pair from (3.8) with \((b,c,q)=(-\zeta q,-\zeta^{-1}q,q^{2})\) and employing (3.3), we compute
\[\sum_{n\geq 0}\left(-\zeta q,-\zeta^{-1}q;q^{2}\right)_{n}q^{2n+1}\] \[=\frac{q\left(-\zeta q,-\zeta^{-1}q;q^{2}\right)_{\infty}}{(q^{2 };q^{2})_{\infty}^{2}}\sum_{n\geq 0}\frac{q^{4n^{2}+4n}\left(1-q^{4n+2} \right)}{(1+\zeta q^{2n+1})\left(1+\zeta^{-1}q^{2n+1}\right)}\sum_{j=-n}^{n}(- 1)^{j}q^{-j(3j+1)}\] \[=\frac{q\left(-\zeta q,-\zeta^{-1}q;q^{2}\right)_{\infty}}{(q^{2 };q^{2})_{\infty}^{2}}\left(\sum_{n\geq 0}\sum_{j=-n}^{n}\frac{(-1)^{j}q^{4n^{2}+4n-j(3j+1)} }{1+\zeta q^{2n+1}}-\zeta^{-1}\sum_{n\geq 0}\sum_{j=-n}^{n}\frac{(-1)^{j}q^{4n^{2}+6n+1-j(3j+1)} }{1+\zeta^{-1}q^{2n+1}}\right)\]
\[=\frac{q\left(-\zeta q,-\zeta^{-1}q;q^{2}\right)_{\infty}}{\left(q^{2};q^{2} \right)_{\infty}^{2}}\left(\sum_{n\geq 0}\sum_{j=-n}^{n}\frac{(-1)^{j}q^{4n^{2}+4n-j(3j+1 )}}{1+\zeta q^{2n+1}}-\sum_{n\leq 0}\sum_{j=n}^{-n}\frac{(-1)^{j}q^{4n^{2}-4n-j(3j+1 )}}{1+\zeta q^{2n-1}}\right).\]
Now letting \((n,j)=(\frac{r+s}{2},\frac{r-s}{2})\) in the first sum on the right-hand side, letting \((n,j)=(\frac{r+s+2}{2},\frac{r-s}{2})\) in the second sum, and then simplifying gives (1.11). This completes the proof of Theorem 1.2.
## 4. Proof of Theorem 1.3
In this section, we prove Theorem 1.3.
Proof of Theorem 1.3.: From Theorem 1.1 we have that
\[\sum_{n\geq 0}\mathrm{ou}(n)q^{n}=\sum_{n\geq 0}\frac{q^{2n+1}}{\left(q;q^{2} \right)_{n+1}^{2}}=\sum_{n\geq 0}(-1)^{n+1}q^{n(3n+2)}\left(1+q^{2n+1}\right)+ \frac{1}{\left(q;q^{2}\right)_{\infty}^{2}}\sum_{n\geq 0}(-1)^{n}q^{n^{2}+n}. \tag{4.1}\]
We apply Proposition 2.1. To begin, it is not hard to see that the \(\mathrm{ou}(n)\) are monotonic, since
\[(1-q)\sum_{n\geq 0}\mathrm{ou}(n)q^{n}=\sum_{n\geq 0}\frac{q^{2n+1}}{(q^{3};q^{2 })_{n}(q;q^{2})_{n+1}},\]
and the right-hand side has non-negative coefficients. Alternatively, note that for any \(n\in\mathbb{N}\)
\[(a_{1},\ldots,a_{r},\overline{c},b_{1},\ldots,b_{s})\mapsto(1,a_{1},\ldots,a_ {r},\overline{c},b_{1},\ldots,b_{s})\]
is an injective mapping from the set of odd unimodal sequences of weight \(n\) to the set of odd unimodal sequences of weight \(n+1\).
Next, let \(F_{2}(q)\) denote the right-hand side of (4.1). We analyze each of the terms separately with the goal of showing that as \(z\to 0\),
\[F_{2}\left(e^{-z}\right)\sim\frac{e^{\frac{\pi^{2}}{6z}}}{4}.\]
The modularity of the Dedekind \(\eta\)-function implies that
\[\frac{1}{\left(e^{-z};e^{-z}\right)_{\infty}}\sim\sqrt{\frac{z}{2\pi}}e^{ \frac{\pi^{2}}{6z}}\qquad\text{as }z\to 0. \tag{4.2}\]
Thus, with \(q=e^{-z}\), we have
\[\frac{1}{\left(q;q^{2}\right)_{\infty}^{2}}=\frac{\left(q^{2};q^{2}\right)_{ \infty}^{2}}{\left(q;q\right)_{\infty}^{2}}\sim\frac{e^{\frac{\pi^{2}}{6z}}}{2}. \tag{4.3}\]
We apply Proposition 2.2 to the two sums in (4.1). We start by splitting the first sum according to the parity of the summation variable in order to rewrite it as
\[\sum_{n\geq 0}(-1)^{n+1}q^{n(3n+2)}\left(1+q^{2n+1}\right)=q^{-\frac{1}{3}} \sum_{n\geq 0}\left(q^{12\left(n+\frac{2}{3}\right)^{2}}+q^{12\left(n+\frac{5}{6} \right)^{2}}-q^{12\left(n+\frac{1}{6}\right)^{2}}-q^{12\left(n+\frac{1}{3} \right)^{2}}\right).\]
Now we can apply Proposition 2.2 with \(f(z):=e^{-12z^{2}}\) and \(a\in\{\frac{2}{3},\frac{5}{6},\frac{1}{6},\frac{1}{3}\}\). The main terms from Proposition 2.2 cancel and using that \(q^{-\frac{1}{3}}=O(1)\) we obtain that the first sum is \(O(1)\).
For the second sum we write, using Proposition 2.2
\[\sum_{n\geq 0}(-1)^{n}e^{-\left(n^{2}+n\right)z} =\sum_{n\geq 0}e^{-\left(4n^{2}+2n\right)z}-\sum_{n\geq 0}e^{-\left((2n+1)^{2}+(2n+1)\right)z}\] \[=e^{\frac{z}{4}}\left(\sum_{n\geq 0}\left(e^{-4\left(n+\frac{1}{4}\right)^{2}z}-e^{-4\left(n+\frac{3}{4}\right)^{2}z}\right)\right)\sim-B_{1}\left(\frac{1}{4}\right)+B_{1}\left(\frac{3}{4}\right)=\frac{1}{2}.\]
Combining with (4.3), Proposition 2.1 with \(\lambda=\frac{1}{4}\), \(\alpha=\beta=0\) gives the claim.
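Spelling out this last step: Proposition 2.1 with \(\lambda=\frac{1}{4}\), \(\beta=0\), and \(\gamma=\frac{\pi^{2}}{6}\) yields
\[\operatorname{ou}(n)\sim\frac{\lambda\gamma^{\frac{1}{4}}}{2\sqrt{\pi}\,n^{\frac{3}{4}}}e^{2\sqrt{\gamma n}}=\frac{1}{4}\cdot\frac{\left(\frac{\pi^{2}}{6}\right)^{\frac{1}{4}}}{2\sqrt{\pi}\,n^{\frac{3}{4}}}e^{\pi\sqrt{\frac{2n}{3}}}=\frac{e^{\pi\sqrt{\frac{2n}{3}}}}{2^{\frac{13}{4}}3^{\frac{1}{4}}n^{\frac{3}{4}}},\]
which is (1.12).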
## 5. Proof of Theorem 1.4
In this section, we prove Theorem 1.4. As in the previous section, we wish to apply Proposition 2.1, though in this case the details are much more involved. To begin, we record the monotonicity of the sequence \(\operatorname{ou}^{*}(n)\).
**Lemma 5.1**.: _For \(n\geq 3\) we have that \(\operatorname{ou}^{*}(n)\geq\operatorname{ou}^{*}(n-1)\)._
Proof.: We give two proofs, one employing \(q\)-series and one using a combinatorial argument. For the \(q\)-series proof, first observe that
\[(1-q)\sum_{n\geq 0}\operatorname{ou}^{*}(n)q^{n} =(1-q)\sum_{n\geq 0}\left(-q;q^{2}\right)_{n}^{2}q^{2n+1}=q(1-q)+(1-q )\sum_{n\geq 1}\left(-q;q^{2}\right)_{n}^{2}q^{2n+1}\] \[=q(1-q)+q^{3}\left(1-q^{2}\right)(1+q)\sum_{n\geq 0}\left(-q^{3}; q^{2}\right)_{n}^{2}q^{2n}. \tag{5.1}\]
We now require the \(q\)-binomial theorem [18, Exercise 1.2]
\[\sum_{m=0}^{n}\frac{(q;q)_{n}w^{m}q^{\frac{m(m-1)}{2}}}{(q;q)_{m}(q;q)_{n-m}}=(-w;q)_{n} \tag{5.2}\]
and a transformation of Jackson [18, Appendix (III.4)],
\[\sum_{n\geq 0}\frac{(a,b;q)_{n}w^{n}}{(c,q;q)_{n}}=\frac{(aw;q)_{\infty}}{(w;q)_{ \infty}}\sum_{n\geq 0}\frac{\left(a,\frac{c}{b};q\right)_{n}(-bw)^{n}q^{ \frac{n(n-1)}{2}}}{(c,aw,q;q)_{n}}. \tag{5.3}\]
Using these, we rewrite the final sum in (5.1) as follows:
\[\sum_{n\geq 0}\left(-q^{3};q^{2}\right)_{n}^{2}q^{2n}=\sum_{n\geq 0 }\left(-q^{3};q^{2}\right)_{n}q^{2n}\sum_{m=0}^{n}\frac{\left(q^{2};q^{2} \right)_{n}q^{m^{2}+2m}}{(q^{2};q^{2})_{m}\left(q^{2};q^{2}\right)_{n-m}}\] \[\qquad=\sum_{m\geq 0}\frac{q^{m^{2}+2m}}{(q^{2};q^{2})_{m}}\sum_{n \geq m}\frac{\left(-q^{3},q^{2};q^{2}\right)_{n}q^{2n}}{(q^{2};q^{2})_{n-m}}= \sum_{m\geq 0}q^{m^{2}+4m}\left(-q^{3};q^{2}\right)_{m}\sum_{n\geq 0}\frac{ \left(-q^{2m+3},q^{2m+2};q^{2}\right)_{n}q^{2n}}{(q^{2};q^{2})_{n}}\] \[\qquad=\sum_{m\geq 0}\frac{q^{m^{2}+4m}\left(-q^{3};q^{2}\right)_{m }}{(q^{2};q^{2})_{m}}\sum_{n\geq 0}\frac{q^{n^{2}+4n+2nm}}{(q^{2};q^{2})_{n} \left(1-q^{2n+2m+2}\right)}. \tag{5.4}\]
Here the first equality follows from (5.2) and the final equality implied by (5.3) with \((a,b,c,w,q)=(q^{2m+2},-q^{2m+3},0,q^{2},q^{2})\). Combining (5.1) and (5.4) gives
\[(1-q)\sum_{n\geq 0}\operatorname{ou}^{*}(n)q^{n}=q(1-q)+q^{2}\left(1-q^{2} \right)(1+q)\sum_{n,m\geq 0}\frac{q^{n^{2}+4n+m^{2}+4m+2nm}\left(-q^{3};q^{2} \right)_{m}}{\left(q^{2};q^{2}\right)_{n}\left(q^{2};q^{2}\right)_{m}\left(1-q ^{2n+2m+2}\right)}.\]
It is straightforward to see that the coefficient of \(q^{n}\) on the right-hand side is non-negative for \(n\geq 3\).
Alternatively, one may deduce the montonicity using a combinatorial argument. For \(n\geq 3\) we define a mapping on odd strongly unimodal sequences of weight \(n\) as follows:
\[(a_{1},\ldots,a_{r},\overline{c},b_{1},\ldots,b_{s})\mapsto\begin{cases}(1,a_{ 1},\ldots,a_{r},\overline{c},b_{1},\ldots,b_{s})&\text{if $a_{1}\neq 1$,}\\ \left(a_{2},\ldots,a_{r},\overline{c+2},b_{1},\ldots,b_{s}\right)&\text{if $a_{1}=1$.} \end{cases}\]
It is not hard to see that in either case the image is an odd strongly unimodal sequence of weight \(n+1\) and that the mapping is injective. This gives the desired inequality \(\operatorname{ou}^{*}(n)\geq\operatorname{ou}^{*}(n-1)\) for \(n\geq 4\), and the case \(n=3\) follows from the fact that \(\operatorname{ou}^{*}(3)=1\) and \(\operatorname{ou}^{*}(2)=0\).
We now turn to the proof of Theorem 1.4.
Proof of Theorem 1.4.: Lemma 5.1 implies that the \(\mathrm{ou}^{*}(n)\) are monotonic. The rest of the proof is devoted to showing that the remaining conditions of Proposition 2.1 are satisfied. By (1.10)
\[\sum_{n\geq 0}\mathrm{ou}^{*}(n)q^{n}=\frac{q}{(q^{2};q^{2})_{\infty}}\left( \sum_{r,n\geq 0}-\sum_{r,n<0}\right)(-1)^{n}q^{3n^{2}+4nr+r^{2}+3n+2r}.\]
Denoting by \(F_{1}(q)\) the right-hand side, we aim to prove that
\[F_{1}\left(e^{-t}\right)\sim\frac{e^{\frac{\pi^{2}}{12t}}}{2}\quad\text{as }t\to 0, \qquad F_{1}\left(e^{-z}\right)\ll e^{\frac{\pi^{2}}{12|z|}}\quad\text{as }z\to 0, \tag{5.5}\]
with \(z=x+iy\) (\(x,y\in\mathbb{R}\), \(x>0\), \(|y|\leq\Delta x\), \(\Delta>0\)). We first consider the outside factor. By (4.2) with \(q=e^{-z}\) we have, as \(z\to 0\),
\[\frac{q}{(q^{2};q^{2})_{\infty}}\sim\sqrt{\frac{z}{\pi}}e^{\frac{\pi^{2}}{12z}}.\]
Next define
\[G(q):=\frac{1}{2}\sum_{n,r\in\mathbb{Z}}\left(\mathrm{sgn}\left(n+\tfrac{1}{2} \right)+\mathrm{sgn}\left(r+\tfrac{1}{2}\right)\right)(-1)^{n}q^{3n^{2}+4nr+r ^{2}+3n+2r},\]
so \(F_{1}(q)=\frac{G(q)}{(q^{2};q^{2})_{\infty}}\). We realize \(G\) as "holomorphic part" of an indefinite theta function. For this set
\[g(\tau):=2q^{\frac{3}{4}}G(q)=i\sum_{\mathbf{n}\in\mathbb{Z}^{2}+\mathbf{a}}(\mathrm{ sgn}(B(\mathbf{c_{1}},\mathbf{n}))-\mathrm{sgn}(B(\mathbf{c_{2}},\mathbf{n})))e^{2\pi iB(\mathbf{b}, \mathbf{n})}q^{Q(\mathbf{n})},\]
where \(Q(\mathbf{n}):=3n_{1}^{2}+4n_{1}n_{2}+n_{2}^{2}\), \(\mathbf{c_{1}}:=(1,-2)^{T}\), \(\mathbf{c_{2}}:=(2,-3)^{T}\), \(\mathbf{b}:=(-\tfrac{1}{4},\tfrac{1}{2})^{T}\), and \(\mathbf{a}:=(\tfrac{1}{2},0)^{T}\).
Using (2.2), we may decompose
\[g(\tau)=\Theta(\tau)+\Theta^{-}(\tau),\]
where
\[\Theta(\tau) :=i\sum_{\mathbf{n}\in\mathbb{Z}^{2}+\mathbf{a}}\left(E\left(\frac{B(\bm {c_{1}},\mathbf{n})}{\sqrt{-Q(\mathbf{c_{1}})}}\sqrt{v}\right)-E\left(\frac{B(\mathbf{c_{2 }},\mathbf{n})}{\sqrt{-Q(\mathbf{c_{2}})}}\sqrt{v}\right)\right)e^{2\pi iB(\mathbf{b},\mathbf{ n})}q^{Q(\mathbf{n})},\] \[\Theta^{-}(\tau) :=\sum_{\mathbf{n}\in\mathbb{Z}^{2}+\mathbf{a}}\left(\mathrm{sgn}(n_{1}) \beta\left(4n_{1}^{2}v\right)+\mathrm{sgn}(n_{2})\beta\left(\tfrac{4n_{2}^{2} v}{3}\right)\right)(-1)^{n_{1}-\tfrac{1}{2}}q^{3n_{1}^{2}+4n_{1}n_{2}+n_{2}^{2}}.\]
In fact the identity holds termwise and we use that
\[3n_{1}^{2}+4n_{1}n_{2}+n_{2}^{2} =Q(\mathbf{n}),\quad(-1)^{n_{1}-\tfrac{1}{2}}=-ie^{2\pi iB(\mathbf{b}, \mathbf{n})},\quad n_{1}=-\tfrac{1}{2}B(\mathbf{c_{1}},\mathbf{n}),\quad n_{2}=\tfrac{1}{2 }B(\mathbf{c_{2}},\mathbf{n}),\] \[Q(\mathbf{c_{1}}) =-1,\qquad Q(\mathbf{c_{2}})=-3.\]
We determine the asymptotic behavior of \(\Theta\) and \(\Theta^{-}\) separately: For \(\Theta\) we use modularity and for \(\Theta^{-}\) the Euler-Maclaurin summation formula.
We start with \(\Theta\). We have that (in the notation of Subsection 2.2)
\[\Theta(\tau)=i\vartheta_{\mathbf{a},\mathbf{b}}(\tau).\]
We apply Theorem 2.4 to obtain
\[\vartheta_{\mathbf{a},\mathbf{b}}(\tau)=-\frac{1}{2\tau}\sum_{\mathbf{\ell}\in\left\{ \mathbf{0},(\tfrac{1}{2},0),(0,\tfrac{1}{2}),(\tfrac{1}{2},\tfrac{1}{2})\right\}} \vartheta_{\mathbf{b}+\mathbf{\ell},-\mathbf{a}}\left(-\frac{1}{\tau}\right).\]
Now write
\[\vartheta_{\boldsymbol{b}+\boldsymbol{\ell},-\boldsymbol{a}}(\tau)=-\sum_{ \boldsymbol{n}\in\mathbb{Z}^{2}+\boldsymbol{b}+\boldsymbol{\ell}}\left(E\left(2n_ {1}\sqrt{v}\right)+E\left(\tfrac{2n_{2}\sqrt{v}}{\sqrt{3}}\right)\right)e^{2 \pi iB(-\boldsymbol{a},\boldsymbol{n})}q^{Q(\boldsymbol{n})}. \tag{5.6}\]
Using that \(E(x)\sim\operatorname{sgn}(x)\) as \(|x|\to\infty\), the terms in (5.6) decay exponentially.
We next investigate \(\Theta^{-}\) and write it as
\[\Theta^{-}(\tau) =\sum_{\boldsymbol{n}\in\mathbb{Z}^{2}+\boldsymbol{a}}\left( \operatorname{sgn}(n_{1})\beta\left(4n_{1}^{2}v\right)+\operatorname{sgn}(n_ {2})\beta\left(\tfrac{4n_{2}^{2}v}{3}\right)\right)(-1)^{n_{1}-\tfrac{1}{2}}q^ {3n_{1}^{2}+4n_{1}n_{2}+n_{2}^{2}}\] \[=\sum_{\boldsymbol{n}\in\mathbb{Z}^{2}}\left(\operatorname{sgn} \left(n_{1}+\tfrac{1}{2}\right)\beta\left(4\left(n_{1}+\tfrac{1}{2}\right)^{2 }v\right)+\operatorname{sgn}(n_{2})\beta\left(\tfrac{4n_{2}^{2}v}{3}\right) \right)(-1)^{n_{1}}\] \[\times q^{3\left(n_{1}+\tfrac{1}{2}\right)^{2}+4\left(n_{1}+ \tfrac{1}{2}\right)n_{2}+n_{2}^{2}}\] \[=\sum_{\begin{subarray}{c}n_{1}\geq 0\\ n_{2}\geq 1\end{subarray}}(-1)^{n_{1}}\left(\beta\left(4\left(n_{1}+\tfrac{1} {2}\right)^{2}v\right)+\beta\left(\tfrac{4}{3}n_{2}^{2}v\right)\right)q^{3 \left(n_{1}+\tfrac{1}{2}\right)^{2}+4\left(n_{1}+\tfrac{1}{2}\right)n_{2}+n_{ 2}^{2}}\] \[+\sum_{\begin{subarray}{c}n_{1}\geq 0\\ n_{2}\geq 1\end{subarray}}(-1)^{-n_{1}-1}\left(-\beta\left(4\left(-n_{1}-1+ \tfrac{1}{2}\right)^{2}v\right)+\beta\left(\tfrac{4}{3}n_{2}^{2}v\right) \right)q^{3\left(-n_{1}-1+\tfrac{1}{2}\right)^{2}+4\left(-n_{1}-1+\tfrac{1}{2 }\right)n_{2}+n_{2}^{2}}\] \[+\sum_{\begin{subarray}{c}n_{1}\geq 0\\ n_{2}\geq 1\end{subarray}}(-1)^{n_{1}}\left(\beta\left(4\left(n_{1}+\tfrac{1} {2}\right)^{2}v\right)-\beta\left(\tfrac{4}{3}(-n_{2})^{2}v\right)\right)q^{3 \left(n_{1}+\tfrac{1}{2}\right)^{2}+4\left(n_{1}+\tfrac{1}{2}\right)(-n_{2})+ (-n_{2})^{2}}\] \[+\sum_{\begin{subarray}{c}n_{1}\geq 0\\ n_{2}\geq 1\end{subarray}}(-1)^{-n_{1}-1}\left(-\beta\left(4\left(-n_{1}-1+ \tfrac{1}{2}\right)^{2}v\right)-\beta\left(\tfrac{4}{3}(-n_{2})^{2}v\right)\right)\] \[\times q^{3\left(-n_{1}-1+\tfrac{1}{2}\right)^{2}+4\left(-n_{1}- 1+\tfrac{1}{2}\right)(-n_{2})+(-n_{2})^{2}}\] \[=2\sum_{\begin{subarray}{c}n_{1}\geq 0\\ n_{2}\geq 1\end{subarray}}(-1)^{n_{1}}\left(\beta\left(4\left(n_{1}+\tfrac{1} {2}\right)^{2}v\right)+\beta\left(\tfrac{4n_{2}^{2}v}{3}\right)\right)q^{3 \left(n_{1}+\tfrac{1}{2}\right)^{2}+4\left(n_{1}+\tfrac{1}{2}\right)n_{2}+n_{ 2}^{2}}\] \[+2\sum_{\begin{subarray}{c}n_{1}\geq 0\\ n_{2}\geq 1\end{subarray}}(-1)^{n_{1}}\left(\beta\left(4\left(n_{1}+\tfrac{1} {2}\right)^{2}v\right)-\beta\left(\tfrac{4n_{2}^{2}v}{3}\right)\right)q^{3 \left(n_{1}+\tfrac{1}{2}\right)^{2}-4\left(n_{1}+\tfrac{1}{2}\right)n_{2}+n_{ 2}^{2}}\] \[=2\sum_{\pm}\sum_{\delta\in\{0,1\}}(-1)^{\delta}\sum_{n_{1},n_{2} \geq 0}\left(\beta\left(16\left(n_{1}+\tfrac{\delta}{2}+\tfrac{1}{4}\right) ^{2}v\right)\pm\beta\left(\tfrac{4}{3}(n_{2}+1)^{2}v\right)\right)\] \[\times q^{12\left(n_{1}+\tfrac{\delta}{2}+\tfrac{1}{4}\right)^{2} \pm 8\left(n_{1}+\tfrac{\delta}{2}+\tfrac{1}{4}\right)(n_{2}+1)+(n_{2}+1)^{2}}.\]
We now first show the first asymptotic in (5.5). For this, let \(\tau=\tfrac{it}{2\pi}\). Then
\[\Theta^{-}\left(\tfrac{it}{2\pi}\right)=2\sum_{\pm}\sum_{\delta\in\{0,1\}}(-1)^{ \delta}\sum_{n_{1},n_{2}\geq 0}f_{\pm}\left(\left(n_{1}+\tfrac{\delta}{2}+\tfrac{1}{4},n_{2}+1 \right)\sqrt{t}\right),\]
where
\[f_{\pm}(x_{1},x_{2}):=\left(\beta\left(\tfrac{8x_{1}^{2}}{\pi}\right)\pm\beta \left(\tfrac{2x_{2}^{2}}{3\pi}\right)\right)e^{-12x_{1}^{2}\mp 8x_{1}x_{2}-x_{2}^{2}}.\]
We now use Proposition 2.3. The term with the double integral vanishes (the two \(\delta\)-terms cancel). The second term contributes
\[-\frac{2}{\sqrt{t}}\sum_{\pm}\sum_{\delta\in\{0,1\}}(-1)^{\delta}\sum_{n_{1}=0}^{ N-1}\frac{B_{n_{1}+1}\left(\tfrac{\delta}{2}+\tfrac{1}{4}\right)t^{\frac{n_{1}}{2}} }{(n_{1}+1)!}\int_{0}^{\infty}f_{\pm}^{(n_{1},0)}(0,w_{2})dw_{2}.\]
By combining the terms for \(\delta=0\) and \(\delta=1\) and using properties of Bernoulli polynomials, it is not hard to see that only even \(n_{1}\) survive. The terms with \(n_{1}\geq 1\) yield an overall contribution that is \(O(t)\). Using that \(\beta(0)=1\), the term \(n_{1}=0\) gives
\[-\frac{4}{\sqrt{t}}\sum_{\pm}B_{1}\left(\tfrac{1}{4}\right)\int_{0}^{\infty} \left(1\pm\beta\left(\tfrac{2x_{2}^{2}}{3\pi}\right)\right)e^{-x_{2}^{2}}dx_{2 }=\frac{2}{\sqrt{t}}\int_{0}^{\infty}e^{-w_{2}^{2}}dw_{2}=\sqrt{\tfrac{\pi}{t}}.\]
For the third term, we have
\[-\frac{2}{\sqrt{t}}\sum_{\pm}\sum_{\delta\in\{0,1\}}(-1)^{\delta}\sum_{n_{2}=0 }^{N-1}\frac{B_{n_{2}+1}(1)}{(n_{2}+1)!}t^{\frac{n_{2}}{2}}\int_{0}^{\infty}f_ {\pm}^{(0,n_{2})}(w_{1},0)dw_{1}=0,\]
because the \(\delta\)-terms cancel. The final term in Proposition 2.3 is in \(O(t)\). This gives that the first asymptotic in (5.5) holds.
We next need to show that the second asymptotic in (5.5) holds. For this, we need to prove that
\[\sum_{\pm}\sum_{\delta\in\{0,1\}}(-1)^{\delta}\sum_{n_{1},n_{2}\geq 0 }\left(\beta\left(\tfrac{8}{\pi}\left(n_{1}+\tfrac{\delta}{2}+\tfrac{1}{4} \right)^{2}x\right)\pm\beta\left(\tfrac{2}{3\pi}(n_{2}+1)^{2}x\right)\right)\\ \times e^{-\left(12\left(n_{1}+\tfrac{\delta}{2}+\tfrac{1}{4} \right)^{2}\pm 8\left(n_{1}+\tfrac{\delta}{2}+\tfrac{1}{4}\right)(n_{2}+1)+(n_{2} +1)^{2}\right)z}\ll|z|^{\frac{1}{2}}. \tag{5.7}\]
The proof follows by a lengthy calculation from the following refinement of Proposition 2.3 in the one-dimensional case, namely (see (5.8) of [10])
\[\sum_{n\geq 0}f((n+a)z)=\frac{1}{z}\int_{0}^{\infty}f(w)dw-\sum_{n=0}^{N-1} \frac{B_{n+1}(a)f^{(n)}(0)}{(n+1)!}z^{n}+\mathcal{E}(a;z), \tag{5.8}\]
where (\(C_{R}(0)\) is the circle around \(0\) with radius \(R\), \(\widetilde{B}_{N}(x):=B_{N}(x-\lfloor x\rfloor)\))
\[\mathcal{E}(a;z):=-\sum_{k\geq N}\frac{f^{(k)}(0)a^{k+1}}{(k+1)!} z^{k}-\frac{z^{N}}{2\pi i}\sum_{n=0}^{N-1}\frac{B_{n+1}(0)a^{N-n}}{(n+1)!} \int_{C_{R}(0)}\frac{f^{(n)}(w)}{w^{N-n}(w-az)}dw\\ -\frac{(-1)^{N}z^{N-1}}{N!}\int_{az}^{z\infty}f^{(N)}(w)\widetilde {B}_{N}\left(\frac{w}{z}-a\right)dw.\]
For the reader's convenience we defer the full proof of (5.7) to Appendix A.
Combining and using Proposition 2.1 with \(\gamma=\tfrac{\pi^{2}}{12}\), \(\beta=0\), and \(\lambda=\tfrac{1}{2}\) gives the claim.
## 6. Congruences for \(\operatorname{ou}^{*}(n)\) modulo \(4\) and the proof of Theorem 1.5
In this section we prove Theorem 6.2 below. Note that this reduces to Theorem 1.5 for \(k=0\). First, we determine the parity of \(\operatorname{ou}^{*}(n)\).
**Proposition 6.1**.: _For \(n\in\mathbb{N}\), we have that \(\operatorname{ou}^{*}(n)\) is odd if and only if \(6n-2\) is a square._
Proof.: We require a classical \(q\)-series identity from Ramanujan's lost notebook [5, Entry 9.5.2],
\[\sum_{n\geq 0}\left(q;q^{2}\right)_{n}q^{n}=\sum_{n\geq 0}(-1)^{n}q^{3n^{2}+2 n}\left(1+q^{2n+1}\right).\]
Using this along with (1.8), we have
\[\sum_{n\geq 0}\operatorname{ou}^{*}(n)q^{n} =\sum_{n\geq 0}\left(-q;q^{2}\right)_{n}^{2}q^{2n+1}\equiv\sum_{n \geq 0}\left(q^{2};q^{4}\right)_{n}q^{2n+1}=\sum_{n\geq 0}(-1)^{n}q^{6n^{2}+4n+1} \left(1+q^{4n+2}\right)\] \[\equiv\sum_{n\in\mathbb{Z}}q^{6n^{2}+4n+1}\ (\operatorname{mod}2).\]
Now, in the two extreme members of this chain of congruences, we replace \(q\) by \(q^{6}\) and multiply by \(q^{-2}\) to obtain
\[\sum_{n\geq 0}\operatorname{ou}^{*}(n)q^{6n-2}\equiv\sum_{n\in\mathbb{Z}}q^{(6n +2)^{2}}\ (\operatorname{mod}2),\]
and the result follows.
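Proposition 6.1 is easy to check numerically from the generating function (1.8). The following Python sketch (ours, added for illustration; the helper name is not from the paper) expands \(\sum_{n\geq 0}(-q;q^{2})_{n}^{2}q^{2n+1}\) and verifies the parity criterion for \(n\leq 60\).

```python
import math

def ou_star_coeffs(N):
    """Coefficients of sum_{n>=0} (-q;q^2)_n^2 q^(2n+1) up to q^N, cf. (1.8)."""
    total = [0] * (N + 1)
    prod = [1] + [0] * N                 # running (-q;q^2)_n, truncated at degree N
    n = 0
    while 2 * n + 1 <= N:
        if n >= 1:                        # multiply prod by (1 + q^(2n-1)) in place
            m = 2 * n - 1
            for d in range(N, m - 1, -1):
                prod[d] += prod[d - m]
        for i in range(N + 1):            # add q^(2n+1) * prod^2, truncated at q^N
            if prod[i] == 0:
                continue
            for j in range(N + 1):
                k = 2 * n + 1 + i + j
                if k > N:
                    break
                total[k] += prod[i] * prod[j]
        n += 1
    return total

N = 60
ou = ou_star_coeffs(N)
for n in range(1, N + 1):
    # ou*(n) is odd exactly when 6n - 2 is a perfect square
    assert (ou[n] % 2 == 1) == (math.isqrt(6 * n - 2) ** 2 == 6 * n - 2)
print(ou[1:13])
```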
We now state the main result of this section.
**Theorem 6.2**.: _Let \(k\in\mathbb{N}\) and for \(r\) with \(1\leq r\leq k+1\), let \(p_{r}\geq 5\) be prime. For any \(j\not\equiv 0\ (\operatorname{mod}p_{k+1})\), if \(p_{k+1}\not\equiv 7,13\ (\operatorname{mod}24)\) or \(p_{k+1}\equiv 7,13\ (\operatorname{mod}24)\) and \((\frac{3j}{p_{k+1}})=-1\), then we have:_
1. _If_ \(j\) _is odd, then we have_ \[\operatorname{ou}^{*}\left(4p_{1}^{2}\cdots p_{k+1}^{2}n+2p_{1}^{2}\cdots p_{k }^{2}p_{k+1}j+\frac{8p_{1}^{2}\cdots p_{k+1}^{2}+1}{3}\right)\equiv 0\ \ (\operatorname{mod}4)\,.\]
2. _If_ \(j\) _is even, then we have_ \[\operatorname{ou}^{*}\left(4p_{1}^{2}\cdots p_{k+1}^{2}n+2p_{1}^{2}\cdots p_{k }^{2}p_{k+1}j+\frac{2p_{1}^{2}\cdots p_{k+1}^{2}+1}{3}\right)\equiv 0\ \ (\operatorname{mod}4)\,.\]
To make the proof smoother, we first prove a simple lemma.
**Lemma 6.3**.: _For \(n\in\mathbb{N}\) we have that modulo \(4\),_
\[\left(-q;q^{2}\right)_{n}^{2}-\left(-q^{2};q^{4}\right)_{n}\]
_is an odd polynomial._
Proof.: We prove the claim by induction. The case \(n=1\) is clear.
Now assume that the claim holds for \(n\in\mathbb{N}\). Then
\[\left(-q;q^{2}\right)_{n+1}^{2}-\left(-q^{2};q^{4}\right)_{n+1}=\left(1+q^{4n +2}\right)\left(\left(-q;q^{2}\right)_{n}^{2}-\left(-q^{2};q^{4}\right)_{n} \right)+2q^{2n+1}\left(-q;q^{2}\right)_{n}^{2}.\]
By the induction assumption the first term is an odd polynomial \((\operatorname{mod}4)\). Thus we are left to show that \((-q;q^{2})_{n}^{2}\) is an even polynomial \((\operatorname{mod}2)\). Now we have the even polynomial
\[\left(-q;q^{2}\right)_{n}^{2}\equiv\left(-q^{2};q^{4}\right)_{n}\ ( \operatorname{mod}2).\qed\]
Next we use a result of Chen and Chen [15], who employed the theory of class numbers to prove congruences modulo \(4\) for \(\mathcal{EO}(n)\), defined via the infinite product
\[\sum_{n\geq 0}\mathcal{EO}(n)q^{n}:=\frac{\left(q^{4};q^{4}\right)_{\infty}^{3}} {\left(q^{2};q^{2}\right)_{\infty}^{2}}. \tag{6.1}\]
**Theorem 6.4** ([15]).: _Let \(k\in\mathbb{N}_{0}\). For \(i\) with \(1\leq i\leq k+1\), let \(p_{i}\geq 5\) be prime. For \(j\not\equiv 0\ (\operatorname{mod}p_{k+1})\), if \(p_{k+1}\not\equiv 7,13\ (\operatorname{mod}24)\) or \(p_{k+1}\equiv 7,13\ (\operatorname{mod}24)\) and \((\frac{3j}{p_{k+1}})=-1\), then for \(n\in\mathbb{N}_{0}\)_
\[\mathcal{EO}\left(p_{1}^{2}\cdots p_{k+1}^{2}n+p_{1}^{2}\cdots p_{k}^{2}p_{k+1} j+\frac{p_{1}^{2}\cdots p_{k+1}^{2}-1}{3}\right)\equiv 0\ (\operatorname{mod}4). \tag{6.2}\]
We are now ready to prove Theorem 6.2.
Proof of Theorem 6.2.: First recall the third order mock theta function,
\[\nu(q):=\sum_{n\geq 0}\left(q;q^{2}\right)_{n}(-q)^{n}=:\sum_{n\geq 0}c(n)q^{n}.\]
Using Lemma 6.3 we have that
\[\sum_{n\geq 0}\operatorname{ou}^{*}(n)q^{n}-\sum_{n\geq 0}(-1)^{n}c(n)q^{2n+1}= \sum_{n\geq 0}\left(-q;q^{2}\right)_{n}^{2}q^{2n+1}-\sum_{n\geq 0}\left(-q^{2};q^{ 4}\right)_{n}q^{2n+1}\]
is supported on even exponents of \(q\) modulo \(4\). Therefore
\[\operatorname{ou}^{*}(2n+1)\equiv(-1)^{n}c(n)\ \left(\operatorname{mod}4 \right). \tag{6.3}\]
Next we note that by [17, equation (26.88)] we have
\[\sum_{n\geq 0}c(2n)q^{2n}=\frac{\left(q^{4};q^{4}\right)_{\infty}^{3}}{\left(q ^{2};q^{2}\right)_{\infty}^{2}}.\]
This is the same product as in (6.1), and so \(c(n)=\mathcal{EO}(n)\) if \(n\) is even. Therefore the congruences for \(\mathcal{EO}(n)\) in Theorem 6.4 imply congruences for \(c(n)\). The argument on the left-hand side of (6.2) is even if and only if \(n\equiv j\ (\operatorname{mod}2)\). So for \(j\) even, we let \(n\mapsto 2n\) and for \(j\) odd, we let \(n\mapsto 2n+1\) to obtain:
1. If \(j\) is odd, then \[c\left(2p_{1}^{2}\cdots p_{k+1}^{2}n+p_{1}^{2}\cdots p_{k}^{2}p_{k+1}j+\tfrac{ 4p_{1}^{2}\cdots p_{k+1}^{2}-1}{3}\right)\equiv 0\ (\operatorname{mod}4).\]
2. If \(j\) is even, then \[c\left(2p_{1}^{2}\cdots p_{k+1}^{2}n+p_{1}^{2}\cdots p_{k}^{2}p_{k+1}j+\tfrac{ p_{1}^{2}\cdots p_{k+1}^{2}-1}{3}\right)\equiv 0\ (\operatorname{mod}4).\]
By (6.3) the theorem follows.
## 7. Conclusion
This paper contains a preliminary investigation of odd unimodal sequences, establishing generating functions, basic asymptotics, and some congruence properties modulo \(4\). While other variants of unimodal sequences have arisen in the literature (e.g. [8, 9, 21]), odd unimodal sequences are perhaps the most natural. Below we leave two ideas for further study. The motivated reader will surely find many more.
First, improve upon the asymptotics in Theorems 1.3 and 1.4 to find Rademacher-type formulas for \(\operatorname{ou}(n)\) and \(\operatorname{ou}^{*}(n)\). Up to simple terms, the generating function for \(\operatorname{ou}(n)\) is a mixed false theta function, while the generating function for \(\operatorname{ou}^{*}(n)\) is a mixed mock theta function. In both cases the weight is \(\frac{1}{2}\). One could now use the Circle Method to deduce asymptotic formulas for \(\operatorname{ou}(n)\) and \(\operatorname{ou}^{*}(n)\). However, an exact formula is out of reach with this method because the weight is too large. To obtain an exact formula one would need to find new methods (like Poincaré-type series).
Second, it appears that the arithmetic progressions in some of the congruences in Theorem 6.2 can be enlarged. For example, with \(k=0\) (i.e., in the case of Theorem 1.5) the congruences corresponding to the primes \(5,7,\) and \(11\) are
\[\operatorname{ou}^{*}(100n+r) \equiv 0\ \left(\operatorname{mod}4\right)\text{for }r\in\{37,57,77,97\}, \tag{7.1}\] \[\operatorname{ou}^{*}(196n+r) \equiv 0\ \left(\operatorname{mod}4\right)\text{for }r\in\{61,89,145\},\] (7.2) \[\operatorname{ou}^{*}(484n+r) \equiv 0\ \left(\operatorname{mod}4\right)\text{for }r\in\{125,169,213,257,301,345,389,433,477,521\}. \tag{7.3}\]
Computations suggest that the cases \(r\in\{37,97\}\) of (7.1) are special cases of the congruences
\[\operatorname{ou}^{*}(50n+r)\equiv 0\;\;(\operatorname{mod}4)\,\text{for}\;r\in\{37, 47\},\]
the cases \(r\in\{61,145\}\) of (7.2) are special cases of the congruences
\[\operatorname{ou}^{*}(98n+r)\equiv 0\;\;(\operatorname{mod}4)\,\text{for}\;r\in\{4 7,61\},\]
and all of the congruences in (7.3) are special cases of congruences in the corresponding arithmetic progressions modulo \(242\). We leave it as an open problem to establish exactly which of the congruences in Theorem 6.2 (or Theorem 1.5) can be strengthened in this way.
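These congruences are easy to probe numerically. The sketch below (ours; it repeats the coefficient routine from the sketch after Proposition 6.1) verifies the proven congruences (7.1)-(7.2) for all arguments up to \(400\) and prints the residues modulo \(4\) along the conjecturally refined progressions modulo \(50\) and \(98\).

```python
def ou_star_coeffs(N):
    """Same routine as in the sketch after Proposition 6.1."""
    total = [0] * (N + 1)
    prod = [1] + [0] * N
    n = 0
    while 2 * n + 1 <= N:
        if n >= 1:
            m = 2 * n - 1
            for d in range(N, m - 1, -1):
                prod[d] += prod[d - m]
        for i in range(N + 1):
            if prod[i] == 0:
                continue
            for j in range(N + 1):
                k = 2 * n + 1 + i + j
                if k > N:
                    break
                total[k] += prod[i] * prod[j]
        n += 1
    return total

N = 400                       # a few seconds in pure Python
ou = ou_star_coeffs(N)

# proven congruences (7.1) and (7.2)
for r in (37, 57, 77, 97):
    assert all(ou[m] % 4 == 0 for m in range(r, N + 1, 100))
for r in (61, 89, 145):
    assert all(ou[m] % 4 == 0 for m in range(r, N + 1, 196))

# conjectured refinements modulo 50 and 98: inspect the residues rather than assert
print([ou[m] % 4 for r in (37, 47) for m in range(r, N + 1, 50)])
print([ou[m] % 4 for r in (47, 61) for m in range(r, N + 1, 98)])
```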
## Appendix A
Here we prove the bound (5.7), which yields the second asymptotic in (5.5). We require the relation \(\beta(x)=\operatorname{erfc}(\sqrt{\pi x})\) and use the bound \(\operatorname{erfc}(x)\ll 1\). To use (5.8), we write
\[\Theta^{-}\left(\tfrac{i\pi}{2\pi}\right)= -2\sum_{\pm}\sum_{\delta\in\{0,1\}}(-1)^{\delta}\] \[\left(\sum_{n_{2}\geq 0}e^{-(n_{2}+1)^{2}z}\sum_{n_{1}\geq 0} \beta\left(\tfrac{8}{\pi}\left(n_{1}+\tfrac{\delta}{2}+\tfrac{1}{4}\right)^{2 }x\right)e^{-12\left(n_{1}+\tfrac{\delta}{2}+\tfrac{1}{4}\right)^{2}z}e^{ \mp 8\left(n_{1}+\tfrac{\delta}{2}+\tfrac{1}{4}\right)(n_{2}+1)z}\right.\] \[\left.\qquad\pm\sum_{n_{2}\geq 0}\beta\left(\tfrac{2}{3\pi}(n_{2} +1)^{2}x\right)e^{-(n_{2}+1)^{2}z}\sum_{n_{1}\geq 0}e^{-12\left(n_{1}+\tfrac{ \delta}{2}+\tfrac{1}{4}\right)^{2}z\mp 8\left(n_{1}+\tfrac{\delta}{2}+\tfrac{1}{4} \right)(n_{2}+1)z}\right).\] (A.1)
We write the first term as
\[-2\sum_{\pm}\sum_{\delta\in\{0,1\}}(-1)^{\delta}\sum_{n_{2}\geq 0}e^{-(n_{2}+1)^ {2}z}\sum_{n_{1}\geq 0}H_{n_{2},\pm}\left(\left(n_{1}+\tfrac{\delta}{2}+ \tfrac{1}{4}\right)\sqrt{x}\right),\]
where
\[H_{n_{2},\pm}(w_{1}):=\beta\left(\tfrac{8w_{1}^{2}}{\pi}\right)e^{-12\tfrac{z} {x}w_{1}^{2}\mp 8(n_{2}+1)\tfrac{z}{\sqrt{x}}w_{1}}.\]
Now (5.8) with \(N=0\), gives that
\[\sum_{n_{1}\geq 0}H_{n_{2},\pm}\left(\left(n_{1}+\tfrac{\delta}{2}+\tfrac{1}{4} \right)\sqrt{x}\right)=\frac{1}{\sqrt{x}}\int_{0}^{\infty}H_{n_{2},\pm}(w_{1} )dw_{1}+\mathcal{E}\left(\tfrac{\delta}{2}+\tfrac{1}{4};x\right),\]
where
\[\mathcal{E}(a;x):=-\sum_{k_{1}\geq 0}\frac{H_{n_{2},\pm}^{(k_{1})}(0)}{(k_{1}+1 )!}a^{k_{1}+1}x^{\frac{k_{1}}{2}}-\frac{1}{\sqrt{x}}\int_{a\sqrt{x}}^{\infty} H_{n_{2},\pm}(w_{1})\widetilde{B}_{0}\left(\tfrac{w_{1}}{\sqrt{x}}-a\right)dw_{1}.\] (A.2)
Note that \(\widetilde{B}_{0}(x)=1\). The contribution from the main term vanishes (because of the \((-1)^{\delta}\)).
The first term in the error contributes
\[2\sum_{\pm}\sum_{\delta\in\{0,1\}}(-1)^{\delta}\sum_{n_{2}\geq 0 }e^{-(n_{2}+1)^{2}z}\sum_{k_{1}\geq 0}\left[\left(\tfrac{\partial}{\partial w_{1}} \right)^{k_{1}}H_{n_{2},\pm}(w_{1})\right]_{w_{1}=0}\frac{\left(\tfrac{\delta} {2}+\tfrac{1}{4}\right)^{k_{1}+1}}{(k_{1}+1)!}x^{\frac{k_{1}}{2}}\\ =2\sum_{\pm}\sum_{\delta\in\{0,1\}}(-1)^{\delta}\sum_{k_{1}\geq 0} \frac{\left(\tfrac{\delta}{2}+\tfrac{1}{4}\right)^{k_{1}+1}}{(k_{1}+1)!}x^{ \frac{k_{1}}{2}}\sum_{\begin{subarray}{c}0\leq\ell_{1},\ell_{2}\leq k_{1}\\ \ell_{1}+\ell_{2}=k_{1}\end{subarray}}\binom{k_{1}}{\ell_{1}}\left[\left( \tfrac{\partial}{\partial w_{1}}\right)^{\ell_{1}}\left(\beta\left(\tfrac{8w_ {1}^{2}}{\pi}\right)e^{-12\tfrac{z}{x}w_{1}^{2}}\right)\right]_{w_{1}=0}\\ \times\sum_{n_{2}\geq 0}\left(\mp 8(n_{2}+1)\frac{z}{\sqrt{x}} \right)^{\ell_{2}}e^{-(n_{2}+1)^{2}z}.\] (A.3)
Now only \(\ell_{2}\) even survive (otherwise the \(\pm\) cancels). We now determine the asymptotic behaviors of (\(\ell_{2}\) even)
\[\sum_{n_{2}\geq 0}(n_{2}+1)^{\ell_{2}}e^{-(n_{2}+1)^{2}z}=(-1)^{\frac{\ell_{2}}{2 }}\left(\frac{\partial}{\partial z}\right)^{\frac{\ell_{2}}{2}}\sum_{n\geq 1}e^{-n ^{2}z}.\]
For this recall the modular theta function
\[\vartheta(\tau):=\sum_{n\in\mathbb{Z}}e^{\pi in^{2}\tau}.\]
It satisfies
\[\vartheta(\tau)=(-i\tau)^{-\frac{1}{2}}\vartheta\left(-\tfrac{1}{\tau}\right).\]
Thus
\[\sum_{n\geq 1}e^{-n^{2}z}=\tfrac{1}{2}\sum_{n\in\mathbb{Z}}e^{-n^{2}z}- \tfrac{1}{2}=\tfrac{1}{2}\vartheta\left(\tfrac{iz}{\pi}\right)-\tfrac{1}{2}= \tfrac{1}{2}\left(\tfrac{z}{\pi}\right)^{-\frac{1}{2}}\vartheta\left(\tfrac{ \pi i}{z}\right)-\tfrac{1}{2}.\] (A.4)
The second term contributes to (A.3) (it only survives if \(\ell_{2}=0\))
\[-\tfrac{1}{2}\sum_{\pm}\sum_{\delta\in\{0,1\}}(-1)^{\delta}\sum_ {k_{1}\geq 0}\frac{\left(\tfrac{\delta}{2}+\tfrac{1}{4}\right)^{k_{1}+1}}{(k_{1 }+1)!}x^{\frac{k_{1}}{2}}\left[\left(\tfrac{\partial}{\partial w_{1}}\right) ^{k_{1}}\left(\beta\left(\tfrac{8w_{1}^{2}}{\pi}\right)e^{-12\frac{x}{x}w_{1} ^{2}}\right)\right]_{w_{1}=0}\\ \ll\sum_{k_{1}\geq 0}\frac{x^{\frac{k_{1}}{2}}}{(k_{1}+1)!} \left(1+\left|\frac{z}{x}\right|^{k_{1}}\right)\left[\left(\tfrac{\partial}{ \partial w_{1}}\right)^{k_{1}}\left(\beta\left(\tfrac{8w_{1}^{2}}{\pi}\right) e^{-12w_{1}^{2}}\right)\right]_{w_{1}=0}\ll 1.\]
The first term from (A.4) contributes to (A.3) (noting that \(\ell\), \(k_{1}\) need to be even)
\[\ll\sum_{k_{1}\geq 0}\frac{x^{k_{1}}}{(2k_{1}+1)!}\sum_{0\leq\ell\leq k_{1}} \binom{2k_{1}}{2\ell}\left[\left(\tfrac{\partial}{\partial w_{1}}\right)^{2k_ {1}-2\ell}\left(\beta\left(\tfrac{8w_{1}^{2}}{\pi}\right)e^{-12\frac{z}{x}w_{1 }^{2}}\right)\right]_{w_{1}=0}\left(\tfrac{8z}{\sqrt{x}}\right)^{2\ell}\frac{ \partial^{2\ell}}{\partial z^{2\ell}}\frac{\vartheta\left(\tfrac{\pi i}{z} \right)}{z^{\frac{1}{2}}}.\] (A.5)
Now assume that \(\frac{x}{|z|^{2}}>1\) (which is true as \(x\to 0\)). Then
\[z^{2\ell}\frac{\partial^{2\ell}}{\partial z^{2\ell}}\frac{\vartheta\left( \tfrac{\pi i}{z}\right)}{z^{\frac{1}{2}}}\ll\left[\frac{\partial^{2\ell}}{ \partial w_{2}^{2\ell}}\frac{\vartheta\left(\tfrac{\pi i}{w_{2}}\right)}{w_{2 }^{\frac{1}{2}}}\right]_{w_{2}=1}.\]
Moreover, as above,
\[\left[\left(\tfrac{\partial}{\partial w_{1}}\right)^{2k_{1}-2\ell}\left( \beta\left(\tfrac{8w_{1}^{2}}{\pi}\right)e^{-12\frac{z}{x}w_{1}^{2}}\right) \right]_{w_{1}=0}\ll\left(1+\Delta^{2k_{1}-2\ell}\right)\left[\left(\tfrac{ \partial}{\partial w_{1}}\right)^{2k_{1}-2\ell}\left(\beta\left(\tfrac{8w_{1} ^{2}}{\pi}\right)e^{-12w_{1}^{2}}\right)\right]_{w_{1}=0}.\]
This yields that (A.5) can be bounded by \(O(1)\). Thus the first term in (A.2) is overall \(O(1)\).
The second term of (A.2) contributes
\[\tfrac{2}{\sqrt{x}}\sum_{\pm}\sum_{\delta\in\{0,1\}}(-1)^{\delta} \sum_{n_{2}\geq 0}e^{-(n_{2}+1)^{2}z}\int_{\left(\tfrac{\delta}{2}+\tfrac{1}{4} \right)\sqrt{x}}^{\infty}H_{n_{2},\pm}(w_{1})dw_{1}\\ =\tfrac{2}{\sqrt{x}}\sum_{\pm}\sum_{n_{2}\geq 0}e^{-(n_{2}+1)^{2 }z}\left(\int_{\frac{\sqrt{x}}{4}}^{\infty}-\int_{\frac{3\sqrt{x}}{4}}^{\infty} \right)H_{n_{2},\pm}(w_{1})dw_{1}\\ =\tfrac{2}{\sqrt{x}}\sum_{\pm}\int_{\frac{\sqrt{x}}{4}}^{\frac{3 \sqrt{x}}{4}}\beta\left(\tfrac{8}{\pi}w_{1}^{2}\right)e^{-12\frac{z}{x}w_{1}^{2 }}\sum_{n_{2}\geq 0}e^{-(n_{2}+1)^{2}z\mp 8(n_{2}+1)\frac{z}{\sqrt{x}}w_{1}}dw_{1}.\] (A.6)
The sum on \(n_{2}\) may be written as
\[\sum_{n_{2}\geq 0}h_{[1]}\left((n_{2}+1)\sqrt{x}\right),\]
where \(h_{[1]}(w_{2}):=e^{-\frac{z}{x}w_{2}^{2}\mp 8\frac{z}{x}w_{1}w_{2}}\). Using (5.8), with \(N=0\), we have
\[\sum_{n_{2}\geq 0}h_{[1]}\left((n_{2}+1)\sqrt{x}\right)=\frac{1}{\sqrt{x}} \int_{0}^{\infty}h_{[1]}(w_{2})dw_{2}+\mathcal{E}^{[1]}(x),\]
where
\[\mathcal{E}^{[1]}(x):=-\sum_{k_{2}\geq 0}\frac{h_{[1]}^{(k_{2})}(0)}{(k_{2}+1)!} x^{\frac{k_{2}}{2}}-\frac{1}{\sqrt{x}}\int_{\sqrt{x}}^{\infty}h_{[1]}(w_{2})dw_ {2}.\]
The main term contributes to (A.6) as
\[\frac{2}{x}\sum_{\pm}\int_{\frac{\sqrt{x}}{4}}^{\frac{3\sqrt{x}}{4}}\beta \left(\frac{8w_{1}^{2}}{\pi}\right)e^{-12\frac{z}{x}w_{1}^{2}}\int_{0}^{ \infty}h_{[1]}(w_{2})dw_{2}dw_{1}=\frac{2}{x}\sum_{\pm}\int_{\frac{\sqrt{x}}{ 4}}^{\frac{3\sqrt{x}}{4}}\beta\left(\frac{8w_{1}^{2}}{\pi}\right)e^{4\frac{z} {x}w_{1}^{2}}\int_{0}^{\infty}e^{-\frac{z}{x}(w_{2}\pm 4w_{1})^{2}}dw_{2}.\]
Using that \(\int_{4w_{1}}^{\infty}+\int_{-4w_{1}}^{\infty}=2\int_{0}^{\infty}\), the above is in \(O(\frac{1}{\sqrt{z}})\).
We next consider the first term in \(\mathcal{E}^{[1]}(x)\) which contributes
\[-\frac{2}{\sqrt{x}}\sum_{\pm}\int_{\frac{\sqrt{x}}{4}}^{\frac{3 \sqrt{x}}{4}}\beta\left(\frac{8w_{1}^{2}}{\pi}\right)e^{-12\frac{z}{x}w_{1}^{ 2}}\sum_{k_{2}\geq 0}\left[\frac{\partial^{k^{2}}}{\partial w_{2}^{k^{2}}}e^{- \frac{z}{x}w_{2}^{2}\mp 8\frac{z}{x}w_{1}w_{2}}\right]_{w_{2}=0}dw_{1} \frac{x^{\frac{k_{2}}{2}}}{(k_{2}+1)!}.\]
We have
\[\left[\frac{\partial^{k_{2}}}{\partial w_{2}^{k_{2}}}e^{-\frac{z}{ x}w_{2}^{2}\mp 8\frac{z}{x}w_{1}w_{2}}\right]_{w_{2}=0}=\sum_{\ell=0}^{k}{k_{2} \choose\ell}\left(\mp 8\frac{z}{x}w_{1}\right)^{\ell}\left(\frac{\sqrt{z}}{ \sqrt{x}}\right)^{k_{2}-\ell}\left[\frac{\partial^{k_{2}-\ell}}{\partial w_{2 }^{k_{2}-\ell}}e^{-w_{2}^{2}}\right]_{w_{2}=0}.\]
Now the \(\pm\) forces \(\ell\) and \(k_{2}-\ell\) to be even, and we bound the overall contribution by \(O(\frac{1}{\sqrt{z}})\).
We next consider the second term in \(\mathcal{E}^{[1]}(x)\). This contributes
\[\frac{1}{x}\int_{\frac{\sqrt{x}}{4}}^{\frac{3\sqrt{x}}{4}}\beta \left(\frac{8w_{1}^{2}}{\pi}\right)e^{-12\frac{z}{x}w_{1}^{2}}\int_{\sqrt{x}} ^{\infty}e^{-\frac{z}{x}w_{2}^{2}\mp 8\frac{z}{x}w_{1}w_{2}}dw_{2}dw_{1}.\] (A.7)
We write
\[e^{-12\frac{z}{x}w_{1}^{2}}\int_{\sqrt{x}}^{\infty}e^{-\frac{z}{x}w_{2}^{2} \mp 8\frac{z}{x}w_{1}w_{2}}dw_{2}\ll e^{4\frac{z}{x}w_{1}^{2}}\int_{\sqrt{x}} ^{\infty}e^{-\frac{z}{x}(w_{2}\pm 4w_{1})^{2}}dw_{2}\ll e^{4w_{1}^{2}}\int_{ \mathbb{R}}e^{-w_{2}^{2}}dw_{2}.\]
Thus (A.7) may be bounded by \(O(\frac{1}{\sqrt{z}})\). Combining gives that the first term is \(O(\frac{1}{\sqrt{z}})\).
We next turn to the second term in (A.1) and proceed as before, again first considering the sum on \(n_{1}\). We have
\[\sum_{n_{1}\geq 0}e^{-12\left(n_{1}+\frac{\delta}{2}+\frac{1}{4}\right)^{2}z \mp 8\left(n_{1}+\frac{\delta}{2}+\frac{1}{4}\right)(n_{2}+1)z}=\sum_{n_{1} \geq 0}G_{n_{2},\pm}\left(\left(n_{1}+\frac{\delta}{2}+\frac{1}{4}\right) \sqrt{z}\right),\]
where \(G_{n_{2},\pm}(w_{1}):=e^{-12w_{1}^{2}\mp 8(n_{2}+1)\sqrt{z}w_{1}}\). Now (5.8) gives (with \(N=0\)) that
\[\sum_{n_{1}\geq 0}G_{n_{2},\pm}\left(\left(n_{1}+\frac{\delta}{2}+\frac{1}{4} \right)\sqrt{z}\right)=\frac{1}{\sqrt{z}}\int_{0}^{\infty}G_{n_{2},\pm}(w_{1}) dw_{1}+E\left(\frac{\delta}{2}+\frac{1}{4};z\right),\]
where
\[E(a;z):=-\sum_{k_{1}\geq 0}\frac{G_{n_{2},\pm}^{(k_{1})}(0)}{(k_{1}+1)!}a^{k_{1 }+1}z^{\frac{k_{1}}{2}}-\frac{1}{\sqrt{z}}\int_{a\sqrt{z}}^{\sqrt{z}\infty}G_{n_ {2},\pm}(w_{1})\widetilde{B}_{0}\left(\tfrac{w_{1}}{\sqrt{z}}-a\right)dw_{1}.\]
The main term contributes overall
\[-\tfrac{2}{\sqrt{z}}\sum_{\pm}\pm\sum_{\delta\in\{0,1\}}(-1)^{\delta}\sum_{n_{2 }\geq 0}\beta\left(\tfrac{2}{3\pi}(n_{2}+1)^{2}x\right)e^{-(n_{2}+1)^{2}z}\int_{0 }^{\infty}G_{n_{2},\pm}(w_{1})dw_{1}=0.\]
The first term in the error \(E(\tfrac{\delta}{2}+\tfrac{1}{4};z)\) contributes
\[2\sum_{\pm}\pm\sum_{\delta\in\{0,1\}}(-1)^{\delta}\sum_{n_{2} \geq 0}\beta\left(\tfrac{2}{3\pi}(n_{2}+1)^{2}x\right)e^{-(n_{2}+1)^{2}z} \sum_{k_{1}\geq 0}\left[\left(\tfrac{\partial}{\partial w_{1}}\right)^{k_{1}}G_{n _{2},\pm}(w_{1})\right]_{w_{1}=0}\\ \times\frac{\left(\tfrac{\delta}{2}+\tfrac{1}{4}\right)^{k_{1}+1 }}{(k_{1}+1)!}z^{\frac{k_{1}}{2}}=2\sum_{\pm}\pm\sum_{\delta\in\{0,1\}}(-1)^{ \delta}\sum_{k_{1}\geq 0}\left(\tfrac{\delta}{2}+\tfrac{1}{4}\right)^{k_{1}+1 }\frac{z^{\frac{k_{1}}{2}}}{(k_{1}+1)!}\\ \times\left[\left(\tfrac{\partial}{\partial w_{1}}\right)^{k_{1} }\left(e^{-12w_{1}^{2}}\sum_{n_{2}\geq 0}\beta\left(\tfrac{2}{3\pi}(n_{2}+1)^{2}x \right)e^{-(n_{2}+1)^{2}z\mp 8(n_{2}+1)\sqrt{z}w_{1}}\right)\right]_{w_{1}=0}.\]
We write the sum on \(n_{2}\) as
\[\sum_{n_{2}\geq 0}g_{[1],\pm}\left((n_{2}+1)\sqrt{x}\right),\]
where \(g_{[1],\pm}(w_{2}):=\beta(\tfrac{2w_{2}^{2}}{3\pi})e^{-\frac{z}{x}w_{2}^{2} \mp 8\frac{\sqrt{z}}{\sqrt{z}}w_{1}w_{2}}\). Now (5.8), with \(N=0\), gives that
\[\sum_{n_{2}\geq 0}g_{[1],\pm}\left((n_{2}+1)\sqrt{x}\right)=\frac{1}{\sqrt{x}} \int_{0}^{\infty}g_{[1],\pm}(w_{2})dw_{2}+E^{[1]}(x),\]
where
\[E^{[1]}(x):=-\sum_{k_{2}\geq 0}\frac{g_{[1],\pm}^{(k_{2})}(0)}{(k_{2}+1)!}x^{ \frac{k_{2}}{2}}-\frac{1}{\sqrt{x}}\int_{\sqrt{x}}^{\infty}g_{[1],\pm}(w_{2}) \widetilde{B}_{0}\left(\tfrac{w_{2}}{\sqrt{x}}-1\right)dw_{2}.\]
The main term contributes
\[\tfrac{2}{\sqrt{x}}\sum_{\pm}\pm\sum_{\delta\in\{0,1\}}(-1)^{\delta}\sum_{k_{ 1}\geq 0}\left(\tfrac{\delta}{2}+\tfrac{1}{4}\right)^{k_{1}+1}\frac{z^{\frac{k_{1 }}{2}}}{(k_{1}+1)!}\left[\left(\tfrac{\partial}{\partial w_{1}}\right)^{k_{1 }}\left(e^{-12w_{1}^{2}}\int_{0}^{\infty}g_{[1],\pm}(w_{2})dw_{2}\right) \right]_{w_{1}=0}.\] (A.8)
We rewrite
\[\left[\left(\tfrac{\partial}{\partial w_{1}}\right)^{k_{1}}\left( e^{-12w_{1}^{2}}\int_{0}^{\infty}g_{[1],\pm}(w_{2})dw_{2}\right)\right]_{w_{1}=0}\\ =\sum_{j=0}^{k_{1}}\left(\begin{smallmatrix}k_{1}\\ j\end{smallmatrix}\right)\left[\left(\tfrac{\partial}{\partial w_{1}}\right)^{k _{1}-j}e^{-12w_{1}^{2}}\right]_{w_{1}=0}\int_{0}^{\infty}\beta\left(\tfrac{2w_{ 2}^{2}}{3\pi}\right)e^{-\frac{z}{x}w_{2}^{2}}\left(\mp 8\sqrt{\tfrac{z}{x}}w_{2} \right)^{j}dt_{2}.\]
The \(\pm\) forces \(j\) to be even. Also \(k_{1}\) is even. Thus (A.8) is \(O(\tfrac{1}{\sqrt{z}})\).
The first term from \(E^{[1]}\) contributes
\[2\sum_{\pm}\pm\sum_{\delta\in\{0,1\}}(-1)^{\delta}\sum_{k_{1}\geq 0}\left( \tfrac{\delta}{2}+\tfrac{1}{4}\right)^{k_{1}+1}\left[\left(\tfrac{\partial}{ \partial w_{1}}\right)^{k_{1}}\right.\]
\[\left(e^{-12w_{1}^{2}}\sum_{k_{2}\geq 0}\left[\left(\tfrac{\partial}{ \partial w_{2}}\right)^{k_{2}}\left(g_{[1],\pm}(w_{2})\right)\right]_{w_{2}=0} \right)\right]_{w_{1}=0}\frac{z^{\frac{k_{1}}{2}}}{(k_{1}+1)!}\frac{x^{\frac{k_{ 2}}{2}}}{(k_{2}+1)!}\ll 1.\]
The second term from \(E^{[1]}\) contributes
\[\tfrac{2}{\sqrt{x}}\sum_{\pm}\pm\sum_{\delta\in\{0,1\}}(-1)^{\delta}\sum_{k_{1} \geq 0}\left(\tfrac{\delta}{2}+\tfrac{1}{4}\right)^{k_{1}+1}\frac{z^{\frac{k_{ 1}}{2}}}{(k_{1}+1)!}\left[\left(\tfrac{\partial}{\partial w_{1}}\right)^{k_{1} }\left(e^{-12w_{1}^{2}}\int_{\sqrt{x}}^{\infty}g_{[1],\pm}(w_{2})dw_{2}\right) \right]_{w_{1}=0}.\]
We compute
\[\left[\left(\tfrac{\partial}{\partial w_{1}}\right)^{k_{1}}e^{-12w_{1}^{2} \mp 8\tfrac{\sqrt{x}}{\sqrt{x}}w_{1}w_{2}}\right]_{w_{1}=0}=\sum_{\ell=0}^{k_{1} }\binom{k_{1}}{\ell}\left[\left(\tfrac{\partial}{\partial w_{1}}\right)^{k_{ 1}-\ell}e^{-12w_{1}^{2}}\right]_{w_{1}=0}\left(\mp 8\tfrac{\sqrt{x}}{\sqrt{x}}w_{2} \right)^{\ell}.\]
Arguing as above, we need \(\ell\) and \(k_{1}\) to be even and obtain an overall bound of \(O(\frac{1}{\sqrt{z}})\).
The second term in the error \(E(\frac{\delta}{2}+\frac{1}{4};z)\) contributes
\[\tfrac{2}{\sqrt{z}}\sum_{\pm}\pm\sum_{\delta\in\{0,1\}}(-1)^{\delta}\sum_{n_{ 2}\geq 0}\beta\left(\tfrac{2}{3\pi}(n_{2}+1)^{2}x\right)e^{-(n_{2}+1)^{2}z} \int_{\left(\frac{\delta}{2}+\frac{1}{4}\right)\sqrt{z}}^{\sqrt{z}\infty}e^{- 12w_{1}^{2}\mp 8(n_{2}+1)\sqrt{z}w_{1}}dw_{1}\\ =\tfrac{2}{\sqrt{z}}\sum_{\pm}\pm\sum_{n_{2}\geq 0}\beta \left(\tfrac{2}{3\pi}(n_{2}+1)^{2}x\right)e^{-(n_{2}+1)^{2}z}\int_{\frac{ \sqrt{x}}{4}}^{\frac{3\sqrt{x}}{4}}e^{-12w_{1}^{2}\mp 8(n_{2}+1)\sqrt{z}w_{1}}dw_{1}.\]
Now the sum on \(n_{2}\) is
\[\sum_{n_{2}\geq 0}g_{[2]}\left((n_{2}+1)\sqrt{x}\right),\]
where
\[g_{[2]}(w_{2}):=\beta\left(\tfrac{2w_{2}^{2}}{3\pi}\right)e^{-\frac{z}{x}w_{2 }^{2}\mp 8\tfrac{\sqrt{x}}{\sqrt{x}}w_{1}w_{2}}.\]
Thus (5.8), with \(N=0\), gives
\[\sum_{n_{2}\geq 0}g_{[2]}\left((n_{2}+1)\sqrt{x}\right)=\frac{1}{\sqrt{x}}\int_{0 }^{\infty}g_{[2]}(t_{2})dt_{2}+E^{[4]}(x),\]
where
\[E^{[4]}(x):=-\sum_{k_{2}\geq 0}\frac{g_{[2]}^{(k_{2})}(0)}{(k_{2}+1)!}x^{ \frac{k_{2}}{2}}-\frac{1}{\sqrt{x}}\int_{\sqrt{x}}^{\infty}g_{[2]}(w_{2}) \widetilde{B}_{0}\left(\tfrac{w_{2}}{\sqrt{x}}-1\right)dw_{2}.\]
The main term contributes
\[\tfrac{2}{\sqrt{z}\sqrt{x}}\sum_{\pm}\pm\int_{\tfrac{\sqrt{x}}{4}}^{\frac{3 \sqrt{x}}{4}}e^{-12w_{1}^{2}}\int_{0}^{\infty}g_{[2]}(w_{2})dw_{2}dw_{1}.\]
We write
\[e^{-12w_{1}^{2}}\int_{0}^{\infty}g_{[2]}(w_{2})dw_{2}=e^{4w_{1}^{2}}\int_{0}^ {\infty}\beta\left(\tfrac{2w_{2}^{2}}{3\pi}\right)e^{-\frac{z}{x}\left(w_{2} \mp 4\tfrac{\sqrt{x}}{\sqrt{x}}\right)^{2}}dw_{2}\ll e^{4w_{1}^{2}}.\]
Thus we have overall \(O(\frac{1}{\sqrt{z}})\).
The first term in the error \(E^{[4]}\) contributes
\[-\tfrac{2}{\sqrt{z}}\sum_{\pm}\pm\int_{\tfrac{\sqrt{x}}{4}}^{\frac{3\sqrt{z}}{ 4}}e^{-12w_{1}^{2}}\sum_{k_{2}\geq 0}\left[\left(\tfrac{\partial}{ \partial w_{2}}\right)^{k_{2}}\beta\left(\tfrac{2w_{2}^{2}}{3\pi}\right)e^{- \frac{z}{x}w_{2}^{2}\mp 8\tfrac{\sqrt{x}}{\sqrt{x}}w_{1}w_{2}}\right]_{w_{2}=0}\frac{x^{ \frac{k_{2}}{2}}}{(k_{2}+1)!}.\]
We bound, since \(|z|\leq(1+\Delta)x\),
\[\left[\left(\frac{\partial}{\partial w_{2}}\right)^{k_{2}}\left(\beta\left(\frac{2 w_{2}^{2}}{3\pi}\right)e^{-\frac{z}{w_{2}^{2}}\mp 8\frac{\sqrt{z}}{\sqrt{z}}w_{1}w_{2}} \right)\right]_{w_{2}=0}\ll\left(1+\Delta^{\frac{k_{2}}{2}}\right)\left[\left( \frac{\partial}{\partial w_{2}}\right)^{k_{2}}\beta\left(\frac{2w_{2}^{2}}{3 \pi}\right)e^{-w_{2}^{2}\mp 8w_{1}w_{2}}\right]_{w_{2}=0}.\]
Then overall we have
\[\ll\frac{1}{\sqrt{z}}\sum_{k_{2}\geq 0}\left(1+\Delta^{\frac{k_{2}}{2}} \right)\left[\left(\frac{\partial}{\partial w_{2}}\right)^{k_{2}}\left(\beta \left(\frac{2w_{2}^{2}}{3\pi}\right)e^{-w_{2}^{2}}\int_{\frac{\sqrt{z}}{4}}^{ \frac{3\sqrt{z}}{4}}e^{-12w_{1}^{2}\mp 8w_{1}w_{2}}dw_{1}\right)\right]_{w_{2}=0} \frac{x^{\frac{k_{2}}{2}}}{(k_{2}+1)!}\ll 1.\]
The second term in the error \(E^{[4]}\) can be estimated as before as \(O(\frac{1}{\sqrt{z}})\).
|
2306.16795
|
Joint constraint on the jet structure from the short GRB population and
GRB 170817A
|
The nearest GRB 170817A provided an opportunity to probe the angular
structure of the jet of this short gamma-ray burst (SGRB), by using its
off-axis observed afterglow emission. It is investigated whether the
afterglow-constrained jet structures can be consistent with the luminosity of
the prompt emission of GRB 170817A. Furthermore, by assuming that all SGRBs
including GRB 170817A have the same explosive mechanism and jet structure, we
apply the different jet structures into the calculation of the flux and
redshfit distributions of the SGRB population, in comparison with the
observational distributions of the Swift and Fermi sources. As a result, it is
found that the single-Gaussian structure can be basically ruled out, whereas
the power-law and two-Gaussian models can in principle survive.
|
Xiao-Feng Cao, Wei-Wei Tan, Yun-Wei Yu, Zhen-Dong Zhang
|
2023-06-29T09:03:55Z
|
http://arxiv.org/abs/2306.16795v1
|
# Joint constraint on the jet structure from the short GRB population and GRB 170817A
###### Abstract
The nearest GRB 170817A provided an opportunity to probe the angular structure of the jet of this short gamma-ray burst (SGRB), by using its off-axis observed afterglow emission. It is investigated whether the afterglow-constrained jet structures can be consistent with the luminosity of the prompt emission of GRB 170817A. Furthermore, by assuming that all SGRBs including GRB 170817A have the same explosive mechanism and jet structure, we apply the different jet structures to the calculation of the flux and redshift distributions of the SGRB population, in comparison with the observational distributions of the Swift and Fermi sources. As a result, it is found that the single-Gaussian structure can be basically ruled out, whereas the power-law and two-Gaussian models can in principle survive.
gamma-ray burst (629)
## 1 Introduction
Gamma-ray bursts (GRBs) are generated by highly-beamed relativistic jets, which are driven by rapidly rotating black hole or neutron star engines. Before the gamma-ray emission is produced, the jets should first propagate through dense progenitor material, which can be a stellar envelope for long GRBs (Zhang et al., 2003; Matzner, 2003; Lazzati and Begelman, 2005; Bromberg et al., 2011, 2014; Suwa and Ioka, 2011; Yu, 2020; Gottlieb and Nakar, 2022; Urrutia et al., 2023, 20) or a merger ejecta for short GRBs (SGRBs; Nagakura et al., 2014; Lazzati et al., 2017; Yu, 2020; Hamidani and Ioka, 2021; Nathanail et al., 2021; Nativi et al., 2022; Gottlieb and Nakar, 2022; Pavan et al., 2022).
As a result of its interaction with the progenitor material, a GRB jet breaking out from the progenitor can finally acquire an angular structure in its energy and velocity distributions. Generally speaking, the breakout jet can consist of a core region with an opening angle of a few degrees and a relatively wider and less energetic wing region (Lazzati et al., 2018; Salafia and Ghirlanda, 2022). In addition, the jet can also be surrounded by a much wider cocoon that is contributed by the shocked material. Nevertheless, it seems unnecessary to treat the jet and cocoon separately, due to the mixing of the material and the continuous distribution of the energy. Instead, we can simply treat the cocoon as a part of the jet wing. From the core to the wing, the Lorentz factor and energy density of the jet can decrease quickly with the increasing angle relative to the jet axis. Three empirical analytical functions have usually been suggested to describe the angular structure of GRB jets, including a power law (Dai and Gou, 2001; Zhang and Meszaros, 2002; Kumar and Granot, 2003; Lazzati and Begelman, 2005), a Gaussian (Zhang and Meszaros, 2002; Rossi et al., 2002, 2004; Kumar and Granot, 2003; Granot and Kumar, 2003) and, sometimes, two Gaussians (Tan and Yu, 2020; Luo et al., 2022; Wei et al., 2022).
Observational constraints on the jet structures can in principle provide a clue to understand the nature and interior of GRB progenitors. First of all, a direct implication for the angular structure can be derived from the afterglow emission of GRBs, particularly, when the viewing direction deviates from the jet axis significantly (Kumar and Granot, 2003). Because of the relativistic beaming effect, the emission from the jet material deviating from the line of sight (LOS) can be detected only after the material is decelerated to have an emission beaming angle larger than the viewing angle. Therefore, it can be expected that the more luminous emission from more energetic jet material can be detected later for an off-axis observation. In this case, the peak of the afterglow light curves can appear when the core emission comes into sight and the increasing light curves before the peak just reflect the angular distribution of the jet energy density.
The problem is that, for the majority of observed GRBs, their cosmological distances prevent us from detecting them at large viewing angles, because of the rapid decrease of the jet energy with the angle. Moreover, without a GRB trigger, it is very difficult to capture the orphan afterglow emission of the GRBs. Following this consideration, observed GRBs are usually assumed to be on-axis, and a "top-hat" structure is generally adopted for the afterglow modeling, at most further invoking an opening angle for the jet if a so-called jet break feature appears in the light curves. Nevertheless, this situation has changed since the detection of the nearest GRB 170817A, at a distance of \(\sim 40\) Mpc. The viewing angle of this GRB was quickly constrained to be about \(\theta_{\rm obs}\leq 31^{\circ}\) by the gravitational wave detection of GW 170817 (Abbott et al., 2017). This special multi-messenger event provided the first opportunity to constrain the angular structure of the GRB jet by its afterglow emission, and various jet structure models have been widely investigated (Lamb and Kobayashi, 2017; Xiao et al., 2017; Margutti et al., 2017, 2018; Troja et al., 2017, 2018; D'Avanzo et al., 2018; Lazzati et al., 2018; Mooley et al., 2018; Granot et al., 2018; Resmi et al., 2018; Xie et al., 2018; Nynka et al., 2018; He et al., 2018; Ziaeepour, 2019; Huang et al., 2019; Kathirgamaraju et al., 2019; Lamb et al., 2019; Beniamini and Nakar, 2019; Beniamini et al., 2019, 2020; Takahashi and Ioka, 2020, 2021; Wei et al., 2022). Besides explaining the afterglow emission, the off-axis observation of GRB 170817A also provides a natural but qualitative explanation for its ultra-low luminosity of \(\sim 10^{47}\) erg s\({}^{-1}\) (Abbott et al., 2017; Zhang et al., 2018). It then needs to be checked whether the observed prompt luminosity is quantitatively consistent with the jet structure derived from the afterglow modeling.
Furthermore, a large viewing angle of GRB 170817A is necessary for understanding the very high event rate of nearby GRBs inferred from GRB 170817A. Meanwhile, a question arises here: how can we connect this nearby SGRB rate in a natural way with the rates of the other SGRBs? In more detail, if all SGRBs including GRB 170817A share a common geometry for their jets, then it is crucial to ask how this common jet structure influences our understanding of the observational redshift and energy distributions of all SGRBs, as well as the determination of the luminosity function (LF) and event rate of the SGRBs, as previously investigated by Salafia et al. (2020, 2022); Tan and Yu (2020); Luo et al. (2022). Obviously, the random distribution of the viewing angles of SGRBs can lead to different luminosities for different SGRBs, even though their jets are actually identical. Therefore, a flat low-luminosity component would appear in the apparent LF under a top-hat structure assumption, because the LOS of most SGRBs cannot be strictly parallel to the jet axis and the angle dependence of the luminosity exists even within the jet core region (which is not uniform, contrary to the top-hat assumption). It was then implied that the intrinsic LF corresponding to the distribution of the central luminosity of SGRB jets could be simply described by a single power law (Tan and Yu, 2020).
However, in previous works, the constraints on the jet structure from the GRB 170817A observations and from population statistics were usually treated separately and have not been quantitatively confronted with each other. Therefore, this paper is devoted to investigating the consistency between these two types of observational constraints. In the next section, we briefly introduce the afterglow model and display the constrained model parameters for three typical structure functions. In Section 3, on the one hand, we derive the angular dependence of the equivalent isotropic emission energy (\(E_{\gamma,{\rm iso}}\)) from the energy density (\(\varepsilon\)) distribution of the jets, in comparison with the prompt luminosity of GRB 170817A. On the other hand, we compare the model-predicted redshift and flux distributions of SGRBs with the observational ones. A summary is given in Section 4.
## 2 Constraining jet structure by GRB 170817A
As usual, for calculating the afterglow emission from a structured jet, we can separate the jet into a series of differential rings and consider the dynamical evolution of each ring independently. By ignoring the possible lateral expansion/motion of the jet rings, the dynamical equation can be written as (Huang et al., 1999; Li et al., 2019):
\[\frac{d\Gamma_{\theta}}{dM_{\rm sw,\theta}}=-\frac{\Gamma_{\theta}^{2}-1}{M_{ \rm ej,\theta}+2\Gamma_{\theta}M_{\rm sw,\theta}}, \tag{1}\]
where \(\Gamma_{\theta}\) is the Lorentz factor of the ring at a half-opening angle \(\theta\), and \(M_{\rm ej,\theta}\) and \(M_{\rm sw,\theta}\) are the masses per solid angle of the GRB ejecta and the swept-up interstellar medium (ISM), respectively. By denoting the angular distribution of the jet kinetic energy by \(\varepsilon_{\theta}\equiv dE_{\rm k}/d\Omega\), we have \(M_{\rm ej,\theta}=\varepsilon_{\theta}/\Gamma_{\theta,{\rm i}}c^{2}\), where the subscript "i" of \(\Gamma_{\theta,{\rm i}}\) represents its initial value. Meanwhile, the increase of the swept-up mass is determined by
\[\frac{dM_{\rm sw,\theta}}{dr_{\theta}}=r_{\theta}^{2}nm_{\rm p}, \tag{2}\]
where \(r_{\theta}\) is the radius of the jet external shock, \(n\) is the number density of the ISM, and \(m_{\rm p}\) is the proton mass.
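As an aside (not part of the original paper), Eqs. (1)-(2) are straightforward to integrate numerically for a single ring. The short Python sketch below does so with a simple Euler step on a logarithmic radius grid; all parameter values are placeholders chosen for illustration only.

```python
import numpy as np

M_P = 1.6726e-24   # proton mass [g]
C   = 2.9979e10    # speed of light [cm/s]

def ring_dynamics(eps_theta, gamma_i, n_ism, r_grid):
    """Evolve Gamma(r) for one jet ring via Eqs. (1)-(2); lateral spreading ignored."""
    m_ej = eps_theta / (gamma_i * C**2)        # ejecta mass per solid angle [g/sr]
    gamma, m_sw = gamma_i, 0.0
    gammas = [gamma]
    for k in range(1, len(r_grid)):
        dr = r_grid[k] - r_grid[k - 1]
        dm = r_grid[k] ** 2 * n_ism * M_P * dr                          # Eq. (2)
        gamma += -(gamma**2 - 1.0) / (m_ej + 2.0 * gamma * m_sw) * dm   # Eq. (1)
        gamma = max(gamma, 1.0)
        m_sw += dm
        gammas.append(gamma)
    return np.array(gammas)

# illustrative values only: eps_theta ~ 1e51 erg/sr, Gamma_i ~ 300, n ~ 1e-3 cm^-3
r = np.logspace(14, 19, 4000)
gam = ring_dynamics(1.0e51, 300.0, 1.0e-3, r)
print(gam[0], gam[-1])   # coasting value, then decelerated toward Gamma ~ 1
```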
Following Sari et al. (1998), the synchrotron luminosity contributed by a differential element of a mass \(M_{\rm sw,\theta}\)
can be calculated analytically as (Yu et al., 2022)
\[I^{\prime}_{\nu^{\prime}}(r,\theta,\varphi) = \frac{M_{\rm sw,\theta}}{r_{\theta}^{2}m_{\rm p}}\frac{m_{\rm e}c^ {2}\sigma_{\rm T}B^{\prime}}{12\pi e}S(\nu^{\prime}), \tag{3}\]
where the superscript prime indicates that the quantities are measured in the comoving frame of the shocked region, \(m_{\rm e}\) is the electron mass, \(c\) the speed of light, \(\sigma_{\rm T}\) the Thomson cross section, \(e\) the electron charge, and \(B^{\prime}\) represents the comoving magnetic field strength. The dimensionless synchrotron spectrum \(S(\nu^{\prime})\) can be expressed as a broken-power-law function, which is characterized by two break frequencies that are determined by the acceleration and cooling of electrons (see Sari et al. (1998) for details). Then, for an observer at a viewing angle \(\theta_{\rm obs}\) relative to the jet axis, the observed flux of the afterglow emission can be obtained by integrating over the whole solid angle of the jet as (Huang et al., 2000; Yu et al., 2007, 2022)
\[F_{\nu}(t)=\frac{r_{\theta}^{2}}{d_{\rm L}^{2}}\int\frac{I^{{}^{\prime}}_{\nu ^{\prime}}(r,\theta,\phi)}{\Gamma_{\theta}^{3}(1-\beta_{\theta}\cos\alpha)^{ 3}}\cos\alpha d\Omega(\theta,\phi), \tag{4}\]
where \(d_{\rm L}\) is the luminosity distance of the GRB, \(\beta_{\theta}=(1-\Gamma_{\theta}^{-2})^{1/2}\), and \(\alpha\) is defined as the angle between the emitting differential element and the LOS, which can be expressed as (e.g., Li et al., 2019)
\[\cos\alpha=\cos\theta\cos\theta_{\rm obs}+\sin\theta\sin\theta_{\rm obs}\cos\phi. \tag{5}\]
Finally, the connection between the radius of emitting material and the observational time is given by \(dr_{\theta}/dt=\beta_{\theta}c/(1-\beta_{\theta}\cos\alpha)\).
In this paper, the following three representative functions are taken to describe the possible jet structures:
1. Power-law jet \[\varepsilon_{\theta} = \varepsilon_{\rm c}\Theta^{-k_{1}},\] (6) \[\Gamma_{\theta,\rm i} = \Gamma_{\rm c}\Theta^{-k_{2}}+1,\] (7) with \(\Theta=\left[1+(\theta/\theta_{\rm c})^{2}\right]^{1/2}\);
2. Single-Gaussian jet \[\varepsilon_{\theta} = \varepsilon_{\rm c}\exp\left(-\frac{\theta^{2}}{2\theta_{\rm c}^ {2}}\right),\] (8) \[\Gamma_{\theta,\rm i} = \Gamma_{\rm c}\exp\left(-\frac{\theta^{2}}{2\theta_{\rm c}^{2}} \right)+1;\] (9)
3. Two-Gaussian jet \[\varepsilon_{\theta} = \varepsilon_{\rm c}\left[\exp\left(-\frac{\theta^{2}}{2\theta_{\rm c }^{2}}\right)+\mathcal{C}_{\rm E}\exp\left(-\frac{\theta^{2}}{2\theta_{\rm out }^{2}}\right)\right],\] (10) \[\Gamma_{\theta,\rm i} = \Gamma_{\rm c}\left[\exp\left(-\frac{\theta^{2}}{2\theta_{\rm c} ^{2}}\right)+\mathcal{C}_{\rm\Gamma}\exp\left(-\frac{\theta^{2}}{2\theta_{\rm out }^{2}}\right)\right],\] (11)
where the free parameters \(\varepsilon_{\rm c}\), \(\Gamma_{\rm c}\), \(\theta_{\rm c}\), \(C_{\rm E}\), \(C_{\rm\Gamma}\), and \(\theta_{\rm out}\) can be constrained by fitting the observed afterglows of GRB 170817A. To be specific, we substitute these jet structure functions into the dynamical equation and then calculate the afterglow light curves for arbitrary viewing angles. Within a wide range of the model parameters and with the Markov Chain Monte Carlo (MCMC) method, we can constrain the model parameters by comparing the theoretical light curves with the observational data and evaluating the goodness of fit. Finally, the parameter values, taken from Yu (2020) and Wei et al. (2022), are listed in Table 1. The corresponding fitting results are presented in Figure 1 for a direct impression. Here, the microphysical parameters are defined as usual: \(p\) is the spectral index of the shock-accelerated electrons, \(\epsilon_{\rm B}\) is the equipartition factor of the magnetic fields in the shocked material and, meanwhile, the equipartition factor for electrons is taken as \(\epsilon_{\rm e}=\sqrt{\epsilon_{\rm B}}\).
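For illustration (this sketch and its parameter choices are ours, not the authors'), the three structure functions of Eqs. (6)-(11) can be coded directly as follows; angles are in radians.

```python
import numpy as np

def power_law_jet(theta, eps_c, gamma_c, theta_c, k1, k2):
    """Eqs. (6)-(7); theta and theta_c in radians."""
    big_theta = np.sqrt(1.0 + (theta / theta_c) ** 2)
    return eps_c * big_theta ** (-k1), gamma_c * big_theta ** (-k2) + 1.0

def single_gaussian_jet(theta, eps_c, gamma_c, theta_c):
    """Eqs. (8)-(9)."""
    g = np.exp(-theta**2 / (2.0 * theta_c**2))
    return eps_c * g, gamma_c * g + 1.0

def two_gaussian_jet(theta, eps_c, gamma_c, theta_c, theta_out, c_e, c_gamma):
    """Eqs. (10)-(11)."""
    g_in = np.exp(-theta**2 / (2.0 * theta_c**2))
    g_out = np.exp(-theta**2 / (2.0 * theta_out**2))
    return eps_c * (g_in + c_e * g_out), gamma_c * (g_in + c_gamma * g_out)

# example call with roughly the single-Gaussian central values of Table 1
eps, gam = single_gaussian_jet(np.radians(5.0), 10**51.76, 507.0, np.radians(3.64))
```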
## 3 Confronting the jet structures with the population statistics of SGRBs
### The direction-dependence of the emission energy
Because of the jet structure and the relativistic beaming of the jet emission, the emission energy inferred from observations depends strongly on the viewing angle and does not always trace the angular distribution of the kinetic energy of the jet (Salafia et al., 2015).
Figure 1: The fits of the multi-wavelength afterglow light curves of GRB 170817A for a power-law (left), single-Gaussian (middle), or two-Gaussian (right) jet structure (Wei et al., 2022).
Specifically, for a viewing angle \(\theta_{\rm obs}\), the isotropically-equivalent energy of the GRB prompt emission can be calculated by (Salafia et al., 2015)
\[E_{\gamma,{\rm iso}}(\theta_{\rm obs})=\eta_{\gamma}\int\frac{\varepsilon_{\theta} }{\Gamma_{\theta}^{4}[1-\beta_{\theta}\cos\alpha]^{3}}d\Omega(\theta,\phi), \tag{12}\]
where \(\eta_{\gamma}\) is the radiation efficiency, which is assumed to be constant in time and over emission directions. As shown in Figure 2, we find that the \(\theta_{\rm obs}\)-dependence of the emission energy can roughly trace the kinetic energy for viewing angles not much larger than the jet opening angle. However, when the observational direction is far away from the jet axis, the emission energy can become much higher than the kinetic energy in the same direction, particularly in the Gaussian structure cases. The large-angle emission is actually contributed by
| Parameter | Power-law | Single-Gaussian | Two-Gaussian | Range |
| --- | --- | --- | --- | --- |
| \(\theta_{\rm obs}(^{\circ})\) | \(22.89^{+3.71}_{-3.03}\) | \(23.21^{+3.95}_{-3.60}\) | \(19.37^{+2.21}_{-1.53}\) | (17, 35) |
| \(\log(\varepsilon_{\rm c}/{\rm erg})\) | \(50.92^{+0.35}_{-0.54}\) | \(51.76^{+0.85}_{-0.98}\) | \(51.04^{+0.24}_{-0.31}\) | (48, 53) |
| \(\log(C_{\rm E})\) | / | / | \(-1.33^{+0.19}_{-0.18}\) | (-5, 0) |
| \(\Gamma_{\rm c}\) | \(352^{+265}_{-174}\) | \(507^{+66}_{-103}\) | \(456^{+236}_{-235}\) | (100, 600) |
| \(C_{\Gamma}\) | / | / | \(0.49^{+0.33}_{-0.30}\) | (0, 1) |
| \(\theta_{\rm c}(^{\circ})\) | \(2.56^{+1.08}_{-0.62}\) | \(3.64^{+0.63}_{-0.53}\) | \(1.48^{+0.41}_{-0.32}\) | (1, 10) |
| \(\theta_{\rm out}(^{\circ})\) | / | / | \(4.16^{+0.87}_{-0.54}\) | (2, 15) |
| \(k_{1}\) | \(3.97^{+0.64}_{-0.52}\) | / | / | (3, 8) |
| \(k_{2}\) | \(2.64^{+0.25}_{-0.42}\) | / | / | (0.1, 3) |
| \(\log(n/{\rm cm}^{-3})\) | \(-2.98^{+0.49}_{-0.57}\) | \(-2.06^{+0.94}_{-1.01}\) | \(-3.19^{+0.40}_{-0.43}\) | (-4, 0) |
| \(\log(\epsilon_{\rm B})\) | \(-2.36^{+0.63}_{-0.46}\) | \(-3.98^{+1.31}_{-1.11}\) | \(-2.59^{+0.41}_{-0.28}\) | (-6, -1) |
| \(p\) | \(2.13^{+0.01}_{-0.01}\) | \(2.12^{+0.01}_{-0.01}\) | \(2.13^{+0.01}_{-0.01}\) | (2, 2.3) |

Table 1: Model parameters constrained from the afterglow modeling of GRB 170817A.
| Model | \(L_{\rm on,GRB170817A}\) | \(\theta_{b1}\) | \(\theta_{b2}\) | \(\theta_{b3}\) | \(s\) | \(\alpha_{1}\) | \(\alpha_{2}\) | \(\alpha_{3}\) | \(\mathcal{R}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Power law | 51 | 2.5 | 18 | / | 0.6 | 0.0 | 4.3 | 1.45 | / |
| Single Gaussian | 51.5 | 6.0 | 14.3 | / | 0.4 | 0.4 | 9.5 | 5.9 | / |
| Two Gaussian | 51.62 | 2.1 | 22.5 | 3.65 | 1.4 | 0.0 | 4.9 | 1.75 | 0.048 |

Table 2: Parameters for the empirical description of the \(E_{\rm iso}-\theta_{\rm obs}\) relation.
Figure 2: The isotropically-equivalent emission energy of SGRBs as a function of viewing angle (solid dots). The angular distribution of the jet kinetic energy is displayed by the gray lines for comparison. The radiation efficiency \(\eta_{\gamma}\) can be constrained according to the observational emission energy of GRB 170817A, as shown by the blue data.
the jet material at smaller angles, and the prompt emission of GRB 170817A is just in such a situation1. By comparing with the isotropic emission energy of GRB 170817A, the radiation efficiency can be constrained to be \(\eta_{\gamma}=0.08\), 0.013, and 0.2 for the power-law, single-Gaussian, and two-Gaussian jet structures, respectively.
Footnote 1: Since the single Gaussian cannot directly explain the prompt emission of GRB 170817A, Tan and Yu (2020) suggested an outer Gaussian component. However, as shown here, the GRB 170817A emission could actually be explained by the large-angle emission effect, which indicates that the outer Gaussian may not be indispensable. Nevertheless, we keep considering the two-Gaussian situation in this paper, because the two-Gaussian structure could still be a natural result of the jet propagation and is also helpful for improving the afterglow modeling (Wei et al., 2022).
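Eq. (12) can be evaluated by a direct two-dimensional quadrature over the jet solid angle. The following Python sketch is our own illustration (the Gaussian profile and parameter values in the example are placeholders loosely inspired by Table 1, not the fitted model); note that a fine \(\theta\) grid near the axis is required when \(\Gamma_{\theta}\) is large.

```python
import numpy as np

def e_gamma_iso(theta_obs, eps_fn, gamma_fn, eta_gamma=0.1, n_th=2000, n_phi=400):
    """Numerical version of Eq. (12); theta_obs in radians, eps_fn in erg/sr."""
    th = np.linspace(1.0e-4, np.pi / 2.0, n_th)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)
    TH, PHI = np.meshgrid(th, phi, indexing="ij")
    gam = gamma_fn(TH)
    beta = np.sqrt(1.0 - 1.0 / gam**2)
    cos_alpha = (np.cos(TH) * np.cos(theta_obs)
                 + np.sin(TH) * np.sin(theta_obs) * np.cos(PHI))      # Eq. (5)
    integrand = eps_fn(TH) / (gam**4 * (1.0 - beta * cos_alpha) ** 3)
    d_omega = np.sin(TH) * (th[1] - th[0]) * (phi[1] - phi[0])
    return eta_gamma * np.sum(integrand * d_omega)

# illustrative single-Gaussian profile (values loosely based on Table 1)
theta_c = np.radians(3.6)
eps_fn = lambda t: 5.8e51 * np.exp(-t**2 / (2.0 * theta_c**2))
gamma_fn = lambda t: 500.0 * np.exp(-t**2 / (2.0 * theta_c**2)) + 1.0
print(e_gamma_iso(np.radians(20.0), eps_fn, gamma_fn))
```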
Furthermore, by taking a constant emission duration of \(T\sim 1\) s for all SGRBs and all emission directions, we can connect the observational isotropic luminosity with the emission energy directly as \(L_{\rm iso}=E_{\rm iso}/T\). Then, the direction-dependence of the isotropic luminosity can be described analytically by the following functions:
\[L_{\rm iso}(\theta_{\rm obs})=L_{\rm on}\frac{1+\left(\frac{\theta_{\rm obs}} {\theta_{\rm 2}}\right)^{\alpha_{3}}}{\left[\left(\frac{\theta_{\rm obs}}{\theta_{ \rm bi}}\right)^{-\alpha_{1}s}+\left(\frac{\theta_{\rm obs}}{\theta_{\rm bi}} \right)^{\alpha_{2}s}\right]^{1/s}}, \tag{13}\]
for the power-law and single-Gaussian jets and
\[L_{\rm iso}(\theta_{\rm obs}) = L_{\rm on}\left\{\frac{1+\left(\frac{\theta_{\rm obs}}{\theta_{ \rm 2}}\right)^{\alpha_{3}}}{\left[\left(\frac{\theta_{\rm obs}}{\theta_{\rm bi }}\right)^{-\alpha_{1}s}+\left(\frac{\theta_{\rm obs}}{\theta_{\rm bi}} \right)^{\alpha_{2}s}\right]^{1/s}}\right.\] \[\left.+\mathcal{R}\exp\left(-\frac{\theta_{\rm obs}^{2}}{2\theta_ {\rm 3}^{2}}\right)\right\},\]
for the two-Gaussian jets, which are obtained by fitting the \(E_{\rm iso}-\theta_{\rm obs}\) relation displayed in Figure 2, where \(L_{\rm on}\) is the luminosity value for the on-axis (i.e., \(\theta_{\rm obs}=0^{\circ}\)) observation.
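A direct transcription of Eq. (13) and its two-Gaussian extension, with the parameters of Table 2, could look as follows (our sketch; we read the \(L_{\rm on}\) column of Table 2 as \(\log_{10}\) of the on-axis luminosity in erg s\({}^{-1}\), which is our assumption, and the expression is meant for \(\theta_{\rm obs}>0\)).

```python
import numpy as np

def l_iso_empirical(theta_obs, log_l_on, th_b1, th_b2, s, a1, a2, a3,
                    R=None, th_b3=None):
    """Eq. (13), plus the extra Gaussian term for the two-Gaussian case.
    Angles in degrees, theta_obs > 0."""
    x1, x2 = theta_obs / th_b1, theta_obs / th_b2
    core = (1.0 + x2**a3) / ((x1 ** (-a1 * s) + x1 ** (a2 * s)) ** (1.0 / s))
    if R is not None and th_b3 is not None:        # two-Gaussian extension
        core = core + R * np.exp(-theta_obs**2 / (2.0 * th_b3**2))
    return 10.0**log_l_on * core

# power-law row of Table 2
print(l_iso_empirical(20.0, 51.0, 2.5, 18.0, 0.6, 0.0, 4.3, 1.45))
# two-Gaussian row of Table 2
print(l_iso_empirical(20.0, 51.62, 2.1, 22.5, 1.4, 0.0, 4.9, 1.75, R=0.048, th_b3=3.65))
```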
As found in Salafia et al. (2015) and Tan and Yu (2020), the \(\theta_{\rm obs}\)-dependence of the GRB luminosity can substantially influence the determination of the LF of SGRBs, which was usually found to have an apparent broken-power-law form (Wanderman and Piran, 2015; Ghirlanda et al., 2016; Tan and Yu, 2020), where the flat low-luminosity component within the luminosity range \(L_{\rm iso}\sim 10^{50}-10^{52}\) erg s\({}^{-1}\) can simply result from the angular structure of the jet. Therefore, following Tan and Yu (2020), the intrinsic LF of SGRBs, which describes the probability distribution of the on-axis luminosity \(L_{\rm on}\) of different SGRB jets, is assumed to have a simple power-law form:
\[\Phi(L_{\rm on})=\Phi_{*}\left(\frac{L_{\rm on}}{L_{\rm on}^{*}}\right)^{- \gamma}\exp\left(-\frac{L_{\rm on}^{*}}{L_{\rm on}}\right), \tag{16}\]
where \(\Phi_{*}\) is the normalization coefficient. Then, for an observational isotropic luminosity \(L_{\rm iso}\), its detection probability should be calculated by integrating over all observational directions as
\[p(L_{\rm iso})=\int\Phi(L_{\rm on}){\rm sin}\,\theta_{\rm obs}d\theta_{\rm obs}, \tag{17}\]
where the fact that the jets come in pairs (i.e., are two-sided) is taken into account. It is assumed that all SGRB jets have an angular profile identical to that of GRB 170817A, but the total energy of the jets, and thus the on-axis luminosity, can still differ from burst to burst. In the above integration, the value of \(L_{\rm on}\) can be determined by using Eq. (13) or (15) for adopted values of \(L_{\rm iso}\) and \(\theta_{\rm obs}\).
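Eq. (17) then reduces to a one-dimensional integral over the viewing angle once the normalized angular profile \(L_{\rm iso}(\theta_{\rm obs})/L_{\rm on}\) is specified. A minimal sketch follows (ours; the intrinsic-LF parameters and the toy profile below are placeholders rather than fitted values).

```python
import numpy as np

def intrinsic_lf(l_on, l_star, gamma, phi_star=1.0):
    """Eq. (16): power-law intrinsic luminosity function (parameters assumed)."""
    return phi_star * (l_on / l_star) ** (-gamma) * np.exp(-l_star / l_on)

def detection_prob(l_iso, profile, l_star=1.0e52, gamma=2.0, n_th=4000):
    """Eq. (17): integrate Phi(L_on) over viewing angle; profile(theta_deg) = L_iso/L_on."""
    th = np.radians(np.linspace(1.0e-3, 90.0, n_th))
    l_on = l_iso / profile(np.degrees(th))            # invert L_iso = L_on * profile
    y = intrinsic_lf(l_on, l_star, gamma) * np.sin(th)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(th)))   # trapezoid rule

# toy normalized profile, steeply decreasing outside a ~3 degree core
toy_profile = lambda th_deg: 1.0 / (1.0 + (th_deg / 3.0) ** 4)
print(detection_prob(1.0e50, toy_profile))
```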
### Modeling the flux and redshift distributions of SGRBs
For a comparison with the observational distributions of SGRBs in flux and redshift, we calculate the model-predicted SGRB numbers in different flux ranges and different redshift ranges by the following integrations (Tan and Yu, 2020)
\[N(P_{1},P_{2}) = \Delta\Omega T\int_{0}^{z_{\rm max}}\int_{P_{1}}^{P_{2}}\eta(P) \tag{18}\] \[\times \dot{R}_{\rm SGRB}(z)p(L_{\rm iso})dP\frac{dV(z)}{1+z},\]
and
\[N(z_{1},z_{2}) = \Delta\Omega T\int_{z_{1}}^{z_{2}}\int_{0}^{P_{\rm max}}\eta(P) \vartheta_{z}(z,P) \tag{19}\] \[\times \dot{R}_{\rm SGRB}(z)p(L_{\rm iso})dP\frac{dV(z)}{1+z},\]
respectively, where \(\Delta\Omega\) is the field of view of a telescope, \(T\) is the working time with a duty cycle of \(\sim\)50%, \(\eta(P)\) and \(\vartheta(z,P)\) are the trigger efficiency and the probability of redshift measurement, respectively, and \(dV(z)\) is the comoving cosmological volume element. The limit values of the redshift \(z_{\rm max}\) and the flux \(P_{\rm max}\) are taken according to the boundaries of the observational ranges. The isotropic luminosity is calculated by \(L_{\rm iso}=4\pi d_{L}^{2}Pk\), where \(k\) is a correction factor that converts the observational photon flux in the detector band to the energy flux in a fixed rest-frame band of \((1,10^{4})\) keV. The most crucial input of the above integrations
Figure 4: Same to Figure 3 but for a single-Gaussian jet structure.
Figure 5: Same to Figure 3 but for a two-Gaussian jet structure.
Figure 3: Comparison between the model-predicted and observational flux and redshift distributions of SGRBs for a power-law jet structure. The solid and open data circles are the results of the Swift and Fermi observations, respectively, which are taken from Tan and Yu (2020).
is the cosmic rate of SGRBs. Since SGRBs are produced by mergers of compact binaries, this rate can usually be connected with the cosmic star formation rates (CSFRs) through a delay time, which is determined by the formation process of the compact binaries and by the orbital decay through gravitational radiation. By assuming that the delay time \(\tau\) is distributed with a probability function \(F(\tau)\), we can express the cosmic SGRB rate as (e.g., Regimbau and Hughes, 2009; Zhu et al., 2013; Regimbau et al., 2015):
\[\dot{R}_{\rm SGRB}(z)\propto(1+z)\int_{\tau_{\rm min}}^{t(z)-t(z_ {\rm b})}\frac{\dot{\rho}_{\star}[t(z)-\tau]}{1+z[t(z)-\tau]}F(\tau)d\tau\] \[\propto (1+z)\int_{z[t(z)-\tau_{\rm min}]}^{z_{\rm b}}\frac{\dot{\rho}_{ \star}(z^{\prime})}{1+z^{\prime}}F[t(z)-t(z^{\prime})]\frac{dt}{dz^{\prime}}dz^ {\prime}, \tag{20}\]
where \(\dot{\rho}_{\star}(z)\) is the CSFR, \(t(z)=\int_{z}^{\infty}[(1+z^{\prime})H(z^{\prime})]^{-1}dz^{\prime}\), \(dt/dz=-[(1+z)H(z)]^{-1}\), and \(z_{\rm b}\) represents the redshift at which the binaries started to be formed. Finally, please see Cao et al. (2011) and Tan and Yu (2020) for the details of the expressions of \(\eta(P)\), \(\vartheta(z,P)\), \(k\), \(\dot{\rho}_{\star}(z)\), and \(F(\tau)\).
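Eq. (20) is a convolution of the CSFR with the delay-time distribution. The sketch below is our own illustration: the Madau-Dickinson-like CSFR shape, the choice \(F(\tau)\propto 1/\tau\) with a minimum delay, and the flat \(\Lambda\)CDM parameters are assumptions made here for concreteness, not the forms actually adopted in the paper (which follow Cao et al. 2011 and Tan and Yu 2020).

```python
import numpy as np

H0 = 70.0 * 1.0e5 / 3.086e24      # 70 km/s/Mpc in s^-1
OM, OL = 0.3, 0.7                 # assumed flat LambdaCDM

def hubble(z):
    return H0 * np.sqrt(OM * (1.0 + z) ** 3 + OL)

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def age(z, z_cut=100.0, n=4000):
    """t(z) = int_z^inf dz'/[(1+z')H(z')], truncated at z_cut."""
    grid = np.linspace(z, z_cut, n)
    return trapz(1.0 / ((1.0 + grid) * hubble(grid)), grid)

def csfr(z):
    """Assumed Madau & Dickinson (2014)-like CSFR shape (arbitrary units)."""
    return (1.0 + z) ** 2.7 / (1.0 + ((1.0 + z) / 2.9) ** 5.6)

def f_delay(tau, tau_min=20.0e6 * 3.156e7):
    """Assumed delay-time distribution F(tau) ~ 1/tau above tau_min [s]."""
    return np.where(tau > tau_min, 1.0 / np.maximum(tau, tau_min), 0.0)

def sgrb_rate(z, z_form_max=10.0, n_grid=600):
    """Un-normalized R_SGRB(z) following the structure of Eq. (20)."""
    zp = np.linspace(z, z_form_max, n_grid)          # formation redshifts z' >= z
    tau = age(z) - np.array([age(x) for x in zp])    # formation-to-merger delays
    integrand = csfr(zp) / (1.0 + zp) * f_delay(tau) / ((1.0 + zp) * hubble(zp))
    return (1.0 + z) * trapz(integrand, zp)

print(sgrb_rate(0.1), sgrb_rate(1.0))                # relative values only
```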
The comparison between the model-predicted distributions and the observational ones is presented in Figures 3, 4, and 5, where the observational data are taken from Tan and Yu (2020). According to the goodness of fit presented in Table 3, we can exclude the single-Gaussian model, just as found by Tan and Yu (2020), even though the large-angle emission effect is taken into account in this paper. In comparison, the power-law and two-Gaussian models fit comparably well, with the latter being relatively better. With the obtained values of the intrinsic LF parameters, the typical delay time, and the total rates of SGRBs, we plot the luminosity dependence of the event rates of SGRBs (i.e., the apparent LF) in Figure 6, in comparison with the event rate of \(190^{+440}_{-160}\rm{yr}^{-1}Gpc^{-3}\) inferred from GRB 170817A (Zhang et al., 2018). As shown, the power-law structure can determine an event rate for \(L_{\rm iso}\gtrsim 10^{47}\rm{erg\ s}^{-1}\) well consistent with the central value of the GRB 170817A rate, whereas the two-Gaussian model can only reach the marginal value of the error range.
## 4 Summary
The uncovering of the jet structure is one of the most crucial aims of GRB research, as it can help us understand the nature of their progenitors and central engines. It has long been expected that the jet structure can be constrained by using the observed afterglow emission. However, usually only an effective half-opening angle of the jet can be derived from the data, provided that a so-called jet break can be identified in the afterglow light curves. The discovery of GRB 170817A changed this awkward situation, because it was observed significantly off-axis. An immediate question is then whether the jet structures inferred from the afterglows of GRB 170817A are compatible with the statistical properties of the SGRB population, if it is assumed that all SGRBs, including GRB 170817A, share a common origin and explosive mechanism. Therefore, in this paper, we investigate three typical empirical jet structures, namely the power-law, single-Gaussian, and two-Gaussian cases, whose parameters are first constrained according to the afterglow data of GRB 170817A. It is further demonstrated that all three types of jet structures can account for the prompt luminosity of GRB 170817A, once the large-angle emission effect of relativistic jets is taken into account. However, the single-Gaussian structure fails to reproduce the redshift and flux distributions of SGRBs, in particular for the Swift data. By comparison, the power-law structure is most favored by the statistical results and, furthermore, predicts an event rate closest to the value inferred from GRB 170817A.
This work is supported by the National Key R&D Program of China (2021YFA0718500), the China Manned Space Project (CMS-CSST-2021-A12), the National Natural Science Foundation of China (grant Nos. 11833003 and U1838203), Hubei Provincial Outstanding Young and Middle-aged Science and Technology Innovation Team Project of China (T2021026), and the Key Laboratory Opening Fund (MOE) of China (grant No. QLPL2021P01).
Figure 6: The apparent LF given by the power-law and the two-Gaussian jet models, in comparison with the event rate \(190^{+440}_{-160}\rm{yr}^{-1}Gpc^{-3}\) inferred from the GRB 170817A observations represented by the shaded band (Zhang et al., 2018).
|
2303.13033
|
Federated Uncertainty-Aware Aggregation for Fundus Diabetic Retinopathy
Staging
|
Deep learning models have shown promising performance in the field of
diabetic retinopathy (DR) staging. However, collaboratively training a DR
staging model across multiple institutions remains a challenge due to non-iid
data, client reliability, and confidence evaluation of the prediction. To
address these issues, we propose a novel federated uncertainty-aware
aggregation paradigm (FedUAA), which considers the reliability of each client
and produces a confidence estimation for the DR staging. In our FedUAA, an
aggregated encoder is shared by all clients for learning a global
representation of fundus images, while a novel temperature-warmed uncertainty
head (TWEU) is utilized for each client for local personalized staging
criteria. Our TWEU employs an evidential deep layer to produce the uncertainty
score with the DR staging results for client reliability evaluation.
Furthermore, we developed a novel uncertainty-aware weighting module (UAW) to
dynamically adjust the weights of model aggregation based on the uncertainty
score distribution of each client. In our experiments, we collect five publicly
available datasets from different institutions to conduct a dataset for
federated DR staging to satisfy the real non-iid condition. The experimental
results demonstrate that our FedUAA achieves better DR staging performance with
higher reliability compared to other federated learning methods. Our proposed
FedUAA paradigm effectively addresses the challenges of collaboratively
training DR staging models across multiple institutions, and provides a robust
and reliable solution for the deployment of DR diagnosis models in real-world
clinical scenarios.
|
Meng Wang, Lianyu Wang, Xinxing Xu, Ke Zou, Yiming Qian, Rick Siow Mong Goh, Yong Liu, Huazhu Fu
|
2023-03-23T04:41:44Z
|
http://arxiv.org/abs/2303.13033v2
|
# Federated Uncertainty-Aware Aggregation
###### Abstract
Deep learning models have shown promising performance in the field of diabetic retinopathy (DR) staging. However, collaboratively training a DR staging model across multiple institutions remains a challenge due to non-iid data, client reliability, and confidence evaluation of the prediction. To address these issues, we propose a novel federated uncertainty-aware aggregation paradigm (FedUAA), which considers the reliability of each client and produces a confidence estimation for the DR staging. In our FedUAA, an aggregated encoder is shared by all clients for learning a global representation of fundus images, while a novel temperature-warmed uncertainty head (TWEU) is utilized for each client for local personalized staging criteria. Our TWEU employs an evidential deep layer to produce the uncertainty score with the DR staging results for client reliability evaluation. Furthermore, we developed a novel uncertainty-aware weighting module (UAW) to dynamically adjust the weights of model aggregation based on the uncertainty score distribution of each client. In our experiments, we collect five publicly available datasets from different institutions to conduct a dataset for federated DR staging to satisfy the real non-iid condition. The experimental results demonstrate that our FedUAA achieves better DR staging performance with higher reliability compared to other federated learning methods. Our proposed FedUAA paradigm effectively addresses the challenges of collaboratively training DR staging models across multiple institutions, and provides a robust and reliable solution for the deployment of DR diagnosis models in real-world clinical scenarios.
Keywords:Federated learning Uncertainty estimation DR staging.
## 1 Introduction
In the past decade, numerous deep learning-based methods for DR staging have been explored and achieved promising results [10, 11, 20, 28]. However, most current studies focus on centralized learning, which necessitates data collection from multiple institutions to a central server for model training. This approach poses significant data privacy risks. Additionally, in clinical practice, different institutions may have their own DR staging criteria [3]. Consequently, it is difficult for previous centralized DR staging methods to utilize data with varying DR staging criteria to train a unified model.
Federated learning (FL) is a collaborative learning framework that enables training a model without sharing data between institutions, thereby ensuring data privacy [15, 22]. In the FL paradigm, FedAvg [25] and its variants [1, 4, 9, 16, 23, 24, 19] are widely used and have achieved excellent performance in various medical tasks. However, these FL methods assign each client a static weight for model aggregation, which may prevent the global model from learning sufficient knowledge from clients with highly heterogeneous features and ignores the reliability of each client. In clinical practice, the data distributions of DR datasets often vary significantly between institutions due to medical resource constraints, population distributions, collection devices, and morbidity [26, 29]. This variation poses great challenges for federated DR staging methods. Moreover, most existing DR staging methods and FL paradigms mainly focus on performance improvement and ignore the confidence of the predictions. Therefore, it is essential to develop a new FL paradigm that can provide reliable DR staging results while maintaining high performance. Such a paradigm would reduce data privacy risks and increase user confidence in AI-based DR staging systems deployed in real-world clinical settings.
To address the issues, we propose **a novel FL paradigm, named FedUAA**, that employs a personalized structure to handle collaborative DR staging among multiple institutions with varying DR staging criteria. We utilize uncertainty to evaluate the reliability of each client's contribution. While uncertainty is a proposed measure to evaluate the reliability of model predictions [12, 14, 30], it remains an open topic in FL research. In our work, we introduce **a temperature-warmed evidential uncertainty (TWEU)** head to enable the model to generate a final result with uncertainty evaluation without sacrificing performance. Additionally, based on client uncertainty, we developed **an uncertainty-aware weighting module (UAW)** to dynamically aggregate models according to each client's uncertainty score distribution. This can improve collaborative DR staging across multiple institutions, particularly for clients with large data heterogeneity. Finally, we construct a **dataset for federated DR staging** based on five public datasets with different staging criteria from various institutions to satisfy the real non-iid condition.4 The comprehensive experiments demonstrate that FedUAA provides outstanding DR staging performance with a high degree of reliability, outperforming other state-of-the-art FL approaches.
Footnote 4: The code and dataset setting will be released upon acceptance.
## 2 Methodology
Fig. 1 (a) shows the overview of our proposed FedUAA. During training, local clients share the encoder (\(\varphi\)) with the cloud server for model aggregation, while the TWEU (\(\psi\)) head is retained locally to generate DR staging results with uncertainty evaluation, based on features from the encoder, so as to satisfy the local DR staging criteria. The algorithm of our proposed FedUAA is detailed in **Supplementary A**. Therefore, the objective of our FedUAA is:
\[\min_{\varphi\in\Phi,\psi\in\Psi}\sum_{i=1}^{N}\mathcal{L}\left(f_{i}\left(\varphi_ {i},\psi_{i}|X_{i}\right),Y_{i}\right), \tag{1}\]
where \(\mathcal{L}\) is the total loss for optimizing the model, \(f_{i}\) is the model of the \(i\)-th client, while \(X_{i}\) and \(Y_{i}\) are the input and label of the \(i\)-th client. Different from previous personalized FL paradigms [2, 4], our FedUAA dynamically adjusts the weights for model aggregation according to the reliability of each client, i.e., a client with larger distributional heterogeneity tends to have a larger uncertainty distribution and should be assigned a larger weight for model aggregation, strengthening attention on clients with heterogeneous data. Besides, by introducing TWEU, our FedUAA can generate a reliable prediction with an estimated uncertainty, which makes the model more reliable without losing DR staging performance.
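To illustrate the client/server split implied by Eq. (1), here is a minimal sketch of the server-side step: only the shared encoder parameters are aggregated, with client-specific weights, while each personalized TWEU head stays on its client. The dictionary representation and all names are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def aggregate_encoders(client_encoders, weights):
    """Weighted average of the shared encoder parameters; the personalized
    TWEU heads are never sent to the server."""
    aggregated = {}
    for name in client_encoders[0]:
        aggregated[name] = sum(w * enc[name]
                               for w, enc in zip(weights, client_encoders))
    return aggregated

# Toy example: three clients, one encoder tensor each.
clients = [{"conv1.weight": np.full((2, 2), v)} for v in (1.0, 2.0, 3.0)]
weights = np.array([0.2, 0.3, 0.5])       # e.g. produced by the UAW module
global_encoder = aggregate_encoders(clients, weights)
print(global_encoder["conv1.weight"])     # broadcast back to every client
```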
### Temperature-warmed evidential uncertainty head
To make the model more reliable without sacrificing DR staging performance, we propose a novel temperature-warmed evidence uncertainty head (TWEU), which
Figure 1: The overview of FedUAA (a) with TWEU module (b). An aggregated encoder is shared by all clients for learning a global representation of fundus images, while a novel TWEU head is kept on the local client for local personalized staging criteria. Furthermore, a novel UAW module is developed to dynamically adjust the weights for model aggregation based on the reliability of each client.
can directly generate DR staging results with an uncertainty score based on the features from the encoder. The framework of TWEU is illustrated in Fig. 1 (b). Specifically, we take one of the client models as an example and assume that the staging criteria of this client comprise \(K\) categories. Correspondingly, given a color fundus image as input, we can obtain its \(K+1\) non-negative mass values, whose sum is 1. This can be defined as \(\sum_{i=1}^{K}b_{i}+u=1\), where \(b_{i}\geq 0\) is the probability of the \(i\)-th category, while \(u\) represents the overall uncertainty score. Specifically, as shown in Fig. 1 (b), a local fully connected layer (FC) is used to learn the local DR category-related features \(F_{V}\), and the _Softplus_ activation function is adopted to obtain the evidence \(E=[e_{1},...,e_{K}]\) of the \(K\) staging categories based on \(F_{V}\), so as to ensure that its feature values are greater than 0. Then, \(E\) is re-parameterized by a Dirichlet concentration [5], as \(\mathbf{\alpha}=E+1,\ i.e,\ \alpha_{k}=e_{k}+1\), where \(\alpha_{k}\) and \(e_{k}\) are the \(k\)-th category Dirichlet distribution parameter and evidence, respectively. We further calculate the belief masses (\(\mathbf{b}\)) and corresponding uncertainty score (\(u\)) by \(b_{k}=\frac{e_{k}}{S}=\frac{\alpha_{k}-1}{S},\ u=\frac{K}{S}\), where \(S=\sum_{k=1}^{K}\alpha_{k}\) is the Dirichlet strength. Therefore, the probability assigned to category \(k\) is proportional to the observed evidence for category \(k\). Conversely, the less total evidence is obtained, the greater the uncertainty score will be. As shown in Fig. 1 (b), \(L_{Uce}\) is used to guide the model optimization based on the belief masses (\(\mathbf{b}\)) and their corresponding uncertainty score (\(u\)). Finally, a temperature coefficient \(\tau\) is introduced to further enhance the classifier's confidence in the belief masses, i.e., \(b_{Ti}=\frac{e^{(b_{i}/\tau)}}{\sum_{i=1}^{K}e^{(b_{i}/\tau)}}\), where \(\mathbf{b_{T}}=[b_{T1},...,b_{TK}]\) is the temperature-warmed belief mass vector. As shown in Fig. 1 (b), \(L_{Tce}\) is adopted to guide the model optimization based on the temperature-warmed belief features \(\mathbf{b_{T}}\).
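The per-sample computation of the head can be summarized in a few lines. The sketch below follows the evidence-to-Dirichlet mapping described above; the feature dimension, the random inputs, and the stand-in FC weight matrix are illustrative assumptions.

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def tweu_head(features, W, tau=0.05):
    """Evidence -> Dirichlet -> belief masses, uncertainty, and
    temperature-warmed beliefs for a single sample."""
    evidence = softplus(W @ features)        # e_k >= 0 from the local FC layer
    alpha = evidence + 1.0                   # Dirichlet parameters alpha_k
    S = alpha.sum()                          # Dirichlet strength
    b = (alpha - 1.0) / S                    # belief mass b_k = e_k / S
    u = len(evidence) / S                    # uncertainty score u = K / S
    b_T = np.exp(b / tau) / np.exp(b / tau).sum()   # temperature-warmed beliefs
    return b, u, b_T

rng = np.random.default_rng(0)
K, d = 5, 16                                 # toy: 5 DR grades, 16-dim feature
b, u, b_T = tweu_head(rng.normal(size=d), rng.normal(size=(K, d)))
print(f"sum(b) + u = {b.sum() + u:.3f}, uncertainty = {u:.3f}")
```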
### Uncertainty-aware weighting module
Most existing FL paradigms aggregate model parameters by assigning a fixed weight to each client, resulting in limited performance on those clients with large heterogeneity in their data distributions. To address this issue, as shown in Fig. 1 (a), we propose a novel uncertainty-aware weighting (UAW) module that can dynamically adjust the weights for model aggregation based on the reliability of each client, which enables the model to better leverage the knowledge from different clients and further improve the DR staging performance. Specifically, at the end of a training epoch, each client-side model produces an uncertainty score distribution (\(U\)), and the corresponding ground truth \(U^{GT}\), which marks incorrect predictions, can be calculated from the final predictions \(P\) by,
\[u_{i}^{GT}=1-\mathbf{1}\left\{P_{i},Y_{i}\right\},\ \text{where}\ \mathbf{1} \left\{P_{i},Y_{i}\right\}=\begin{cases}1&\text{if}\ P_{i}=Y_{i}\\ 0&\text{otherwise}\end{cases}, \tag{2}\]
where \(P_{i}\) and \(Y_{i}\) are the final prediction result and the ground truth of the \(i\)-th sample in the local dataset. Based on \(U\) and \(U^{GT}\), we can find the optimal uncertainty score \(\theta\), which reflects the reliability of the local client. To this end, we calculate the ROC curve between \(U\) and \(U^{GT}\), and obtain all possible sensitivity (\(Sens\))
and specificity (\(Spes\)) values corresponding to each uncertainty score (\(u\)) used as a threshold. Then, Youden index (\(J\)) [7] is adopted to obtain the optimal uncertainty score \(\theta\) by:
\[\theta=\arg\max_{u}J\left(u\right),\text{ with }\ J\left(u\right)=Sens\left(u \right)+Spes\left(u\right)-1. \tag{3}\]
More details about the Youden index are given in **Supplementary B**. Finally, the optimal uncertainty scores \(\Theta=[\theta_{1},...,\theta_{N}]\) of all clients are sent to the server, and a Softmax function is introduced to normalize \(\Theta\) to obtain the weights for model aggregation as \(w_{i}=e^{\theta_{i}}/\sum_{i=1}^{N}e^{\theta_{i}}\). Therefore, the aggregation weight of a client increases with its optimal threshold. Generally, a local dataset with a larger uncertainty distribution will have a higher optimal uncertainty score \(\theta\), indicating that the feature learning capacity of the client model needs to be improved to further enhance its confidence in the feature representation; thus, a higher weight should be assigned during model aggregation.
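A minimal sketch of the UAW computation is given below, using scikit-learn's ROC routine and synthetic predictions and uncertainty scores; the helper names and the toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_curve

def optimal_uncertainty_threshold(u_scores, preds, labels):
    """Eqs. (2)-(3): treat 'prediction is wrong' as the positive class and
    pick the uncertainty threshold that maximizes the Youden index J."""
    u_gt = (preds != labels).astype(int)
    fpr, tpr, thresholds = roc_curve(u_gt, u_scores)
    j = tpr - fpr                        # J = Sens + Spes - 1
    return float(thresholds[np.argmax(j)])

def aggregation_weights(thetas):
    """Server-side softmax over the clients' optimal uncertainty scores."""
    e = np.exp(np.asarray(thetas))
    return e / e.sum()

# Toy example with three synthetic clients.
rng = np.random.default_rng(0)
thetas = []
for _ in range(3):
    labels = rng.integers(0, 5, size=200)
    preds = np.where(rng.random(200) < 0.7, labels, rng.integers(0, 5, size=200))
    u = np.clip(rng.normal(0.3 + 0.4 * (preds != labels), 0.1), 0.0, 1.0)
    thetas.append(optimal_uncertainty_threshold(u, preds, labels))
print(aggregation_weights(thetas))
```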
## 3 Loss function
As shown in Fig. 1 (b), the loss function of the client model is:
\[L=L_{Uce}+L_{Tce}, \tag{4}\]
where \(L_{Uce}\) is adopted to guide the model optimization based on the features (\(\mathbf{b}\) and \(u\)) that were parameterized by the Dirichlet concentration. Given the evidence \(E=[e_{1},...,e_{K}]\), we can obtain the Dirichlet distribution parameters \(\mathbf{\alpha}=E+1\), the category-related belief masses \(\mathbf{b}=[b_{1},...,b_{K}]\), and the uncertainty score \(u\). Therefore, the original cross-entropy loss is improved as,
\[L_{Ice}\!=\!\int\left[\sum_{k=1}^{K}-y_{k}\log\left(b_{k}\right)\right]\frac{ 1}{\beta\left(\alpha\right)}\prod_{k=1}^{K}b_{k}^{\alpha_{k}-1}db=\sum_{k=1}^ {K}y_{k}\left(\Phi\left(S\right)-\Phi\left(\alpha_{k}\right)\right), \tag{5}\]
where \(\Phi(\cdot)\) is the digamma function, while \(\beta\left(\alpha\right)\) is the multinomial beta function for the Dirichlet concentration parameter \(\alpha\). Meanwhile, the _KL_ divergence function is introduced to ensure that incorrect predictions will yield less evidence:
\[L_{KL}=\log\left(\frac{\Gamma\left(\sum_{k=1}^{K}\tilde{\alpha}_{k}\right)}{\Gamma\left(K\right)\prod_{k=1}^{K}\Gamma\left(\tilde{\alpha}_{k}\right)}\right)+\sum_{k=1}^{K}\left(\tilde{\alpha}_{k}-1\right)\left[\Phi\left(\tilde{\alpha}_{k}\right)-\Phi\left(\sum_{k^{\prime}=1}^{K}\tilde{\alpha}_{k^{\prime}}\right)\right], \tag{6}\]
where \(\Gamma(\cdot)\) is the gamma function, while \(\tilde{\alpha}=y+\left(1-y\right)\odot\alpha\) represents the adjusted parameters of the Dirichlet distribution which aims to avoid penalizing the evidence of the ground-truth class to 0. In summary, the loss function \(L_{Uce}\) for the model optimization based on the features that were parameterized by Dirichlet concentration is as follows:
\[L_{Uce}=L_{Ice}+\lambda*L_{KL}, \tag{7}\]
where \(\lambda\) is the balance factor for \(L_{KL}\). To prevent the model from focusing too much on the KL divergence in the initial stage of training, which would cause a lack of exploration of the parameter space, we initialize \(\lambda\) as 0 and increase it gradually to 1 with the number of training iterations. Moreover, as seen in Sec. 2.1, the Dirichlet concentration alters the original feature distribution of \(F_{V}\), which may reduce the model's confidence in the category-related evidence features and thus potentially lead to a decrease in performance. To address this problem, as shown in Fig. 1 (b), we introduce a temperature coefficient to enhance confidence in the belief masses, and the loss function \(L_{Tce}\) guiding the model optimization based on the temperature-warmed belief features \(\mathbf{b_{T}}\) is formalized as:
\[L_{Tce}=-\sum_{i=1}^{K}y_{i}log\left(b_{Ti}\right). \tag{8}\]
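For reference, a per-sample sketch of the combined loss in Eqs. (4)-(8) is given below, written with plain numpy/scipy; the toy evidence vector, the \(\lambda\) value, and the function names are illustrative assumptions rather than the released training code.

```python
import numpy as np
from scipy.special import digamma, gammaln

def feduaa_loss(evidence, y_onehot, tau=0.05, lam=1.0):
    """L = L_Uce + L_Tce with L_Uce = L_Ice + lam * L_KL, for one sample."""
    K = evidence.size
    alpha = evidence + 1.0
    S = alpha.sum()
    # Eq. (5): Dirichlet-integrated cross-entropy.
    L_ice = np.sum(y_onehot * (digamma(S) - digamma(alpha)))
    # Eq. (6): KL(Dir(alpha_tilde) || Dir(1)) with true-class evidence removed.
    alpha_t = y_onehot + (1.0 - y_onehot) * alpha
    S_t = alpha_t.sum()
    L_kl = (gammaln(S_t) - gammaln(float(K)) - gammaln(alpha_t).sum()
            + np.sum((alpha_t - 1.0) * (digamma(alpha_t) - digamma(S_t))))
    # Eq. (8): cross-entropy on the temperature-warmed belief masses.
    b = (alpha - 1.0) / S
    b_T = np.exp(b / tau) / np.exp(b / tau).sum()
    L_tce = -np.sum(y_onehot * np.log(b_T + 1e-12))
    return L_ice + lam * L_kl + L_tce

y = np.eye(5)[2]                             # toy ground-truth DR grade
print(feduaa_loss(np.array([0.1, 0.2, 4.0, 0.3, 0.1]), y, lam=0.5))
```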
## 4 Experimental results
**Dataset and Implementation:** We construct a dataset for federated DR staging based on 5 public datasets, including APTOS (3,662 samples) 5, Messidor (1,200 samples) [6], DDR (13,673 samples) [21], KaggleDR (35,126 samples) (DRR) 6, and IDRiD (516 samples) [27], where each dataset is regarded as a client. More details of the datasets are given in **Supplementary C**.
Footnote 5: [https://www.kaggle.com/datasets/mariaherreroot/aptos2019](https://www.kaggle.com/datasets/mariaherreroot/aptos2019)
Footnote 6: [https://www.kaggle.com/competitions/diabetic-retinopathy-detection](https://www.kaggle.com/competitions/diabetic-retinopathy-detection)
We conduct experiments with PyTorch on a 3090 GPU. SGD with a learning rate of 0.01 is used. The batch size is set to 32, the number of epochs is 100, and the temperature coefficient \(\tau\) is empirically set to 0.05. To facilitate training, the images are resized to 256\(\times\)256 before being fed to the model.
**Performance for DR Staging:** Table 1 shows the DR staging AUC for different FL paradigms on different clients. Our FedUAA achieves the highest AUC scores on all clients, with a 1.48% improvement in average AUC compared to FedBN [24], which achieved the highest average AUC score among the compared
\begin{table}
\begin{tabular}{l||c|c|c|c|c||c} \hline Methods & APTOS & DDR & DRR & Messidor & IDRiD & Average \\ \hline SingleSet & 0.9059 & 0.8776 & 0.8072 & 0.7242 & 0.7168 & 0.8063 \\ \hline FedRep [4] & 0.9372 & 0.8964 & 0.8095 & 0.7843 & 0.8047 & 0.8464 \\ \hline FedBN [24] & 0.9335 & 0.9003 & 0.8274 & 0.7792 & 0.8193 & 0.8519 \\ \hline FedProx [23] & 0.9418 & 0.8950 & 0.8127 & 0.7877 & 0.8049 & 0.8484 \\ \hline FedDyn [1] & 0.9352 & 0.8778 & 0.8022 & 0.7264 & 0.5996 & 0.7882 \\ \hline SCAFFOLD [16] & 0.9326 & 0.8590 & 0.7251 & 0.7288 & 0.6619 & 0.7815 \\ \hline FedDC [9] & 0.9358 & 0.8858 & 0.7969 & 0.7390 & 0.7581 & 0.8236 \\ \hline Moon [19] & 0.9436 & 0.8995 & 0.8117 & 0.7907 & 0.8115 & 0.8514 \\ \hline Proposed & **0.9445** & **0.9044** & **0.8379** & **0.8012** & **0.8299** & **0.8636** \\ \hline \end{tabular}
\end{table}
Table 1: AUC results for different FL methods applied to DR staging.
methods. Meanwhile, most FL-based approaches achieve higher DR staging performance than SingleSet, suggesting that collaborative training across multiple institutions can improve the performance of DR staging while preserving data privacy. Moreover, as shown in Table 1, FL paradigms such as FedDyn [1] and SCAFFOLD [16] exhibit limited performance in our collaborative DR staging task due to the varying staging criteria across different clients, as well as significant differences in label distribution and domain features. These results indicate that our FedUAA is more effective than other FL methods for collaborative DR staging tasks. Furthermore, although all FL methods achieve comparable performance on the APTOS and DDR clients with distinct features, our FedUAA significantly improves performance on clients with small data volumes or large heterogeneity in distribution, such as DRR, Messidor, and IDRiD, by 1.27%, 1.33%, and 1.29% over the second-best results, respectively, which further demonstrates the effectiveness of our core idea of adaptively adjusting aggregation weights based on the reliability of each client.
**Reliability Analysis:** Providing a reliable evaluation of the final predictions is crucial for AI models to be deployed in clinical practice. As illustrated in Fig. 2 (b), the model without uncertainty (Backbone) assigns high probability values to incorrect staging results without any alert message, which is a significant cause of low user confidence when AI models are deployed in medical practice. Interestingly, our FedUAA can evaluate the reliability of the final decision through the uncertainty score. For example, for data with obvious features (Fig. 2 (a)), our FedUAA produces a correct prediction with a low uncertainty score, indicating that the decision is reliable. Conversely, even if our FedUAA gives an incorrect decision for data with ambiguous features (Fig. 2 (b)), it can indicate that the diagnosis may be unreliable by assigning a higher uncertainty score, suggesting that the subject should seek a double check from an ophthalmologist to avoid misdiagnosis. Furthermore, as shown in Fig. 2 (c), we degraded the quality of the input images by adding different levels of Gaussian noise \(\sigma^{2}\) to further verify the robustness of FedUAA. As seen from Fig. 2 (c), the performance of all methods decreases as the level of added
Figure 2: (a) Instance of being correctly predicted (b) Sample with incorrect prediction result (c) Average AUC of different methods with increasing noise levels (\(\sigma^{2}\)).
noise increases; however, our FedUAA still maintains a higher performance than the other compared methods, demonstrating the robustness of our FedUAA.
**Ablation Study:** We also conduct ablation experiments to verify the effectiveness of the components in our FedUAA. In this paper, the pre-trained ResNet50 [13] is adopted as our backbone (BC) for SingleSet DR staging, while employing FedBN [24] as the FL BC. Furthermore, most ensemble-based [18] and MCDropout-based [8] uncertainty methods are challenging to extend to our federated DR staging task across multiple institutions with different staging criteria. Therefore, we compare our proposed method with the commonly used evidential based uncertainty approach (EU (\(L_{Uce}\))) [12].
For training with SingleSet, as shown in Table 2, the Dirichlet concentration alters the original feature distribution of the backbone [12], decreasing the model's confidence in category-related evidence; consequently, directly introducing EU (BC+EU (\(L_{Uce}\))) for DR staging leads to a performance drop. In contrast, our proposed BC+TWEU (\(L_{Uce}\)+\(L_{Tce}\)) achieves superior performance compared to BC and BC+EU (\(L_{Uce}\)), demonstrating that TWEU (\(L_{Uce}\)+\(L_{Tce}\)) enables the model to generate a reliable final decision without sacrificing performance. For training with FL, as shown in Table 2, BC+FL outperforms SingleSet, indicating that introducing FL can effectively improve the performance of DR staging while maintaining high data privacy. Besides, FL+EU (\(L_{Uce}\)) and FL+TWEU (\(L_{Uce}\)+\(L_{Tce}\)) lead to similar conclusions as in SingleSet, further proving the effectiveness of TWEU. Meanwhile, our FedUAA (FL+TWEU (\(L_{Uce}\)+\(L_{Tce}\))+UAW) achieves higher performance than FL+TWEU (\(L_{Uce}\)+\(L_{Tce}\)) and the FL backbone, especially for clients with large data distribution heterogeneity such as DRR, Messidor, and IDRiD. These results show that our proposed UAW can further improve the performance of FL in collaborative DR staging tasks.
## 5 Conclusion
In this paper, focusing on the challenges of collaborative DR staging between institutions with different DR staging criteria, we propose a novel FedUAA by combining FL with evidential uncertainty theory. Compared to other FL methods, our FedUAA can produce reliable and robust DR staging results with
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline Strategy & BC & EU & TWEU & UAW & APTOS & DDR & DRR & Messidor & IDRiD & Average \\ \hline \multirow{3}{*}{SingleSet} & ✓ & ✗ & ✗ & ✗ & 0.9059 & 0.8776 & 0.8072 & 0.7242 & 0.7168 & 0.8063 \\ \cline{2-11} & ✓ & ✓ & ✗ & ✗ & 0.9286 & 0.8589 & 0.8001 & 0.7404 & 0.6928 & 0.8042 \\ \cline{2-11} & ✓ & ✗ & ✓ & ✗ & 0.9414 & 0.8912 & 0.8279 & 0.7309 & 0.7616 & 0.8306 \\ \hline \multirow{3}{*}{FL} & ✓ & ✗ & ✗ & ✗ & 0.9335 & 0.9003 & 0.8274 & 0.7792 & 0.8193 & 0.8519 \\ \cline{2-11} & ✓ & ✓ & ✗ & ✗ & 0.9330 & 0.8572 & 0.7938 & 0.7860 & 0.7783 & 0.8297 \\ \cline{1-1} \cline{2-11} & ✓ & ✗ & ✓ & ✗ & 0.9445 & 0.8998 & 0.8229 & 0.8002 & 0.8231 & 0.8581 \\ \cline{1-1} \cline{2-11} & ✓ & ✗ & ✓ & ✓ & **0.9445** & **0.9044** & **0.8379** & **0.8012** & **0.8299** & **0.8636** \\ \hline \end{tabular}
\end{table}
Table 2: AUC results for different FL paradigms applied to DR staging.
uncertainty evaluation, and further enhance the collaborative DR staging performance by dynamically aggregating knowledge from different clients based on their reliability. Comprehensive experimental results show that our FedUAA addresses the challenges in collaborative DR staging across multiple institutions, and achieves a robust and reliable DR staging performance.
|
2307.06877
|
The complexity of non-stationary reinforcement learning
|
The problem of continual learning in the domain of reinforcement learning,
often called non-stationary reinforcement learning, has been identified as an
important challenge to the application of reinforcement learning. We prove a
worst-case complexity result, which we believe captures this challenge:
Modifying the probabilities or the reward of a single state-action pair in a
reinforcement learning problem requires an amount of time almost as large as
the number of states in order to keep the value function up to date, unless the
strong exponential time hypothesis (SETH) is false; SETH is a widely accepted
strengthening of the P $\neq$ NP conjecture. Recall that the number of states
in current applications of reinforcement learning is typically astronomical. In
contrast, we show that just $\textit{adding}$ a new state-action pair is
considerably easier to implement.
|
Christos Papadimitriou, Binghui Peng
|
2023-07-13T16:25:04Z
|
http://arxiv.org/abs/2307.06877v1
|
# The complexity of non-stationary reinforcement learning
###### Abstract
The problem of continual learning in the domain of reinforcement learning, often called non-stationary reinforcement learning, has been identified as an important challenge to the application of reinforcement learning. We prove a worst-case complexity result, which we believe captures this challenge: Modifying the probabilities or the reward of a single state-action pair in a reinforcement learning problem requires an amount of time almost as large as the number of states in order to keep the value function up to date, unless the strong exponential time hypothesis (SETH) is false; SETH is a widely accepted strengthening of the P \(\neq\) NP conjecture. Recall that the number of states in current applications of reinforcement learning is typically astronomical. In contrast, we show that just _adding_ a new state-action pair is considerably easier to implement.
## 1 Introduction
Reinforcement learning (RL) [11], the branch of machine learning seeking to create machines that react to a changing environment so as to maximize long-term utility, has recently seen tremendous advances through deep learning [12, 13], as well as a vast expansion of its applicability and reach to many application domains, including board games, robotics, self-driving cars, control, and many more. As with most aspects of deep learning, one of the most important current challenges in deep RL lies in handling situations in which the model undergoes changes. Variably called _non-stationary RL, continual RL, multi-task RL, or life-long RL_, the problem of enabling RL to react effectively and gracefully to sequences of changes in the underlying Markov model has been identified as an important open problem in practice, see the prior work subsection for many references, and [14] for a recent survey of the challenge and the available remedies.
When it becomes clear that a particular computational problem is difficult, the field of _computational complexity_[15, 16, 1] comes into play: the search for mathematical obstacles to the efficient solution of problems. The identification of such obstacles is often informative about the kinds of remedies one needs to apply to the problem. As far as we can tell, the computational complexity of non-stationary RL (NSRL) has not been explored in the past; in contrast, see [2] for an example of recent progress in identifying complexity obstacles in continual learning of _classification_ tasks.
_In this paper, we initiate the analysis of NSRL from the standpoint of computational complexity._ We consider finite horizon MDPs -- it is easy to see that our results can be extended very easily to infinite horizon MDPs. We ask the following question: Suppose that we have already solved a finite-horizon MDP, and that the MDP changes in some small way; how difficult is it to modify the solution? If the solution we want to update is an explicit mapping from states to actions, then it is not hard to see that this is hopeless: a small local change can cause a large proportion of the values of this map to change1. However, recall that deep RL is not about computing explicitly the optimum solution of the problem; it is about maintaining an implicit representation of a good _approximation_ of the optimum solution. An efficient NSRL algorithm only needs to update the value or policy efficiently when visiting the state. Our results address precisely this aspect of the difficulty.
Footnote 1: For example, consider the extreme example where a change in an action increases the value of the next state, and this in turn changes the optimum actions in almost all other states.
We consider elementary local changes to the RL problem, which we believe capture well the nature of the NSRL problem: We choose a state-action pair and we modify somehow its parameters: the reward, and the transition probability distribution. Our results hold for the most elementary possible change: We only modify two transition probabilities in this state-action pair. (Notice that it is impossible to modify only one probability in a distribution...) We prove that, under widely accepted complexity assumptions to be explained soon, the amount of computation needed to update an \(\epsilon\)-optimal value approximation in the face of such an elementary change is, in the worst case, comparable to the number of states (the precise result is stated below). Since in the problems currently solved by deep RL the number of states of the underlying MDP is typically astronomical, such a prediction is bad indeed -- it means that we essentially have to start all over because of a small change. Now, in deep learning we know well that a worst-case result is never the last word on the difficulty of a problem. However, we believe that an alarming worst-case result, established for an aspect of the problem which has been identified in practice to be a challenge, is a warning sign which may yield valuable hints about the corrective action that needs to be taken in order to overcome the current bottleneck.
We complement this lower bound with a positive result for a different kind of change: _adding a new action_ to a state. It turns out that this is a simpler problem, and an \(\epsilon\)-approximate solution can be updated in time polynomial in \(\frac{1}{\epsilon}\) and the horizon.
### Related work
Non-stationary MDPs have been studied extensively in recent years from the point of view of dynamic regret [1, 2, 3, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2, 2, 3]; in [1] an algorithm with total regret \(\widetilde{O}(S^{1/3}A^{1/3}\Delta^{1/3}HT^{2/3})\) is provided, where \(T\) is the total number of iterations and \(\Delta\) is the variational budget that measures the total change of the MDP. Another line of work focuses on the statistical problem of detecting changes in the environment, see [1, 2, 1, 1], and [1, 1] for recent surveys; in particular, [1] mentions the computational difficulty of the change problem addressed in this paper. Several approaches to NSRL -- e.g., [1, 2] -- resort to _restarting_ the learning process if enough change has accumulated; our results suggest that, indeed, restarting may be preferable to updating. Additional literature can be found in Appendix A.
### A brief overview of the main result
Our main result (Theorem 3.1) states that, in the worst case, an elementary change in an MDP -- just updating two transition probabilities in one action at one state of the MDP -- requires time \((SAH)^{1-o(1)}\), where \(S\) is the number of states, \(A\) is the number of actions, and \(H\) is the horizon. The proof is based on the Strong Exponential Time Hypothesis (SETH), which is a central conjecture in complexity, a refinement of \(\mathsf{P}\neq\mathsf{NP}\). SETH has many applications in graph algorithms [1, 2, 1, 3, 1, 1, 2], edit distance [1], nearest neighbor search [1], kernel estimation [1, 1] and many other domains; see [1] for a comprehensive survey. SETH states that, if the \(k\)-SAT problem (the Boolean satisfiability problem when each clause contains at most \(k\) literals) can be solved in time \(O(2^{c_{k}n})\), then the limit of \(c_{k}\) as \(k\) grows is one. Our work is based on the important result of [1] on the hardness, under SETH, of approximating the bichromatic Maximum Inner Product (Max-IP) problem. Subsequent work has improved the approximation parameter [1, 2] and applied the technique to the Dynamic Coverage problem [1, 2].
We reduce from the Max-IP problem, where we are given two collections of sets \(B_{1},\ldots,B_{n}\) and \(C_{1},\ldots,C_{n}\), over a small universe \([m]\) with \(m=n^{o(1)}\). It is known from [1] that it is hard to distinguish between the following two scenarios: (a) \(C_{j}\subseteq B_{i}\) for some \(i,j\in[n]\), and (b) \(|B_{i}\cap C_{j}|\leq|C_{j}|/2^{\log(n)^{1-o(1)}}\) for all \(i,j\in[n]\). That is, it is hard to tell the difference between the case of a complete containment and the case of tiny intersections. The first step of our reduction is to construct a finite-horizon MDP such that the states of the first step (\(h=1\)) correspond to the sets \(B_{1},\ldots,B_{n}\) and the states of the second step (\(h=2\)) correspond to the universe \([m]\). The states of the second step have either high reward or low reward, depending on the time \(t\). By applying a sequence of changes to the state-action transitions in the second step, based on the structure of the sets \(C_{1},\ldots,C_{n}\), one obtains a reduction from Max-IP establishing a lower bound of \(S^{2-o(1)}\) for this sequence. However, since this sequence is of length \(S^{1+o(1)}\) (because of the size of the \(C_{j}\) sets), we obtain an \(\Omega(S^{1-o(1)})\) amortized lower bound for each step of the sequence, and this completes the reduction to the NSRL problem.
The construction so far yields an approximation \(\epsilon\) that is very small (about \(S^{-o(1)}\)). We need a second stage of our construction to amplify \(\epsilon\) to some constant such as \(0.1\). This is achieved by stacking multiple layers of the basic construction outlined above. Finally, by spreading the state-actions across multiple steps, we improve the lower bound to \(\Omega((SAH)^{1-o(1)})\).
The complete proof can be found in Section 3.
## 2 Preliminary Definitions
Here we shall define non-stationary MDPs. Let \(\mathcal{S}\) be a state space (\(|\mathcal{S}|=S\)), \(\mathcal{A}\) an action space (\(|\mathcal{A}|=A\)), and \(H\in\mathbb{Z}_{+}\) the planning horizon. Next let \(T\in\mathbb{Z}_{+}\) be the number of _rounds:_ The intention is that the MDP will be repeated \(T\) times, with action parameters changed between rounds.
A _non-stationary finite horizon MDP_ is a set of \(T\) MDPs (\(\{\mathcal{S}_{h},\mathcal{A}_{h},P_{t,h},r_{t,h}\}_{t\in[T],h\in[H]},s_{\text{ init}}\)). \(\mathcal{S}_{h}\subseteq\mathcal{S}\) is the state space and \(\mathcal{A}_{h}\subseteq\mathcal{A}\) is the action space at the \(h\)-th step (\(h\in[H]\)), and \(P_{t,h}:\mathcal{S}_{h}\times\mathcal{A}_{h}\rightarrow\Delta_{\mathcal{S}_{h +1}}\) is the transition function, where \(\Delta_{\mathcal{S}_{h}}\) is the set of all probability distributions over \(\mathcal{S}_{h}\), and \(r_{t,h}:\mathcal{S}_{h}\times\mathcal{A}_{h}\rightarrow[0,1]\) is the reward function at the \(h\)-th step of the \(t\)-th round (\(h\in[H],t\in[T]\)). We use \(s_{\text{init}}\in\mathcal{S}_{1}\) to denote the initial state.
We focus on deterministic non-stationary policies \(\pi=(\pi_{1},\ldots,\pi_{T})\), though our results apply to randomized policies as well. Let \(\pi_{t}=(\pi_{t,1},\ldots,\pi_{t,H})\) be the policy of the \(t\)-th round (\(t\in[T]\)) and \(\pi_{t,h}:\mathcal{S}_{h}\rightarrow\mathcal{A}_{h}\) (\(h\in[H]\)) be the decision at the \(h\)-th step. Given a policy \(\pi\), the \(Q\)-value of a state-action pair \((s,a)\in\mathcal{S}_{h}\times\mathcal{A}_{h}\) at the \(t\)-th round is determined by
\[Q_{t,h}^{\pi_{t}}(s,a)=r_{t,h}(s,a)+\mathbb{E}\left[\sum_{\ell=h+1}^{H}r_{t,\ell} (s_{t,\ell},\pi_{t,\ell}(s_{t,\ell}))\mid s_{t,h}=s,a_{t,h}=a\right]\quad \forall s\in\mathcal{S}_{h},a\in\mathcal{A}_{h}\]
and the \(V\)-value
\[V_{t,h}^{\pi_{t}}(s)=\mathbb{E}\left[\sum_{\ell=h}^{H}r_{t,\ell}(s_{t,\ell},\pi _{t,\ell}(s_{t,\ell}))\mid s_{t,h}=s\right]\quad\forall s\in\mathcal{S}_{h}.\]
Let \(\pi_{t}^{*}\) be the optimal policy at the \(t\)-th round, and \(Q_{t}^{*}\), \(V_{t}^{*}\) be its \(Q\)-value and \(V\)-value. The goal is to maintain an \(\epsilon\)-approximate value function. In particular, we require the algorithm to maintain an \(\epsilon\)-approximate estimate \(V_{t}\) of the value of the initial state \(s_{\text{init}}\), such that for all rounds \(t\in[T]\),
\[\big{|}V_{t}-V_{t,1}^{*}(s_{\text{init}})\big{|}\leq\epsilon.\]
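For intuition about what it means to keep the value function up to date, the sketch below computes \(V^{*}_{h}\) exactly by backward induction for a small tabular finite-horizon MDP; the array layout and the toy instance are illustrative assumptions.

```python
import numpy as np

def optimal_values(P, r):
    """Backward induction for a finite-horizon MDP.
    P[h] has shape (S, A, S) and r[h] has shape (S, A); returns V*[h][s]."""
    H, S = len(P), P[0].shape[0]
    V = np.zeros((H + 1, S))
    for h in range(H - 1, -1, -1):
        Q = r[h] + P[h] @ V[h + 1]      # Q[s, a] = r(s, a) + E[V*_{h+1}(s')]
        V[h] = Q.max(axis=1)
    return V[:-1]

# Toy MDP: 2 states, 2 actions, horizon 3, uniform transitions, random rewards.
rng = np.random.default_rng(0)
H, S, A = 3, 2, 2
P = [np.full((S, A, S), 1.0 / S) for _ in range(H)]
r = [rng.random((S, A)) for _ in range(H)]
print(optimal_values(P, r)[0])          # exact V*_1 for each state
```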
Updates. All \(T\) MDPs of our definition must be solved, one after the other, despite the fact that their parameters change from one round to the next. The updates are meant to be extremely simple and local: For the \(t\)-th update, an adversary picks an arbitrary state-action pair \((s_{h},a_{h})\in\mathcal{S}_{h}\times\mathcal{A}_{h}\) and changes its reward from \(r_{t-1,h}(s_{h},a_{h})\) to \(r_{t,h}(s_{h},a_{h})\). It also changes the transition function from \(P_{t-1,h}(s_{h},a_{h})\) to \(P_{t,h}(s_{h},a_{h})\), such that these two distributions differ in _exactly two states._ That is, the change in the distribution is the smallest kind imaginable: _Two next states are chosen, and the probability mass of the first is transferred to the second_ -- obviously, two discrete distributions cannot differ in exactly one probability.
**Remark 2.1**.: _Implementing an elementary change of this kind takes constant time: If the distribution is represented in tabular form, the two entries of the table are changed. It holds similarly when the MDP is accessed via a sampling oracle (a.k.a. the generative model): all one has to do is change the output states._
**Remark 2.2**.: _Notice that the kind of changes we consider is the simplest possible, and yet a sequence of such changes can simulate any desirable change. Hence, by showing in the next section that even these changes are computationally intractable, we establish that NSRL is intractable._
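In the same toy tabular format as the sketch above, the elementary change itself is indeed a constant-time operation; the names below are illustrative.

```python
import numpy as np

def elementary_update(P, r, h, s, a, s_from, s_to, delta, new_reward=None):
    """Remark 2.1: move probability mass `delta` from next state s_from to
    next state s_to for the pair (s, a) at step h, in O(1) time, and
    optionally overwrite the reward of that pair."""
    assert 0.0 <= delta <= P[h][s, a, s_from]
    P[h][s, a, s_from] -= delta
    P[h][s, a, s_to] += delta           # the row still sums to one
    if new_reward is not None:
        r[h][s, a] = new_reward

P = [np.full((2, 2, 2), 0.5)]
r = [np.zeros((2, 2))]
elementary_update(P, r, h=0, s=0, a=1, s_from=0, s_to=1, delta=0.25, new_reward=0.9)
print(P[0][0, 1], r[0][0, 1])
```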
Incremental action change. We also consider a different type of NSRL, where the MDP changes only through the introduction of a new action.2 The setup is similar to NSRL: we assume the initial MDP has \(S\) states and \(H\) steps but an empty action set. Then, in each round \(t\in[T]\), a new state-action pair \((s_{h},a_{h})\) is added to the MDP, together with its transition probability \(P_{h}(s_{h},a_{h})\) and reward \(r_{h}(s_{h},a_{h})\). Note that the crucial difference from NSRL is that no change occurs on any existing state-action pair. There are a total of \(T\) rounds and, therefore, \(T\) state-action pairs at the end. The incremental action model captures application scenarios that involve exploration or expansion of environments (e.g., incremental training).
Footnote 2: Note the introduction of a new _state_ can be achieved through a sequence of action additions.
## 3 Hardness of NSRL
The main result is the following:
**Theorem 3.1** (Main result, hardness of NSRL).: _Let \(S,A,H,T\) be sufficiently large integers, with horizon \(H\geq(SA)^{o(1)}\). Then, unless \(\mathsf{SETH}\) is false, there is no algorithm with amortized runtime \(O((SAH)^{1-o(1)})\) per update that can approximate the optimal value of a non-stationary MDP over a sequence of \(T\) updates. In particular, any algorithm with better runtime fails to distinguish between these two cases:_
* _The optimal policy has value at least_ \(\frac{H}{4}\) _at some round_ \(t\in[T]\)_;_
* _The optimal policy has value at most_ \(\frac{H}{100}\) _for all_ \(T\) _rounds._
Our result is based on the widely accepted Strong Exponential Time Hypothesis (\(\mathsf{SETH}\)).
**Conjecture 3.2** (Strong Exponential Time Hypothesis (\(\mathsf{SETH}\)), [16]).: _For any \(\epsilon>0\), there exists \(k\geq 3\) such that the \(k\)-SAT problem on \(n\) variables cannot be solved in time \(O(2^{(1-\epsilon)n})\)._
Note that \(\mathsf{SETH}\) is stronger than the \(\mathsf{P}\neq\mathsf{NP}\) assumption, a strengthening that allows the proof of _polynomial_ lower bounds on problems that have a polynomial-time algorithm -- such as NSRL.
The starting point of our reduction is the following hardness result for the Bichromatic Maximum Inner Product (Max-IP) problem, whose proof is based on the machinery of distributed PCP.
**Theorem 3.3** (Bichromatic Maximum Inner Product (Max-IP) [1]).: _Let \(\gamma>0\) be any constant, and let \(n\in\mathbb{Z}_{+}\), \(m=n^{o(1)}\), \(w=2^{(\log(n))^{1-o(1)}}\). Given two collections of sets \(\mathcal{B}=\{B_{1},\ldots,B_{n}\}\) and \(\mathcal{C}=\{C_{1},\ldots,C_{n}\}\) over universe \([m]\), satisfying \(|B_{1}|=\cdots=|B_{n}|=b\) and \(|C_{1}|=\cdots=|C_{n}|=c\) for some \(b,c\in[m]\). Unless \(\mathsf{SETH}\) is false, no algorithm can distinguish the following two cases in time \(O(n^{2-\gamma})\):_
* **YES instance.** _There exists two sets_ \(B\in\mathcal{B}\)_,_ \(C\in\mathcal{C}\) _such that_ \(C\subseteq B\)_;_
* **NO instance.** _For every_ \(B\in\mathcal{B}\) _and_ \(C\in\mathcal{C}\)_,_ \(|B\cap C|\leq c/w\)_._
Parameters. We reduce Max-IP to NSRL. For any sufficiently large parameters \(S,A,H,T\), let
\[n=T^{1/2-o(1)}\cdot(SAH)^{1/2}\quad\text{and}\quad m=n^{o(1)}\]
be the input parameters of Max-IP. Given a Max-IP instance with sets \(B_{1},\ldots,B_{n}\) and \(C_{1},\ldots,C_{n}\) over a ground set \([m]\), recall that \(b,c\in[m]\) are the sizes of the sets \(\{B_{i}\}_{i\in[n]}\) and \(\{C_{i}\}_{i\in[n]}\), respectively. Let
\[L=\lceil b/c\rceil\quad\text{and}\quad N=\frac{SAH}{16L(\log_{2}(S)+2)}.\]
We shall divide \(\{B_{i}\}_{i\in[n]}\) into \(K=n/N\) batches and each batch contains \(N\) sets. That is, \(\{B_{i}\}_{i\in[n]}=\{B_{k,\nu}\}_{k\in[K],\nu\in[N]}\). In the proof, we assume the total number of updates \(SAH\leq T\leq\operatorname{poly}(SAH)\), i.e., it is polynomially bounded.
### Construction of a hard instance
We first describe the MDP at the initial stage (\(t=0\)), with state space \(\{\mathcal{S}_{h}\}_{h\in[H]}\), action space \(\{\mathcal{A}_{h}\}_{h\in[H]}\), transition function \(\{P_{h}\}_{h\in[H]}\) and reward function \(\{r_{h}\}_{h\in[H]}\). A (simplified) illustration can be found in Figure 1. We omit the subscript \(t=0\) for simplicity.
Horizon. We divide the entire horizon into two phases
\[[H]=\mathcal{H}_{1}\cup\mathcal{H}_{2},\quad\text{where}\quad\mathcal{H}_{1}= [H/2]\quad\text{and}\quad\mathcal{H}_{2}=[H/2:H]\,.\]
The second phase is relatively simple and involves only two terminal states that provide rewards. The first phase is more involved and determines the destination state.
The first phase contains \(L\) layers, and each layer contains \(H/2L\) steps
\[\mathcal{H}_{1}=\mathcal{H}_{1,1}\cup\cdots\cup\mathcal{H}_{1,L},\quad\text{ where}\quad\mathcal{H}_{1,\ell}=\left[(\ell-1)\cdot\frac{H}{2L}+1:\ell\cdot\frac{H}{2L} \right]\,\forall\ell\in[L].\]
The layers are used for amplifying the difference between good and bad policies. The structure of the MDP is identical for each layer, except for the last step of the last layer.
For each layer \(\ell\in[L]\), we further divide it into \(G:=\frac{H}{2L(\log_{2}(S)+2)}\) groups, and each group contains \(\log_{2}(S)+2\) steps,
\[\mathcal{H}_{1,\ell}=\mathcal{H}_{1,\ell,1}\cup\cdots\cup\mathcal{H}_{1,\ell,G}\]
where
\[\mathcal{H}_{1,\ell,g}=\left[(\ell-1)\cdot\frac{H}{2L}+(g-1)(\log_{2}(S)+2)+1 :(\ell-1)\cdot\frac{H}{2L}+g\cdot(\log_{2}(S)+2)\right]\,\forall g\in[G].\]
We set \(h(\ell,g,\tau):=(\ell-1)(H/2L)+(g-1)(\log_{2}(S)+2)+\tau\) to be the \(\tau\)-th step of the \(g\)-th group of the \(\ell\)-th layer, where \(\tau\in[\log_{2}(S)+2],g\in[G],\ell\in[L]\). For simplicity, we also write \(h(\ell,g)=h(\ell,g,\log_{2}(S)+2)\) and \(h(\ell)=h(\ell,G)\) for the last step of each group and layer, respectively.
Figure 1: A snapshot of the hard instance
States. There are five types of states: terminal states, element states, set states, routing states, and the pivotal state.
* **Terminal states.** There are two terminal states \(s^{\mathsf{t}}_{1}\) and \(s^{\mathsf{t}}_{2}\), and they appear at every step \(h\in[H]\). We use \(s^{\mathsf{t}}_{h,1},s^{\mathsf{t}}_{h,2}\) to denote the terminal states at \(\mathcal{S}_{h}\).
* **Element states.** There are \(m\) element states \(\{s^{\mathsf{e}}_{u}\}_{u\in[m]}\) that appear at every step \(h\in\mathcal{H}_{1}\) of phase one. We use \(s^{\mathsf{e}}_{h,u}\) to denote the \(u\)-th element state at \(\mathcal{S}_{h}\).
* **Set states.** There are \(S/4\) set states \(\{s^{\mathsf{b}}_{i}\}_{i\in[S/4]}\). The set states only appear on the second last step of each group \(\mathcal{H}_{\ell,g}\). In particular, for each layer \(\ell\in[L]\), group \(g\in[G]\), let \(s^{\mathsf{b}}_{h(\ell,g)-1,i}\) denote the \(i\)-th (\(i\in[S/4]\)) set state at \(\mathcal{S}_{h(\ell,g)-1}\).
* **Pivotal state.** There is one pivotal state \(s^{\mathsf{p}}\) that appears at every step \(h\in\mathcal{H}_{1}\) of Phase 1, denoted as \(s^{\mathsf{p}}_{h}\). The MDP starts with the pivotal state, i.e., \(s_{\text{init}}:=s^{\mathsf{p}}_{1}\).
* **Routing states.** The routing states are used for reaching set states. There are \(S/4\) routing states \(\{s^{\mathsf{r}}_{\alpha}\}_{\alpha\in[S/4]}\) that appear at the \([2:\log_{2}(S)]\)-th steps of each group. In particular, at layer \(\ell\in[L]\), group \(g\in[G]\), step \(\tau\in[2:\log_{2}(S)]\), let \(\{s^{\mathsf{r}}_{h(\ell,g,\tau),\alpha}\}_{\alpha\in[1:2^{\tau-2}]}\) be the collection of routing states at \(\mathcal{S}_{h(\ell,g,\tau)}\).
The total number of possible states is at most \(2+m+S/4+S/4+1\leq S\).
Actions. There are five types of actions: the terminal action \(a^{\mathsf{t}}\), the element action \(a^{\mathsf{e}}\), the set actions \(\{a^{\mathsf{b}}_{j}\}_{j\in[A/2]}\), the pivotal actions \(\{a^{\mathsf{p}}_{1},a^{\mathsf{p}}_{2}\}\), and the routing actions \(\{a^{\mathsf{r}}_{1},a^{\mathsf{r}}_{2}\}\). The total number of actions is at most \(A/2+6\leq A\), and we assume these actions appear at every step \(h\in[H]\).
Reward. The only state that returns a non-zero reward is the terminal state \(\{s^{\mathsf{t}}_{h,1}\}_{h\in\mathcal{H}_{2}}\). Formally, we set
\[r_{h}(s,a)=0\quad\text{when}\quad h\in\mathcal{H}_{1}\quad\text{and}\quad r_{h }(s,a)=\begin{cases}1&s=s^{\mathsf{t}}_{h,1}\\ 0&\text{otherwise}\end{cases}\quad\text{when}\quad h\in\mathcal{H}_{2}. \tag{1}\]
Transitions. We next specify the transition probability of the initial MDP.
**(a) Terminal states.** The transition of terminal states is deterministic and always keeps the state terminal, that is
\[P_{h}(s^{\mathsf{t}}_{h,1},a)=\mathbf{1}\{s^{\mathsf{t}}_{h+1,1}\}\quad\text{ and}\quad P_{h}(s^{\mathsf{t}}_{h,2},a)=\mathbf{1}\{s^{\mathsf{t}}_{h+1,2}\} \quad\forall h\in[H-1],a\in\mathcal{A}. \tag{2}\]
Here we use \(\mathbf{1}\{s\}\in\Delta_{\mathcal{S}_{h+1}}\) to denote the one-hot vector that is \(1\) at state \(s\) and \(0\) otherwise. Combining with the definition of reward functions, the MDP guarantees that a policy receives \(H/2\) reward once it goes to the first terminal state \(s^{\mathsf{t}}_{h,1}\) at some step \(h\in\mathcal{H}_{2}\). Meanwhile, it receives \(0\) reward if it ever goes to the second terminal state \(s^{\mathsf{t}}_{h,2}\).
**(b) Element states.** At step \(h<H/2\), for any element \(u\in[m]\), the transition function of \(s^{\mathsf{e}}_{h,u}\) equals
\[P_{h}(s^{\mathsf{e}}_{h,u},a^{\mathsf{e}})=\begin{cases}\mathbf{1}\{s^{ \mathsf{p}}_{h+1}\}&h=h(\ell)\text{ for some }\ell\in[L-1]\\ \mathbf{1}\{s^{\mathsf{e}}_{h+1,u}\}&\text{otherwise}\end{cases} \tag{3}\]
and
\[P_{h}(s^{\mathsf{e}}_{h,u},a)=\mathbf{1}\{s^{\mathsf{e}}_{h+1,u}\},\quad \forall a\in\mathcal{A}\backslash\{a^{\mathsf{e}}\}. \tag{4}\]
That is, the element state \(s^{\mathsf{e}}_{h,u}\) always stays on itself, except at the end of each layer \(\ell\in[L]\), it can go to the pivotal state.
At the end of the first phase, the transition of the element states is determined by the set \(C\). In the initialization stage (\(t=0\)), let \(C_{0}\subseteq[m]\) be an arbitrary set of size \(c\), which will be replaced later, and let
\[P_{H/2}(s^{\mathsf{e}}_{H/2,u},a^{\mathsf{e}})=\begin{cases}\mathbf{1}\{s^{ \mathsf{t}}_{H/2+1,1}\}&u\in C_{0}\\ \mathbf{1}\{s^{\mathsf{t}}_{H/2+1,2}\}&u\notin C_{0}\end{cases}\]
and
\[P_{H/2}(s^{\mathsf{e}}_{H/2,u},a)=\mathbf{1}\{s^{\mathsf{t}}_{H/2+1,2}\}, \quad\forall a\in\mathcal{A}\backslash\{a^{\mathsf{e}}\}.\]
That is, if the element \(u\in C_{0}\), then it can go to a high reward terminal state \(s^{\mathsf{t}}_{H/2+1,1}\); otherwise it goes to the no-reward terminal \(s^{\mathsf{t}}_{H/2+1,2}\). Looking ahead, we would update the state-action pairs \(\{(s^{\mathsf{e}}_{H/2,u},a^{\mathsf{e}})\}_{u\in[m]}\) according to sets \(\{C_{i}\}_{i\in[n]}\) periodically.
**(c) Set states.** The transition function of set states is determined by the sets \(\{B_{k,\nu}\}_{k\in[K],\nu\in[N]}\). In the initialization stage (\(t=0\)), let \(\{B_{0,\nu}\}_{\nu\in[N]}\) be arbitrary sets of size \(b\) and they would be replaced later in the update sequence. Recall that a set state would appear at the second last step of a group \(\mathcal{H}_{\ell,g}\), for some layer \(\ell\in[L]\) and group \(g\in[G]\). Let
\[N(g,i,j):=(g-1)(S/4)(A/2)+(i-1)(A/2)+j,\]
and therefore,
\[\{N(g,i,j):g\in[G],i\in[S/4],j\in[A/2]\}=[N].\]
The transition function of state-action pair \((s^{\mathsf{b}}_{h(\ell,g)-1,i},a^{\mathsf{b}}_{j})\) equals
\[P_{h(\ell,g)-1}(s^{\mathsf{b}}_{h(\ell,g)-1,i},a^{\mathsf{b}}_{j})=\text{unif }(s^{\mathsf{e}}_{h(\ell,g),u}:u\in B_{0,N(g,i,j)})\quad\forall g\in[G],i\in[ S/4],j\in[A/2]. \tag{5}\]
Here the RHS is the uniform distribution over the element states \(s^{\mathsf{e}}_{h(\ell,g),u}\) for element \(u\in B_{0,N(g,i,j)}\). For the rest of actions, it goes to the no-reward terminal \(s^{\mathsf{t}}_{h(\ell,g),2}\):
\[P_{h(\ell,g)-1}(s^{\mathsf{b}}_{h(\ell,g)-1,i},a)=\mathbf{1}\{s^{\mathsf{t}} _{h(\ell,g),2}\}\quad\forall a\in\mathcal{A}\backslash\{a^{\mathsf{e}}_{j}\}_ {j\in[A/2]}\]
**(d) Pivotal states.** The pivotal state \(s^{\mathsf{p}}_{h}\) appears at every step \(h\in\mathcal{H}_{1}\), and for \(h<H/2-1\), the transition function equals
\[P_{h}(s^{\mathsf{p}}_{h},a)=\begin{cases}\mathbf{1}\{s^{\mathsf{r}}_{h+1,1}\} &a=a^{\mathsf{p}},h=h(\ell,g,1)\text{ for some }\ell\in[L],g\in[G]\\ \mathbf{1}\{s^{\mathsf{p}}_{h+1}\}&\text{ otherwise}\end{cases} \tag{6}\]
That is, the pivotal state stays on itself, except at the first step of \(\mathcal{H}_{\ell,g}\), it could go to the routing state \(s^{\mathsf{r}}_{h(\ell,g,2),1}\).
At the \(H/2\)-th step, it goes to the no-reward terminal \(s^{\mathsf{t}}_{H/2+1,2}\),
\[P_{H/2}(s^{\mathsf{p}}_{H/2},a)=\mathbf{1}\{s^{\mathsf{t}}_{H/2+1,2}\}\quad \forall a\in A.\]
**(e) Routing states.** Recall that \(\{s^{\mathsf{r}}_{h(\ell,g,\tau),\alpha}\}_{\alpha\in[1:2^{\tau-2}]}\) is the collection of routing states at the \(\tau\)-th step (\(\tau\in[2:\log_{2}(S)]\)), the \(g\)-th group (\(g\in[G]\)), and the \(\ell\)-th layer (\(\ell\in[L]\)).
When \(\tau\in[2:\log_{2}(S)-1]\), the transition function equals
\[P_{h(\ell,g,\tau)}(s^{\mathsf{r}}_{h(\ell,g,\tau),\alpha},a)=\begin{cases} \mathbf{1}\{s^{\mathsf{r}}_{h(\ell,g,\tau+1),2\alpha-1}\}&a=a^{\mathsf{r}}_{1} \\ \mathbf{1}\{s^{\mathsf{r}}_{h(\ell,g,\tau+1),2\alpha}\}&a=a^{\mathsf{r}}_{2} \quad,\quad\forall\alpha\in[2^{\tau-2}].\\ \mathbf{1}\{s^{\mathsf{t}}_{h(\ell,g,\tau+1),2}\}&\text{ otherwise}\end{cases} \tag{7}\]
In other words, the routing state \(s^{\mathsf{r}}_{h(\ell,g,\tau),\alpha}\) goes to either \(s^{\mathsf{r}}_{h(\ell,g,\tau+1),2\alpha-1}\) or \(s^{\mathsf{r}}_{h(\ell,g,\tau+1),2\alpha}\), depending on the choice of actions (unless it goes to the no-reward terminal \(s^{\mathsf{t}}_{h(\ell,g,\tau+1),2}\)).
When \(\tau=\log_{2}(S)\), the routing state \(s^{\mathsf{r}}_{h(\ell,g,\log_{2}(S)),\alpha}\) goes to the set state \(s^{\mathsf{b}}_{h(\ell,g)-1,\alpha}\) (\(\alpha\in[S/4]\)), that is,
\[P_{h(\ell,g,\log_{2}(S))}(s^{\mathsf{r}}_{h(\ell,g,\log_{2}(S)), \alpha},a)=\mathbf{1}\{s^{\mathsf{b}}_{h(\ell,g)-1,\alpha}\},\quad\forall\alpha\in[S/4],a \in\mathcal{A}. \tag{8}\]
The entire transition of routing states within a group works like a binary search tree: it comes from the pivotal state and goes to one of the set states. We note that if \(S\leq A\) the construction could be simplified: we can remove routing states and have a pivotal state directly go to set states. This completes the description of the initial MDP.
Update sequence. We next specify the sequence of updates to the MDP. The sequence of updates is divided into \(K=n/N\) stages, and each stage contains \(n\) epochs.
At the beginning of each stage, the update occurs on the state-action pairs for set-states:
\[\{(s^{\mathsf{b}}_{h(\ell,g)-1,i},a^{\mathsf{b}}_{j})\}_{\ell\in[ L],g\in[G],i\in[S/4],j\in[A/2]}\]
Concretely, there is an initialization phase at the beginning of the \(k\)-th stage (\(k\in[K]\)). Let \(t(k)\in[T]\) be the end of the initialization phase, at which nature sets
\[P_{t(k),h(\ell,g)-1}(s^{\mathsf{b}}_{h(\ell,g)-1,i},a^{\mathsf{ b}}_{j})=\mathrm{unif}(s^{\mathsf{e}}_{h(\ell,g),u}:u\in B_{k,N(g,i,j)})\quad \forall\ell\in[L],g\in[G],i\in[S/4],j\in[A/2].\]
Each stage contains \(n\) epochs, and during each epoch the update occurs on the element state-action pairs \(\{(s^{\mathsf{e}}_{H/2,u},a^{\mathsf{e}})\}_{u\in[m]}\) at the \(H/2\)-th step. Let \(t(k,\tau)\in[T]\) be the end of the \(\tau\)-th epoch (\(\tau\in[n]\)) of the \(k\)-th stage (\(k\in[K]\)). In the \(\tau\)-th epoch (\(\tau\in[n]\)), for each element \(u\in[m]\), the transition function is updated to
\[P_{t(k,\tau),H/2}(s^{\mathsf{e}}_{H/2,u},a^{\mathsf{e}})=\begin{cases} \mathbf{1}\{s^{\mathsf{t}}_{H/2+1,1}\}&u\in C_{\tau}\\ \mathbf{1}\{s^{\mathsf{t}}_{H/2+1,2}\}&u\notin C_{\tau}\end{cases}. \tag{9}\]
To count the total number of updates, there are \(K=n/N\) stages. The initialization takes at most \(O(SAHm)\) updates; there are \(n\) epochs, and each epoch contains at most \(2m\) updates. Hence the total number of updates equals \((n/N)\cdot O(SAHm+2mn)\approx T\).
### Analysis
We now proceed to prove Theorem 3.1. For any stage \(k\in[K]\) and epoch \(\tau\in[n]\), we compute the \(V\)-value of the optimal policy. The proofs can be found in Appendix B.
**Lemma 3.4** (\(V\)-value, terminal states).: _At the end of stage \(k\in[K]\) and epoch \(\tau\in[n]\), for any step \(h\in[H]\), the \(V\)-value of the optimal policy at terminal states satisfies \(V^{*}_{t(k,\tau),h}(s^{\mathsf{t}}_{h,1})=\min\{H+1-h,H/2\}\) and \(V^{*}_{t(k,\tau),h}(s^{\mathsf{t}}_{h,2})=0\)._
**Lemma 3.5** (\(V\)-value, element states).: _At the end of stage \(k\in[K]\) and epoch \(\tau\in[n]\), for any layer \(\ell\in[L]\) and any step \(h\in\mathcal{H}_{1,\ell}\)_
* _For any element_ \(u\in C_{\tau}\)_,_ \(V^{*}_{t(k,\tau),h}(s^{\mathsf{e}}_{h,u})=H/2\)_; and_
* _For any element_ \(u\notin C_{\tau}\)_, we have_ \(V^{*}_{t(k,\tau),h}(s^{\mathsf{e}}_{h,u})=V^{*}_{t(k,\tau),h(\ell)+1}(s^{ \mathsf{p}}_{h(\ell)+1})\)_._
_Here we take \(V_{t(k,\tau),H/2+1}(s_{H/2+1}^{\mathsf{p}}):=0\)._
**Lemma 3.6** (\(V\)-value, set states).: _At the end of stage \(k\in[K]\) and epoch \(\tau\in[n]\), for each level \(\ell\in[L]\), group \(g\in[G]\), we have_
\[V_{t(k,\tau),h(\ell,g)-1}^{*}(s_{h(\ell,g)-1,i}^{\mathsf{b}})\] \[= \max_{j\in[A/2]}\left\{\frac{|C_{\tau}\cap B_{k,N(g,i,j)}|}{b} \cdot\frac{H}{2}+\left(1-\frac{|C_{\tau}\cap B_{k,N(g,i,j)}|}{b}\right)\cdot V _{t(k,\tau),h(\ell)+1}^{*}(s_{h(\ell)+1}^{\mathsf{p}})\right\}\]
**Lemma 3.7** (\(V\)-value, pivotal state).: _At the end of stage \(k\in[K]\) and epoch \(\tau\in[n]\), for each level \(\ell\in[L]\), the \(V\)-value of the pivotal state satisfies_
\[V_{t(k,\tau),h(\ell-1)+1}^{*}(s_{h(\ell-1)+1}^{\mathsf{p}})\] \[= \max_{\nu\in[N]}\left\{\frac{|C_{\tau}\cap B_{k,\nu}|}{b}\cdot \frac{H}{2}+\left(1-\frac{|C_{\tau}\cap B_{k,\nu}|}{b}\right)\cdot V_{t(k,\tau ),h(\ell)+1}^{*}(s_{h(\ell)+1}^{\mathsf{p}})\right\}.\]
As a corollary, we can compute the \(V\)-value of the initial state.
**Lemma 3.8** (\(V\)-value, initial state).: _Let \(\kappa_{k,\tau}=\max_{\nu\in[N]}\frac{|C_{\tau}\cap B_{k,\nu}|}{b}\), then at the end of stage \(k\) and epoch \(\tau\in[n]\), one has_
\[V_{t(k,\tau),1}^{*}(s_{\mathrm{init}})=(1-(1-\kappa_{k,\tau})^{L})\cdot\frac{ H}{2}.\]
Now we can complete the proof of Theorem 3.1.
Proof of Theorem 3.1.: If the input of Max-IP is a YES instance, suppose \(C_{\tau}\subseteq B_{k,\nu}\) for some \(\tau\in[n],k\in[K],\nu\in[N]\); then \(\kappa_{k,\tau}=c/b=1/L\). By Lemma 3.8, the value of \(s_{\mathrm{init}}\) at the end of stage \(k\) and epoch \(\tau\) satisfies
\[V_{t(k,\tau),1}^{*}(s_{\mathrm{init}})=(1-(1-\kappa_{k,\tau})^{L})\cdot\frac{ H}{2}=(1-(1-1/L)^{L})\cdot\frac{H}{2}\geq\frac{H}{4}.\]
In the NO instance case, we have
\[\kappa_{k,\tau}\leq c/(wb)\quad\text{where}\quad w=2^{\log(n)^{1-o(1)}}=\omega(1),\]
then the value of \(s_{\mathrm{init}}\) at the end of any stage \(k\in[K]\), epoch \(\tau\in[n]\) is at most
\[V_{t(k,\tau),1}^{*}(s_{\mathrm{init}})=(1-(1-\kappa_{k,\tau})^{L})\cdot H/2 \leq(1-(1-1/wL)^{L})\cdot\frac{H}{2}\leq\frac{1}{w}\cdot\frac{H}{2}\leq\frac{ H}{100}.\]
Now we bound the amortized runtime. By Theorem 3.3, assuming SETH, the total runtime of any NSRL algorithm should be at least \(n^{2-o(1)}\), and therefore, the amortized runtime per update should be at least \(n^{2-o(1)}/T=(SAH)^{1-o(1)}\cdot T^{-o(1)}\approx(SAH)^{1-o(1)}\) when \(T=\mathrm{poly}(SAH)\). This completes the proof.
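As a quick numerical sanity check of the two cases above (illustrative only, using an arbitrary horizon), the following snippet verifies the \(H/4\) versus \(H/100\) separation implied by Lemma 3.8:

```python
H = 100.0

def v_init(kappa, L):
    """V-value of the initial state from Lemma 3.8."""
    return (1.0 - (1.0 - kappa) ** L) * H / 2.0

for L in [1, 2, 5, 10, 100]:
    # YES case: kappa = 1/L, so the value is at least H/4.
    assert v_init(1.0 / L, L) >= H / 4.0
    # NO case: kappa <= 1/(w*L), so the value is at most H/(2w) (<= H/100 for w >= 50).
    for w in [50, 100, 1000]:
        assert v_init(1.0 / (w * L), L) <= H / (2 * w) + 1e-9
```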
**Remark 3.9**.: _The statement of Theorem 3.1 asserts the decision version of NSRL requires \((SAH)^{1-o(1)}\) time per update. The same lower bound translates directly to the task of maintaining an approximate \(V\)-value or maintaining an approximately optimal policy._
## 4 Incremental action changes
When the MDP changes only through the introduction of new actions, we can maintain an \(\epsilon\)-approximation to the value with amortized runtime that depends polynomially only on \(H\) and \(\frac{1}{\epsilon}\) (and not on \(S\)).
**Theorem 4.1** (Efficient algorithm, incremental changes).: _There is an algorithm with amortized runtime \(\widetilde{O}(H^{5}/\epsilon^{3})\) per update that maintains an \(\epsilon\)-approximation of the value over any sequence of \(T\) insertions of actions._
The approach is given as Algorithm 1; an illustrative Python sketch follows the pseudocode below. It combines the classic \(Q\)-value iteration with lazy updates of the \(V\)-value. For each new state-action pair \((s_{h},a_{h})\), it constructs the empirical transition kernel using samples from \(P_{h}(s_{h},a_{h})\). The newly added action could potentially affect the state value, and our algorithm propagates the change -- lazily -- to downstream states. That is, an update to the \(V\)-value is triggered only if the new estimate significantly exceeds the previous one. The key mathematical intuition is the monotonicity of the \(V\)-value under incremental action changes. The amortized runtime of Algorithm 1 is bounded because the \(Q\)-value of each state-action pair is updated rarely, at most \(\widetilde{O}(H^{3}/\epsilon^{2}\cdot H^{2}/\epsilon)=\widetilde{O}(H^{5}/\epsilon^{3})\) times, due to the sparsity of the empirical transition kernel and the lazy updates. The correctness of our algorithm follows from a standard Bernstein-type bound and a robust analysis of \(Q\)-value iteration. The detailed proof can be found in Appendix C.
```
1:Initialize \(N\gets H^{3}\log^{3}(SHT)/\epsilon^{2}\), \(\widehat{V}_{h}(s_{h})\gets 0,\widetilde{V}_{h}(s_{h})\gets 0\), \(\forall s_{h}\in\mathcal{S}_{h},h\in[H]\)
2:procedureInsert(\(s_{h},a_{h}\))
3: Generate \(N\) samples \(\{\widehat{s}_{h+1,1},\dots,\widehat{s}_{h+1,N}\}\) from \(P_{h}(s_{h},a_{h})\) and reward \(r_{h}(s_{h},a_{h})\)
4:\(\widehat{P}_{h}(s_{h},a_{h})\leftarrow\text{unif}\{\widehat{s}_{h+1,1},\dots, \widehat{s}_{h+1,N}\}\)
5: Call Propagate
6:endprocedure
7:procedurePropagate
8:for\(h=H,H-1,\dots,1\)do
9:for state-action pair \((s_{h},a_{h})\in\mathcal{S}_{h}\times\mathcal{A}_{h}\)do\(\triangleright\) Update only if there is a change
10:\(\widehat{Q}_{h}(s_{h},a_{h})\gets r_{h}(s_{h},a_{h})+\mathbb{E}_{s_{h+1} \sim\widehat{P}_{h}(s_{h},a_{h})}\,\widetilde{V}_{h+1}(s_{h+1})\)
11:\(\widehat{V}_{h}(s_{h})\leftarrow\max_{a_{h}}\widehat{Q}(s_{h},a_{h})\)
12:If\(\widetilde{V}_{h}(s_{h})\leq\widehat{V}_{h}(s_{h})-\epsilon/4H\)then\(\widetilde{V}_{h}(s_{h})\leftarrow\widehat{V}_{h}(s_{h})\)
13:endfor
14:endfor
15:endprocedure
```
**Algorithm 1** Lazy updated \(Q\)-value iteration (Lazy-QVI)
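A minimal Python sketch of Algorithm 1 is shown below. It is illustrative only: the environment interfaces `sample_transitions(s, a, h, N)` and `reward(s, a, h)` are assumptions rather than interfaces from the paper, and the bookkeeping is simplified.

```python
import math
from collections import defaultdict

class LazyQVI:
    """Sketch of Algorithm 1 (Lazy-QVI): lazily updated Q-value iteration
    under action insertions."""

    def __init__(self, H, S, T, eps, sample_transitions, reward):
        self.H, self.eps = H, eps
        # Line 1 of Algorithm 1: number of samples per inserted state-action pair.
        self.N = math.ceil(H ** 3 * math.log(S * H * T) ** 3 / eps ** 2)
        self.sample_transitions, self.reward = sample_transitions, reward
        self.P_hat = {}                    # (h, s, a) -> list of N sampled next states
        self.actions = defaultdict(set)    # (h, s)    -> actions inserted so far
        self.V_hat = defaultdict(float)    # (h, s)    -> latest estimate
        self.V_tilde = defaultdict(float)  # (h, s)    -> lazily refreshed estimate

    def insert(self, h, s, a):
        """Procedure Insert: build the empirical kernel for (s, a), then propagate."""
        self.P_hat[(h, s, a)] = self.sample_transitions(s, a, h, self.N)
        self.actions[(h, s)].add(a)
        self._propagate()

    def _propagate(self):
        """Procedure Propagate: backward pass with lazy V-value refreshes."""
        for h in range(self.H, 0, -1):
            for (hh, s), acts in list(self.actions.items()):
                if hh != h:
                    continue
                q = {a: self.reward(s, a, h)
                        + sum(self.V_tilde[(h + 1, s2)] for s2 in self.P_hat[(h, s, a)]) / self.N
                     for a in acts}
                self.V_hat[(h, s)] = max(q.values())
                # Refresh V_tilde only on a change larger than eps/4H (line 12).
                if self.V_tilde[(h, s)] <= self.V_hat[(h, s)] - self.eps / (4 * self.H):
                    self.V_tilde[(h, s)] = self.V_hat[(h, s)]
```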
Theorem 4.1 provides an efficient algorithm for maintaining an approximately optimal policy; a natural question is whether one can maintain the exact optimal policy (or value function) under incremental action changes. We give a negative answer, showing that \(T^{1-o(1)}\) amortized runtime per update is necessary if one wants to maintain an \(O(1/T)\)-approximation to the value of the optimal policy.
**Theorem 4.2** (Lower bound, exact optimal policy).: _Unless \(\mathsf{SETH}\) is false, there is a sequence of \(T\) action insertions such that any algorithm maintaining an \(O(1/T)\)-approximation to the value of the optimal policy requires \(T^{1-o(1)}\) amortized runtime per update._
## 5 Discussion
The importance of a complexity result rests on its capacity to inform the development of new algorithms. Our result suggests that a successful heuristic approach to NSRL can alternate between additional exploration after each change in parameters and, when this brings diminishing benefits, a restart from scratch. This is not unlike the approaches taken by some state-of-the-art applications [10]. By further developing this and similar approaches, the current challenge of NSRL may eventually be tamed. We also note that our negative result leaves open the NSRL problem in the case of function approximation [11, 12]; we conjecture that a similar negative result may be provable in this case as well.
|
2308.08007
|
A degenerate Kirchhoff-type problem involving variable $s(\cdot)$-order
fractional $p(\cdot)$-Laplacian with weights
|
This paper deals with a class of nonlocal variable $s(.)$-order fractional
$p(.)$-Kirchhoff type equations: \begin{eqnarray*} \left\{
\begin{array}{ll}
\mathcal{K}\left(\int_{\mathbb{R}^{2N}}\frac{1}{p(x,y)}\frac{|\varphi(x)-\varphi(y)|^{p(x,y)}}{|x-y|^{N+s(x,y){p(x,y)}}}
\,dx\,dy\right)(-\Delta)^{s(\cdot)}_{p(\cdot)}\varphi(x) =f(x,\varphi)
\quad \mbox{in }\Omega,
\\ \varphi=0 \quad \mbox{on }\mathbb{R}^N\backslash\Omega. \end{array}
\right. \end{eqnarray*} Under some suitable conditions on the functions $p,s,
\mathcal{K}$ and $f$, the existence and multiplicity of nontrivial solutions
for the above problem are obtained. Our results cover the degenerate case in
the $p(\cdot)$ fractional setting.
|
Mostafa Allaoui, Mohamed Karim Hamdani, Lamine Mbarki
|
2023-08-15T19:45:23Z
|
http://arxiv.org/abs/2308.08007v1
|
# A degenerate Kirchhoff-type problem involving variable \(s(\cdot)\)-order fractional \(p(\cdot)\)-Laplacian with weights
###### Abstract
This paper deals with a class of nonlocal variable \(s(.)\)-order fractional \(p(.)\)-Kirchhoff type equations:
\[\left\{\begin{array}{l}\mathcal{K}\left(\int_{\mathbb{R}^{2N}}\frac{1}{p(x,y )}\frac{|\varphi(x)-\varphi(y)|^{p(x,y)}}{|x-y|^{N+s(x,y)p(x,y)}}\,dx\,dy\right) (-\Delta)^{s(\cdot)}_{p(\cdot)}\varphi(x)=f(x,\varphi)\quad\text{in }\Omega,\\ \\ \varphi=0\quad\text{on }\mathbb{R}^{N}\backslash\Omega.\end{array}\right.\]
Under some suitable conditions on the functions \(p,s,\mathcal{K}\) and \(f\), the existence and multiplicity of nontrivial solutions for the above problem are obtained. Our results cover the degenerate case in the \(p(\cdot)\) fractional setting.
keywords: Variational methods; \(p(.)\)-fractional Laplacian; Kirchhoff type equations
_2010 Mathematics Subject Classification. 35A15, 35D30, 35J35, 35J60_
## 1 Introduction
In this work, we investigate the following variable \(s(.)\)-order fractional \(p(.)\)-Kirchhoff type problem:
\[\left\{\begin{array}{l}\mathcal{K}\left(\int_{\mathbb{R}^{2N}}\frac{1}{p(x, y)}\frac{|\varphi(x)-\varphi(y)|^{p(x,y)}}{|x-y|^{N+s(x,y)p(x,y)}}\,dx\,dy\right)(- \Delta)^{s(\cdot)}_{p(\cdot)}\varphi(x)=f(x,\varphi)\quad\text{in }\Omega,\\ \\ \varphi=0\quad\text{on }\mathbb{R}^{N}\backslash\Omega,\end{array}\right. \tag{1.1}\]
where \(\mathcal{K}\) is a model of Kirchhoff coefficient, \(\Omega\) is a smooth bounded domain in \(\mathbb{R}^{N}\) and \(f:\Omega\times\mathbb{R}\rightarrow\mathbb{R}\) is a continuous function specified later. The operator \((-\Delta)^{s(\cdot)}_{p(\cdot)}\) is called variable \(s(.)\)-order fractional \(p(.)\)-Laplacian, given \(s(.):\overline{\Omega}\times\overline{\Omega}\rightarrow(0,1)\) and \(p(.):\overline{\Omega}\times\overline{\Omega}\rightarrow(1,+\infty)\) with \(N>s(x,y)p(x,y)\) for all
\((x,y)\in\overline{\Omega}\times\overline{\Omega}\), which can be defined as
\[(-\Delta)^{s(\cdot)}_{p(\cdot)}\varphi(x):=P.V\int_{\mathbb{R}^{N}}\frac{|\varphi (x)-\varphi(y)|^{p(x,y)-2}(\varphi(x)-\varphi(y))}{|x-y|^{N+s(x,y)p(x,y)}}\,dy, \ \ x\in\mathbb{R}^{N},\]
for any \(\varphi\in C^{\infty}_{0}(\mathbb{R}^{N})\), where P.V. denotes the Cauchy principal value.
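For orientation, when the order and the exponent are constant, \(s(x,y)\equiv s\) and \(p(x,y)\equiv p\), this operator reduces to the usual fractional \(p\)-Laplacian,

\[(-\Delta)^{s}_{p}\varphi(x)=P.V.\int_{\mathbb{R}^{N}}\frac{|\varphi(x)-\varphi(y)|^{p-2}(\varphi(x)-\varphi(y))}{|x-y|^{N+sp}}\,dy.\]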
Throughout this paper, we make the following assumptions:
**(A1)**: \(\mathcal{K}:\mathbb{R}^{+}_{0}\to\mathbb{R}^{+}_{0}\) is a continuous function and satisfies the (polynomial growth) condition
\[k_{1}\zeta^{\theta-1}\leq\mathcal{K}(\zeta)\leq k_{2}\zeta^{\theta-1}\ \ \mbox{for any}\ \ \zeta\geq 0,\]
where \(0<k_{1}\leq k_{2}\) are real numbers and the exponent \(\theta>1\);
and the variable exponent \(p\) and the variable order \(s\) satisfy the following conditions:
* **(S)** \(s(x,y)\) is a symmetric function, i.e., \(s(x,y)=s(y,x)\), and we have \[0<s^{-}:=\inf_{(x,y)\in\overline{\Omega}\times\overline{\Omega}}s(x,y)\leq s^{+}:=\sup_{(x,y)\in\overline{\Omega}\times\overline{\Omega}}s(x,y)<1.\]
* **(P)** \(p(x,y)\) is a symmetric function, i.e., \(p(x,y)=p(y,x)\), and we have \[1<p^{-}:=\inf_{(x,y)\in\overline{\Omega}\times\overline{\Omega}}p(x,y)\leq p^{+}:=\sup_{(x,y)\in\overline{\Omega}\times\overline{\Omega}}p(x,y)<\infty.\]
For any \(x\in\mathbb{R}^{N}\), we denote
\[\overline{p}(x):=p(x,x),\quad\overline{s}(x):=s(x,x).\]
In recent years, a great deal of attention has been paid to the study of problems involving the fractional \(p\)- and \(p(\cdot)\)-Laplacian, both in pure mathematical research and in concrete real-world applications, such as optimization, finance, continuum mechanics, phase transition phenomena, population dynamics, and game theory; see [1, 3, 5, 6, 10, 14, 15, 17, 20, 21] and the references therein.
In [2], using variational methods, the authors studied a nonlocal \(p(x)\)-Kirchhoff problem:
\[\left\{\begin{array}{l} M\left(\int_{\Omega}\frac{|\nabla u |^{p(x)}}{p(x)}\,dx\right)(-\Delta_{p(x)}u)=f(x,u)\quad\mbox{in $\Omega$,}\\ u=0\quad\mbox{on $\partial\Omega$,}\end{array}\right.\]
and obtained the existence and multiplicity of solutions to the above problem under appropriate assumptions on \(f\) and \(M\).
In [4], by a direct variational approach and Ekeland's variational principle, Azroul et al. investigated the existence of nontrivial weak solutions for the following problem:
\[\left\{\begin{array}{l} M\left(\int_{\mathcal{Q}}\frac{1}{p (x,y)}\frac{|u(x)-u(y)|^{p(x,y)}}{|x-y|^{N+sp(x,y)}}\,dx\,dy\right)(-\Delta_{p( x)})^{s}u(x)=\lambda|u(x)|^{r(x)-2}u(x)\quad\mbox{in $\Omega$,}\\ u=0\quad\mbox{on $\mathbb{R}^{N}\backslash\Omega$.}\end{array}\right.\]
In [19], the authors considered a multiplicity result for a Schrödinger equation driven by the variable \(s(\cdot)\)-order fractional Laplace operator via variational methods, while the authors in [18] investigated the existence of infinitely many solutions for a Kirchhoff-type variable \(s(\cdot)\)-order problem by using four different critical point theorems.
Inspired by the above results, we study problem (1.1) in the case when the function \(\mathcal{K}\) is singular at zero and \(f\) involves indefinite weight functions.
## 2 Functional setting and preliminaries
In this section, we briefly review some basic properties of Lebesgue spaces with variable exponents. For more details, the reader may refer to [12, 13] and the references therein. To this aim, set
\[C_{+}(\Omega):=\{g:g\in C(\overline{\Omega})\text{ and }g(x)>1,\forall x\in \overline{\Omega}\}.\]
For \(g(\cdot)\in C_{+}(\Omega)\), the variable exponent Lebesgue space \(L^{g(\cdot)}(\Omega)\) is defined by
\[L^{g(\cdot)}(\Omega):=\{w:\Omega\to\mathbb{R}\text{ measurable and }\int_{\Omega}|w(x)|^{g(x)}dx<\infty\}.\]
This space is endowed with the so-called Luxemburg norm given by
\[\|w\|_{L^{g(\cdot)}(\Omega)}=|w|_{g(\cdot)}:=\inf\{\delta>0:\int_{\Omega}| \frac{w(x)}{\delta}|^{g(x)}dx\leq 1\}\]
and \((L^{g(\cdot)}(\Omega),|w|_{g(\cdot)})\) becomes a Banach space, called the variable exponent Lebesgue space.
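For orientation, when the exponent is constant, say \(g(x)\equiv g>1\), the Luxemburg norm coincides with the classical Lebesgue norm: indeed,

\[\int_{\Omega}\Big|\frac{w(x)}{\delta}\Big|^{g}dx\leq 1\iff\delta\geq\Big(\int_{\Omega}|w(x)|^{g}dx\Big)^{1/g},\]

so that \(|w|_{g(\cdot)}=\|w\|_{L^{g}(\Omega)}\).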
**Proposition 2.1**.: _[_13_]_ _For every \(\varphi\in L^{g(x)}(\Omega)\) and \(\psi\in L^{r(x)}(\Omega)\), we have_
\[\left|\int_{\Omega}\varphi\psi\,dx\right|\leq\left(\frac{1}{g^{-}}+\frac{1}{r ^{-}}\right)|\varphi|_{g(x)}|\psi|_{r(x)},\]
_where \(r,g\in C_{+}(\Omega)\) and \(1/r(x)+1/g(x)=1\). Moreover, if \(g_{1},g_{2},g_{3}\in C_{+}(\overline{\Omega})\) and \(\frac{1}{g_{1}(x)}+\frac{1}{g_{2}(x)}+\frac{1}{g_{3}(x)}=1\), then for any \(\varphi\in L^{g_{1}(x)}(\Omega)\), \(\chi\in L^{g_{2}(x)}(\Omega)\) and \(\psi\in L^{g_{3}(x)}(\Omega)\) the following inequality holds:_
\[\int_{\Omega}|\varphi\chi\psi|dx\leq\left(\frac{1}{g_{1}^{-}}+\frac{1}{g_{2}^ {-}}+\frac{1}{g_{3}^{-}}\right)|\varphi|_{g_{1}(x)}|\chi|_{g_{2}(x)}|\psi|_{g_ {3}(x)}. \tag{2.1}\]
**Proposition 2.2**.: _[_8_]_ _Assume that \(h\in L^{\infty}_{+}(\Omega),\,g\in C_{+}(\overline{\Omega}).\) If \(|\chi|^{h(x)}\in L^{g(x)}(\Omega),\) then we have_
\[\min\left\{|\chi|_{h(x)g(x)}^{h^{-}},|\chi|_{h(x)g(x)}^{h^{+}}\right\}\leq\left|\left|\chi\right|^{h(x)}\right|_{g(x)}\leq\max\left\{|\chi|_{h(x)g(x)}^{h^{-}},|\chi|_{h(x)g(x)}^{h^{+}}\right\}. \tag{2.2}\]
In the present part, we give the variational setting of problem (1.1) and state important results to be used later. We set \(\mathcal{Q}:=\mathbb{R}^{2N}\setminus(C^{\Omega}_{R^{N}}\times C^{\Omega}_{R^ {N}})\) and define the fractional Sobolev space with variable exponent as
\[E:=\left\{v:\mathbb{R}^{N}\to\mathbb{R}:v|_{\Omega}\in L^{\overline{p}(x)}(\Omega),\quad\int_{\mathcal{Q}}\frac{|v(x)-v(y)|^{p(x,y)}}{\eta^{p(x,y)}|x-y|^{N+s(x,y)p(x,y)}}\,dx\,dy<\infty,\text{ for some }\eta>0\right\}.\]
The space \(E\) is equipped with the norm
\[\|v\|_{E}:=\|v\|_{L^{\overline{p}(x)}(\Omega)}+[v]_{E},\]
where \([v]_{E}\) is the seminorm defined as follows
\[[v]_{E}=\inf\left\{\eta>0:\int_{Q}\frac{|v(x)-v(y)|^{p(x,y)}}{\eta^{p(x,y)}|x-y| ^{N+s(x,y)p(x,y)}}\,dx\,dy<1\right\}.\]
Then \((E,\|\cdot\|_{E})\) is a separable reflexive Banach space.
Now, let define the subspace \(E_{0}\) of \(E\) as
\[E_{0}:=\left\{v\in E:v=0\text{ a.e. in }C_{R^{N}}^{\Omega}\right\},\]
with the norm on \(E_{0}\)
\[\|v\|_{E_{0}}=[v]_{E}.\]
**Proposition 2.3**.: _[_7_]_ _Let \(s(\cdot)\) and \(p(\cdot)\) satisfy (S) and (P) with \(s(x,y)p(x,y)<N\) for any \((x,y)\in\overline{\Omega}\times\overline{\Omega}\). Then for any \(g\in C_{+}(\overline{\Omega})\) such that \(1<g^{-}\leq g(x)<p_{s}^{*}(x):=\frac{N\overline{p}(x)}{N-\overline{s}(x)\overline{p}(x)}\) for any \(x\in\overline{\Omega}\), there exists a positive constant \(C=C(N,s,p,g,\Omega)\) such that_
\[\|w\|_{L^{g(x)}(\Omega)}\leq C\|w\|_{E_{0}},\]
_for every \(w\in E_{0}\). Moreover, the embedding \(E_{0}\hookrightarrow L^{g(x)}(\Omega)\) is compact._
Let us set the fractional modular function \(\rho_{s,p}:E_{0}\to\mathbb{R}\) as
\[\rho_{s,p}(w):=\int_{Q}\frac{|w(x)-w(y)|^{p(x,y)}}{|x-y|^{N+s(x,y)p(x,y)}}\, dx\,dy. \tag{2.3}\]
**Proposition 2.4**.: _[_7_]_ _Let \(w,w_{m}\in E_{0}\) and \(\rho_{s,p}\) be defined as in (2.3). Then we have the following results:_
* \(\|w\|_{E_{0}}<1\)__\((=1;>1)\) _if and only if_ \(\rho_{s,p}(w)<1(=1;>1)\)_._
* _If_ \(\|w\|_{E_{0}}>1\)_, then_ \(\|w\|_{E_{0}}^{p^{-}}\leq\rho_{s,p}(w)\leq\|w\|_{E_{0}}^{p^{+}}\)_._
* _If_ \(\|w\|_{E_{0}}<1\)_, then_ \(\|w\|_{E_{0}}^{p^{+}}\leq\rho_{s,p}(w)\leq\|w\|_{E_{0}}^{p^{-}}\)_._
* \(\lim_{m\to\infty}\|w_{m}-w\|_{E_{0}}=0\Leftrightarrow\lim_{m\to\infty}\rho_{s, p}(w_{m}-w)=0\)_._
**Proposition 2.5**.: _[_7_]_ _(\(E_{0},\|\cdot\|_{E_{0}}\)) is a separable, reflexive and uniformly convex Banach space._
**Proposition 2.6**.: _For all \(u,\varphi\in E_{0}\), we consider the operator \(\mathcal{T}:E_{0}\to E_{0}^{*}\) such that_
\[\langle\mathcal{T}(u),\varphi\rangle=\int_{Q}\frac{|u(x)-u(y)|^{p(x,y)-2}(u(x)-u(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+s(x,y)p(x,y)}}dxdy.\]
_Then, the following assertions hold:_
* \(\mathcal{T}\) _is a bounded and strictly monotone operator;_
_._
2. \(\mathcal{T}\) _is a mapping of type_ \((S^{+})\)_, that is,_ \[\text{if }u_{n}\rightharpoonup u\text{ in }E_{0}\text{ and }\limsup\langle\mathcal{T}(u_{n})-\mathcal{T}(u),u_{n}-u\rangle\leq 0, \text{ then }u_{n}\to u\text{ in }E_{0}.\]
**Proof 2.1**.:
1. _Obviously,_ \(\mathcal{T}\) _is a bounded operator. Using Simon's inequalities:_ \[\left(|\xi|^{p-2}\xi-|\eta|^{p-2}\eta\right)(\xi-\eta)\geq\frac{1}{2^{p}}|\xi- \eta|^{p}\text{ \ if }p\geq 2,\] (2.4) \[\left(|\xi|^{p-2}\xi-|\eta|^{p-2}\eta\right)(\xi-\eta)\left(|\xi|+|\eta| \right)^{2-p}\geq(p-1)|\xi-\eta|^{2}\text{ \ if }1<p<2,\] (2.5) _for any_ \(\xi,\eta\in\mathbb{R}^{N}\)_, we deduce that_ \[\langle\mathcal{T}(v)-\mathcal{T}(w),v-w\rangle>0\text{ \ for }v\neq w.\] _Thus,_ \(\mathcal{T}\) _is strictly monotone._
2. _Let_ \((u_{n})\) _be a sequence of_ \(E_{0}\) _such that_ \[u_{n}\rightharpoonup u\text{ in }E_{0}\text{ and }\limsup\langle\mathcal{T}(u_{n})-\mathcal{T}(u),u_{n}-u\rangle\leq 0.\] _Then, from (i), we deduce that_ \[\lim_{n\to+\infty}\langle\mathcal{T}(u_{n})-\mathcal{T}(u),u_{n}-u\rangle=0.\] (2.6) _Put_ \[\mathcal{U}_{p}=\{(x,y)\in\mathcal{Q}:1<p(x,y)<2\},\text{ \ }\mathcal{V}_{p}=\{(x,y)\in\mathcal{Q}:p(x,y)\geq 2\}.\]
_Let \((x,y)\in\mathcal{U}_{p}\) and \(w_{n}=u_{n}-u\). Using (2.5), Hölder's inequality and Propositions 2.2, 2.4, we get_
\[\begin{split}&\int_{\mathcal{U}_{p}}\frac{|w_{n}(x)-w_{n}(y)|^{p(x,y)}}{|x-y|^{N+s(x,y)p(x,y)}}dxdy\\ &\quad\leq\frac{1}{p^{-}-1}\int_{\mathcal{U}_{p}}\left(\left[\frac{|u_{n}(x)-u_{n}(y)|^{p(x,y)-2}(u_{n}(x)-u_{n}(y))(w_{n}(x)-w_{n}(y))}{|x-y|^{N+s(x,y)p(x,y)}}\right.\right.\\ &\quad\left.\left.-\frac{|u(x)-u(y)|^{p(x,y)-2}(u(x)-u(y))(w_{n}(x)-w_{n}(y))}{|x-y|^{N+s(x,y)p(x,y)}}\right]^{\frac{p(x,y)}{2}}\right.\\ &\quad\left.\times\left[\frac{|u_{n}(x)-u_{n}(y)|^{p(x,y)}+|u(x)-u(y)|^{p(x,y)}}{|x-y|^{N+s(x,y)p(x,y)}}\right]^{\frac{2-p(x,y)}{2}}\right)dxdy\\ &\quad\leq\frac{1}{p^{-}-1}\int_{\mathcal{Q}}\left(\left[\frac{|u_{n}(x)-u_{n}(y)|^{p(x,y)-2}(u_{n}(x)-u_{n}(y))(w_{n}(x)-w_{n}(y))}{|x-y|^{N+s(x,y)p(x,y)}}\right.\right.\\ &\quad\left.\left.-\frac{|u(x)-u(y)|^{p(x,y)-2}(u(x)-u(y))(w_{n}(x)-w_{n}(y))}{|x-y|^{N+s(x,y)p(x,y)}}\right]^{\frac{p(x,y)}{2}}\right.\\ &\quad\times\left[\left(\frac{|u_{n}(x)-u_{n}(y)|^{p(x,y)}}{|x-y|^{N+s(x,y)p(x,y)}}\right)^{\frac{2-p(x,y)}{2}}+\left(\frac{|u(x)-u(y)|^{p(x,y)}}{|x-y|^{N+s(x,y)p(x,y)}}\right)^{\frac{2-p(x,y)}{2}}\right]\right)dxdy\\ &\quad=\frac{1}{p^{-}-1}\int_{\mathcal{Q}}h_{1}^{\frac{p(x,y)}{2}}(x,y)\left(h_{2}^{\frac{2-p(x,y)}{2}}(x,y)+h_{3}^{\frac{2-p(x,y)}{2}}(x,y)\right)dxdy\\ &\quad\leq c\|h_{1}^{\frac{p(x,y)}{2}}\|_{L^{\frac{2}{p(x,y)}}(\mathcal{Q})}\left(\|h_{2}^{\frac{2-p(x,y)}{2}}\|_{L^{\frac{2}{2-p(x,y)}}(\mathcal{Q})}+\|h_{3}^{\frac{2-p(x,y)}{2}}\|_{L^{\frac{2}{2-p(x,y)}}(\mathcal{Q})}\right)\\ &\quad\leq c\left(\|h_{1}\|_{L^{1}(\mathcal{Q})}^{\frac{p^{+}}{2}}+\|h_{1}\|_{L^{1}(\mathcal{Q})}^{\frac{p^{-}}{2}}\right)\left(\|h_{2}\|_{L^{1}(\mathcal{Q})}^{\frac{2-p^{+}}{2}}+\|h_{2}\|_{L^{1}(\mathcal{Q})}^{\frac{2-p^{-}}{2}}+\|h_{3}\|_{L^{1}(\mathcal{Q})}^{\frac{2-p^{+}}{2}}+\|h_{3}\|_{L^{1}(\mathcal{Q})}^{\frac{2-p^{-}}{2}}\right),\end{split} \tag{2.7}\]
_where_
\[h_{1}(x,y)=\frac{|u_{n}(x)-u_{n}(y)|^{p(x,y)-2}(u_{n}(x)-u_{n}(y))(w_{n}(x)-w_{ n}(y))}{|x-y|^{N+s(x,y)p(x,y)}}-\frac{|u(x)-u(y)|^{p(x,y)-2}(u(x)-u(y))(w_{n}(x)-w_{ n}(y))}{|x-y|^{N+s(x,y)p(x,y)}},\]
\[h_{2}(x,y)=\frac{|u_{n}(x)-u_{n}(y)|^{p(x,y)}}{|x-y|^{N+s(x,y)p(x,y)}}\text{ \ and }h_{3}(x,y)=\frac{|u(x)-u(y)|^{p(x,y)}}{|x-y|^{N+s(x,y)p(x,y)}}.\]
_Then, by using (2.7) and Proposition 2.4, we obtain_
\[\begin{split}&\int_{\mathcal{U}_{p}}\frac{|w_{n}(x)-w_{n}(y)|^{p(x, y)}}{|x-y|^{N+s(x,y)p(x,y)}}dxdy\\ &\quad\leq c\left(\langle\mathcal{T}(u_{n})-\mathcal{T}(u),u_{n} -u\rangle^{\frac{p^{+}}{2}}+\langle\mathcal{T}(u_{n})-\mathcal{T}(u),u_{n}-u \rangle^{\frac{p^{-}}{2}}\right)\\ &\quad\times\left(\rho_{s,p}(u_{n})^{\frac{2-p^{+}}{2}}+\rho_{s, p}(u_{n})^{\frac{2-p^{-}}{2}}+\rho_{s,p}(u)^{\frac{2-p^{+}}{2}}+\rho_{s,p}(u)^{\frac{2-p^{-}}{2}} \right).\end{split} \tag{2.8}\]
_From (2.6) and (2.8), we deduce that_
\[\lim_{n\rightarrow+\infty}\int_{\mathcal{U}_{p}}\frac{|w_{n}(x)-w_{n}(y)|^{p(x,y)}}{|x-y|^{N+s(x,y)p(x,y)}}dxdy=0. \tag{2.9}\]
_For \((x,y)\in\mathcal{V}_{p}\), using (2.4), Hölder's inequality and Propositions 2.2, 2.4, we get_
\[\int_{\mathcal{V}_{p}}\frac{|w_{n}(x)-w_{n}(y)|^{p(x,y)}}{|x-y|^{N+s(x,y)p(x,y)}}dxdy\leq 2^{p^{+}}\langle\mathcal{T}(u_{n})-\mathcal{T}(u),u_{n}-u\rangle\to 0\text{ as }n\to+\infty. \tag{2.10}\]
_Then, thanks to (2.9) and (2.10), we conclude_
\[\rho_{s,p}(w_{n})\to 0\ \text{ as }\ n\to+\infty.\]
_Consequently, \(u_{n}\to u\) in \(E_{0}\)._
To prove Theorem 3.1, we will use Krasnoselskii's genus theory. To this end, let us recall the notion of genus and its basic properties, which can be found in [9, 16].
Let \(E\) be a real Banach space. We set
\[\mathcal{A}=\{Z\subset E\backslash\{0\}:Z\text{ is compact and }Z=-Z\}.\]
Let \(Z\in\mathcal{A}\). The genus \(\gamma(Z)\) of \(Z\) is defined by
\[\gamma(Z)=\min\{k\geq 1:\text{ there exists an odd continuous mapping }h:Z\to\mathbb{R}^{k}\backslash\{0\}\}.\]
Moreover, if no such mapping exists, then \(\gamma(Z)=\infty\), and by convention \(\gamma(\emptyset)=0\). As a typical example of a set of genus \(k\), we can mention a set homeomorphic to a \((k-1)\)-dimensional sphere via an odd map.
**Lemma 2.1**.: _Let \(E=\mathbb{R}^{N}\) and \(\partial\Omega\) be the boundary of an open, symmetric and bounded subset \(\Omega\subset\mathbb{R}^{N}\) with \(0\in\Omega\). Then \(\gamma(\partial\Omega)=N\)._
Moreover, in order to prove Theorem 3.1, we use the following theorem due to Clarke [11].
**Theorem 2.1** ([11]).: _Let \(\mathcal{H}\in C^{1}(E,\mathbb{R})\) be a functional satisfying the following conditions_
* \((i)\) _The functional_ \(\mathcal{H}\) _satisfies the_ \((PS)\) _condition;_
* \((ii)\) _\(\mathcal{H}\) is bounded from below and even;_
* \((iii)\) _there is a compact set_ \(Z\in\mathcal{A}\) _such that_ \(\gamma(Z)=j\) _and_ \(\sup_{x\in Z}\mathcal{H}(x)<\mathcal{H}(0)\)_._
_Then \(\mathcal{H}\) possesses at least \(j\) pairs of distinct critical points, and their corresponding critical values are less than \(\mathcal{H}(0)\)._
In the light of the variational structure of (1.1), we look for critical points of the associated Euler-Lagrange functional \(\mathcal{H}:E_{0}\to\mathbb{R}\) defined as
\[\mathcal{H}(\varphi)=\widehat{\mathcal{K}}\left(\Lambda_{p,s}(\varphi)\right) -\int_{\Omega}F(x,\varphi)dx, \tag{2.11}\]
for all \(\varphi\in E_{0}\), where
\[\Lambda_{p,s}(\varphi)=\int_{\mathcal{Q}}\frac{1}{p(x,y)}\frac{|\varphi(x)- \varphi(y)|^{p(x,y)}}{|x-y|^{N+s(x,y)p(x,y)}}dx\,dy,\ \ \widehat{\mathcal{K}}(t)=\int_{0}^{t}\mathcal{K}(s)ds.\]
Note that \(\mathcal{H}\) is a \(C^{1}(E_{0},\mathbb{R})\) functional and
\[\langle\mathcal{H}^{\prime}(\varphi),\psi\rangle=\mathcal{K}\left(\Lambda_{p, s}(\varphi)\right)\int_{\mathcal{Q}}\frac{|\varphi(x)-\varphi(y)|^{p(x,y)-2}( \varphi(x)-\varphi(y))(\psi(x)-\psi(y))}{|x-y|^{N+p(x,y)s(x,y)}}dxdy-\int_{ \Omega}f(x,\varphi)\psi dx, \tag{2.12}\]
for any \(\psi\in E_{0}\). Thus, critical points of \(\mathcal{H}\) are weak solutions of (1.1).
## 3 Main results and proofs
Before stating our first result, we make the following assumptions on \(f\):
* There exist \(c_{1}>0\) and \(1<q(x)<p_{s}^{*}(x)\) for all \(x\in\Omega\) such that \[|f(x,\zeta)|\leq c_{1}(1+|\zeta|^{q(x)-1}),\mbox{for all }(x,\zeta)\in\Omega \times\mathbb{R};\]
* there exist \(c_{2}>0\), \(\alpha_{0}\in(1,\theta p^{-})\) and an open set \(\Omega_{0}\subset\Omega\) such that \[F(x,\zeta)\geq c_{2}|\zeta|^{\alpha_{0}}\;\mbox{ for all }\;(x,\zeta)\in\Omega_{0}\times\mathbb{R};\]
* \(f(x,-\zeta)=-f(x,\zeta),\mbox{ for all }(x,\zeta)\in\Omega\times\mathbb{R}.\)
**Theorem 3.1**.: _Suppose that (\(A_{1}\)), (\(f_{1}\))-(\(f_{3}\)) are satisfied. If \(q^{+}<\theta p^{-}\), then problem (1.1) has infinitely many pairs of weak solutions with negative energy._
**Lemma 3.1**.: _Suppose that (\(A_{1}\)) and (\(f_{1}\)) are satisfied. Then \(\mathcal{H}\) is bounded from below and satisfies the (PS) condition._
**Proof 3.1**.: _From (\(A_{1}\)) and (\(f_{1}\)), we have_
\[\mathcal{H}(u) = \widehat{\mathcal{K}}(\Lambda_{p,s}(u))-\int_{\Omega}F(x,u)dx\] \[\geq \frac{k_{1}}{\theta}\left(\Lambda_{p,s}(u)\right)^{\theta}-\frac{ c_{1}}{q^{-}}\int_{\Omega}|u|^{q(x)}dx-c_{1}|\Omega|,\]
_for all \(u\in E_{0}.\) Hence by Proposition 2.3, we obtain_
\[\mathcal{H}(u) \geq \frac{k_{1}}{\theta(p^{+})^{\theta}}\min\left\{\|u\|_{E_{0}}^{ \theta p^{+}},\|u\|_{E_{0}}^{\theta p^{-}}\right\}-\frac{c_{1}}{q^{-}}\max \left\{\|u\|_{L^{q(x)}(\Omega)}^{q^{+}},\|u\|_{L^{q(x)}(\Omega)}^{q^{-}}\right\} -c_{1}|\Omega|\] \[\geq \frac{k_{1}}{\theta(p^{+})^{\theta}}\min\left\{\|u\|_{E_{0}}^{ \theta p^{+}},\|u\|_{E_{0}}^{\theta p^{-}}\right\}-\frac{cc_{1}}{q^{-}}\max \left\{\|u\|_{E_{0}}^{q^{+}},\|u\|_{E_{0}}^{q^{-}}\right\}-c_{1}|\Omega|.\]
_As \(q^{+}<\theta p^{-}\), \(\mathcal{H}\) is bounded from below and coercive. Let \(\{v_{j}\}\) be a (PS) sequence of \(\mathcal{H}\) in \(E_{0}\), that is_
\[\mathcal{H}(v_{j})\;\mbox{ is bounded},\quad\mathcal{H}^{\prime}(v_{j})\to 0\;\mbox{ in }E_{0}^{*},\quad\mbox{as }\;j\rightarrow\infty, \tag{3.1}\]
_where \(E_{0}^{*}\) is the dual space of \(E_{0}\)._
_Thus, by (3.1), we have_
\[\langle\mathcal{H}^{\prime}(v_{j}),v_{j}-v\rangle\to 0.\]
_Hence_
\[\langle\mathcal{H}^{\prime}(v_{j}),v_{j}-v\rangle = \mathcal{K}(\Lambda_{p,s}(v_{j}))\int_{Q}\frac{|v_{j}(x)-v_{j}(y) |^{p(x,y)-2}(v_{j}(x)-v_{j}(y))((v_{j}(x)-v(x))-(v_{j}(y)-v(y))}{|x-y|^{N+p(x,y) s(x,y)}}dxdy\] \[- \int_{\Omega}f(x,v)(v_{j}-v)dx\to 0.\]
_From \((f_{1})\), Propositions 2.1 and 2.3, we can easily get that_
\[\int_{\Omega}f(x,v)(v_{j}-v)dx\to 0.\]
_Therefore, we have_
\[\mathcal{K}(\Lambda_{p,s}(v_{j}))\int_{\mathcal{Q}}\frac{|v_{j}(x)-v_{j}(y)|^{p (x,y)-2}(v_{j}(x)-v_{j}(y))((v_{j}(x)-v(x))-(v_{j}(y)-v(y))}{|x-y|^{N+p(x,y)s(x,y )}}dxdy\to 0.\]
_The coercivity of \(\mathcal{H}\) implies that \(\{v_{j}\}\) is bounded in \(E_{0}\). Passing to a subsequence if necessary, we may assume that_
\[\Lambda_{p,s}(v_{j})\to d_{1}\geq 0,\;\;\text{as}\;j\to+\infty.\]
_If \(d_{1}=0\), then \(\{v_{j}\}\) converge strongly to \(v=0\) in \(E_{0}\) and the proof is finished. If \(d_{1}>0\), since the function \(\mathcal{K}\) is continuous, we have_
\[\mathcal{K}(\Lambda_{p,s}(v_{j}))\to\mathcal{K}(d_{1})\geq 0,\;\text{as}\;j\to\infty.\]
_Then, by \((A_{1})\), for \(j\) large enough, we obtain_
\[0<c_{3}<\mathcal{K}(\Lambda_{p,s}(v_{j}))<c_{4}.\]
_It follows that_
\[\int_{\mathcal{Q}}\frac{|v_{j}(x)-v_{j}(y)|^{p(x,y)-2}(v_{j}(x)-v_{j}(y))((v_{ j}(x)-v(x))-(v_{j}(y)-v(y))}{|x-y|^{N+p(x,y)s(x,y)}}dxdy\to 0.\]
_Finally, Proposition 2.6 ensures that \(v_{j}\to v\) in \(E_{0}\)._
### Proof of Theorem 3.1
We consider
\[\mathcal{A}_{j}=\left\{Z\subset\mathcal{A}:\;\gamma(Z)\geq j\right\},\]
\[d_{j}=\inf_{Z\in\mathcal{A}_{j}}\sup_{u\in Z}\mathcal{H}(u),\quad j=1,2,\ldots.\]
Thus, we have
\[-\infty<d_{1}\leq d_{2}\leq\ldots\leq d_{j}\leq d_{j+1}\leq\ldots.\]
Now we prove that \(d_{j}<0\) for every \(j\in\mathbb{N}\). For each \(j\), we take \(j\) disjoint open sets \(D_{i}\) such that \(\bigcup_{i=1}^{j}D_{i}\subset\Omega_{0}\). For \(i=1,2,\ldots,j\), let \(u_{i}\in\left(E_{0}\cap C_{0}^{\infty}(D_{i})\right)\backslash\{0\}\) and
\[Z_{j}=span\left\{u_{1},u_{2},\ldots u_{j}\right\},\quad S^{j}_{r_{j}}=\left\{u \in Z_{j}:\;\;\|u\|_{E_{0}}=r_{j}\right\},\]
where \(r_{j}\in(0,1)\). For each \(u\in Z_{j}\), there exist \(v_{i}\in\mathbb{R}\), \(i=1,2,\ldots,j\), such that
\[u(x)=\sum_{i=1}^{j}v_{i}u_{i}(x)\quad\text{for}\;x\in\Omega. \tag{3.2}\]
So
\[\|u\|_{L^{\alpha_{0}}(\Omega)}=\left(\int_{\Omega}|u(x)|^{\alpha_{0}}\right)^{ \frac{1}{\alpha_{0}}}=\left(\sum_{i=1}^{j}|v_{i}|^{\alpha_{0}}\int_{D_{i}}|u_{i} (x)|^{\alpha_{0}}\right)^{\frac{1}{\alpha_{0}}}. \tag{3.3}\]
As all norms on a finite-dimensional normed space are equivalent, there is a constant \(C>0\) such that
\[\|u\|_{E_{0}}\leq C\|u\|_{L^{\alpha_{0}}(\Omega)}\ \ \mbox{for all}\ \ \ u\in Z_{j}. \tag{3.4}\]
By (3.2), (3.3) and (3.4), we obtain
\[\mathcal{H}(tu) =\widehat{\mathcal{K}}\left(\Lambda_{p,s}(tu)\right)-\int_{\Omega}F(x,tu)dx\] \[\leq\frac{k_{2}}{\theta}\left(\Lambda_{p,s}(tu)\right)^{\theta}-\sum_{i=1}^{j}\int_{D_{i}}F(x,tv_{i}u_{i}(x))dx\] \[\leq\frac{k_{2}}{\theta(p^{-})^{\theta}}\|tu\|_{E_{0}}^{\theta p^{-}}-c_{2}t^{\alpha_{0}}\sum_{i=1}^{j}|v_{i}|^{\alpha_{0}}\int_{D_{i}}|u_{i}(x)|^{\alpha_{0}}dx\] \[\leq\frac{k_{2}r_{j}^{\theta p^{-}}}{\theta(p^{-})^{\theta}}t^{\theta p^{-}}-c_{2}t^{\alpha_{0}}\|u\|_{L^{\alpha_{0}}(\Omega)}^{\alpha_{0}}\] \[\leq\frac{k_{2}r_{j}^{\theta p^{-}}}{\theta(p^{-})^{\theta}}t^{\theta p^{-}}-\frac{c_{2}r_{j}^{\alpha_{0}}}{C^{\alpha_{0}}}t^{\alpha_{0}},\]
for all \(u\in S_{r_{j}}^{j}\) and sufficient small \(t>0\). Since \(\alpha_{0}<\theta p^{-}\), we can find \(t_{j}\in(0,1)\) and \(\epsilon_{j}>0\) such that
\[\mathcal{H}(t_{j}u)\leq-\epsilon_{j}<0\ \ \mbox{for all}\ \ u\in S_{r_{j}}^{j},\]
i.e.,
\[\mathcal{H}(u)\leq-\epsilon_{j}<0\ \ \mbox{for all}\ \ u\in S_{t_{j}r_{j}}^{j}.\]
It is clear that \(\gamma(S_{t_{j}r_{j}}^{j})=j\) and therefore \(d_{j}\leq-\epsilon_{j}<0\). Finally, by Lemma 3.1 and the results presented above, we can apply Theorem 2.1 to show that the functional \(\mathcal{H}\) admits at least \(j\) pairs of distinct critical points. Moreover, since \(j\) is arbitrary, we obtain infinitely many critical points of \(\mathcal{H}\).
The proof is complete.
Next we will consider problem (1.1) in the case:
\[f(x,u)=\mu\omega_{1}(x)|u|^{\alpha(x)-2}u-\nu\omega_{2}(x)|u|^{\beta(x)-2}u,\]
where \(\mu,\nu\) are two real parameters, \(\alpha,\beta\in C_{+}(\Omega)\) and \(\omega_{1},\omega_{2}\) are functions in some generalized Sobolev spaces, precisely, we assume the following hypothesis.
* \(1<\alpha(x)<\beta(x)<p^{-}\leq p^{+}<\frac{N}{s}<\frac{\theta N}{s}<\min\{m_{1 }(x),m_{2}(x)\}\) for all \(x\in\overline{\Omega}\), where \(m_{1},m_{2}\in C(\overline{\Omega})\), \(\omega_{1}\in L^{\frac{m_{1}(x)}{\theta}}(\Omega)\) such that \(\omega_{1}(x)>0\) in \(\Omega_{0}\subset\subset\Omega\) with \(|\Omega_{0}|>0\) and \(\omega_{2}\in L^{\frac{m_{2}(x)}{\theta}}(\Omega)\) such that \(\omega_{2}(x)\geq 0\) in \(\Omega\).
**Theorem 3.2**.: _If \((A_{1})\), \((A_{2})\) are fulfilled, then for any \(\mu>0\) and \(\nu>0\), problem (1.1) admits at least one nontrivial solution._
For the proof of Theorem 3.2, we will use the minimum principle.
Since \(\mathcal{H}\) is weakly lower semi-continuous, it suffices to show that \(\mathcal{H}\) is coercive.
**Lemma 3.2**.: _Let \((A_{1})\) and \((A_{2})\) hold. Then for any \(\mu>0\) and \(\nu>0\) the functional \(\mathcal{H}\) is coercive on \(E_{0}\)._
**Proof 3.2**.: _By conditions (\(A_{1}\)), (\(A_{2}\)) and Hölder's inequality, we get that_
\[\mathcal{H}(u) =\widehat{\mathcal{K}}\left(\Lambda_{p,s}(u)\right)-\mu\int_{\Omega}\frac{\omega_{1}(x)}{\alpha(x)}|u|^{\alpha(x)}dx+\nu\int_{\Omega}\frac{\omega_{2}(x)}{\beta(x)}|u|^{\beta(x)}dx\] \[\geq\frac{k_{1}}{\theta\left(p^{+}\right)^{\theta}}\min\left\{\|u\|_{E_{0}}^{\theta p^{+}},\|u\|_{E_{0}}^{\theta p^{-}}\right\}-\mu\int_{\Omega}\frac{\omega_{1}(x)}{\alpha(x)}|u|^{\alpha(x)}dx\] \[\geq\frac{k_{1}}{\theta\left(p^{+}\right)^{\theta}}\min\left\{\|u\|_{E_{0}}^{\theta p^{+}},\|u\|_{E_{0}}^{\theta p^{-}}\right\}-\frac{\mu}{\alpha^{-}}\|\omega_{1}\|_{L^{\frac{m_{1}(x)}{\theta}}(\Omega)}\left\||u|^{\alpha(x)}\right\|_{L^{\frac{m_{1}(x)}{m_{1}(x)-\theta}}(\Omega)}.\]
_By Propositions 2.2 and 2.3, the last term is bounded by \(C\mu\max\left\{\|u\|_{E_{0}}^{\alpha^{+}},\|u\|_{E_{0}}^{\alpha^{-}}\right\}\) for some constant \(C>0\). Since \(\alpha^{+}<p^{-}\leq\theta p^{-}\) by \((A_{2})\), it follows that \(\mathcal{H}(u)\to+\infty\) as \(\|u\|_{E_{0}}\to\infty\), that is, \(\mathcal{H}\) is coercive on \(E_{0}\)._
_It follows that \(\mathcal{H}(tu_{*})<0\) for all \(0<t<\delta^{\frac{1}{\beta_{0}-\alpha_{0}^{-}-\alpha_{0}}}\) with \(0<\delta<\min\{1,\delta_{0}\}\) and_
\[\delta_{0}:=\frac{\frac{\mu}{\alpha_{0}^{-}+\epsilon_{0}}\int_{\Omega_{1}} \omega_{1}(x)|u_{*}|^{\alpha(x)}\,dx}{\frac{k_{2}}{\theta\left(p_{0}^{-}\right) ^{\theta}}\left(\rho_{s,p}(u_{*})\right)^{\theta}+\frac{v}{\beta_{0}^{-}}\int_ {\Omega_{0}}\omega_{2}(x)|u_{*}|^{\beta(x)}\,dx}.\]
_Finally, we point out that_
\[\frac{k_{2}}{\theta\left(p_{0}^{-}\right)^{\theta}}\left(\rho_{s,p}(u_{*}) \right)^{\theta}+\frac{\nu}{\beta_{0}^{-}}\int_{\Omega_{0}}\omega_{2}(x)|u_{* }|^{\beta(x)}\,dx>0.\]
_In fact, if it is not true then_
\[\rho_{s,p}(u_{*})=0,\]
_which gives \(\|u_{*}\|_{E_{0}}=0\), hence \(u_{*}=0\) in \(\Omega_{0}\). This is a contradiction._
The proof of Theorem 3.2 is now complete.
**Theorem 3.3**.: _If \((A_{1})\), \((A_{2})\) are fulfilled, then there exists \(\mu^{*}>0\) such that for all \(\mu\in(0,\mu^{*})\) and all \(\nu>0\), problem (1.1) has at least one non-negative weak solution._
**Proof 3.4**.: _We prove Theorem 3.3 by using Ekeland's Variational Principle. To this aim, we need the following lemma._
**Lemma 3.4**.: _There exists \(\mu^{*}>0\) such that for any \(\mu\in(0,\mu^{*})\), \(\nu>0\) there exist \(\tau,b>0\) such that \(\mathcal{H}(v)\geq b>0\) for any \(v\in E_{0}\) with \(\|v\|_{E_{0}}=\tau\)._
**Proof 3.5**.: _By Proposition 2.3, \(E_{0}\) is continuously embedded in \(L^{\alpha(x)}(\Omega)\); hence there exists a positive constant \(C\) such that_
\[\|v\|_{L^{\alpha(x)}(\Omega)}\leq C\|v\|_{E_{0}}\ \ \text{for all}\ v\in E_{0}. \tag{3.5}\]
_Let us assume that \(\|v\|_{E_{0}}<\min\left\{1,\frac{1}{C}\right\}\), where \(C\) is given by (3.5). Using Hölder's inequality and relation (3.5), we deduce that for any \(v\in E_{0}\) with \(\|v\|_{E_{0}}=\tau\in(0,1)\) the following inequalities hold true_
\[\mathcal{H}(v) =\widehat{\mathcal{K}}\left(\Lambda_{p,s}(v)\right)-\mu\int_{\Omega}\frac{\omega_{1}(x)}{\alpha(x)}|v|^{\alpha(x)}dx+\nu\int_{\Omega}\frac{\omega_{2}(x)}{\beta(x)}|v|^{\beta(x)}dx\] \[\geq\frac{k_{1}}{\theta\left(p^{+}\right)^{\theta}}\left(\rho_{s,p}(v)\right)^{\theta}-\frac{\mu}{\alpha^{-}}\int_{\Omega}\omega_{1}(x)|v|^{\alpha(x)}dx\] \[\geq\frac{k_{1}}{\theta\left(p^{+}\right)^{\theta}}\|v\|_{E_{0}}^{\theta p^{+}}-\frac{\mu}{\alpha^{-}}C^{\alpha^{-}}\|\omega_{1}\|_{L^{\frac{m_{1}(x)}{\theta}}(\Omega)}\|v\|_{E_{0}}^{\alpha^{-}}\] \[=\frac{k_{1}}{\theta\left(p^{+}\right)^{\theta}}\tau^{\theta p^{+}}-\frac{\mu}{\alpha^{-}}C^{\alpha^{-}}\tau^{\alpha^{-}}\|\omega_{1}\|_{L^{\frac{m_{1}(x)}{\theta}}(\Omega)}\] \[=\tau^{\alpha^{-}}\left(\frac{k_{1}}{\theta\left(p^{+}\right)^{\theta}}\tau^{\theta p^{+}-\alpha^{-}}-\frac{\mu}{\alpha^{-}}C^{\alpha^{-}}\|\omega_{1}\|_{L^{\frac{m_{1}(x)}{\theta}}(\Omega)}\right).\]
_Putting_
\[\mu^{*}=\frac{k_{1}\alpha^{-}}{2\theta C^{\alpha^{-}}(p^{+})^{\theta}\|\omega_{1}\|_{L^{\frac{m_{1}(x)}{\theta}}(\Omega)}}\tau^{\theta p^{+}-\alpha^{-}}. \tag{3.6}\]
_Consequently, for all \(\mu\in(0,\mu^{*})\) and \(v\in E_{0}\) with \(\|v\|_{E_{0}}=\tau\), there exists a positive constant \(b=k_{1}\tau^{\theta p^{+}}/(2\theta(p^{+})^{\theta})\) such that_
\[\mathcal{H}(v)\geq b>0.\]
_This completes the proof._
_By Lemma 3.4, we have_
\[\inf_{v\in\partial B_{\tau}(0)}\mathcal{H}(v)>0, \tag{3.7}\]
_where \(\partial B_{\tau}(0)=\{v\in E_{0};\ \|v\|_{E_{0}}=\tau\}\)._
_On the other hand, from Lemma 3.3, there exists \(u_{*}\in E_{0}\) such that \(\mathcal{H}(tu_{*})<0\) for \(t>0\) small enough. Using the proof of Lemma 3.4, it follows that_
\[\mathcal{H}(v)\geq\frac{k_{1}}{\theta(p^{+})^{\theta}}\|v\|_{E_{0}}^{\theta p^{+}}-\frac{\mu}{\alpha^{-}}C^{\alpha^{-}}\|\omega_{1}\|_{L^{\frac{m_{1}(x)}{\theta}}(\Omega)}\|v\|_{E_{0}}^{\alpha^{-}}\quad\text{for }v\in B_{\tau}(0).\]
_Thus,_
\[-\infty<\overline{c}_{\mu}:=\inf_{B_{\tau}(0)}\mathcal{H}<0.\]
_Now let \(\varepsilon\) be such that \(0<\varepsilon<\inf_{\partial B_{\tau}(0)}\mathcal{H}-\inf_{B_{\tau}(0)} \mathcal{H}.\) Then, by applying Ekeland's Variational Principle to the functional_
\[\mathcal{H}:\overline{B_{\tau}(0)}\to\mathbb{R},\]
_there exists \(v_{\varepsilon}\in\overline{B_{\tau}(0)}\) such that_
\[\mathcal{H}(v_{\varepsilon})\leq\inf_{B_{\tau}(0)}\mathcal{H}+\varepsilon,\] \[\mathcal{H}(v_{\varepsilon})<\mathcal{H}(v)+\varepsilon\|v-v_{ \varepsilon}\|_{E_{0}}\ \text{for }v\neq v_{\varepsilon}.\]
_Since \(\mathcal{H}(v_{\varepsilon})<\inf_{B_{\tau}(0)}\mathcal{H}+\varepsilon<\inf_{ \partial B_{\tau}(0)}\mathcal{H}\), we deduce that \(v_{\varepsilon}\in B_{\tau}(0)\)._
_Now, we define \(\mathcal{H}_{1}:\overline{B_{\tau}(0)}\to\mathbb{R}\) by_
\[\mathcal{H}_{1}(v)=\mathcal{H}(v)+\varepsilon\|v-v_{\varepsilon}\|_{E_{0}}.\]
_It is clear that \(v_{\varepsilon}\) is a minimum of \(\mathcal{H}_{1}\). Therefore, for small \(t>0\) and \(u\in B_{1}(0)\), we have_
\[\frac{\mathcal{H}_{1}(v_{\varepsilon}+tu)-\mathcal{H}_{1}(v_{\varepsilon})}{t }\geq 0,\]
_which implies that_
\[\frac{\mathcal{H}(v_{\varepsilon}+tu)-\mathcal{H}(v_{\varepsilon})}{t}+ \varepsilon\|u\|_{E_{0}}\geq 0.\]
_As \(t\to 0\), we obtain_
\[\langle\mathcal{H}^{\prime}(v_{\varepsilon}),u\rangle+\varepsilon\|u\|_{E_{0} }\geq 0\quad\text{for all }\ u\in B_{1}(0).\]
_Hence, \(\|\mathcal{H}^{\prime}(v_{\varepsilon})\|_{E_{0}^{\prime}}\leq\varepsilon\). We deduce that there exists a sequence \((v_{j})_{j}\subset B_{\tau}(0)\) such that_
\[\mathcal{H}(v_{j})\to\overline{c}_{\mu,v}<0\quad\text{and}\quad\mathcal{H}^{ \prime}(v_{j})\to 0. \tag{3.8}\]
_It is clear that \((v_{j})\) is bounded in \(E_{0}\). By reflexivity of \(E_{0}\), for a subsequence still denoted \((v_{j})\), we have \(v_{j}\rightharpoonup v\) in \(E_{0}\)._
_Next, we show the strong convergence of \((v_{j})\) in \(E_{0}\)._
_Claim:_
\[\lim_{j\to+\infty}\int_{\Omega}\omega_{1}(x)|v_{j}|^{\alpha(x)-2}v_{j}(v_{j}-v)dx =0, \tag{3.9}\]
_and_
\[\lim_{j\to+\infty}\int_{\Omega}\omega_{2}(x)|v_{j}|^{\beta(x)-2}v_{j}(v_{j}-v)dx =0. \tag{3.10}\]
_In fact, from the Hölder-type inequality, we have_
\[\int_{\Omega}\omega_{1}(x)|v_{j}|^{\alpha(x)-2}v_{j}(v_{j}-v)dx\] \[\leq\|\omega_{1}\|_{L^{\frac{m_{1}(x)}{\theta}}(\Omega)}\left\| \left|v_{j}\right|^{\alpha(x)-2}v_{j}(v_{j}-v)\right\|_{L^{\frac{m_{1}(x) \theta(x)}{m_{1}(x)-\theta(x)}}(\Omega)}\] \[\leq\|\omega_{1}\|_{L^{\frac{m_{1}(x)}{\theta}}(\Omega)}\left\| \left|v_{j}\right|^{\alpha(x)-2}v_{j}\right\|_{L^{\frac{\alpha(x)}{\theta(x) -1}}(\Omega)}\left\|v_{j}-v\right\|_{L^{\frac{m_{1}(x)\theta(x)}{m_{1}(x)- \theta(x)}}(\Omega)}\] \[\leq\|\omega_{1}\|_{L^{\frac{m_{1}(x)}{\theta}}(\Omega)}\left(1+ \|v_{j}\|_{L^{\alpha(x)}(\Omega)}^{x^{*}-1}\right)\|v_{n}-v\|_{L^{\frac{m_{1} (x)\theta(x)}{m_{1}(x)-\theta(x)}}(\Omega)}\,.\]
_Since \(E_{0}\) is continuously embedded in \(L^{\alpha(x)}(\Omega)\) and \((v_{j})\) is bounded in \(E_{0}\), so \((v_{j})\) is bounded in \(L^{\alpha(x)}(\Omega)\). On the other hand, since the embedding \(E_{0}\hookrightarrow L^{\frac{m_{1}(x)\theta(x)}{m_{1}(x)-\theta(x)}}(\Omega)\) is compact, we deduce \(\left\|v_{j}-v\right\|_{L^{\frac{m_{1}(x)\theta(x)}{m_{1}(x)-\theta(x)}}( \Omega)}\to 0\) as \(j\to+\infty\). Similarly, we get_
\[\lim_{j\to+\infty}\int_{\Omega}\omega_{2}(x)|v_{j}|^{\beta(x)-2}v_{j}(v_{j}-v)dx =0.\]
_Hence, the proof of Claim is complete._
_Moreover, since \(\mathcal{H}^{\prime}(v_{j})\to 0\) and \((v_{j})\) is bounded in \(E_{0}\), we have_
\[\left|\langle\mathcal{H}^{\prime}(v_{j}),v_{j}-v\rangle\right| \leq\left|\langle\mathcal{H}^{\prime}(v_{j}),v_{j}\rangle\right|+ \left|\langle\mathcal{H}^{\prime}(v_{j}),v\rangle\right|\] \[\leq\|\mathcal{H}^{\prime}(v_{j})\|_{E_{0}^{\prime}}\|v_{j}\|_{E_ {0}}+\|\mathcal{H}^{\prime}(v_{j})\|_{E_{0}^{\prime}}\|v\|_{E_{0}},\]
_that is,_
\[\lim_{j\to+\infty}\langle\mathcal{H}^{\prime}(v_{j}),v_{j}-v\rangle=0.\]
_Therefore_
\[\lim_{j\to+\infty}\Biggl{(} \mathcal{K}(\Lambda_{p,s}(v_{j}))\int_{Q}\frac{|v_{j}(x)-v_{j}(y )|^{p(x,y)-2}(v_{j}(x)-v_{j}(y))((v_{j}(x)-v(x))-(v_{j}(y)-v(y))}{|x-y|^{N+p(x,y )s(x,y)}}dxdy \tag{3.11}\] \[-\mu\int_{\Omega}\omega_{1}(x)|v_{j}|^{\alpha(x)-2}v_{j}(v_{j}-v) dx+\nu\int_{\Omega}\omega_{2}(x)|v_{j}|^{\beta(x)-2}v_{j}(v_{j}-v)dx\Biggr{)}=0,\]
_Combining this with relations (3.8)-(3.10) it follows that_
\[\mathcal{K}(\Lambda_{p,s}(v_{j}))\int_{Q}\frac{|v_{j}(x)-v_{j}(y)|^{p(x,y)-2}( v_{j}(x)-v_{j}(y))((v_{j}(x)-v(x))-(v_{j}(y)-v(y))}{|x-y|^{N+p(x,y)s(x,y)}}dxdy \to 0.\]
_Since \(\{v_{j}\}\) is bounded in \(E_{0}\), passing to subsequence, if necessary, we may assume that_
\[\Lambda_{p,s}(v_{j})\to d_{1}\geq 0,\ \ as\ j\to+\infty.\]
_If \(d_{1}=0\), then \(\{v_{j}\}\) converge strongly to \(v=0\) in \(E_{0}\) and the proof is finished. If \(d_{1}>0\), since the function \(\mathcal{K}\) is continuous, we have_
\[\mathcal{K}(\Lambda_{p,s}(v_{j}))\to\mathcal{K}(d_{1})\geq 0,\ as\ j\to\infty.\]
_Then, by \((A_{1})\), for \(j\) large enough, we obtain_
\[0<c_{3}<\mathcal{K}(\Lambda_{p,s}(v_{j}))<c_{4}.\]
_It follows that_
\[\int_{\mathcal{Q}}\frac{|v_{j}(x)-v_{j}(y)|^{p(x,y)-2}(v_{j}(x)-v_{j}(y))((v_{ j}(x)-v(x))-(v_{j}(y)-v(y))}{|x-y|^{N+p(x,y)s(x,y)}}dxdy\to 0.\]
_According to the fact that \(\mathcal{T}\) satisfies condition \((S^{+})\), we conclude that \(v_{j}\to v\) strongly in \(E_{0}\). Since \(\mathcal{H}\in C^{1}(E_{0},\mathbb{R})\), we have_
\[\mathcal{H}^{\prime}(v_{j})\to\mathcal{H}^{\prime}(v)\ \ \text{as}\ j\to+\infty. \tag{3.12}\]
_Relations (3.8) and (3.12) show that \(\mathcal{H}^{\prime}(v)=0\) and thus \(v\) is a weak solution for problem (1.1). Moreover, by relation (3.8), it follows that \(\mathcal{H}(v)<0\) and thus, \(v\) is a nontrivial weak solution for (1.1). The proof of Theorem 3.3 is now completed._
|
2302.05578
|
Characterizing Attribution and Fluency Tradeoffs for Retrieval-Augmented
Large Language Models
|
Despite recent progress, it has been difficult to prevent semantic
hallucinations in generative Large Language Models. One common solution to this
is augmenting LLMs with a retrieval system and making sure that the generated
output is attributable to the retrieved information. Given this new added
constraint, it is plausible to expect that the overall quality of the output
will be affected, for example, in terms of fluency. Can scaling language models
help?
Here we examine the relationship between fluency and attribution in LLMs
prompted with retrieved evidence in knowledge-heavy dialog settings. Our
experiments were implemented with a set of auto-metrics that are aligned with
human preferences. They were used to evaluate a large set of generations,
produced under varying parameters of LLMs and supplied context.
We show that larger models tend to do much better in both fluency and
attribution, and that (naively) using top-k retrieval versus top-1 retrieval
improves attribution but hurts fluency. We next propose a recipe that could
allow smaller models to both close the gap with larger models and preserve the
benefits of top-k retrieval while avoiding its drawbacks.
|
Renat Aksitov, Chung-Ching Chang, David Reitter, Siamak Shakeri, Yunhsuan Sung
|
2023-02-11T02:43:34Z
|
http://arxiv.org/abs/2302.05578v2
|
# Characterizing Attribution and Fluency Tradeoffs for Retrieval-Augmented Large Language Models
###### Abstract
Despite recent progress, it has been difficult to prevent semantic hallucinations in generative Large Language Models. One common solution to this is augmenting LLMs with a retrieval system and making sure that the generated output is attributable to the retrieved information. Given this new added constraint, it is plausible to expect that the overall quality of the output will be affected, for example, in terms of fluency. Can scaling language models help?
Here we examine the relationship between fluency and attribution in LLMs prompted with retrieved evidence in knowledge-heavy dialog settings. Our experiments were implemented with a set of auto-metrics that are aligned with human preferences. They were used to evaluate a large set of generations, produced under varying parameters of LLMs and supplied context.
We show that larger models tend to do much better in both fluency and attribution, and that (naively) using top-k retrieval versus top-1 retrieval improves attribution but hurts fluency. We next propose a recipe that could allow smaller models to both close the gap with larger models and preserve the benefits of top-k retrieval while avoiding its drawbacks.
## 1 Introduction
Large language models (LLMs) open the door for many downstream applications requiring text generation, including dialog applications like LaMDA [1]. While the recent approach of scaling LLMs has significantly reduced the model perplexity and, in turn, enhanced sensibleness (as a proxy for fluency) and specificity [2], these models are still known to hallucinate and provide non-factual/outdated information. To address this issue, one typical approach is to rely on a trusted external knowledge source, such as search engines or retrieval systems, to retrieve relevant evidence, and hope the generated response has knowledge grounded to and attributable to the evidence.
Take LaMDA as an example. After the base model generates an initial response, a separate research model generates search queries and gathers additional information from Google Search. Finally, the research model generates the actual response to users according to all information gathered. Experiments showed that combining LaMDA with Google Search has significantly enhanced response factuality [1]. Similarly, other work, including REALM [3], RAG [4], RETRO [5], and [6], incorporates LLMs with custom retrieval systems and shows that the performance on the downstream knowledge intensive task is significantly improved.
Given that grounding generated responses in retrieved evidence acts as a restriction on the output space, we hypothesize that it could, in principle, lead to less fluent outputs. In other words, there might be a kind of a tradeoff relationship between fluency and attribution of the model's responses. To verify this hypothesis, we design experiments to measure both metrics for the responses generated by LLMs in various settings that sweep over conditions where LLMs may have different degrees of attribution. Despite the fact that adding a trusted external knowledge source helps, the generation may still not be factual for several reasons:
* **Imperfect retrieval query.** The query to the external knowledge source itself could be a text generation and is also vulnerable to hallucinations. Cascading several LLMs may exacerbate hallucinations.
* **Imperfect external knowledge source.** The imperfection is measured by precision/recall and AUC curves. When irrelevant information is retrieved and provided, are LLMs intelligent enough to ignore the evidence and say "I don't know"? Or do they respond from their own parametric memory instead?
* **Memorization and grounding.** Does the generated text bind to the memory (model parameters) or the evidence when they are in conflict? Ideally, it is desirable to have a mechanism to control and take precedence between grounding and memorization.
* **Sampling.** To increase semantic variability, LLMs typically adopt sampling when decoding response sequences. With a higher sampling temperature, the token distribution becomes flatter, so less likely next tokens have a better chance of being sampled. Once rare tokens are sampled, hallucinations can happen, but the LM has no choice and is bound to continue and finish the response sensibly.
In this work we simplify and eliminate the first two factors by choosing the QReCC dataset, where we have access to golden evidence that supports the golden response. The imperfection in the retrieval system is under control and can be simulated. For memorization and grounding, we hypothesize that we can leverage prompting to control the knob between them. Emergent abilities [9], like few-shot learning and chain-of-thought reasoning, leverage prompting as context to facilitate new capabilities of the model.
Overall, we are conducting an extensive study, aimed to find out how different model parameters (such as scale or decoding temperature) and the context provided to the model (such as dialog turns, retrieved facts or instructions), can affect fluency and attribution. To be able to better describe the results of our experiments, we introduce the concepts of global and local tradeoff, motivated by prior work. Furthermore, we identify promising ways to reduce or mitigate these tradeoffs.
## 2 Related Work
To the best of our knowledge, there is no literature on the trade-off between fluency and attribution. There are, however, two recent papers looking into different tradeoffs that we'd like to highlight:
* The paper from Zhou et al. [12] introduces Automatic Prompt Engineer (APE) for automatic instruction generation and selection, and looks into the tradeoff between truthfulness and informativeness. They conclude that memorization is not completely reliable: with being more informative (finer details), the model's response becomes less factual. They use different (auto-generated) instructions to get different values of a tradeoff. Notably, the model (and, for the most part, the size of the provided context) are fixed.
* On the other hand, the paper from Gao et al. [13], looks into improving factuality through post-hoc research and revision. As part of the experiments, they investigate the tradeoff between attribution and preservation scores, where preservation measures the similarity of the response before and after editing, and the response before editing should be fluent/sensible by the nature of LLM generations. They argue that the better way to look into tradeoffs is through F1 score, given that F1 will be low if either attribution or preservation is low, signifying that both metrics are (equally) important. Notably they also use end-to-end NLI as a proxy for attribution. This paper looks into tradeoffs between 3 different models, EFEC [17], LaMDA, and RARR.
Note that these two papers are effectively using two different views on what having a tradeoff means.
**Global View**: [13] examines 3 different models on 3 different datasets and talks about absolute position of aggregated dots within the corresponding 2d plots. Mitigating tradeoff in this sense simply means choosing the best model for the task among the set of available models. We can say that this is a global tradeoff.
**Local View**: at the same time [12] takes a fixed model and fixed dataset, limits changes to the instructions in the prompt and optimizes those instructions in order to hit different positions within the 2d plot for auto-metrics. Reducing tradeoff in this sense means choosing the best possible way to perform inference when the model, the task and a certain preference between 2 selected metrics are already decided on. We can say that this is a local tradeoff.
## 3 QReCC Dataset
For the experiments we use a knowledge intensive conversational dataset Question Rewriting in Conversational Context (QReCC, [8]) - an end-to-end open-domain QA dataset consisting of 14K conversations with 81K QA pairs.
Each of 14K conversations consists of a series of questions and answers (a dialog history), followed by the final (not answered yet) query. A golden answer, along with the webpage the answer was extracted from, is also provided for each conversation.
We use a fully decontextualized version of QReCC in our experiments. We decided to use this version after doing manual inspections and discovering that some contextualized conversations are hard to parse even for humans. A side-effect of using a decontextualized version is that for some dialogs the last turn with the user's question has all the required information and the model doesn't need to know any additional dialog history to produce sensible responses within the full dialog history.
Importantly, there are also plenty of dialogs where this is not the case and simply knowing the decontextualized last turn is not enough.
One notable property of using this dataset is that the goal of being sensible within QReCC QA style dialogs is aligned with the goal of being attributable to the evidence. This wouldn't always be the case and is important for the way the tradeoff behaves. For example, for more chit-chatty datasets the goal of being sensible could in the first place be aligned with being engaging/interesting and mostly independent from the goal of being attributable.
Many dialogs in the QReCC dataset assume that a specific Wikipedia article (and, occasionally, even a specific place within the article) is known during the conversation. For example, some turns may refer to "this article" or similar. Such conversations are not desirable for our use case, so we filter them out. Based on manual examination we further filter out examples where the evidence is too long, where the history is not well-formed (some examples have missing turns), or where the golden answer is a single word. See the Appendix for the full impact of filtering on the dataset size.
For our experiments we randomly select 100 examples from the 324 examples remaining in the dev split after filtering. We also manually verify these 100 examples to ensure dialog quality.
## 4 Human Evals
### Pilot
The Meena paper [2] introduces a proxy for fluency in the form of the Sensibleness and Specificity Average (SSA) metric, while [10] presents an evaluation framework called Attributable to Identified Sources (AIS) for assessing the attributability of natural language generation outputs. Both SSA and AIS assume the use of human raters, and in an ideal world with infinite resources all our experiments would be evaluated this way.
To get a sense of the problem, we started by conducting a small pilot human eval, 400 examples in total. We sampled 100 dialogs from QReCC, as described above, and generated responses for them using the PaLM 540B model [16] with 4 different setups (see the next section for the specifics of how we use PaLM's native dialog prompt):
1. temperature = 0.0 and "no evidence"
2. temperature = 0.7 and "no evidence"
3. temperature = 0.0 and "golden evidence"
4. temperature = 0.7 and "golden evidence"
We ran the human SSA eval for all 4 pilot setups and the human AIS eval for setups #3 and #4. We assume that attributability for setups #1 and #2 is 0, given that no evidence is provided to the model (see also the note on the definition of "attributable" in the Appendix).
### "Merged" Setup for AIS
Human evals for the pilot's 400 examples ended up being fairly expensive, in both money and time, so we started looking into possible ways to reduce these costs.
As described in the previous section, we use the same 100 dialogs in different setups, which means that a lot of information is repeated across setups for the same example. For instance, for pilot setups #3 and #4 the evidence and dialog history are identical for a given example, and only the final generated response might differ. This suggests an eval optimization: instead of first doing all 100 examples from #3 and then all 100 examples from #4, we can do them in parallel. That is, the rater rates the same example first for #3 and then immediately for #4, so that they do not need to re-read the evidence and dialog history.
### Alignment of "Merged" vs "Separate"
What we discovered is that the implicit side-by-side (SxS) nature of "merged" evals systematically changes human ratings. More specifically, we looked into the following parts of the AIS evaluation:
1. "understand",
2. "relevant",
3. "consistent",
4. "evidence relevant",
5. "evidence contradicts",
6. "evidence supports",
7. "attributable".
All of them are binary classification questions, which allows us to easily define a "single number" aggregated metric for each. We simply take a majority vote (we use 5 raters, so it is always well-defined) for each example and then take the mean over the whole set of 100 examples.
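As a concrete illustration of this aggregation, here is a minimal sketch; the data layout (lists of five 0/1 votes per example) is our own assumption, not from the paper:

```python
def aggregate_binary_question(ratings_per_example):
    """Majority vote per example (5 raters, so ties are impossible),
    then the mean over all examples gives the single-number metric."""
    majority = [1 if sum(votes) > len(votes) / 2 else 0 for votes in ratings_per_example]
    return sum(majority) / len(majority)

# Three toy examples rated by 5 raters each (hypothetical votes).
print(aggregate_binary_question([[1, 1, 1, 0, 0], [0, 0, 1, 0, 1], [1, 1, 1, 1, 1]]))  # 0.666...
```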
We can now compute these aggregated metrics for the 7 questions in the following 4 scenarios:
* (A) T=0.0 for generation, "separate" evaluation
* (B) T=0.0 for generation, "merged" evaluation
* (C) T=0.7 for generation, "separate" evaluation
* (D) T=0.7 for generation, "merged" evaluation
Our hope/expectation was that the metrics from the (A)/(B) pair would be similar, as would those from the (C)/(D) pair.
Instead, we discovered that questions (2), (4) and (6) experience a systematic shift under the "merged" setup: (B) is "better" than (A), and (D) is "worse" than (C). In other words, under the "merged" eval the raters prefer outputs generated with T=0.0 more and those generated with T=0.7 less. It is not immediately clear whether the "merged" ratings are better than the "separate" ones (we see examples in both directions), but they are significantly different, at least for a subset of the AIS questions.
We will use the difference between the 2 eval setups to estimate the human "uncertainty rate" on our data. Specifically, we will treat the "separate" evals as ground truth and measure the accuracy of the "merged" evals against them.
## 5 Auto-Metrics
### End-to-end NLI for Auto-AIS
We further look into end-to-end NLI in various flavors as a proxy for "attributable":
* Flavor v1: {Golden Evidence, Question} => {Answer}
* Flavor v2: {Golden Evidence} => {Question, Answer}
* Flavor v3: {Golden Evidence, Question} => {Question, Answer}
For example, "v2" concatenates the question that immediately precedes the model's answer in the dialog with the answer and checks if there is entailment between golden evidence and the concatenated pair. Out of these "v3" tends to work better, as measured by alignment with human ratings. But even "v3" only gets 75% accuracy against human AIS, which improves over simply predicting the majority label (60%), but not by much.
RARR [13] shows a boost in AIS performance from splitting (long) answers into individual sentences and running NLI on each one. In our case answers are typically short, but the evidence is long, up to 900 tokens in some cases. We hypothesize that this is the source of the problem and, to verify, we split the evidence at sentence boundaries and compute NLI scores against fixed-size sliding windows of K sentences, then take the maximum of these "localized" scores as the final score. Similar to [14], we find that a granularity of K=2 performs best and boosts accuracy against humans to 87%. At the same time, the human uncertainty rate, estimated from "merged" against "separate" AIS, stands at 93%. In other words, "localized" NLI almost closes the gap with human evaluations and can be considered a reliable proxy.
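A minimal sketch of this "localized" scoring; `nli_entailment_score` stands in for whatever NLI model is used (its name and signature are placeholders), and the premise/hypothesis layout follows flavor v3:

```python
def localized_nli(evidence_sentences, question, answer, nli_entailment_score, k=2):
    """Auto-AIS proxy: slide a window of k evidence sentences, score each window
    with NLI in the "v3" arrangement ({window, question} => {question, answer}),
    and return the maximum over all windows."""
    hypothesis = f"{question} {answer}"
    scores = []
    for i in range(max(1, len(evidence_sentences) - k + 1)):
        premise = " ".join(evidence_sentences[i:i + k]) + " " + question
        scores.append(nli_entailment_score(premise, hypothesis))
    return max(scores)
```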
### PaLM for Auto-SSA
We created custom few-shot prompts for PaLM to enable the model to output SSA ratings (we built prompts for both Sensibleness and Specificity, but ended up actively using only the Sensibleness prompt in the experiments).
The prompts are built from the human ratings of pilot setup #2 (i.e. temperature = 0.7 and "no evidence"). We use a linear search in the example space with the goal of predicting the aggregated score. In other words, if 2 out of 5 raters said "Yes, sensible" and the remaining 3 said "No", we assign a sensibleness score of 0.4 to that example. We use the remaining 3 pilot slices as a validation set for the resulting prompt (table 1):
The final PaLM prompt for Sensibleness has an error rate versus human raters of \(\approx 4\%\) on average when measured on the validation sets (rows 1, 2 and 4 in table 1). A fragment of the final prompt is shown below (see the Appendix for the complete version):
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Pilot slice & MSE & 1 -> 0 & 0 -> 1 & Acc \\ \hline t=0.0, no evidence & 0.033 & 4\% & 1\% & 5\% \\ t=0.0, golden evidence & 0.016 & 1\% & 1\% & 2\% \\ t=0.7, no evidence & 0.040 & 4\% & 1\% & 5\% \\ t=0.7, golden evidence & 0.026 & 2\% & 4\% & 6\% \\ \hline \end{tabular}
\end{table}
Table 1: sensibleness prompt alignment
**Instructions**: Does B's final reply in the dialog below make sense to you? Use your common sense here. Is the response completely reasonable in context? Then rate it as '1.0'. If anything seems off -- confusing, illogical, out of context, lacks common sense -- then reduce the rating accordingly. Slightly illogical? '0.8'. Complete nonsense out of context? '0.0'
**Dialog**:
A: who celebrates new year first in the world?
B: Tonga and Kiritimati, part of Kiribati, are examples of the first places to welcome the New Year
A: who celebrate new year last in the world?
**Final reply**:
B: Samoa and American Samoa are the last places to welcome the New Year, as they are the first to see the sunrise on January 1st.
Answer: 0.4
...
## 6 Experimental Setup
### Native Dialog Prompt
To support conversational datasets with PaLM, we adopt "native" dialog prompts based on how dialog data was passed to the model during training. An example of the format is shown below (notice that the example has 3 participants):
0 -1 0 Knock Knock [cot]
1 0 1 Who's there? [cot]
2 1 0 Interrupting cow [cot]
3 1 2 Nobel [cot]
4 3 1 Nobel who? [cot]
5 4 2 That's why I knocked [cot]
6 5 1 <PaLM to complete>
Here, the first number is a turn's index, the second is the index of a parent's turn and the third is a speaker's id. Note that this format supports multiple speakers and non-linear structure of the conversations.
The use of native dialog prompts for the response generation has an indirect effect of forcing the model to stay sensible within the dialog history. It also simplifies parsing by allowing us to simply stop at the next [cot] token.
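A small sketch of how such a prompt could be assembled programmatically; the tuple layout mirrors the example above, while the function name and the trailing open turn are our own assumptions:

```python
def render_native_dialog_prompt(turns, next_speaker_id):
    """Serialize turns as "<turn index> <parent index> <speaker id> <text> [cot]"
    and leave a final open turn for PaLM to complete."""
    lines = [f"{i} {parent} {speaker} {text} [cot]"
             for i, (parent, speaker, text) in enumerate(turns)]
    lines.append(f"{len(turns)} {len(turns) - 1} {next_speaker_id} ")
    return "\n".join(lines)

prompt = render_native_dialog_prompt(
    [(-1, 0, "who celebrates new year first in the world?"),
     (0, 1, "Tonga and Kiritimati, part of Kiribati, are examples of the first places to welcome the New Year")],
    next_speaker_id=0,
)
```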
### Advanced Promptings
Emergent abilities [9], like few-shot learning and chain-of-thought reasoning, rely on prompting to unlock new model capabilities. With advanced promptings, we aim to make a sensible model (a model optimized for perplexity) more attributable. In other words, by structuring the prompt with "instructions", "facts" and "dialog history", we hope the language model will generate a (sensible) response attributable to the given evidence.
The full prompt might look something like this:
Instructions: use the information from the provided "fact" to answer the question
Fact: Racing career [ edit ] Early racing career [ edit ] Kulwicki began his racing career as a 13-year-old kart racer. [10] His father built engines as the crew chief for Norm Nelson and Roger McCluskey's United States Automobile Club (USAC) racecars. [1][12] Because his work involved travel, Kulwicki's father was unable to help his son at most kart races, [9]... (truncated)
0 -1 0 When did Alan Kulwicki start racing? [cot]
1 0 1 Kulwicki began his racing career as a 13-year-old kart racer. [cot]
2 1 0 Was Alan Kulwicki able to race cars at the young age of 13? [cot]
3 0 1
To better understand the impact of various components, we further adjust the structure of the prompts along several dimensions:
* "instructions" could be either present or absent,
* similarly, dialog history could be also present or absent,
* and the provided evidence could be: golden, retrieved, absent, non-evidence (i.e. guaranteed not to be golden evidence).
For the retrieved evidence we use BM25 and look into top-1, top-2 and top-3 retrieval (e.g. in the latter case, three facts are provided in the prompt rather than one). See also the notes about "simulated" retrieval in the results section.
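For concreteness, a sketch of top-k BM25 retrieval with the rank_bm25 package (one common implementation; the paper does not name a specific one, and the toy corpus and whitespace tokenization are placeholders):

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25

corpus = [
    "Kulwicki began his racing career as a 13-year-old kart racer.",
    "His father built engines as the crew chief for USAC racecars.",
    "Tonga and Kiritimati are examples of the first places to welcome the New Year.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

query = "When did Alan Kulwicki start racing?"
top_facts = bm25.get_top_n(query.lower().split(), corpus, n=3)  # top-3 evidence for the prompt
```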
We arrive at a "full grid" of experiments by running each of the prompting setups described above in 6 settings: 3 different model sizes (8B, 62B and 540B) crossed with 2 sampling temperatures (0.0 and 0.7).
As mentioned in the previous section, conducting the full grid of experiments with human raters is unrealistic. Instead, after confirming that the auto-metrics are well aligned with the human ratings from the pilot, we apply Auto-AIS and Auto-SSA to the full grid of experiments for further analysis.
## 7 Results and Discussion
In this part we will examine the auto-metrics on the full grid of experiments and summarize the findings in several takeaways about global and local tradeoffs. The overall plan is as follows.
First, we will find that the parameters of the context structure, such as the presence or absence of evidence or dialog history, have a large-scale effect on the value of the global tradeoff as measured by F1 (takeaway 1), while varying the model parameters (i.e., size or sampling temperature) has only a medium-scale effect (takeaway 2).
Next, we will look into quantifying the local tradeoff and conclude that it is inherently connected to having some additional, "hidden" constraints, like context size (takeaway 3). Based on this observation, we will design a set of synthetic experiments to demonstrate how the local tradeoff manifests for top-1 retrieval under a restricted context-ratio "budget" (takeaway 4), and will use the same experiments to argue further that using top-k retrieval for higher recall leads to improved attribution at the cost of reduced fluency (takeaway 5). We will also look into the effect of input-level re-ranking on both global and local tradeoffs (takeaway 6).
Finally, we will discuss how the results / takeaways affect our understanding of global and local tradeoffs and propose a way to combine all the takeaways together into a recipe in which a small (8B) model is used with top-k retrieval and re-ranking to achieve values of global tradeoff / F1 that are comparable to those produced by a large (540B) model with oracle knowledge.
### Four Clusters
Let's start with the full grid plot. Each dot in figure 1 corresponds to the aggregated metrics over the selected 100 examples, with the final responses generated by the same model:
* blue for small (PaLM, 8B), brown for medium (PaLM, 62B) and red for large (PaLM, 540B),
* The light version of the particular color (light blue, light brown, pink) means temperature=0.7 (vs temperature=0.0 otherwise),
* Green curves are iso-F1 levels.
There are 4 distinct clusters in this figure, as outlined in table 2.
Should we expect the "center" cluster to be in the bottom right instead? Note that "dialog history absent" means that the last dialog turn (i.e. user's question) is still provided, which in our case of decontextualized queries often has enough information for the model to keep the dialog sensible, even without knowing the rest of it.
**Takeaway 1**. Presence or absence of the evidence and/or dialog history have a _large scale_ effect on the value of the tradeoff (where tradeoff is considered in accordance with _global_ view).
\begin{table}
\begin{tabular}{|c|c|c|} \hline & Dialog history present & Dialog history absent \\ \hline Golden evidence present & Top right & Center \\ \hline Golden evidence absent & Top left & Bottom left \\ \hline \end{tabular}
\end{table}
Table 2: Cluster types
### Golden Evidence
Next, let's zoom in on the "top right" cluster (figure 2), i.e. all the models for which both golden evidence and the full conversation history are available.
Here we have 4 well-defined F1 bands (from left to right, in the direction of increasing F1):
* _light blue_ (small models, t=0.7)
* _blue_ (small models, t=0.0) and _light brown_ (medium models, t=0.7)
* _brown_ (medium models, t=0.0) and _pink_ (large models, t=0.7)
* _red_ (large models, t=0.0)
In other words, for our selection of models, sampling temperature and model size have a comparable effect on the value of the global tradeoff: one can choose between a smaller model in greedy mode or a larger model with a higher temperature and expect to end up within the same F1 band.
**Takeaway 2**. Reducing temperature and increasing model size have a comparable _medium scale_ effect on the F1.
### Large Model
Let's zoom in further and look at the large models only (figure 3).
High-temperature experiments on the left (i.e. where the 1st value in the descriptor tuple is t=0.7) are well separated by F1 bands from the low-temperature ones (t=0.0) on the right, as we have already seen in the previous figure for the medium-scale effects. The rest of the variance within each set of same-temperature experiments comes from differences in the prompt: presence or absence of instructions, additional retrieved evidence besides the golden passage (i.e. top-2 or top-3), and so on.
Can we start looking for tradeoffs in the local sense within these sets? In other words, is it enough to fix the model size and sampling temperature (and, implicitly, the dataset) to see a systematic local tradeoff across different prompt structures (i.e. within the same-color experiments in figure 3)? For the most part, the answer is "no".
The prompts that are somewhat comparable are those that differ only in whether "instructions" are present. Specifically, "1" vs "6", "0" vs "4" or "3" vs "5" for the prompt type (the 3rd value in the descriptor tuple).
Figure 1: Full grid, 4 clusters
Figure 3: Large models with golden evidence
Figure 2: Cluster with golden evidence
Why are the rest not comparable? We hypothesize that the missing variable is the size of the context (i.e. the combined size of everything passed to the model as input: dialog history, one or more retrieved facts, instructions, etc.). For an extreme example, compare the prompt with "no dialog history" and "no evidence" to the prompt with "full dialog history", "instructions" and "3 retrieved facts". The model using the former is never going to be more fluent than the one using the latter, purely because the latter has more degrees of freedom.
**Takeaway 3**. Comparing experiments with the context of similar size is important for quantifying the _local_ tradeoff.
### Restricted Context and Retrieval
PaLM is able to effectively handle fairly large context (\(\approx 2000\) sentence pieces), so to investigate the impact of limited context size we will restrict it artificially in the following way:
* Let's start (in the top left corner) with all dialog turns (2n + 1 turns) and empty evidence,
* Keep dropping turns one by one from the dialog (from the top, i.e. starting from the older turns),
* Simultaneously, let's add sentences to the evidence (from the beginning) in such a way that the total amount of sentence pieces in dialog and evidence stays approximately the same (more specifically, the ratio of [the remaining dialog turns to the full dialog history] plus [the ratio of the newly added evidence to the full evidence] is kept close to 1),
* We end (in the bottom right corner) with no dialog turns (unlike with a "center" cluster before, the end state does not even have a user's query) and full evidence.
Figure 4 below shows this process for the 540B model with temperature=0. Green dots mean that "non-evidence" was used, the corresponding blue dots use "golden" evidence, and the connecting gray lines can be treated as "top-1 evidence" for various values of recall (with the green end being \(recall@1=0\%\) and the blue end being \(recall@1=100\%\)).
More specifically, we first compute the values of auto-metrics for the same 100 examples both with "non-evidence" and with "golden" evidence. Those values are then used to obtain auto-metrics of the "average" retrieval system with a given value of \(recall@1=X\%\) for various values of X from 0 to 100. These interpolated values give us "gray" lines of simulated top-1 retrieval.
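A minimal sketch of that interpolation (the per-setup metric values below are made-up placeholders, not measurements from the paper):

```python
def simulated_top1_point(non_evidence_metrics, golden_metrics, recall_at_1):
    """Metrics of an "average" retrieval system: with probability recall@1 the golden
    passage is retrieved (golden-evidence metrics apply), otherwise a non-evidence
    passage is used, so the aggregate is a linear mix of the two measured endpoints."""
    s0, a0 = non_evidence_metrics   # (sensibleness, attribution) with non-evidence
    s1, a1 = golden_metrics         # (sensibleness, attribution) with golden evidence
    r = recall_at_1
    return ((1 - r) * s0 + r * s1, (1 - r) * a0 + r * a1)

# One "gray" line, sampled at 10% recall increments (endpoint values are hypothetical).
gray_line = [simulated_top1_point((0.92, 0.05), (0.95, 0.80), r / 100) for r in range(0, 101, 10)]
```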
Notice that curves defined by blue and green dots respectively are (mostly) well-aligned with iso-F1 lines.
**Takeaway 4**. There is a clear _local_ tradeoff for models with fixed context ratio between dialog and evidence.
Figure 4: Restricted context and top-1 retrieval
There are several plausible ways to re-interpret this simplistic synthetic experiment in more practical terms.
One interpretation is to think about a two-stage retrieval system, where first stage retrieves (long) documents and then second stage selects relevant parts within them with some kind of reading comprehension system. More / less space allocated for evidence in the synthetic setup will then correspond to having a better / worse second stage in a "real" two-stage setup.
Another interpretation could be in terms of top-1 vs top-k retrieval. In this case adding additional pieces to the evidence in our artificial setup corresponds to switching from retrieving \(X\) items to retrieving \(X+1\).
The latter interpretation leads us to the conclusion that in the case of limited context "budget" it's not necessarily advantageous to use top-k evidence for improved recall.
For example, in figure 5, top-k evidence leads to blue dots that are lower (in sensibleness) than the dots for top-1 evidence and, in general, we might be losing fluency while gaining attributability (check the slope of the "red" lines below for an approximation of the effect that using top-3 evidence could have). That is, while we move faster along the retrieval lines when using top-3 retrieval (since recall@3 > recall@1), we move in a different direction (along the "red" lines instead of the "gray" ones).
**Takeaway 5**. Improved recall from top-k retrieval can often lead to increased attribution and reduced fluency, compared to top-1 retrieval.
### Re-Ranking or Input-Level Ensembling
Let's now look into local tradeoff from a different angle, which should allow us to connect global and local tradeoffs together, and provide a way to mitigate tradeoff to a degree.
Consider 7 "red" experiments from figure 3 (all with golden evidence). They consist of the same 100 input examples with 7 different prompt structures, which could further be re-ranked on the input example level by auto-metrics to produce "aggregated" experiments. I.e. we can choose 1 out of 7 examples, based on some selection criteria and do it this way for all 100 input examples to build an "aggregated" or re-ranked experiment.
For example, if we want to improve an attribution score, we can choose to simply maximize NLI as our selection criteria above. This is represented by an "orange" experiment in figure 6. Alternatively, we can first filter out examples that were deemed non-sensible (i.e. PaLM produced a score less than 0.5) and only then chose those that maximize NLI. This way we will arrive at a "green" experiment instead.
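A minimal sketch of the two selection criteria (the candidate dictionaries and score names are an assumed layout; the fallback when nothing passes the sensibleness filter is our own addition):

```python
def rerank_max_nli(candidates):
    """'Orange' criterion: pick the response with the highest NLI (attribution) score."""
    return max(candidates, key=lambda c: c["nli"])

def rerank_sensible_then_nli(candidates, threshold=0.5):
    """'Green' criterion: drop candidates judged non-sensible, then pick max NLI."""
    sensible = [c for c in candidates if c["sensibleness"] >= threshold]
    return rerank_max_nli(sensible or candidates)  # fall back to all candidates if none pass

# Responses for one input example under several prompt structures (hypothetical scores).
candidates = [
    {"response": "...", "sensibleness": 0.8, "nli": 0.40},
    {"response": "...", "sensibleness": 0.2, "nli": 0.95},
    {"response": "...", "sensibleness": 1.0, "nli": 0.70},
]
best = rerank_sensible_then_nli(candidates)  # picks the third candidate
```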
Figure 5: Restricted context and top-k retrieval
Notice that while the original "red" experiments are not comparable for local-tradeoff purposes, as explained before, the "green" and "orange" experiments are produced under the same constraints and, as a result, can be considered comparable. These two aggregated experiments demonstrate a clear local tradeoff between fluency and attribution, and a significant improvement in F1 over the initial non-aggregated experiments. Figure 7 shows the same re-ranking process applied to the outputs of the small model (notably, the "green" aggregated dot from figure 7 falls inside the cluster of "red" unaggregated dots from figure 6).
To summarize this section, re-ranking multiple outputs produced by a model of the same size and temperature leads to medium-scale improvements in F1, comparable in our experiments to the effect of simultaneously reducing the temperature and increasing the model size (i.e. approximately doubling the effect from takeaway 2), or to keeping the temperature the same and increasing the model size by two orders of magnitude (i.e. going from 8B to 540B). Compare also to the similar results from [7].
**Takeaway 6**. Performing multiple inferences with a small model (8B) and then re-ranking them by auto-metrics allows us to reach F1 values of the global tradeoff comparable to the ones produced by a large model (540B) without re-ranking.
### Discussion
Let's revisit the global and local tradeoffs based on the results / takeaways. At a high level we can point out the following:
* Global tradeoff in general leads to relatively large "moves" that are orthogonal to the iso-F1 curves. For example, in figure 1 it is the move from models with "query only" to models with some context and then to models with the full context. In figure 2 it is the movement from left to right corresponding to model size and temperature changes, and in figure 6 it is the move from the original "red" dots to the re-ranked "orange" and "green" ones.
* At the same time, local tradeoff typically results in movement along the iso-F1 levels (in other words, local tradeoff tends to preserve the F1 value). For example, in figure 3 it is the small-scale moves between experiments with the same model where the only difference in the prompt structure is the absence or presence of "instructions" ([0.0, L, 6] -> [0.0, L, 1] or [0.7, L, 6] -> [0.0, L, 1]), in figure 4 it is the movement along the curves corresponding to dots of the same color (blue or green), and in figures 6 and 7 it is the move between the "orange" aggregated experiment and the "green" one.
Notice further that for a local tradeoff to happen, we need some kind of constraints to be defined. In other words, operating in a fully unconstrained mode is not very interesting: we would simply choose the largest available model, provide it with the full dialog history and golden evidence, try all kinds of different prompt structures, and use human raters to cherry-pick the best generations from a very large number of inferences.
Figure 6: Aggregating 540B models by NLI (orange dot) and by Sensibleness, then NLI (green)
The constraints could include fixing the overall context size, as done in takeaway 3, or restricting the overall ratio that can be "spent" on dialog and evidence, as done in the synthetic experiments from takeaways 4 and 5. The constraint in takeaway 6 is a bit more subtle - the context size here is not restricted, as the different "red" experiments have (very) different prompt structures and vary a lot in terms of overall context size. What is restricted, however, is the number of inferences (7 in this case), the set of prompts (also 7) and the space of functions used for re-ranking (in our case, even though our PaLM-based auto-sensibleness produces pseudo-scores, we have only justified its alignment with human preferences as a boolean metric; if, instead, we had built an auto-metric that gives sensibleness as an actual score in [0, 1], we could have expanded the re-ranking function space to, for example, all linear combinations a * auto-sensibleness + b * auto-attribution).
Other constraints that could potentially result in a local tradeoff (we have not covered these with comprehensive experiments and consider them promising directions for future work) include:
* Relative positioning within the context (e.g. putting evidence early in the prompt structure vs putting it close to the end, etc; especially relevant to the case when native dialog prompt format is used, as there could only be single last turn in the dialog and one can expect the last turn in the dialog to have the largest impact on the next turn).
* The number, the specific ratio of the examples and overall cost / efforts spent on the fine-tuning mixture, if one is used (e.g. fine-tuning mixture could have more diversity of examples how to be attributable in different situations, but not enough examples of staying sensible in diverse situations; this could be less relevant for QReCC but more important for chit-chatty datasets we discussed previously).
* Function space constraints for decoding strategy. So far, we have only looked into greedy and temperature-based decoding, but significantly more sophisticated decoding schemes could, in principle, be used and lead to better results (see [15] for example) in terms of the F1 value / global tradeoff. We would still expect to see local tradeoff being present within such extended decoding space.
### Putting Things Together
Let's assume that we can afford running (at scale) only the smallest (i.e. 8B) model, but we want to get results that are comparable in F1 value / global tradeoff to large models that are using oracle knowledge of golden passages. Our qualitative experiments suggest the following high-level recipe:
1. Choose a retrieval system (for example, Google Search) and value \(K_{1}\), such that recall@\(K_{1}\) is sufficiently close to 1 for the anticipated use case.
Figure 7: Aggregating 8B models by NLI (orange dot) and by Sensibleness, then NLI (green)
2. Find a value \(K_{2}\leq K_{1}\), such that it's possible to fit \(K_{2}\) retrieved items of average length into the prompt without exceeding context size (or context ratio) constraints.
3. Build auto-metrics, aligned with human preferences for the expected data. If using a large model for Auto-SSA, as we did, to be practical this might require a 2-step approach of first tuning the prompt with a large model and then distilling it into a small model over a large amount of unlabeled data.
4. Perform a single run of the chosen retrieval system for each data point (i.e. an input dialog for which we want to generate knowledge-grounded next turn) and retrieve \(K_{1}\) items.
5. For the same data point, run multiple inferences using the 8B model with the advanced promptings described previously, for different numbers of retrieved items \(K\in[1,K_{2}]\). For each \(K\), run at least \(\lceil K_{1}/K\rceil\) inferences, to cover all \(K_{1}\) items within prompts that have exactly \(K\) retrieved items in them.
6. Use Auto-Fluency as a boolean metric followed by Auto-Attribution to re-rank all the inferences for the given datapoint and produce a single "aggregated" inference.
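A compact sketch of steps 4-6 of this recipe (the callables stand in for the retrieval system, the 8B model and the two auto-metrics; the default values of K1 and K2 are placeholders):

```python
import math

def knowledge_grounded_turn(dialog, retrieve, generate, auto_fluent, auto_attribution,
                            k1=8, k2=3):
    """Steps 4-6: retrieve once, run multiple small-model inferences covering all
    retrieved items with prompts of 1..k2 facts, then re-rank by boolean fluency
    followed by attribution."""
    items = retrieve(dialog, k1)                                   # step 4
    candidates = []
    for k in range(1, k2 + 1):                                     # step 5
        for j in range(math.ceil(k1 / k)):
            facts = items[j * k:(j + 1) * k]
            candidates.append((generate(dialog, facts), facts))
    fluent = [c for c in candidates if auto_fluent(c[0])]          # step 6
    pool = fluent or candidates
    return max(pool, key=lambda c: auto_attribution(c[0], c[1]))[0]
```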
## 8 Conclusion
We examined the relationship between fluency and attribution for large language models on a factuality-focused dialog dataset (QReCC). We started with human evaluation on a small subset of the data, then used these ratings to align a set of auto-metrics with human preferences from the pilot. We next applied the auto-metrics to a much larger set of experiments that explore a wide range of parameters, both model and context-specific.
To describe the results of our experiments, we have followed implicit definitions from prior work to explicitly introduce and later refine the concepts of global and local tradeoff. We summarized our findings as several takeaways about 2 views of the tradeoff and proposed a way to combine all the takeaways together into a retrieval-augmentation recipe.
|
2304.01490
|
The Economic Effect of Gaining a New Qualification Later in Life
|
Pursuing educational qualifications later in life is an increasingly common
phenomenon within OECD countries since technological change and automation
continues to drive the evolution of skills needed in many professions. We focus
on the causal impacts to economic returns of degrees completed later in life,
where motivations and capabilities to acquire additional education may be
distinct from education in early years. We find that completing an additional
degree leads to more than \$3000 (AUD, 2019) extra income per year compared to
those who do not complete additional study. For outcomes, treatment and
controls we use the extremely rich and nationally representative longitudinal
data from the Household Income and Labour Dynamics Australia survey (HILDA). To
take full advantage of the complexity and richness of this data we use a
Machine Learning (ML) based methodology for causal effect estimation. We are
also able to use ML to discover sources of heterogeneity in the effects of
gaining additional qualifications. For example, those younger than 45 years of
age when obtaining additional qualifications tend to reap more benefits (as
much as \$50 per week more) than others.
|
Finn Lattimore, Daniel M. Steinberg, Anna Zhu
|
2023-04-04T03:09:41Z
|
http://arxiv.org/abs/2304.01490v3
|
# The Economic Effect of Gaining a New Qualification in Later Life+
###### Abstract
Pursuing educational qualifications later in life is an increasingly common phenomenon within OECD countries since technological change and automation continues to drive the evolution of skills needed in many professions. We focus on the causal impacts to economic returns of degrees completed later in life, where motivations and capabilities to acquire additional education may be distinct from education in early years. We find that completing an additional degree leads to more than $3000 (AUD, 2019) extra income per year compared to those who do not complete additional study. For outcomes, treatment and controls we use the extremely rich and nationally representative longitudinal data from the Household Income and Labour Dynamics Australia survey (HILDA). To take full advantage of the complexity and richness of this data we use a Machine Learning (ML) based methodology for causal effect estimation. We are also able to use ML to discover sources of heterogeneity in the effects of gaining additional qualifications. For example, those younger than 45 years of age when obtaining additional qualifications tend to reap more benefits (as much as $50 per week more) than others.
_JEL: J12, J18, H53_
_Keywords: Machine Learning, education, mature-age learners, causal impacts_
## 1 Introduction
Pursuing educational qualifications later in life is an increasingly common phenomenon within OECD countries (OECD, 2016). Technological change and automation continue to drive the evolution of skills needed in many professions, or to oust the human workforce in others. This is particularly true for middle-income workers performing routine tasks (Autor, Katz and Kearney, 2008, Acemoglu and Autor, 2011). At the lower end of the income distribution as well, such as among welfare recipients, governments are increasingly trying to promote the idea of life-long learning.
This paper contributes to understanding one efficacy dimension of these policy and individual choices by estimating the causal effects on earnings, focusing on mature-age students. We add to previous work on the returns to education for 'younger students'. Previous research points to positive and significant wage premiums for younger cohorts with more education, ranging between 5 and 13% (Angrist and Krueger, 1991, Harmon, Oosterbeek and Walker, 2003, Machin, 2006) or even higher than 15% as in the case of Harmon and Walker (1995). The wage returns to education may be more uncertain for older students, as they face higher opportunity costs to study and need to navigate a more fragmented postsecondary education system.
We also add to the literature that investigates the economic returns for mature-age learners at community or training colleges (Jacobson, LaLonde and Sullivan, 2005, Chesters, 2015, Zeidenberg, Scott and Belfield, 2015, Polidano and Ryan, 2016, Xu and Trimble, 2016, Belfield and Bailey, 2017\(a\), Dynarski, Jacob and Kreisman, 2016, 2018, Mountjoy, 2022). The evidence on the labour market returns to vocational and community college education is strong and positive, particularly for female students (Belfield and Bailey, 2017\(a\), Zeidenberg, Scott and Belfield, 2015, Perales and Chesters, 2017). The results are even stronger once authors account for the different earnings-growth profiles of students and non-students before undertaking the degree (Dynarski, Jacob and Kreisman, 2016, 2018).
By focusing on the one institutional setting - the community or training college - the results of such studies may not be generalisable to the entire mature-age education market, such as to students who seek different degree types or who study at different institutions (Belfield and Bailey, 2017\(b\), Mountjoy, 2022). We add to this literature by estimating the returns across all formal degree-types (post-graduate degrees, training certificates, diplomas etc), and spanning all subjects and institutions at which the study took place. This means we analyse the effects for a group of students with a larger span of demographic and socio-economic background characteristics. The broad remit of students that we analyse
also allows our study to complement studies that evaluate government-run training programs, which tend to enrol low-productivity workers (Ashenfelter, 1978, Ashenfelter and Card, 1985, Bloom, 1990, Leigh, 1990, Rauum and Torp, 2002, Jacobson, LaLonde and Sullivan, 2005, Card, Kluve and Weber, 2018, Knaus, Lechner and Strittmatter, 2022).
We contribute the first evidence systematically identifying which groups of mature-age students tend to benefit more from further education. We also complement previous studies that already find significant heterogeneity by degree type, institutional setting, and by the background characteristics of the student (Blanden et al., 2012, Zeidenberg, Scott and Belfield, 2015, Polidano and Ryan, 2016, Dorsett, Lui and Weale, 2016, Xu and Trimble, 2016, Belfield and Bailey, 2017\(a\), Perales and Chesters, 2017, Bockerman, Haapanen and Jepsen, 2019). A benefit of a systematic, data-driven approach to heterogeneity analysis is that it can reduce the risk of overlooking important sub-populations compared to less data-driven approaches (Athey and Imbens, 2017, Knaus, Lechner and Strittmatter, 2021).
A key challenge in estimating the causal returns to later-life education is that factors that enable mature-age learners to pursue and complete a qualification may also be precursors to later-life success. Moreover, the drivers of degree completion may be numerous and related to other variables in complex, unknown ways. We use a machine learning (ML) based methodology in this work since it allows us to intensively control for many confounding factors, as well as discover sources of treatment heterogeneity. ML algorithms also automatically discover nonlinear relationships that may be unknown to the researcher. For high-dimensional and complex datasets such as we use in this research, these methodological abilities are crucial in reducing bias from model mis-specification and confounding (e.g. selection into treatment), and reducing variance from correlation/collinearity.
We adapt ML tools for causal inference purposes. We recognise that, as with all statistical models, we make assumptions when we use ML techniques for causal inference, and these need to be tested. One key assumption is that the controls included in the ML models sufficiently account for selection into treatment. We propose to undertake a replication exercise where we compare the results of the ML model with those of baseline models, using Ordinary Least Squares (OLS) and Fixed Effects. We also contrast the selected control variables in the ML model with those that were manually selected in Chesters (2015), and comment on the potential biases from manual variable selection. We have chosen this published work because it uses the same data (HILDA) and examines the same topic.
The results show that an additional degree in later life increases total future earnings by an average of more than $3,000 per year compared to those who do not complete any further study. We consistently estimate this causal effect using a selection-on-observables strategy based on T-learner, Doubly Robust and Bayesian models. The estimate is based on 19 years of detailed nationally representative Australian data from the Household Income and Labour Dynamics Australia (HILDA) survey. Two dimensions of these data are important. The first is that they contain a wealth of information about each respondent. For example, we begin with more than 3,400 variables per observation, including information about the respondents' demographic and socio-economic background, and on their attitudes and preferences. Access to this broad range of information means that, by controlling for it, we can potentially proxy for unobservable differences between those who do and do not obtain a new qualification. Secondly, this dataset contains many variables that are highly correlated, so we require a systematic approach to reduce such information redundancy - something that ML models are adept at.
Our ML approach also identifies new sub-populations for which the treatment effects are different. We document that the starting home loan amount and employment aspirations are significant factors related to the extent of gain from further study. We also find that the starting levels of and pre-study trends in personal and household income are hugely important. Age and mental health variables also account for variation in estimated effects. All of these variables are consistently selected as being significant for prediction out of the 3,400 features within the HILDA data. This selection is consistent across different ML models (which include linear and non-linear model classes) and across numerous bootstrap draws of the original sample.
Previous studies have found that individuals who seek a further degree tend to have slower-growing earnings in the period before their study starts compared to similar individuals who do not seek further study (Jacobson, LaLonde and Sullivan, 2005, Dynarski, Jacob and Kreisman, 2016, 2018). By accounting for dynamic selection into obtaining a further degree, we can be confident that we compare the earnings paths of mature-age students to the paths of similar non-students who displayed the same earnings (and other) paths before study began. In this paper, we explicitly control for the trajectories of socio-economic and demographic circumstances before study starts. Standard fixed effects estimation would miss these dynamic confounders. We find that our ML estimates are significantly smaller than the standard fixed effects results. We also estimate lower returns compared to Ordinary Least Squares (OLS) models. We document the additional confounder variables that we include in our models but that are usually omitted
from standard OLS specifications. These variables suggest there is significant selection into mature-age students who undertake a further degree.
We adapt ML models for the purpose of estimating causal effects. Standard off-the-shelf ML models are better suited to predictive purposes. When obtaining a prediction, off-the-shelf ML models can find generalisable patterns and minimise overfitting issues, through the use of cross-validation, because the true outcomes are observed. This means that we can optimize a goodness-of-fit criterion. Causal parameters, however, are not observed in the data, which means we cannot directly train and evaluate our models.
In this paper, we take the difference between the two optimal outcome models, which can achieve the optimum bias-variance trade-off point for the conditional average treatment effect. Specifically, we model the response surfaces for two conditional mean equations - one using the treatment observations and another using the control observations. We estimate these equations with ML methods such as the T-learner and Doubly Robust. Here, we employ both linear (LASSO and Ridge) and non-linear (Gradient Boosted Regression) model classes. We evaluate their comparative performance using nested cross-validation. We then test the statistical significance of our causal parameters by examining the distribution of the estimates through bootstrapping. Last, we use a variety of Bayesian ML models following the formulation presented in Hahn, Murray and Carvalho (2020) that reduce effect estimation bias within the Bayesian paradigm. These models have several properties that may be desirable, such as the ability to directly parameterise heterogeneous prognostic and treatment models.
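As an illustration, a minimal T-learner sketch with scikit-learn gradient boosting (one of the model classes mentioned above); the synthetic data below merely stands in for the HILDA features and is not part of the paper:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def t_learner_cate(X, t, y):
    """Fit separate outcome models on treated and control observations and take the
    difference of their predictions as the conditional average treatment effect."""
    mu1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])
    mu0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])
    cate = mu1.predict(X) - mu0.predict(X)
    return cate, cate.mean()          # per-observation effects and their average

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
t = rng.integers(0, 2, size=500)
y = X[:, 0] + 2.0 * t + rng.normal(size=500)   # true effect of 2.0
effects, ate = t_learner_cate(X, t, y)
```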
## 2 Context: Higher education and Vocational study in Australia
Mature-age education in Australia is among the highest in the world. In 2014, Australia's participation in vocational education by those aged 25-64 was the highest among OECD countries. The tertiary education rate for those aged 30-64 was the second highest (Perales and Chesters, 2017). Mature-age Australians are increasingly enrolling in university or college to change employers, change careers, gain extra skills, improve their promotion prospects and earning capability or search for better work/life balance. Redundancy and unemployment have also been driving forces for individuals to return to education later in life (Coelli, Tabasso and Zakirova, 2012).
The increase in mature-age learners accessing higher education has in part been driven by government policy. In 2009, the Australian government adopted a national target of at
least 40% of 25-34-year-olds having attained a qualification at bachelor level or above by 2025 (O'Shea, May and Stone, 2015). This was part of a policy that transitioned Australia to a demand-driven system (Universities Australia, 2020). The policy had a large effect on access to higher education, as it removed the cap on the number of university student places. By 2017, 39% of 25-34-year-olds had a bachelor's degree or higher (Caruso, 2018).
While the initial uptake of university places in the demand-driven system was strong, especially among mature-age students1 (Universities Australia, 2019), growth in undergraduate enrolments slowed since 2012. In 2018, mature-age enrolments even dropped below the previous year. The 40+ age group showed the worst growth, receding by 10%, while the 25-29's and 30-39's showed growth of around -4% (Universities Australia, 2020). The decline of enrolments coincided with the freezing of the Commonwealth Grant Scheme (CGS) which capped funding at 2017 levels, effectively ending the demand-driven system (Universities Australia, 2020).
Footnote 1: Between 2010 and 2012, growth in mature-age enrolments in undergraduate courses doubled for the 30-39 age group and tripled for the 40+ age group.
Access to Commonwealth Supported Places (CSPs) have since been limited to 2017 levels, with cap raises from 2020 subject to performance measures (Universities Australia, n.d.). As a proportion of the working age population, mature-age students also participated less in vocational education and training (VET) over the same period. It appears the introduction of the demand-driven system also increased VET participation between 2010 and 2012, before continuing its decline (Atkinson and Stanwick, 2016). Total VET enrolments since 2018 stabilised, with 2019 and 2020 enrolments slightly above 2018 levels2 (NCVER DataBuilder, 2021). The impact of COVID-19 on 2021 enrolments is yet to be fully determined. So far, VET enrolments for the first half of 2021 are well above the previous 4 years across all age groups, with \(\sim\)1 million enrolments in 2021 compared to \(\sim\)870 thousand enrolments in 20173 (NCVER DataBuilder, 2021).
Footnote 2: Total VET enrolments 2016-2020.
Footnote 3: Government funded program enrolments Jan-June 2017-2021.
The cost of a bachelor's degree for domestic students in Australia is the sixth highest among OECD countries (Universities Australia, 2020). In 2018, the average annual cost of a bachelor's degree was around $5,000 in Australia, about half of the top 2 most expensive countries where it costs around $9,000 in the US and $12,000 in the UK4. VET and TAFE courses in Australia cost a minimum of $4,000 per year on average while postgraduate courses cost a minimum of $20,000 per year on average5 (Studies in Australia, 2018).
Mature-age students can cover the cost of further study themselves or they can receive support from the government. Students at university or approved higher education providers can access financial support from the Higher Education Loan Program (HELP) scheme, which provides income-contingent loans. This allows students to defer their tuition fees until their earnings reach the compulsory repayment threshold, upon which repayments are deducted from their pay throughout the year at a set rate. Postgraduate students can access the Commonwealth Supported Place (CSP) scheme, which subsidises tuition fees for those studying at public universities and some private higher education providers. However, most CSPs are for undergraduate study.
FEE-HELP is the HELP scheme available to full-fee paying students who don't qualify for a CSP i.e., post-graduate students. VET Students Loans (formerly VET FEE-HELP) are also part of the HELP scheme and are available to students undertaking vocational education and training (VET) courses outside of higher education (Universities Australia, 2020). CSPs and HELP loans are withdrawn from students who fail half of their subjects, assessed on a yearly or half-yearly basis depending on the level of study.6
Footnote 6: Yearly at bachelor level and per trimester for courses lower than bachelor level.
## 3 Data
We use data from the Household Income and Labour Dynamics Australia (HILDA) survey. These data are rich, and we exploit the full set of background information on individuals (beginning with more than 3,400 variables per observation).
HILDA covers a long time span of 19 years, starting in 2001. We use the 2019 release. This means we observe respondents annually from 2001 to 2019.
### Sample exclusions
Our main analysis sample contains respondents who were 25 years or above in 2001. This allows us to focus on individuals who obtain a further education - beyond that acquired in their previous degree.
Our main analysis focuses on measuring the impact of further education using wave 19 outcomes. Here, the feature inputs to the models are taken from the individuals in 2001. We delete any individuals who were 'currently studying' in 2001. This also ensures that our features, which are defined in 2001 are not contaminated by the impacts of studying but clearly precede the study spell of interest. These sample exclusions result in 7,359
respondents being dropped because they are below the age of 25 in 2001 and a further 1,387 respondents being dropped because they were studying in 2001.
We then restrict the sample to those who are present in both 2001 and 2019. This ensures that we observe base characteristics and outcomes for every person in our analysis sample. This results in a further 5,727 respondents being dropped from the sample. Our analysis sample has 5,441 observations. More details of our main analysis sample and data can be found in the Online Appendix Document 1.7
Footnote 7: For sensitivity analysis, a second sample of respondents are examined. They are slightly younger when they began study, their feature values are taken in the two years before study began and their outcomes are measured four years after their study began. In this second sample, there are 1,814 individuals who started and completed a further educational degree, and 60,945 person-wave control observations who never completed a further degree. We detail our second approach in the Online Appendix Document 2.
### Outcomes
We measure outcomes in 2019 across the groups of individuals who did and did not get re-educated. We use annual earnings to measure the economic returns to education. We also analyse outcomes related to the labour market such as employment, changes in earnings, changes in occupation, industry, and jobs.8
Footnote 8: A second approach is to use outcomes measured four years after the start of a study spell. For sensitivity analysis, we repeat our main estimations using this second approach. Here, as many individuals in our dataset never started a further degree i.e. they are in our control group, we assign a time stamp to them for every year the control person theoretically could have started to study. We do this for every year from 2003 to 2019. This implies that control group individuals can be duplicated multiple times in the dataset. We then measure the control individuals’ outcomes 4 years after their theoretical time stamp.
### Treatment
We define further education as an individual who obtains a further degree in a formal, structured educational program. These programs must be delivered by a certified training, teaching or research institution. Thus, we do not analyse informal on-line degrees (such as Coursera degrees). We also do not consider on-the-job training as obtaining further education.
Our treatment variable is a binary variable that takes the value of 1 if an individual has obtained an additional degree anytime between wave 2 (2002) and wave 17 (2017). As we analyse outcomes in 2019, this means we calculate the average returns between 2 and 17 years after course completion. We delete any respondent who obtained a qualification after wave 17. This allows us to analyse outcomes at least two years after course completion.
HILDA documents formal degree attainment in two ways. The first is to ask respondents, in every wave, what their highest level of education is. The second way is to ask respondents, in every wave, if they have acquired an additional educational degree since the last time they were interviewed.
We utilise both these questions to construct our measure of further education. Using the first question, we compare if the highest level of education in 2019 differs from that in 2001. If there has been an upgrade in educational qualification between these two years, we set the treatment indicator to be one and zero otherwise. This question, however, only captures upgrades in education; it fails to capture additional qualifications that are at the same level or below as the degree acquired previously by the respondent. We rely on the second survey question to fill this gap.
These two survey questions thus capture any additional qualification obtained from 2002 to 2017, inclusive. Additional qualifications refer to the following types of degrees: Trade certificates or apprenticeships; Teaching or nursing qualifications, Certificate I to IV, Associate degrees, Diplomas (2-year and 3-year fulltime), Graduate Certificates, Bachelor, Honours, Masters and Doctorate degrees.
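A sketch of how the two questions could be combined into the treatment flag (the column names below are illustrative, not actual HILDA variable names):

```python
import pandas as pd

def further_education_indicator(df: pd.DataFrame) -> pd.Series:
    """Treatment = 1 if the highest qualification was upgraded between 2001 and 2019,
    or if any wave from 2 to 17 records a new qualification since the last interview."""
    upgraded = df["highest_qual_2019"] > df["highest_qual_2001"]      # ordinal codes assumed
    new_qual_cols = [f"new_qual_wave_{w}" for w in range(2, 18)]      # waves 2..17
    reported_new = df[new_qual_cols].astype(bool).any(axis=1)
    return (upgraded | reported_new).astype(int)
```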
### Covariates/features
We define our covariates, or features as they are known in machine learning parlance, using 2001 as the base year. Since we delete any respondents who were currently studying in 2001, we ensure that all features were defined before a respondent begins further study.9
Footnote 9: We also test the sensitivity of our results to using feature inputs that are taken from the individuals closer to the timing of their study, namely two years before study began. Here, we use both the year and the two years preceding the start of a study spell to define our features. This allows us to capture both level and growth values in the features.
A unique approach to our feature selection strategy is that we use all the information available to us from the HILDA survey in 2001. This means that we have more than 3,400 raw variables per observation. Before using the features in a ML model, we delete any features that are identifiers or otherwise deemed irrelevant for explaining the outcome.
In order to reduce redundancy in this vast amount of information, we next apply a supervised machine learning model to predict outcomes 5 years ahead of 2001, i.e. in 2006. We then select the top 100 variables that are most predictive of the outcome in 2006.10 These variables are listed in Table 1.
Footnote 10: Confounders are features that both have an impact on the outcome and on the treatment. Chernozhukov et al. (2018) suggest including the union of features kept in the two structural equations (outcome on features and treatment on features). Here, we only include the features that predict the outcome.
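A sketch of this screening step (the exact supervised model is not pinned down in the text; gradient boosting importances are one reasonable choice, and the inputs are assumed to be a 2001 feature matrix and the 2006 outcome):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def top_k_predictive_features(X_2001: pd.DataFrame, y_2006: pd.Series, k: int = 100):
    """Rank the base-year features by how strongly they predict the outcome five years
    ahead, and keep the k most important ones for the causal models."""
    model = GradientBoostingRegressor().fit(X_2001, y_2006)
    importance = pd.Series(model.feature_importances_, index=X_2001.columns)
    return importance.sort_values(ascending=False).head(k).index.tolist()
```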
### Missing variables from the baseline model
As part of a replication exercise, we contrast the results from the ML model with published work using Ordinary Least Squares (OLS) and Fixed Effects models. We also contrast the features selected in the ML model with an approach that manually selects the variables, as in the case of Chesters (2015). We call this the 'baseline' model.
As a descriptive exercise, Table 2 presents the features that were 'missed' by the baseline model. In the baseline model, we included features such as age, gender, state of residence, household weekly earnings, highest level of education attained, and current work schedule. This collection of variables has been informed by theory or previous empirical results.
The data-driven model identifies more salient variables compared to the baseline model. Additional variables include employment conditions such as work schedule, casual employment, firm size, tenure or years unemployed; financial measures such as weekly wage, investment income and mortgage debt; health measures such as limited vigorous activity and tobacco expenses; and work-life preferences related to working hours and child care.
We identify variables as missing from the baseline model if those variables explain the residual variation in the outcome. Specifically, we regress the residuals from the baseline models (without the treatment included) on the features included in the data-driven model and train a LASSO model to highlight the salient variables that were missed. The variables that are chosen are listed in Table 2. We also document how these variables are correlated to the outcome and to the treatment in order to give us a sense of the direction of the bias their omission may induce.
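A minimal sketch of this residual-on-features regression with a cross-validated LASSO (scikit-learn; the function and argument names are ours):

```python
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def omitted_variable_screen(baseline_residuals, X_data_driven, feature_names):
    """Regress the baseline model's residuals on the data-driven feature set and
    report the features with nonzero LASSO coefficients, i.e. candidates that the
    manually specified baseline may have missed."""
    X_std = StandardScaler().fit_transform(X_data_driven)
    lasso = LassoCV(cv=5).fit(X_std, baseline_residuals)
    return [name for name, coef in zip(feature_names, lasso.coef_) if abs(coef) > 1e-8]
```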
Most of the omitted variables bias the OLS estimates upwards.11 The upward bias is consistent with the ML models estimating an economic return to obtaining a new qualification that is significantly smaller than the returns from an OLS model or a Difference-in-Differences Fixed Effects (DD-FE) model. In the DD-FE model, we use the same 5,441 individuals as the other methods, but they are followed over two waves: 2001 and 2019 (i.e. there are 10,882 person-wave observations). We control for individual and wave fixed effects.
Footnote 11: Exceptions include casual employment status, the presence of a past doctorate qualification, years unemployed, parental child care and dividend and business income.
Figure 10 displays the estimated returns from six different models. The first three bars show significantly higher returns based on the OLS (no controls), OLS (with controls) and the DD-FE models compared to the last three bars, which are based on the ML
models - Gradient Boosted Regression, Doubly Robust and Bayesian Causal Forest. We discuss these methods in more detail below.
It is important to highlight that our approach to identifying missing variables from the baseline model is a descriptive one. As previously mentioned, the ML algorithm selects at random among variables that are highly correlated; thus we may have missed reporting the labels of important variables omitted from the baseline model.
## 4 Descriptive Figures and Tables
We calculate the average returns to degree completion for mature-age students who completed degrees between 2002 and 2017. The window in which study and degree-completion took place is noticeably large. However, sample size limitations with our survey data mean that it is not feasible to run an ML analysis disaggregated by the timing of completion.
In order to obtain some insights into the potential heterogeneity over time, we present a series of descriptive graphs in this section. Here, our aim is not to present any causal analysis but to describe which groups studied earlier in the time period (and thus had more time to accumulate returns). These graphs can also point to the potential different factors driving study across the time period, and different effects on earnings depending on how much time has elapsed since completion.
Figure 3 presents the distribution of degree completion over time. There is a steep decline in degree-completion proportions over time. This is likely to reflect the aging profile of HILDA survey respondents and that further study is disproportionately higher among the younger cohorts (25-44 year olds) (See Figure 4).
Over time, Figure 5 shows that the composition of degrees completed has shifted. Among those who completed a degree in later years, compared to those who completed a degree in the earlier period, a higher percentage completed a Certificate III or IV, Diploma or Advanced Diploma as opposed to a lower-level degree (Certificate I or II or below). In all years, the most frequently completed degrees are Cert 3 or 4, Associate degrees, Diplomas and Advanced Diplomas.
The predominance of Cert 3 or 4 degrees is common across genders, although Figure 6 shows that the distribution of degrees is more heavily skewed towards these qualifications for men than for women.
Figure 7 shows an increase in both average earnings and employment over time between 2002 and 2017. Despite the upward trajectory, these outcomes show more volatility following 2008. This is likely to reflect the smaller samples in the later years of the
survey. In our main analysis we average the returns over time as the samples within each year are inadequate to draw inference about heterogeneity across time.
## 5 Method
We aim to estimate the causal impact of obtaining a new qualification. Our empirical challenge is a missing data problem in the sense that we do not observe the counterfactual outcome for each person - what would their income have been if they had (or had not) obtained a new qualification?
We use capitalisation to denote random variables, where \(Y\in\mathbb{R}^{+}\) is the outcome variable, \(T\in\{0,1\}\) is the binary treatment indicator, and \(X\in\mathcal{X}\) are the conditioning variables (which can be a mix of continuous or categorical in type). Small case is used to denote realisations of these random variables, e.g. \(y\), \(t\) and \(x\), and we may use a subscript for an individual realisation, e.g. \(y_{i}\) for individual \(i\) from a sample of size \(n\).
Under the potential outcomes framework of Imbens and Rubin (2015), \(Y(0)\) and \(Y(1)\) denote the outcomes we would have observed if treatment were set to zero (\(T=0\)) or one (\(T=1\)), respectively. In reality, we only observe the potential outcome that corresponds to the realised treatment,
\[Y=T\cdot Y(1)+(1-T)\cdot Y(0). \tag{1}\]
The missing data problem (or the lack of counterfactuals) is especially problematic when the treated group is different from the control group in ways that also affect outcomes. Such selection issues mean that we cannot simply take the difference in the average of the non-missing values of \(Y(0)\) and \(Y(1)\).
To address the missing data problem, we turn to a range of ML-based techniques. Standard ML tools are purposed to predict, but our aim is to estimate the causal parameter. These are different aims, and so we have to adapt the ML tools. We may potentially bias our causal parameter of interest if we were to use the off-the-shelf tools. For example, if we were to select the important confounders using an ML model to predict the outcome \(Y\), then we may undervalue the importance of variables that are highly correlated to the treatment \(T\) but only weakly predictive of \(Y\)(Chernozhukov et al., 2018).
We approach filling the missing data indirectly with three types of ML models that have been specially adapted to causal inference. They are: the T-Learner, Doubly Robust and Bayesian models. For all our models, we require the following identification assumptions.
#### Identification assumptions
To interpret the estimated parameter as a causal relationship, the following assumptions are needed:
1. **Conditional independence** (or conditional ignorability/exogeneity or conditional unconfoundedness) Rubin (1980): \(Y(0)\) and \(Y(1)\) are independent of \(T\) conditional on \(X\); i.e. \(\{Y(0),Y(1)\}\perp T\ |\ X\).
This assumption requires that the treatment assignment is independent of the two potential outcomes. Practically, this amounts to assuming that components of the observable characteristics available in our data, or flexible combinations of them, can proxy for unobservable characteristics. Otherwise, unobservable confounding bias remains.
A benefit of using all the features the HILDA dataset has to offer is that we may minimise unobserved confounding effects. Specifically, we rely on the 3,400 features and complex interactions between them as well as flexible functional forms to proxy for components of this unobserved heterogeneity. For example, while we do not observe ability or aptitude directly, we may capture components of it with other measures that are observed in HILDA such as past educational attainment or the long list of income and other sources of income variables (see Table 1 for a list of the features).
The reader is likely to conceptualise other dimensions of unobserved heterogeneity that may not be captured in Table 1. There are two likely scenarios in this case. First, HILDA may not be exhaustive enough, even with its existing richness, to capture all dimensions of unobserved heterogeneity. As a result, our estimates may be biased.
Another potential scenario is that the source of unobserved heterogeneity in question (or some components of it) is still captured but modelled under the guise of another variable label. Variables that are highly correlated with each other are unlikely to be simultaneously included in the model. This is because the ML algorithm, in attempting to reduce the amount of information redundancy, may have randomly dropped one or more of those correlated variables.
2. **Stable Unit Treatment Value Assumption** (SUTVA) or counterfactual consistency: \(Y=Y(0)+T\cdot(Y(1)-Y(0))\).
Assumption 2 ensures that there is no interference, no spill-over effects, and no hidden variation between treated and non-treated observations. SUTVA may be violated if individuals who complete further education influence the labour market outcomes of those who do not complete further education. For example, if the former group absorb resources that would otherwise be channelled to the latter group. Alternatively, the
former group may be more competitive in the labour market and reduce the probability of promotions or job-finding for the latter group. As those who complete further education are a relatively small group, it is unlikely that these general equilibrium effects would occur.
3. **Overlap Assumption** or common support or positivity - no subpopulation defined by \(X=x\) is entirely located in the treatment or control group, hence the treatment probability needs to be bounded away from zero and one.
The overlap is an important assumption because counterfactual extrapolation using the predictive models,
\[\mathbb{E}[Y|X{=}x,T{=}1] \approx\mu_{1}(x)\quad\text{and} \tag{2}\] \[\mathbb{E}[Y|X{=}x,T{=}0] \approx\mu_{0}(x) \tag{3}\]
is likely to perform best for treatment and control subpopulations that have a large degree of overlap in \(\mathcal{X}\). If the treatment and control groups had no common support in \(\mathcal{X}\), we would be pushing our counterfactual estimators to predict into regions with no support in the training data, and therefore we would have no means by which to evaluate their performance.
This means that the optimum bias-variance trade-off point for the conditional average treatment effect may not align with the optimum bias-variance trade-off points for the separate \(\mu_{1}(x)\) and \(\mu_{0}(x)\) models. Since, ultimately, we are interested in the CATEs (as opposed to the predictive accuracy of the individual conditional mean functions), this can mean that our CATE estimates are biased.
4. **Exogeneity of covariates (features)** - the features included in the conditioning set are not affected by the treatment.
To ensure this, we define all of our features at a time point before any individual started studying. Specifically, we use the first wave of HILDA (in 2001) to define our features. We only look at those individuals who completed further education in 2002 onwards. Furthermore, we delete any individuals who were currently studying in 2001 to ensure the features cannot reflect downstream effects of current study.
With the strong ignorability and overlap assumptions in place, treatment effect estimation reduces to estimating two response surfaces - one for treatment and one for control.
### T-Learner model
The first adaptation of ML models for causal estimation is the T-learner approach. We aim to measure the amount by which the response \(Y\) would differ between hypothetical worlds in which the treatment was set to \(T=1\) versus \(T=0\), and to estimate this across subpopulations defined by attributes \(X\).
The T-learner is a two-step approach where the conditional mean functions defined in Equations (2) and (3) are estimated separately with any generic machine learning algorithm.
Machine learning methods are well suited to find generalizable predictive patterns, and we employ a range of model classes including linear (LASSO and Ridge) and non-linear (Gradient Boosted Regression). Once we obtain the two conditional mean functions, for each observation, we can predict the outcome under treatment and control by plugging each observation into both functions. Taking the difference between the two outcomes results in the Conditional Average Treatment Effect (CATE).
To show this, we define our parameter of interest, the CATE, which is formally defined as:
\[\tau(x)=\mathbb{E}[Y(1)-Y(0)|X{=}x], \tag{4}\]
which, with the assumptions outlined previously, is equivalent to taking the difference between two conditional mean functions \(\mu_{1}(x)-\mu_{0}(x)\):
\[\tau(x) =\mu_{1}(x)-\mu_{0}(x)\] \[\approx\mathbb{E}[Y|T{=}1,X{=}x]-\mathbb{E}[Y|T{=}0,X{=}x]\] \[=\mathbb{E}[Y(1)-Y(0)|X{=}x]. \tag{5}\]
In this estimation, we are not interested in the coefficients from regressing \(Y\) on \(X\). What we require is a good approximation of the function \(\tau(x)\), and hence good estimates of \(\mu_{1}(x)\) and \(\mu_{0}(x)\), which is within the purview of machine learning methods.
A benefit of our set-up is that when we take the difference between the two conditional mean functions, we coincidentally find the optimum bias-variance trade-off point for the conditional average treatment effect. This means that we have an indirect way to obtain the best prediction of the CATE through two predictive equations, where we observe the true outcomes (and thus are able to regularise).
In practice, however, this indirect way of minimising the mean squared error for each separate function to proxy for the minimum mean squared error of the treatment effect can be problematic. See, for example, Kunzel et al. (2019), Kennedy (2020) for settings when the T-learner is not the optimal choice. One potential estimation problem arises when there are fewer treated individuals than control individuals and the individual regression functions are non-smooth. In this instance the response surfaces can be difficult to estimate in isolation, and the T-learner does not exploit the shared information between treatment and control observations. For example, if \(X\) relates to \(Y\) in the same fashion for treated and control observations the T-learner cannot utilise this information. As a result, the estimate \(\mu_{1}\) tends to oversmooth the function; in contrast, the estimate \(\mu_{0}\) regularises to a lesser degree because there are more control observations. This means a naive plug-in estimator of the CATE that simply takes the difference between \(\mu_{1}-\mu_{0}\) will be a poor and overly complex estimator of the true difference. It will tend to overstate the presence of heterogeneous treatment effects. We turn to other ML models to address this potential problem.
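For concreteness, a minimal T-learner sketch with gradient boosted regressors (one of the model classes we use) is given below; the hyperparameters are left at defaults and the variable names are placeholders, so this should be read as an illustration rather than our exact pipeline.

```python
# T-learner sketch: fit separate outcome surfaces on treated and control units,
# then take the difference of their predictions to obtain CATEs and the ATE.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def t_learner(X: np.ndarray, t: np.ndarray, y: np.ndarray):
    mu1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])  # treated surface
    mu0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])  # control surface
    cate = mu1.predict(X) - mu0.predict(X)                       # tau(x) for every unit
    return cate, cate.mean()                                     # CATEs and ATE

# cate_hat, ate_hat = t_learner(X, treated, earnings_2019)
```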
### Doubly Robust model
The second approach is the Doubly Robust learner (DR-learner). It is similar to the T-learner in that it separately models the treatment and control surfaces, but it uses additional information from a propensity score model. In this case the propensity score model is a machine learning classifier that attempts to estimate the treatment assignment process,
\[\mathbb{E}[T{=}1|X{=}x]=\mathbb{P}(T{=}1|X{=}x)\approx\rho(x), \tag{6}\]
where \(\rho(x)\) is a probabilistic machine learning classifier. This allows information about the students' background, and about the nature and complexity of the situation that may have led them to pursue further education, to be incorporated into the model. Thus, the doubly robust approach can improve upon the T-learner approach because it can reduce misspecification error either through a correctly specified propensity score model or through correctly specified outcome equations. Another feature of the Doubly Robust approach is that it places a higher weight on observations in the area where the relative count of treatment and control observations is more balanced (i.e. the area of overlap). This may allow better extrapolations of the predicted outcomes within the region of
overlap. The ATE is estimated from three separate estimators,
\[A\hat{T}E=\frac{1}{n}\sum_{i=1}^{n}\left[\frac{t_{i}(y_{i}-\mu_{1}(x_{i}))}{\rho( x_{i})}+\mu_{1}(x_{i})\right]-\frac{1}{n}\sum_{i=1}^{n}\left[\frac{(1-t_{i})(y_{i}- \mu_{0}(x_{i}))}{1-\rho(x_{i})}+\mu_{0}(x_{i})\right] \tag{7}\]
Previously, with the T-learner, we were just estimating \(\mu_{0}(x)\) and \(\mu_{1}(x)\). With the DR-learner, we augment \(\mu_{0}(x)\) and \(\mu_{1}(x)\). For example, for the treated observations, we augment \(\mu_{1}(x)\) by multiplying the prediction error by the inverse propensity scores. This up-weights those who get treated but who are statistically similar to the control observations. We then apply this same augmentation to the \(\mu_{0}(x)\) for the control observations.
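A minimal sketch of the estimator in Equation (7) follows, assuming scikit-learn; clipping the propensity scores away from zero and one is a common stabilisation we add for illustration and is not part of the equation itself.

```python
# Doubly Robust (AIPW) sketch implementing Equation (7): augment each outcome surface
# with inverse-propensity-weighted residuals before averaging.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression

def doubly_robust_ate(X, t, y, clip=0.01):
    mu1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1]).predict(X)
    mu0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0]).predict(X)
    rho = LogisticRegression(max_iter=5_000).fit(X, t).predict_proba(X)[:, 1]
    rho = np.clip(rho, clip, 1 - clip)                 # guard against extreme propensities
    treated_term = t * (y - mu1) / rho + mu1
    control_term = (1 - t) * (y - mu0) / (1 - rho) + mu0
    return np.mean(treated_term - control_term)

# ate_dr = doubly_robust_ate(X, treated, earnings_2019)
```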
### Bayesian Models
The third approach is to use Bayesian models. We follow the general formulation presented by Hahn, Murray and Carvalho (2020) that suggests a predictive model of the following form,
\[\mathbb{E}[Y|X{=}x_{i},T{=}t_{i}]\approx\mu_{0}(x_{i},\rho(x_{i}))+\tau(x_{i} )\cdot t_{i}, \tag{8}\]
where \(\mathbb{E}[T=1|X{=}x_{i}]\approx\rho(x_{i})\) is the propensity score of individual \(i\) for the treatment. The component \(\mu_{0}(x_{i},\rho(x_{i}))\) is known as the 'prognostic' effect, and is the impact of the control variates, \(X\), on the outcome without the treatment. Then we are left with \(\tau(x_{i})\), which is the individual treatment effect,
\[\mathbb{E}[Y|X{=}x_{i},T{=}1]-\mathbb{E}[Y|X{=}x_{i},T{=}0] \approx[\mu_{0}(x_{i},\rho(x_{i}))+\tau(x_{i})]-\mu_{0}(x_{i},\rho (x_{i})),\] \[=\tau(x_{i}).\]
The average treatment effect is then simply estimated as,
\[A\hat{T}E=\frac{1}{n}\sum_{i=1}^{n}\tau(x_{i}).\]
The advantages of this approach are manifold. From a Bayesian perspective, it allows us to place explicit and separate priors on the prognostic and treatment components of the models. For example, it may be sensible to expect the prognostic component to be flexible and strongly predictive of the outcome, while we may expect that the treatment component is relatively simple and small in magnitude (Hahn, Murray and Carvalho, 2020). Furthermore, this separation of model components and inclusion of the propensity score minimises bias in the form of regularisation induced confounding (RIC)
which is discussed in more detail in Hahn et al. (2018) and Hahn, Murray and Carvalho (2020). Finally, it is a very natural way to estimate heterogeneous treatment effects, since we can parameterise \(\tau(x_{i})\) directly as an additive effect on \(\mu_{0}\), rather than having to separately parameterise control and treatment surfaces.
We explore three different model classes for \(\mu_{0}\) and \(\tau\), the first is a linear model for both prognostic and treatment models, the next uses a Gaussian process (GP), and lastly we use Bayesian additive regression trees (BART). We detail these models in the following sections.
#### Hierarchical Linear Model
The first Bayesian model uses linear prognostic and treatment components from Equation (8),
\[y_{i} \sim\mathcal{N}\big{(}\mu_{0}(x_{i},\rho(x_{i}))+\tau(x_{i})\cdot t _{i},\sigma^{2}\big{)}\quad\text{where,}\] \[\mu_{0}(x_{i},\rho(x_{i})) =w_{0}+w_{x}^{\top}x_{i}+w_{\rho}\rho(x_{i}),\] \[\tau(x_{i}) =w_{t}+w_{tx}^{\top}x_{i}.\]
We have used the following hierarchical priors,
\[\{\lambda_{0},\lambda_{x},\lambda_{\rho}\} \sim\text{Uniform}(0,100)\] \[\{\lambda_{t},\lambda_{tx}\} \sim\text{Uniform}(0,1000)\] \[\sigma \sim\text{HalfCauchy}(25)\] \[w_{0} \sim\mathcal{N}(0,\lambda_{0}^{2})\] \[w_{x} \sim\mathcal{N}(0,\lambda_{x}^{2}\text{I}_{d})\] \[w_{\rho} \sim\mathcal{N}(0,\lambda_{\rho}^{2})\] \[w_{t} \sim\mathcal{N}(0,\lambda_{t}^{2})\] \[w_{tx} \sim\mathcal{N}(0,\lambda_{tx}^{2}\text{I}_{d}),\]
where \(I_{d}\) is the identity matrix of dimension \(d\), which is the number of control factors. The propensity score, \(\rho(x_{i})\), is obtained from a logistic regression model. We also tested a gradient boosted classifier (Friedman, 2001) for this using five-fold nested cross validation. It did not seem to be more performant than the logistic model on held-out log-loss score.
For model inference, we use the No-U-Turn MCMC sampler (Hoffman and Gelman, 2014) in the numpyro software package (Bingham et al., 2019, Phan, Pradhan and Jankowiak, 2019). The choice of a broad uniform, non-informative prior over the regression
weight scales, \(\lambda_{*}\), is motivated by the advice in Gelman (2006) where we desire a non-informative prior that admits large values. We choose a broader prior for the treatment component of the model to minimise bias as suggested by Hahn, Murray and Carvalho (2020). We first burn in the Markov chain for 30,000 samples, then draw 1000 samples from the posterior parameters to approximate the ATE,
\[A\hat{T}E=\frac{1}{Sn}\sum_{s=1}^{S}\sum_{i=1}^{n}\tau^{(s)}(x_{i}), \tag{9}\]
where \((s)\) denotes that a sample from the posterior parameters has been used to construct a random realisation of the treatment model component, and \(S=1000\).
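A condensed numpyro sketch of this hierarchical linear model is shown below; the array names (`x`, `rho`, `t`, `y`) and shapes are illustrative assumptions, and only the structure (priors, prognostic and treatment components, NUTS inference) mirrors the text.

```python
# Sketch of the hierarchical linear model in numpyro (names and shapes are illustrative).
import jax.numpy as jnp
import jax.random as random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def model(x, rho, t, y=None):
    d = x.shape[1]
    lam0 = numpyro.sample("lam0", dist.Uniform(0.0, 100.0))
    lamx = numpyro.sample("lamx", dist.Uniform(0.0, 100.0))
    lamr = numpyro.sample("lamr", dist.Uniform(0.0, 100.0))
    lamt = numpyro.sample("lamt", dist.Uniform(0.0, 1000.0))
    lamtx = numpyro.sample("lamtx", dist.Uniform(0.0, 1000.0))
    sigma = numpyro.sample("sigma", dist.HalfCauchy(25.0))
    w0 = numpyro.sample("w0", dist.Normal(0.0, lam0))
    wx = numpyro.sample("wx", dist.Normal(jnp.zeros(d), lamx))
    wr = numpyro.sample("wr", dist.Normal(0.0, lamr))
    wt = numpyro.sample("wt", dist.Normal(0.0, lamt))
    wtx = numpyro.sample("wtx", dist.Normal(jnp.zeros(d), lamtx))
    mu0 = w0 + x @ wx + wr * rho          # prognostic component
    tau = wt + x @ wtx                    # heterogeneous treatment component
    numpyro.deterministic("tau", tau)
    numpyro.sample("y", dist.Normal(mu0 + tau * t, sigma), obs=y)

# mcmc = MCMC(NUTS(model), num_warmup=30_000, num_samples=1_000)
# mcmc.run(random.PRNGKey(0), x, rho, t, y)
# ate = mcmc.get_samples()["tau"].mean()   # empirical analogue of Equation (9)
```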
#### Gaussian Process Regression
Gaussian process (GP) regression can be viewed as a non-linear generalisation of Bayesian linear regression that makes use of the kernel trick (Williams and Rasmussen, 2006, Bishop, 2006). Another way of understanding a GP is that it parameterises a distribution over functions (response surfaces) directly, rather than over model weights as is the case with Bayesian linear regression.
Say we have the regression function, \(\mathbb{E}[Y|X{=}x_{i}]=f(x_{i})\), a Gaussian process models the covariance of \(f(x)\) directly using a kernel function,
\[\mathbb{E}[f(x_{i})\cdot f(x_{j})] =k(x_{i},x_{j})\qquad\text{or},\] \[\mathbb{E}[Y_{i}\cdot Y_{j}] =k(x_{i},x_{j})+\sigma^{2}\delta_{ij},\]
where \(\delta_{ij}\) is a Kronecker delta, and is one iff \(i=j\), otherwise zero. This formulation also assumes \(\mathbb{E}[Y]=\mathbb{E}[f(x)]=0\) for simplicity - and can be used directly if the outcomes are transformed to be zero mean, or we can model an additional mean function (see Williams and Rasmussen (2006) for details). The Gaussian process can be written as,
\[\mathbf{y}\sim\mathcal{N}(\mathbf{0},\mathbf{K}+\sigma^{2}\mathbf{I}_{n}),\]
where \(\mathbf{y}=[y_{1},\ldots,y_{i},\ldots,y_{n}]^{\top}\) is the vector of all outcome samples, \(\mathbf{K}\) is the covariance matrix with elements \(\mathbf{K}_{ij}=k(x_{i},x_{j})\), and \(\mathbf{I}_{n}\) the \(n\)-dimensional identity matrix.
To implement the functional relationship in Equation (8) in a Gaussian process, we create the kernel function over \(\langle x,t\rangle\) pairs,
\[k(\langle x_{i},t_{i}\rangle,\langle x_{j},t_{j}\rangle)=\sigma_{\mu_{0}}^{2}k _{\mu_{0}}(\langle x_{i},\rho(x_{i})\rangle,\langle x_{j},\rho(x_{j})\rangle)+ t_{i}t_{j}\cdot[\sigma_{\tau}^{2}k_{\tau}(x_{i},x_{j})+\tau_{0}].\]
Here \(k_{\mu_{0}}\) and \(k_{\tau}\) are the prognostic and treatment kernels respectively, \(\sigma_{\mu_{0}}\) and \(\sigma_{\tau}\) allow us to scale the contribution of these kernels to the functional relationships learned, and \(\tau_{0}\) permits a constant treatment effect. This induces the functional relationship we want; \(f(x_{i},t_{i})=\mu_{0}(x_{i},\rho(x_{i}))+\tau(x_{i})\cdot t_{i}\). We use the same propensity model for \(\rho(x_{i})\) as the linear model previously.
We have chosen isotropic Matern \(\frac{3}{2}\) kernel functions for \(k_{\mu_{0}}\) and \(k_{\tau}\),
\[k_{\nu=3/2}(x_{i},x_{j})=\left(1+\frac{\sqrt{3}|x_{i}-x_{j}|}{l}\right)\exp \left(\frac{-\sqrt{3}|x_{i}-x_{j}|}{l}\right),\]
where \(l\) is the length scale parameter, and controls the width of the kernel function. Smaller length scales allow for more high-frequency variation in the resulting function \(f(x_{i})\). The Matern kernel is a stationary and isotropic kernel, but does not have excessive smoothness assumptions on the functional forms it can learn - this kernel leads to the response surface being at least once differentiable (Williams and Rasmussen, 2006). A Gaussian process with this kernel can learn non-linear and interaction-style relationships between input features and the outcome. Our composite kernel is not necessarily stationary however, as we have included a non-stationary term, \(t_{i}t_{j}\).
A priori, we expect reasonably smooth variation in \(\mathbb{E}[y_{i}\cdot y_{j}]\), so we choose a long length-scale for the prognostic kernel function, \(l_{\mu_{0}}=10\), and an amplitude, \(\sigma_{\mu_{0}}^{2}=1\). We expect an even smoother relationship with less contribution for the treatment, and set the corresponding kernel parameters as \(l_{\tau}=50\), \(\sigma_{\tau}^{2}=0.1\) and \(\tau_{0}=0.001\). These parameters are then optimised using the maximum likelihood type-II procedure outlined in Section 5.4.1 of Williams and Rasmussen (2006).
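A minimal numpy sketch of the Matern 3/2 kernel and the composite prognostic-plus-treatment kernel described above is given below; the function names and default hyperparameter values (mirroring the initial settings in the text) are illustrative.

```python
# Sketch of the composite kernel k(<x_i, t_i>, <x_j, t_j>) built from two Matern 3/2 kernels.
import numpy as np

def matern32(xi, xj, length_scale):
    r = np.linalg.norm(xi - xj) / length_scale
    return (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

def composite_kernel(xi, ti, rho_i, xj, tj, rho_j,
                     l_mu0=10.0, sig2_mu0=1.0, l_tau=50.0, sig2_tau=0.1, tau0=1e-3):
    # Prognostic part operates on <x, rho(x)> pairs; the treatment part only contributes
    # when both units are treated (t_i * t_j = 1), plus a constant treatment-effect term.
    zi = np.append(xi, rho_i)
    zj = np.append(xj, rho_j)
    prognostic = sig2_mu0 * matern32(zi, zj, l_mu0)
    treatment = ti * tj * (sig2_tau * matern32(xi, xj, l_tau) + tau0)
    return prognostic + treatment
```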
The ATE is then approximated as,
\[A\hat{T}E=\frac{1}{Sn}\sum_{s=1}^{S}\sum_{i=1}^{n}f_{*}^{(s)}(x_{i},1)-f_{*}^{ (s)}(x_{i},0),\]
where \(f_{*}^{(s)}(x_{i},t)\) are samples from the Gaussian process posterior predictive distribution12 with kernel inputs \(k_{*}(\langle x_{i},t\rangle,\langle x_{i},t\rangle)\), which is equivalent to sampling from the distribution over \(\tau(\cdot)\). We use \(S=100\) samples.
Footnote 12: See Equations (2.22)-(2.24) of Williams and Rasmussen (2006).
### Bayesian Causal Forests
The last Bayesian model we use is the Bayesian causal forest introduced in Hahn, Murray and Carvalho (2020). Broadly, it models the prognostic and treatment components as Bayesian additive regression trees (BART),
\[y_{i} \sim\mathcal{N}\big{(}\mu_{0}(x_{i},\rho(x_{i}))+\tau(x_{i})\cdot t _{i},\sigma^{2}\big{)}\quad\text{where},\] \[\mu_{0}(x_{i},\rho(x_{i})) =\text{BART}(x_{i},\rho(x_{i})),\] \[\tau(x_{i}) =\text{BART}(x_{i}).\]
We use the accelerated BART (XBART) implementation of this algorithm detailed in Krantsevich, He and Hahn (2022). BART (Chipman, George and McCulloch, 2010) has been shown to be an effective and easily applicable non-parametric regression technique that requires few assumptions in order to capture complex relationships that can otherwise confound effect estimation. We follow Hahn, Murray and Carvalho (2020) in our choice of BART priors,
\[\alpha_{\mu_{0}}=0.95, \alpha_{\tau}=0.25,\] \[\beta_{\mu_{0}}=2, \beta_{\tau}=3.\]
This choice prefers a simpler treatment effect model, \(\tau(x_{i})\), that is less likely to branch, and more likely to have shallower trees than the prognostic model. Similarly, we use 200 trees for the prognostic model, and 50 for the treatment. We take 500 burn-in sweeps, and then 2000 sweeps to estimate the posterior BART distributions.
ATE is estimated in the same way as for the linear model in Equation (9), but where the BART posterior is used for the treatment effect distribution.
### Model selection and model evaluation
For the non-Bayesian models we separate the evaluation of the model class and estimation of the ATE and CATE parameters in two procedures. We evaluate the predictive capacity of each model class using nested cross-validation. The procedure is represented in Figure 1. Here, our aim is to compare the predictive performance of three model classes: LASSO, Ridge and Gradient Boosted Regression (GBR). Our second procedure is to estimate the ATE and CATE parameters. The procedure is represented in Figure 2. We use bootstrap sampling (with replacement) to generate uncertainty estimates for the parameters, which we obtain over several draws of the same model class, but with model parameter re-fitting.
Focusing on the first procedure, we apply nested cross-validation to evaluate which model class performs best. In a first step, as Figure 1 shows, we pre-process the full dataset (containing 3,400 variables) to generate a dataset with a smaller set of highly predictive features (containing 91 variables). We apply a supervised machine learning approach with a LASSO model to select our top 91 predictors of the outcome of interest using outcomes measured in 2006. Note that in our later estimations of the treatment effect, the outcome is measured in 2019. We implement this intermediary step in order to reduce the correlation between variables and eliminate redundant information.
We assume that the top 91 features13 that are most predictive of the outcome in 2006 correlate with the features that would be most predictive of the outcome in 2019. By choosing to apply this pseudo-supervised ML approach on the same outcome variable, but measured at a different time point, we obtain a good indication of the features that are useful for a model to perform well. Improved model performance here will also mean that the selected features are likely to represent the important confounders. We have chosen 2006 to ensure there is no overlap with 2019 outcomes to avoid overfitting issues with subsequent models.14
Footnote 13: We were aiming for approximately 100 features, and 91 was the closest we could get the LASSO estimator to select by changing the value of its regularisation strength.
Footnote 14: We do not compromise predictive performance when we use the selected subset of features as opposed to the full set of features. For example, the predictive performance from a Gradient Boosted Tree model that predicts earnings in 2006, using 5-fold nested cross-validation, is statistically similar between models that use the 91 feature set and the full, 3,400 feature set (with Root Mean-Squared Errors (RMSEs) of 484.251 and 482.286, respectively). This is a negligible loss in predictive performance. There is a slightly larger associated loss between the restricted and full feature sets from models predicting earnings in 2019 (RMSEs of 843.548 and 831.931, respectively), but this is still not statistically significant.
Using the top 91 predictors, we then apply nested cross validation to evaluate the predictive capacity of each model class (LASSO, Ridge, GBR). First, we split the data into train and test folds with an 80-20 split. Within the 80 percent train fold we perform 5-fold cross-validation in order to train and evaluate the performance of each configuration of hyperparameters. We do this separately for the outcome surface using the treated observations and the outcome surface using the control observations. From this, we select the models with the best mean predictive scores. We then evaluate the predictive performance of the selected model on the holdout test.
We repeat this process ten times (10-outer scores) for each model class. This allows us to evaluate the performance based on the mean and standard deviation of these scores. Note that thus far, we have not evaluated any particular configuration of the model, rather the performance of the model class on random (without replacement) subsets of data. The
nested cross validation procedure protects us against overfitting when reporting predictive performance, as the model selection and validation happens on different data.
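The sketch below illustrates this nested procedure with scikit-learn: an inner 5-fold grid search selects hyperparameters and an outer loop of ten 80/20 splits scores the selected configuration on held-out data. The parameter grid and variable names are illustrative assumptions, not our exact settings.

```python
# Nested cross-validation sketch: inner 5-fold model selection, outer 10x 80/20 evaluation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, ShuffleSplit
from sklearn.metrics import mean_squared_error

param_grid = {"n_estimators": [100, 300], "max_depth": [2, 3, 4]}   # illustrative grid
outer = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)

def nested_cv_scores(X, y):
    scores = []
    for train_idx, test_idx in outer.split(X):
        inner = GridSearchCV(GradientBoostingRegressor(), param_grid, cv=5,
                             scoring="neg_mean_squared_error")
        inner.fit(X[train_idx], y[train_idx])            # model selection on the train fold
        preds = inner.predict(X[test_idx])                # evaluation on the holdout fold
        scores.append(mean_squared_error(y[test_idx], preds))
    return np.mean(scores), np.std(scores)

# Run separately for the treated and control outcome surfaces, and for each model class.
```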
Table 3 shows that the GBR is the best performing model class. It yields the highest out-of-sample R-squared and the lowest MSE. This holds for each of the two outcome surfaces.
As the DR-learner model relies on the same treatment and control outcome surfaces estimated in the T-learner, we do not repeat Table 3 for the DR results. A further component of the DR model, however, is the propensity score. Here, we implement a regularised logistic regression to predict the likelihood of being treated (to obtain a further degree). Specifically, we use cross-validation to fit a logistic regression and obtain the predictions from the original sample. The holdout performance of the fitted logistic regression model yields an area under the ROC curve of 0.71.
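A minimal sketch of this propensity step, assuming scikit-learn and the illustrative arrays `X` and `treated`, is shown below; the out-of-fold predictions give a holdout-style AUC comparable to the one reported above.

```python
# Propensity model sketch: regularised logistic regression with an out-of-fold AUC check.
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

clf = LogisticRegressionCV(Cs=10, cv=5, max_iter=5_000)              # regularised logistic
propensity = cross_val_predict(clf, X, treated, cv=5,
                               method="predict_proba")[:, 1]         # out-of-fold rho(x)
print("AUC:", roc_auc_score(treated, propensity))
```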
#### Inference via bootstrapping
Once we have selected the best performing model class, we turn to the estimation of the parameters and their associated uncertainty. We use a bootstrapped validation procedure to capture the uncertainty arising from model hyperparameter selection in addition to that from estimating parameters of a fixed model from noisy, finite data.
A common approach to inference in the causal machine learning literature is to use cross-fitting (Chernozhukov et al., 2018) or sample splitting (Athey and Wager, 2019). These methods ensure that the standard errors on the estimators are not underestimated because they avoid using the same data point to both select hyperparameters of the model and to estimate the parameters of the outcome or effect surfaces. The result of using the same data for model selection and effect estimation is that our standard errors would suffer from pre-test bias since the model may suffer from overfitting.
Sample splitting and cross-fitting are appropriate when the sample size is large. An issue with studies that rely on survey-based data is that sample sizes are often not large enough to efficiently use these methods. For example, there may not be enough data to split the dataset into separate train and test datasets for each model such that each of these splits would cover all the common and uncommon values of the \(X\)-features that are observed in the full sample. Consequently, the ML models may not find representative functional forms for \(\mu_{0}(x)\) and \(\mu_{1}(x)\). As a result, our estimated treatment effects are likely to have a large degree of uncertainty.
A suitable alternate procedure is to use bootstrapping. Bootstrap resampling allows us to estimate variation in the point model parameter estimates. In this way, we side-step
the need to rely on the assumption of asymptotic normality, and it is more efficient than sample splitting to generate standard errors. In our bootstrapping procedure, we ensure that the standard errors reflect the sources of uncertainty stemming from both the selection of the model and the estimation of the model. As a result, we generate standard errors that avoid any potential pre-test issues.
As a first step we obtain the 91 top predictors from the initial pre-processing of the full dataset, shown in Figure 2. That is, we train a supervised machine learning LASSO model to extract the features that best predict earnings in 2006.
The second step involves training our models using the 91 top predictors on a bootstrapped sample, \(s\), to select the best models for \(\mu_{1}^{(s)}(x)\) and \(\mu_{0}^{(s)}(x)\). Within this bootstrap sample, we divide the dataset into five folds and perform cross-validation to select the best model configuration. Similar to the cross-validation description above, our model configuration is trained on subsets of the data, and then evaluated on holdout samples. We modify the 5-fold cross-validation to ensure bootstrap-replicated training data does not simultaneously appear in the training and validation set. We perform this model selection step within the bootstrapping procedure to capture the uncertainty coming from the selection of hyperparameters. If we simply re-estimated the same model with a given set of hyperparameters in each bootstrap model then the uncertainty is only over the model parameters, and not the model choice (e.g. the GBR tree depth).
Third, once we have these predicted outcome surfaces, \(\mu_{1}^{(s)}(x)\) and \(\mu_{0}^{(s)}(x)\), we are able to calculate the individual treatment effect, \(\tau(x_{i})\), for each person, \(i\), in the original sample (not the individuals from the bootstrap sample) by substituting the values of their features into the LASSO, Ridge or tree estimators for the outcome surfaces. We can obtain a sample mean, \(\bar{\tau}^{(s)}=\frac{1}{n}\sum_{i=1}^{n}\tau^{(s)}(x_{i})\), using the bootstrapped effect model. We repeat this procedure over \(S=100\) bootstrap samples. This provides an empirical distribution of \(\bar{\tau}\) and \(\tau(x_{i})\). The grand mean over the bootstrap sample means, \(\bar{\tau}_{G}=\frac{1}{S}\sum_{s=1}^{S}\bar{\tau}^{(s)}\), will converge to the sample treatment effect mean. We use \(\bar{\tau}_{G}\) as an estimate of the ATE, and \(\frac{1}{S}\sum_{s=1}^{S}\tau^{(s)}(x_{i})\) as an estimate of the individual CATE. The bootstrap resample is the same size as the original sample because the variation of the ATE depends on the size of the sample. Thus, to approximate this variation we need to use resamples of the same size.
To obtain confidence intervals for the ATE and CATE estimates we use standard empirical bootstrap confidence interval estimators (Efron and Tibshirani, 1986).
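A condensed sketch of this bootstrap loop for the T-learner follows, assuming scikit-learn; it uses a plain 5-fold grid search inside each bootstrap sample (omitting the adjustment that keeps bootstrap replicates out of the validation folds), and the parameter grid and names are illustrative.

```python
# Bootstrap inference sketch for the T-learner: reselect and refit the outcome models
# on each bootstrap resample, predict CATEs for the ORIGINAL sample, then summarise.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {"n_estimators": [100, 300], "max_depth": [2, 3, 4]}    # illustrative grid
rng = np.random.default_rng(0)

def bootstrap_cate(X, t, y, n_boot=100):
    n = len(y)
    cate_draws = np.empty((n_boot, n))
    for s in range(n_boot):
        idx = rng.integers(0, n, size=n)                              # resample with replacement
        Xb, tb, yb = X[idx], t[idx], y[idx]
        mu1 = GridSearchCV(GradientBoostingRegressor(), param_grid, cv=5).fit(Xb[tb == 1], yb[tb == 1])
        mu0 = GridSearchCV(GradientBoostingRegressor(), param_grid, cv=5).fit(Xb[tb == 0], yb[tb == 0])
        cate_draws[s] = mu1.predict(X) - mu0.predict(X)               # evaluate on the original sample
    ate_draws = cate_draws.mean(axis=1)                               # one ATE per bootstrap sample
    ci = np.percentile(ate_draws, [2.5, 97.5])                        # empirical bootstrap CI
    return cate_draws.mean(axis=0), ate_draws.mean(), ci              # CATEs, grand-mean ATE, CI
```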
For the DR-learners, similar to the T-learner, we train \(\mu_{1}(x)\) and \(\mu_{0}(x)\) models across 100 bootstrap samples and weight these outcome surfaces by the propensity score model, \(\rho(x)\), which is estimated using logistic regression (as described previously).
#### Inference for the Bayesian models
The inference process for the Bayesian models is a little different since the hyperparameters of the models are either fixed or selected automatically by the learning algorithm (maximum likelihood type-II or MCMC). Bayesian inference procedures tend to afford some protection against over-fitting since they are parsimonious when choosing posterior distributions over model parameters that vary from their prior distributions, which induces a natural model complexity penalty15. As such, we use all the available data to learn the model posterior distributions, which we then sample from to form empirical estimates of the (C)ATE as outlined in the previous section.
Footnote 15: This point can be understood more thoroughly by examining the evidence lower bound in variational Bayesian inference, see Chapter 10 of Bishop (2006).
## 6 Results
There are clear economic benefits to gaining an additional qualification in later life (25 years or older). The effects remain strong up to a decade-and-a-half after course completion. Table 4 displays a gain of approximately $88-110 per week in gross earnings across the T-learner approaches. In 2019, this was roughly 7-8 percent of the average gross weekly earnings of $1256.20 for all Australian employees (ABS, 2019; 6345.0 Wage Price Index, Australia).
The effect sizes from the GBR model are smaller than that of the two linear models. GBR better captures non-linearities. For example, age is likely to exhibit a highly non-linear relationship with earnings in 2019. Those who were aged 46 or above in 2001 will be aged 65 or above in 2019. This means they are more likely to have retired by 2019 compared to those who were aged below 46 in 2001. As a result, we may expect a shift down in earnings at age 46.
Age fixed effects alone are unlikely to capture how age effects differ across other variables, such as occupation, gender, and earnings. The linear ML models include age fixed effects; however, they do not include interactions between age and other variables, whereas GBR does.
To illustrate how GBR adequately captures non-linearities we re-estimated our results focusing on those who were aged 25-45 in 2001. This is the same as interacting a binary
variable (for age 25-45) with every other feature in the model. In Appendix Figure 13, we see that the results across the models are now more similar than when we use the full sample.
The Doubly Robust (DR) models estimate smaller effects compared to the T-learner models. Table 4 displays a gain of approximately $62-69 per week in gross earnings across the DR approaches. The estimated effect sizes are statistically different from zero. The confidence intervals for the DR estimates also exclude the point estimates from the T-Learner approach.
One reason the DR approach differs from the T-learner approach is that the former uses additional information from the propensity score (i.e. we estimate machine learning models to gain a better understanding of the treatment assignment process, the students' background, and the nature and complexity of their situation that may have led them to pursue further education). Thus, the doubly robust approach can improve upon the T-learner approach because it can reduce misspecification error either through a correctly specified propensity score model or through correctly specified outcome equations. Another feature of the Doubly Robust approach is that it places a higher weight on observations in the area where the relative count of treatment and control observations is more balanced (i.e. the area of overlap). A benefit of this is that it can also provide better extrapolations of the predicted outcomes.
The Bayesian models estimate similar-sized effects to the DR models for the most part. However, they tend to have more uncertainty associated with their estimates. They all remain significant with the 95% confidence intervals remaining above $0. The hierarchical linear model and the Gaussian process both estimate a gain of approximately $61-$63 per week in gross earnings, with the Gaussian process being more certain in its estimate. Interestingly, the Gaussian process prefers a much smoother and smaller treatment effect component compared to its prognostic component - the treatment kernel length scale is long, and the kernel has a small amplitude and offset (\(l_{\tau}=243\), \(\sigma_{\tau}^{2}=0.0517^{2}\), and \(\tau_{0}=0.0312^{2}\)) - whereas the prognostic kernel parameters stay relatively close to their initial settings (\(l_{\mu_{0}}=16\), and \(\sigma_{\mu_{0}}^{2}=1.42^{2}\)). The Bayesian causal forest estimates a slightly higher gain of $84.50 per week in gross earnings, which is more in line with the GBR T-learner. This suggests that the tree ensemble methods may be able to more easily capture non-linear relationships than the other models.
Proportionate changes in earnings can be measured by taking the log of the earnings measures. In Appendix Figure 14, we see that the proportionate change in earnings was large at 50 percent. This is likely to be because of people entering the labour market as a
result of the new qualification. We find that a new qualification increases the likelihood of employment by approximately 8 percent. See Figure 11.
As previously mentioned, the ML models estimate smaller returns than the returns estimated in DD-FE or cross-sectional models (OLS with and without controls) where features have been selected based on theory or previous empirical learnings. For example, the 'OLS Baseline model' uses the features in models estimated in Chesters (2015). The DD-FE eliminates all selection effects that are fixed over time. Figure 10 displays the estimated returns from six different approaches.
A potential reason for the smaller results estimated in the ML models is that the additional features included, as well as the non-linear specifications of the features, more effectively account for selection into treatment. The smaller results suggest individuals positively select into further study i.e. the characteristics that lead one to complete further study are positively correlated to future earnings. Once we control for this upward selection bias, we thus estimate smaller returns to further education.
The smaller estimated results relative to the DD-FE model are likely to stem from the inclusion of key time-varying variables such as the 'change in total gross income' in the ML models, as well as other non-linear specifications. For example, the ML models allow the treatment effects to vary in a highly flexible fashion across different parts of the feature distributions rather than making linear extrapolations.
This points to a benefit of using ML models, compared to conventional models, because they can more effectively identify confounders. We show evidence of the types of confounders missed in conventional models in Table 2, as well as the direction of the bias stemming from their omission.
In addition, we show evidence that models which allow for more flexible functional-form specifications lead to differences in the ATE. Within our ML models, the GBR tree ensemble tended to perform better (in terms of the nested CV results) compared to the linear-based models. The former yielded a slightly smaller ATE compared to the LASSO and Ridge results, for example, and it was also consistent with results from the Bayesian Causal Forest.
## 7 Sub-group analysis
Qualification advancements may not benefit individuals in the same way. In this section we analyse if there is heterogeneity in the treatment impacts. We use a data-driven approach to select the sub-groups.
Specifically, we identify the important variables for which we expect to see the largest changes in the treatment effects. This involves using a Permutation Importance procedure.
### Permutation importance feature selection method
We use a permutation importance selection method (Breiman, 2001, Molnar, 2020) to evaluate the relative importance of individual features. Our aim here is to understand where the heterogeneous treatment effects are most pronounced. In other words, we aim to identify the sub-groups for which the treatment effects differ most significantly. In selecting the important features our objective is to understand how to partition the data by the treatment effects as opposed to predicting the outcomes themselves.
The permutation importance procedure involves testing the performance of a model after permuting the order of samples of each individual feature, thereby keeping the underlying distribution of that feature intact but breaking the predictive relationship learned by the model with that feature. The model performance we are interested in, as previously mentioned, is the one that maps the features to the individual treatment effects.
Following the approach described above, we compute the individual treatment effects. Note that we train the model on the bootstrapped sample but estimate the individual treatment effects using the feature values for individuals from the original sample. Thus, for every individual we have a distribution of values of their individual treatment effects.
After obtaining the individual treatment effects, we train another model that maps the features to the individual treatment effects. We use cross-validation to select our hyperparameters and obtain the optimal model.
Using the original data, we take a single column among the features and permute the order of the data and calculate a new set of individual treatment effects. We compare the new and original individual treatment effects (based on the permuted data and those from the non-permuted data) and calculate the Mean Squared Errors (MSE).
We repeat this for all the features, permuting them individually and evaluating how they change the prediction of the individual treatment effect target. Features that yield the largest MSEs are likely to be more important than those features with lower MSEs since permuting those features breaks the most informative predictive relationships.
We then repeat the above steps across all the bootstrap samples. Note that a different bootstrap sample will change the value of the individual treatment effects since we train different outcome surfaces for \(\mu_{0}^{(s)}(x)\) and \(\mu_{1}^{(s)}(x)\) for each bootstrap sample.
We embed the permutation importance selection method in a bootstrapping procedure in order to capture hyperparameter uncertainty. For example, a different 'tree depth' could be chosen between different bootstrap samples. This would affect the type of non-linear/interaction relationships that would be captured by the models, which in turn would affect which features turn out to be important.
Finally, we obtain an average MSE for each feature, averaged across all bootstrap samples. This average value allows us to rank the features by their importance. Again, those with the largest average MSE values are the most important. We can also evaluate the uncertainty of this estimate since we obtain a distribution of MSE values across the different bootstrap samples.
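A minimal sketch of the per-feature permutation step is shown below, assuming scikit-learn; `X` and `cate` (the estimated individual treatment effects from one bootstrap sample) are illustrative names, and in practice the resulting importances are averaged across bootstrap samples as described above.

```python
# Permutation importance sketch on the CATE surface: fit a surrogate model mapping features
# to estimated individual treatment effects, then measure how much permuting each feature
# perturbs the predicted effects (larger MSE => more important for effect heterogeneity).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def permutation_importance_cate(X, cate, seed=0):
    rng = np.random.default_rng(seed)
    surrogate = GradientBoostingRegressor().fit(X, cate)       # maps features to tau(x)
    importances = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])            # break feature j's relationship
        importances[j] = np.mean((surrogate.predict(X_perm) - cate) ** 2)
    return importances                                          # average these across bootstrap samples

# importances = permutation_importance_cate(X, cate_hat)
```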
Figure 8 displays the top ten features (based on the permutation importance procedure described above) and a residual category for all the other features. The features that are most important are: weekly gross wages on the main job and income- or wealth-related variables. Together, this class of income/wealth variables accounts for 40% of the importance of all variables. We focus on these selected features since our Nested CV approach pointed to the better predictive performance of the GBR model over the linear models.
Other important features include those related to employment, including occupational status, employment expectations, and employment history. The demographic background of the individual, namely their age, is also important.
Figure 9 displays the distribution of the MSE values across the bootstrap samples for the GBR model, focusing on the top three features. The feature with the highest importance score is weekly gross wage in the main job. The spread suggests that in some bootstrap samples, where the MSE is larger, the individual treatment effects from the permuted data differ greatly from the original individual treatment effects.
The results from the T-learner model (using GBR) show a similar story to the results from the permutation importance procedure using the DR model. Overall, as Appendix Figure 15 shows, income and employment-related variables are the most salient in explaining treatment effect heterogeneity.
Continuing to focus on the results from the Doubly Robust model, Figure 12 shows that there is heterogeneity in the treatment impacts. We have identified the features that were considered most important according to the permutation procedure. For each feature, we divide the sample into two groups. For continuous variables, we take the median value and divide the sample into those who are above and below this median value.
Weekly personal income has a large impact on the effect size. Those with below median income in 2001 derive more benefits than those with above median income, possibly because high income earners hit an earnings ceiling. Younger people in 2001 also derive more returns, as they may have had more time to accumulate returns. This result aligns with findings from previous studies (Polidano and Ryan, 2016; Dorsett, Lui and Weale, 2016; Perales and Chesters, 2017). Weekly personal income and age are likely to be highly correlated - with older individuals tending to earn a higher personal income. We cannot say which variable is the main driver of the heterogeneous treatment effects and there may also be interaction effects between them.
We also investigate if there are heterogeneous treatment effects according to commonly used variables in Figure 12. Females reap slightly higher returns compared to males, although this is not statistically significant. Similar treatment effects apply to those with and without resident children, although the effect sizes widen in favour of parents with older children in the household.
Acquiring an additional qualification may increase earnings through a number of potential mechanisms. We find evidence that, in Figure 11 for example, it increases the chance that individuals move from being unemployed or out of the labour force to being employed. The increase in employment is approximately 8 percentage points and is statistically significant. We also find evidence pointing to workers switching occupations or industries. This suggests that further education in later life can support the economic goals of a larger workforce as well as a more mobile one.
### Sensitivity Analysis
For sensitivity analysis, we repeated the T-learner estimations using feature input values taken from individuals two years before they began study. Thus, we examine if our main results are sensitive to changes in the mapping equations for the treatment and control outcome equations when features are measured closer to the event of study, compared to taking input values in 2001. We also measured outcomes four years after study began. This means that the timing between when the feature input values are measured, when a further degree commenced and was completed, as well as when the outcomes are measured, are all closer together. This necessarily leads us to estimate the short-term returns of obtaining a further degree.
Our results from the sensitivity analysis are similar to that of the main results. Specifically, the gains in gross earnings from a further degree in the sensitivity analysis are: $74 per week (Ridge), $117 per week (LASSO) and $93 per week (GBR). The key take
away from these results is that the average treatment effects in the main analysis are not sensitive to whether our features use 2001 as the input year or use the two years before study.
Furthermore, the main results are not sensitive to when outcomes are measured i.e. the returns measured four years after the start of a study spell are comparable to the returns averaged over 2 to 17 years after study completion. This may point to the fact that the returns to further study are accrued in the immediate years following the completion of the degree. It also suggests the returns may not atrophy over time, especially since the majority of people who did complete a degree in the main analysis did so in the earlier years of the survey (Figure 5). Unfortunately, our sample sizes are not sufficient to explore heterogeneity in treatment effects by the year of completion.
The importance of employment-related features such as earnings (individual and household), wages, and hours worked are reiterated in the sensitivity analysis using the panel structure of the data. Namely, when we define our outcomes 4 years after the start of a study spell and where we define features two years before study started, we also see similar results to that of the main results. However, in Figure 16, it is clear that the 'trend' or 'growth' in the values of features such as individual earnings, hours worked and household income are also important. This finding of dynamic selection is echoed in the literature (Jacobson, LaLonde and Sullivan, 2005, Dynarski, Jacob and Kreisman, 2016, 2018).
In Figure 16, the feature mental health is also picked up. This result may reflect the fact that the timing of the measurement of features, treatment and outcomes are all closer together compared to the main results. This means that mental health is an important factor in explaining the heterogeneity in relatively 'short-term' treatment effects.
## 8 Conclusions
Using a machine learning based methodology and data from the rich and representative Household Income and Labour Dynamics Australia survey we have shown that completing an additional degree later in life can add $60-80 (AUD, 2019) per week to an individual's gross earnings. This represents roughly 7-8 percent of the weekly gross earning for the average worker in Australia. Our machine learning methodology has also uncovered sources of heterogeneity in this effect.
Our methodology has allowed us to exploit the full set of background information on individuals from the HILDA survey, beginning with more than 3,400 variables, to control
our analysis. We find that our automated feature selection method selects a set of controls/features that include those that have theoretical foundations and/or align with those chosen in past empirical studies. However, we also choose features that have been traditionally overlooked. These include variables such as household debt, wealth, housing, and geographic mobility variables. Other important predictors include the ages of both resident and non-resident children: non-resident children aged 15 or above matter and resident children aged 0-4 are important.
Qualification advancements do not benefit Australian workers in the same way: those with lower weekly earnings appear to benefit more from later-life study than those with higher earnings. One possible reason is that ceiling effects limit the potential returns from additional education. We also find that younger Australians (less than 45 years of age) benefit more than their older counterparts. Again, a ceiling effect phenomenon may apply since age is highly correlated to weekly earnings.
Acquiring an additional qualification may increase earnings through a number of potential mechanisms. We find evidence that it increases the chance that individuals move from being unemployed or out of the labour force to being employed. We also find evidence pointing to workers switching occupations or industries. This suggests that further education in later-life can support the economic goals of a larger workforce as well as a more mobile one.
## 9 Tables and Figures
\begin{tabular}{l c c c} \hline \multicolumn{5}{c}{_Continued from previous page_} \\ \hline Variable label & Variable name & Mean & SD \\ \hline No. persons aged 0-4 years in HH & hh0\_4 & 0.257 & 0.589 \\ No. persons aged 10-14 years in HH & hh10\_14 & 0.274 & 0.606 \\ Age when first left home & fmagelh & 21.502 & 11.230 \\ Living circumstances & hgms & 1.997 & 1.708 \\ English fluency & hgeab & 1.604 & 0.262 \\ Unemployment rate in region & hhura & 6.884 & 1.075 \\ _Education_ & & & \\ Highest year of school completed/attending & edhists & 2.383 & 1.439 \\ Bachelor degree (without honours) obtained & edqobd & 0.211 & 0.330 \\ Masters degree obtained & edqoms & 0.041 & 0.160 \\ Doctorate obtained & edqodc & 0.011 & 0.085 \\ No. qualifications unknown & edqunk & 0.078 & 0.403 \\ _Employment_ & & & \\ Occupation & jbmo61 & 3.772 & 1.825 \\ Years in paid work & ethjbyr & 21.963 & 11.907 \\ Tenure with current employer & jbempt & 8.505 & 7.369 \\ Type of work schedule & jbmday & 3.785 & 2.612 \\ Current work schedule & jbmsch & 2.255 & 1.819 \\ Casual worker & jbcasab & 1.797 & 0.291 \\ Hours/week worked at home & jbmhrh & 12.372 & 7.174 \\ Hours/week travelling to and from work & lshrcom & 3.052 & 3.716 \\ Satisfaction with employment opportunities & losateo & 6.693 & 2.557 \\ \hline \end{tabular}
\begin{tabular}{l c c c} \hline \multicolumn{4}{c}{_Continued from previous page_} \\ \hline Variable label & Variable name & Mean & SD \\ \hline Occupational status - current main job & jbmo6s & 50.177 & 19.199 \\ No. persons employed at place of work & jbmwpsz & 3.746 & 1.961 \\ Age intends to retire & triage1 & 345.709 & 230.208 \\ Age retired/intends to retire & triage & 113.904 & 130.211 \\ Prob. of losing job in next 12 months & jbmploj & 15.196 & 35.018 \\ Prob. of accepting similar/better job & jbmpgj & 59.585 & 26.196 \\ Looked for work in last 4 weeks & jsl4wk & 1.272 & 0.411 \\ Years unemployed and looking for work & ehtujyr & 0.464 & 1.647 \\ Hours per week worked in last job & ujljhru & 34.990 & 6.922 \\ Industry of last job & ujljin1 & 9.373 & 1.822 \\ _Work preferences_ & & & \\ Total hours per week would choose to work & jbprhr & 34.378 & 6.407 \\ Importance of work situation to your life & loimpew & 6.854 & 2.908 \\ _Children_ & & & \\ Child looks after self & chu\_sf & 0.128 & 0.144 \\ Uses child care while at work & cpno & 1.257 & 0.139 \\ Parent provides child care & cpu\_me & 0.434 & 0.151 \\ _Work-family balance_ & & & \\ Do fair share of looking after children & pashare & 2.411 & 0.671 \\ Miss out on home/family activities & pawkmfh & 3.904 & 1.069 \\ Working makes me a better parent & pawkbp & 4.038 & 0.979 \\ _Family_ & & & \\ \hline \end{tabular}
\begin{tabular}{l c c c} \hline \multicolumn{4}{c}{_Continued from previous page_} \\ \hline Variable label & Variable name & Mean & SD \\ \hline No. dependent children aged 5-9 & hhd5\_9 & 0.261 & 0.584 \\ No. dependent children aged 10-14 & hhd1014 & 0.269 & 0.604 \\ No. non-resident children & tcnr & 0.993 & 1.373 \\ Sex of non-resident child & ncsex1 & 1.509 & 0.320 \\ Likely to have a child in the future & icprob & 1.188 & 0.374 \\ _Finances_ & & & \\ Owned a home previously & hspown & 1.368 & 0.424 \\ Amount outstanding on home loans & hsmgowe & 96803.720 & 43547.610 \\ Time until home loan paid off & hsmgfin & 2011.858 & 4.157 \\ Food expenses outside the home & xposml & 36.982 & 42.522 \\ SEIFA (level of economic resources) & hhec10 & 5.463 & 2.897 \\ Taxes on total income & txtottp & 7476.727 & 14035.510 \\ Change in total gross income since 1 year ago & wslya & 2231.465 & 1950.065 \\ Had an incorporated business & bifinc & 1.715 & 0.199 \\ Had a non-LLC or unincorporated business & bifuinc & 1.259 & 0.193 \\ _Income_ & & & \\ HH current weekly gross wages - all jobs & hiwscei & 992.666 & 918.261 \\ Current weekly gross wages - main job & wscme & 468.062 & 556.185 \\ HH financial year gross wages & hiwsfei & 52472.490 & 49458.180 \\ Financial year gross wages & wsfe & 25463.770 & 30265.630 \\ Financial year regular market income & tifmktp & 30734.790 & 33618.860 \\ Financial year disposable total income & tifditp & 27477.160 & 22701.270 \\ \hline \end{tabular}
\begin{tabular}{l c c c} \hline \multicolumn{4}{c}{_Continued from previous page_} \\ \hline Variable label & Variable name & Mean & SD \\ \hline Imputation flag: current weekly gross wages - all jobs & wscef & 0.070 & 0.256 \\ Imputation flag: current weekly gross wages - other jobs & wscoef & 0.044 & 0.205 \\ Imputation flag: financial year gross wages & wsfef & 0.071 & 0.256 \\ \multicolumn{4}{l}{_Other sources of income_} \\ Receive superannuation/annuity payments & oifsup & 0.059 & 0.232 \\ Receive redundancy and severance payments & oifsrv & 0.002 & 0.038 \\ Receive other irregular payment & oifrr & 0.001 & 0.027 \\ Receive government pensions or allowances & bncyth & 0.004 & 0.027 \\ Receive Disability Support Pension & bnfdsp & 0.151 & 0.181 \\ Receive other regular public payments & oifpub & 0.000 & 0.019 \\ Financial year regular private income & tifprin & 77.299 & 1409.625 \\ Financial year investments & oifnvp & 1951.052 & 10569.050 \\ Financial year dividends & oidvry & 744.263 & 4651.593 \\ Financial year interest & oiint & 666.116 & 3448.494 \\ Financial year regular private pensions & oifpp & 967.101 & 5055.004 \\ Financial year business income (loss) & bifn & 185.652 & 3274.511 \\ Financial year business income (profit) & bifip & 2597.792 & 13649.410 \\ Financial year irregular transfers from non-resident parents & oifnpt & 35.067 & 1305.812 \\ Financial year public transfers & bnfapt & 2865.540 & 4717.042 \\ Financial year government non-income support payments & bnfnis & 1025.031 & 2237.987 \\ HH financial year public transfers & hifapti & 5542.675 & 7937.136 \\ \hline \end{tabular}
\begin{tabular}{l c c c} \hline \hline Variable label & Variable name & Mean & SD \\ \hline HH financial year business income & hibifip & 4880.589 & 18393.360 \\ _Health_ & & & \\ Imputation flag: current weekly public transfers & bncapuf & 0.044 & 0.204 \\ Imputation flag: financial year investments & oifinf & 0.124 & 0.330 \\ Imputation flag: financial year dividends & oidvrgf & 0.079 & 0.270 \\ Imputation flag: financial year rental income & oirntf & 0.071 & 0.257 \\ Imputation flag: financial year business income & biff & 0.071 & 0.258 \\ Health limits vigorous activities & gh3a & 2.108 & 0.718 \\ How much pain interfered with normal work & gh8 & 1.704 & 0.971 \\ Health condition/disability developed last 12 months & helthyr & 1.870 & 0.151 \\ Tobacco expense in average week & lstbca & 37.771 & 10.690 \\ _Housing_ & & & \\ Years at current address & hsyrcad & 9.541 & 10.226 \\ External condition of dwelling & docond & 1.970 & 0.870 \\ No dwelling security & dosecno & 0.552 & 0.497 \\ No. homes lived in last 10 years & mhn10yr & 3.456 & 1.107 \\ Moved to be near place of work & mhreawp & 0.084 & 0.111 \\ Moved because I was travelling & mhrearo & 0.009 & 0.038 \\ _Attitudes_ & & & \\ Importance of religion & loimprl & 4.612 & 3.483 \\ Working mothers care more about work success & atwkwms & 3.729 & 1.807 \\ Mothers who don't need money shouldn't work & atwkmsw & 3.951 & 1.982 \\ \hline \hline \end{tabular}
\begin{tabular}{l c c c} \hline \hline Variable label & Variable name & Mean & SD \\ \hline _Identifiers_ & & & \\ Family number person 02 & hhfam02 & NA & NA \\ Relationship to person 03 & rg03 & NA & NA \\ ID of other responder for HH Questionnaire & hhp2 & NA & NA \\ \hline \hline \end{tabular}
*Definition of technical and non-technical degree: Technical: STEM, Architecture, Agriculture and Environment, Medicine, Other Health-related Studies and Nursing, Management and Commerce and Law. Non-technical: Education, Society and Culture (includes economics!), Creative Arts, and Food, Hospitality and Personal Services.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & & Relationship & Relationship & Bias direction in \\ Variable label & Variable name & with re-education & with outcome & OLS models \\ & & (redufl) & (y\_wscei) & \\ \hline _Education_ & & & & \\ Doctorate obtained & edqodc & - & + & - \\ _Employment_ & & & & \\ Tenure with current employer & jbempt & - & - & + \\ Current work schedule & jbmsch & - & - & + \\ Casual worker & jbcasab & - & + & - \\ Occupational status - current main job & jbmo6s & + & + & + \\ No. persons employed at place of work & jbmwpsz & + & + & + \\ Prob. of accepting similar/better job & jbmpgj & + & + & + \\ Years unemployed and looking for work & ethujyr & + & - & - \\ _Work-life balance_ & & & & \\ Total hours per week would choose to work & jbrbrr & + & + & + \\ Parent provides child care & cpu\_me & & & - \\ Do fair share of looking after children & pashare & - & + & - \\ Miss out on home/family activities & pawkmfh & + & + & + \\ _Income_ & & & & \\ Current weekly gross wages - main job & wscme & + & + & + \\ Imputation flag: current weekly gross wages & wscef & + & + & + \\ - all jobs & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: ML variables omitted by OLS Baseline model
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Variable label} & \multirow{2}{*}{Variable name} & Relationship & Relationship & \multirow{2}{*}{Bias direction in OLS models} \\ & & with re-education & with outcome & \\ & & (redufl) & (y\_wsei) & \\ \hline Change in total gross income since 1 year ago & wlya & + & + & + \\ Financial year investments & oifinvp & - & - & + \\ Financial year business income (profit) & bifip & - & - & + \\ Amount outstanding on home loans & hsmgowe & + & + & + \\ Imputation flag: financial year dividends & oidvryf & + & - & - \\ Imputation flag: financial year rental income & oirntf & + & + & + \\ Imputation flag: financial year business income & bif & + & - & - \\ \hline \multirow{2}{*}{_Health_} & \multirow{2}{*}{gh3a} & + & + & + \\ & & lstbca & - & - & + \\ \hline \end{tabular}
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model & N & ATE & CI (ATE) \\ \hline OLS (S-learner) & 5441 & 64.41 & [8.16, 120.66] \\ T-learner (GBR) & 5441 & 88.38 & [30.72, 137.15] \\ T-learner (LASSO) & 5441 & 110.08 & [4.01, 182.49] \\ T-learner (Ridge) & 5441 & 108.95 & [46.84, 183.05] \\ Doubly Robust (GBR) & 5441 & 68.85 & [50.91, 82.07] \\ Doubly Robust (LASSO) & 5441 & 54.64 & [27.97, 72.74] \\ Doubly Robust (Ridge) & 5441 & 61.74 & [45.7, 78.86] \\ Hierarchical Linear Model & 5441 & 63.22 & [0.63, 121.70] \\ Gaussian Process & 5441 & 61.01 & [12.63, 109.51] \\ Bayesian Causal Forests & 5441 & 84.51 & [26.28, 141.17] \\ \hline \end{tabular} Notes: Sample of 25 or older respondents who had completed a degree at any point between 2002 and 2017. Total completions: 1,383.
\end{table}
Table 4: Average Treatment Effects: Level Earnings. Comparison across models.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model & Outcome surface & Negative MSE & NMSE Std & R-squared & R-squared Std & ATE & ATE\_std \\ \hline \multirow{2}{*}{GBR} & Treated & -886515 & 452077 & 0.22 & 0.06 & 68.2 & 28.4 \\ & Control & -659056 & 107251 & 0.36 & 0.07 & & \\ \hline \multirow{2}{*}{LASSO} & Treated & -955958 & 361911 & 0.15 & 0.09 & & \\ & Control & -710521 & 178030 & 0.32 & 0.05 & & \\ \hline \multirow{2}{*}{Ridge} & Treated & -966849 & 434518 & 0.16 & 0.08 & & \\ & Control & -712374 & 174033 & 0.32 & 0.04 & 97.8 & 14.5 \\ \hline \hline \end{tabular} Notes: 5 fold CV performed on 80% train sample. All statistics presented in this table are based on the 20% holdout sample. Ten outer folds are used. See Figure 1 for more details.
\end{table}
Table 3: Nested CV Holdout Sample: Level Earnings
Figure 1: Selecting and Evaluating Model Class
Figure 2: Generating Uncertainty Parameters
Figure 4: Degree completions by age
Figure 3: Timing of Completion
Figure 5: Timing of Completion by Type of Degree
Figure 6: Degree completions by sex
Figure 8: Important Features in Heterogeneous Treatment Effects Estimation using DR: Level Earnings
Figure 7: Earnings and Employment by year
Figure 10: Comparison of Treatment Effects across Different Methods
Figure 9: Top 3 Features Distribution of Importance using DR: Level Earnings
Figure 11: Other Employment Outcomes
_Notes_: The impact of a new qualification. Sample of people who are 25 or older in 2001. Observation sizes vary depending on the outcome variable. All results are estimated using the LASSO algorithm.
Figure 12: Earnings HTEs: DR
|
2308.10648
|
EVE: Efficient zero-shot text-based Video Editing with Depth Map
Guidance and Temporal Consistency Constraints
|
Motivated by the superior performance of image diffusion models, more and
more researchers strive to extend these models to the text-based video editing
task. Nevertheless, current video editing tasks mainly suffer from the dilemma
between the high fine-tuning cost and the limited generation capacity. Compared
with images, we conjecture that videos necessitate more constraints to preserve
the temporal consistency during editing. Towards this end, we propose EVE, a
robust and efficient zero-shot video editing method. Under the guidance of
depth maps and temporal consistency constraints, EVE derives satisfactory video
editing results with an affordable computational and time cost. Moreover,
recognizing the absence of a publicly available video editing dataset for fair
comparisons, we construct a new benchmark ZVE-50 dataset. Through comprehensive
experimentation, we validate that EVE could achieve a satisfactory trade-off
between performance and efficiency. We will release our dataset and codebase to
facilitate future researchers.
|
Yutao Chen, Xingning Dong, Tian Gan, Chunluan Zhou, Ming Yang, Qingpei Guo
|
2023-08-21T11:36:46Z
|
http://arxiv.org/abs/2308.10648v1
|
# EVE: Efficient zero-shot text-based Video Editing
###### Abstract
We present EVE, an efficient and robust zero-shot text-based video editor, which successfully trades off editing performance and efficiency.
_Motivated by the superior performance of image diffusion models, more and more researchers strive to extend these models to the text-based video editing task. Nevertheless, current video editing tasks mainly suffer from the dilemma between the high fine-tuning cost and the limited generation capacity. Compared with images, we conjecture that videos necessitate more constraints to preserve the temporal consistency during editing. Towards this end, we propose EVE, a robust and efficient zero-shot video editing method. Under the guidance of depth maps and temporal consistency constraints, EVE derives satisfactory video editing results with an affordable computational and time cost. Moreover, recognizing the absence of a publicly available video editing dataset for fair comparisons, we construct a new benchmark ZVE-50 dataset. Through comprehensive experimentation, we validate that EVE could achieve a satisfactory trade-off between performance and efficiency. We will release our dataset and codebase to facilitate future researchers._
## 1 Introduction
Owing to powerful diffusion models [10, 25], recent years have witnessed dramatic progress in text-based image synthesis and editing tasks, igniting the soaring research interest in extending these methods to the video editing field. Nevertheless, current text-based video editing methods, which manipulate attributes or styles of videos under the guidance of the driven text, mainly suffer from the dilemma between the considerable fine-tuning cost and the unsatisfied generation performance.
Recent video editing methods could be roughly divided into two classes: tuning-based methods [23, 27] and zero-shot ones [2, 15]. The former approaches mainly rely on fine-tuning image diffusion models to derive strong generative priors. Nevertheless, they are usually costly, as the fine-tuning step consumes substantial time and GPUs. Towards this end, zero-shot video editing methods aim to directly edit real-world videos without time-consuming fine-tuning. Nevertheless, videos edited in the zero-shot manner may suffer from spatio-temporal distortion and inconsistency. Besides, some zero-shot methods are built upon diffusion models fine-tuned on video datasets, and thus may not be free of the high cost incurred by the tuning-based ones.
In this paper, we attempt to achieve a trade-off between editing performance and efficiency. Specifically, we adopt the approach of zero-shot video editing, while improving editing performance based upon initial image diffusion models rather than video tuning-based ones. Consequently, the primary challenge is how to preserve and improve the temporal consistency of edited videos.
Let us begin by considering human editing. When dealing with images, adjusting object appearances or attributes is relatively straightforward. However, when it comes to videos, a comprehensive evaluation of all edited frames becomes imperative to prevent spatio-temporal distortion and inconsistency in the edited videos. As a result, we conjecture that **videos necessitate more temporal constraints** to preserve the temporal consistency, so their editing process cannot be as unconstrained as that of images. This hypothesis also explains the unsatisfactory performance observed when directly extending image diffusion models to videos, as current image editing methods seldom enforce explicit constraints.
Given this argument, different from current methods that neither exert explicit control over individual frame editing nor enforce additional constraints on inter-frame generation, we propose two strategies to reinforce temporal consistency constraints during zero-shot video editing: 1) **Depth Map Guidance**. Depth maps locate spatial layouts and motion trajectories of moving objects, providing robust prior cues for the given video. Therefore, we incorporate depth maps into video editing to improve the temporal consistency. And 2) **Frame-Align Attention**. We enhance the temporal encoding by forcing models to place their attention on both previous and current frames.
Moreover, by narrowing the gap of whether introducing depth maps into the noise-to-image inference procedure, we design an efficient parameter optimization strategy that directly updates target latent features without fine-tuning the complex diffusion model. In this way, it takes about 83.1 seconds to edit a video with 8 frames on average.
Currently, public video editing datasets for fair performance comparisons are lacking. Towards this end, we construct the new ZVE-50 dataset, where each collected video is associated with four corresponding driven texts. We conduct extensive experiments to benchmark the ZVE-50 dataset.
Our contributions are summarized in four folds:
* We propose EVE, a zero-shot text-based video editor with a satisfactory trade-off between the generation capability and efficiency.
* We argue the indispensability of temporal consistency constraints in the video editing task. Towards this end, we propose two strategies to improve the temporal consistency, achieving robust editing performance.
* We construct a new benchmark ZVE-50 dataset. To the best of our knowledge, ZVE-50 is the first dataset for zero-shot text-based video editing, which facilitates future researchers to perform a fair comparison.
* We conduct extensive experiments on ZVE-50 dataset. Experimental results indicate that the proposed EVE is a robust and efficient zero-shot video editing method.
## 2 Related Work
**Diffusion Models.** Large-scale diffusion models [3, 4, 18] have achieved state-of-the-art performance in image synthesis and translation. Diffusion models, in essence, are generative probabilistic models that approximate a data distribution \(p(x)\) by gradually denoising a normally distributed variable. Nevertheless, training a diffusion model from scratch is often expensive and time-consuming.
**Text-based Image Editing.** Text-based image editing [1, 11, 26] aims to manipulate the attributes or styles of one image with the guidance of the driven text. Based on powerful diffusion models, researchers have proposed various methods. _E.g._, DreamBooth [19] proposes a subject-driven generation technology by fine-tuning diffusion models, while T2I-Adapters [13] provides an efficient image editing approach with a low training cost.
**Text-based Video Generation and Editing.** Motivated by text-based image editing, video editing [8, 12] has attracted increasing research interest recently, which could be roughly divided into two categories: tuning-based ones [5] and zero-shot ones [15]. The former approaches mainly edit video attributes by fine-tuning powerful image diffusion models, whose training cost is inevitably expensive. Alternatively, FateZero [15] proposes the zero-shot video editing task, attempting to generate a text-driven video without extra optimization on complicated generative priors. Nevertheless, FateZero suffers from the limited video editing performance due to weak constraints on the temporal consistency. Moreover, FateZero still heavily relies on Tune-A-Video [23], which is a tuning-based diffusion model and is still costly and time-consuming.
## 3 Methodology
### Preliminary: DMs, LDMs, and DDIMs
**Diffusion Models** (DMs) [20] are essentially generative probabilistic models that approximate a data distribution \(p(x)\) by gradually denoising a normally distributed variable. Specifically, diffusion models learn to reconstruct the reverse process of a fixed forward Markov chain \(x_{1},x_{2},\cdots,x_{T}\), where \(T\) is the length. The forward Markov chain (\(1\to T\)) could be treated as an image-to-noise procedure, where each Markov transition step \(q(x_{t}|x_{t-1})\) is usually formulated as a Gaussian distribution (\(\mathcal{N}\)) with a variance schedule \(\beta_{t}\in(0,1)\), that is:
\[q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}\mathbf{ I}). \tag{1}\]
The reverse Markov chain (\(T\to 1\)) could be treated as a noise-to-image procedure, where each reverse Markov transition step \(p(x_{t-1}|x_{t})\) is formulated as:
\[p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t),\Sigma_{ \theta}(x_{t},t)), \tag{2}\]
where \(\theta\) denotes learnable parameters to guarantee that the reverse process is close to the forward one.
Empirically, current diffusion models could be interpreted as an equally weighted sequence of denoising auto-encoders \(\epsilon_{\theta}(x_{t},t)\), which is utilized to recover a denoised variant of their input \(x_{t}\), and \(x_{t}\) is a noisy version of the input \(x\). The optimization objective could be simplified as:
\[\mathbb{E}_{x,\epsilon\sim\mathcal{N}(0,1),t}[\left\|\epsilon-\epsilon_{ \theta}(x_{t},t)\right\|_{2}^{2}]. \tag{3}\]
**Latent Diffusion Models** (LDMs) [18] are trained in the learned latent space \(z_{t}\) rather than redundant spatial dimensionality \(x_{t}\), aiming to remove the noise added to latent image features \(\epsilon_{x}\). LDMs are generally composed of an encoder \(\mathcal{E}\), a time-conditional UNet \(\mathcal{U}\), and a decoder \(\mathcal{D}\), where \(z=\mathcal{E}(x)\) and \(x\approx\mathcal{D}(\mathcal{E}(x))\). The optimization objective could be formulated as:
\[\mathbb{E}_{\epsilon_{x},\epsilon\sim\mathcal{N}(0,1),t}[\left\|\epsilon- \epsilon_{\theta}(z_{t},t)\right\|_{2}^{2}]. \tag{4}\]
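As a concrete illustration of this objective, the following minimal sketch (not the authors' code) implements one training step under the standard closed-form noising assumption \(z_{t}=\sqrt{\bar{\alpha}_{t}}z_{0}+\sqrt{1-\bar{\alpha}_{t}}\,\epsilon\), with \(\bar{\alpha}_{t}\) the cumulative product of \(1-\beta_{t}\); the names `eps_model` and `alphas_cumprod` are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def simplified_denoising_loss(eps_model, z0, alphas_cumprod):
    """One step of the simplified objective in Eqs. 3-4 (sketch).

    eps_model:      network predicting the added noise eps_theta(z_t, t)
    z0:             clean (latent) samples, shape (B, C, H, W)
    alphas_cumprod: 1-D tensor of cumulative products of (1 - beta_t)
    """
    B, T = z0.shape[0], alphas_cumprod.shape[0]
    t = torch.randint(0, T, (B,), device=z0.device)       # random timesteps
    a_bar = alphas_cumprod[t].view(B, 1, 1, 1)             # \bar{alpha}_t per sample
    eps = torch.randn_like(z0)                             # Gaussian noise
    z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * eps   # forward (image-to-noise) step
    return F.mse_loss(eps_model(z_t, t), eps)              # || eps - eps_theta(z_t, t) ||^2
```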
**Denoising Diffusion Implicit Models** (DDIMs) [11] could accelerate the sampling from the distribution of images/videos at the denoising step. During inference, deterministic DDIM sampling (\(T\to 1\)) aims to recover a clean latent \(z_{0}\) from a random noise \(z_{T}\) with a noise schedule \(\alpha_{t}\), which could be formulated as:
\[z_{t-1}=\sqrt{\frac{\alpha_{t-1}}{\alpha_{t}}}z_{t}+(\sqrt{1-\alpha_{t-1}}- \sqrt{\frac{1}{\alpha_{t}}-1})\cdot\epsilon_{\theta}. \tag{5}\]
On the contrary, DDIM inversion (\(1\to T\)) aims to process a clean latent \(z_{0}\) into a noisy one \(\hat{z}_{T}\), which could be simplified as:
\[\hat{z}_{t}=\sqrt{\frac{\alpha_{t}}{\alpha_{t-1}}}\hat{z}_{t-1}+(\sqrt{1- \alpha_{t}}-\sqrt{\frac{1}{\alpha_{t-1}}-1})\cdot\epsilon_{\theta}. \tag{6}\]
Compared with conventional DMs that directly employ random noise as inputs and attempt to map each noise vector to a specific image, we exploit DDIM inversion to produce a \(T\)-step trajectory from the clean latent \(z_{0}\) to a Gaussian noise vector \(z_{T}\). Then we treat \(z_{T}\) as the start vector of the denoising step. This configuration seems appropriate for our video editing task, since it ensures that the generated video would be close to the original one.
Note that we employ **LDMs** and **DDIM inversion/denoising** in zero-shot text-based video editing. Readers can refer to [18] (LDM) and [21] (DDIMs) for more details of formulation derivations if necessary.
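For reference, the two updates in Eqs. 5-6 can be transcribed directly into code; in the sketch below (illustrative only), `eps` stands for the UNet prediction \(\epsilon_{\theta}\), and `alpha_t`, `alpha_prev` for the schedule values \(\alpha_{t}\) and \(\alpha_{t-1}\).

```python
def ddim_denoise_step(z_t, eps, alpha_t, alpha_prev):
    """Deterministic DDIM denoising (Eq. 5): z_t -> z_{t-1}."""
    return (alpha_prev / alpha_t) ** 0.5 * z_t + \
           ((1 - alpha_prev) ** 0.5 - (1 / alpha_t - 1) ** 0.5) * eps

def ddim_invert_step(z_prev, eps, alpha_t, alpha_prev):
    """DDIM inversion (Eq. 6): z_{t-1} -> z_t (clean latent towards noise)."""
    return (alpha_t / alpha_prev) ** 0.5 * z_prev + \
           ((1 - alpha_t) ** 0.5 - (1 / alpha_prev - 1) ** 0.5) * eps
```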
### Problem Formulation
Given a video \(V\) and a prompt text \(P\), zero-shot text-based video editing aims to generate an edited video \(\hat{V}\), which aligns with the description outlined in the prompt \(P\) and looks similar to the original video \(V\).
### Overall Framework
As shown in Figure 1 and 2, we present both simplified and elaborated versions of the overall framework. The simplified version could be treated as a flow chart that reveals the whole processing pipeline of our EVE. While the complex one presents detailed information mainly on the iterative DDIM inversion and denoising procedures.
As shown in Figure 1, our EVE is built upon the pre-trained latent diffusion model (LDM), which is composed of a UNet for T-timestep DDIM inversion and denoising. To enforce the temporal consistency of the generated video, we introduce depth maps and exploit them to guide the editing process. Moreover, we propose two consistency constraints to prevent edited videos from spatial or temporal distortion.
We first present the overall pipeline of our EVE based upon Figure 2, including the following five steps.
**1. Frozen Features Extraction**. Given a video \(V\), we first derive \(K\) frames from \(V\), and utilize an image encoder \(\mathcal{E}_{I}\) to obtain **frozen** latent features \(\mathbf{Z}_{0}=\mathcal{E}_{I}(V)\), where \(\mathbf{Z}_{0}=\{z_{0}^{i}\}_{i=1}^{K}\). Meanwhile, we employ the MiDas Detector [17] to generate \(K\) depth maps from \(V\), and utilize another visual encoder \(\mathcal{E}_{M}\) to obtain **frozen** depth-map features \(\mathbf{M}=\{m^{i}\}_{i=1}^{K}\). Moreover, we utilize a text encoder \(\mathcal{E}_{p}\) to process the prompt \(P\) into **frozen** features \(p\).
**2. DDIM Inversion**. Then we repeat DDIM inversion for \(T\) steps to derive Gaussian noise vectors \(\mathbf{Z}_{T}\) from video latent features \(\mathbf{Z}_{0}\). Each DDIM inversion at timestep \(t\) could be formulated as:
\[\mathbf{Z}_{t}=\mathrm{DDIM}_{\mathrm{inv}}(\mathbf{Z}_{t-1}\mid\mathbf{M},t )\quad t=1\to T, \tag{7}\]
where \(\mathrm{DDIM}_{\mathrm{inv}}\) denotes DDIM inversion shown in Eq. 6.
To prevent the edited video from temporal distortion and inconsistency, we improve the image-based DDIM inversion operation by introducing depth-map features into the down-sampling pass of the **frozen** UNet, which could rectify the discrepancies among neighboring frames at each inversion step. In this way, we ensure that the generated noise vectors \(\mathbf{Z}_{T}\) would not severely spoil the temporal consistency.
Specifically, we repeat \(T\) DDIM inversion steps to process video latent features \(\mathbf{Z}_{0}\) into generated noise vectors \(\mathbf{Z}_{T}\).
**3. DDIM Denoising**. Afterward, we repeat DDIM denoising for \(T\) steps to obtain edited video features \(\mathbf{\hat{Z}}_{0}\) from DDIM inverted noise \(\mathbf{\hat{Z}}_{T}\), where \(\mathbf{\hat{Z}}_{T}=\mathbf{Z}_{T}\). Each DDIM denoising at timestep \(t\) could be formulated as:
\[\mathbf{\hat{Z}}_{t-1}=\mathrm{DDIM}_{\mathrm{den}}(\mathbf{\hat{Z}}_{t}\mid p,\mathbf{M},t),\quad t=T\to 1, \tag{8}\]
where \(\mathrm{DDIM}_{\mathrm{den}}\) denotes DDIM denoising shown in Eq. 5.
To prevent the edited video from temporal distortion and inconsistency, we improve the image-based DDIM denoising operation from two aspects: 1) We introduce depth-map features into the down-sampling pass of the **frozen** UNet as DDIM inversion. And 2) we propose the frame-aligned attention to place explicit temporal constraints on the edited video, which is discussed in the following subsection.
Specifically, we repeat \(T\) DDIM denoising steps to obtain edited video features \(\mathbf{\hat{Z}}_{0}\) from DDIM inverted noise \(\mathbf{\hat{Z}}_{T}\).
Figure 1: The SIMPLIFIED version of the proposed EVE, presenting the overall video editing pipeline.
Figure 2: The ELABORATED version of the proposed EVE, detailing the DDIM inversion and denoising procedures.
**4. Parameter Optimization**. To reduce the computation cost and make the generation process more efficient, we freeze all feature extractors (\(\mathcal{E}_{I}\), \(\mathcal{E}_{M}\), and \(\mathcal{E}_{p}\)), the decoder \(\mathcal{D}\), and the UNets, and only set the noise vectors \(\mathbf{\hat{Z}}_{t}\) in DDIM denoising to be trainable. In other words, different from conventional editing methods that update "neural networks", we directly update "latent noise" to obtain edited videos.
Specifically, at each timestep \(t\) in DDIM denoising, in addition to \(\mathbf{\hat{Z}}_{t-1}\), we also derive auxiliary vectors \(\mathbf{\hat{Z}}_{t-1}^{\prime}\) as:
\[\mathbf{\hat{Z}}_{t-1}^{\prime}=\mathrm{DDIM}_{\mathrm{den}}(\mathbf{\hat{Z}}_{t}\mid p,t). \tag{9}\]
Compared with \(\mathbf{\hat{Z}}_{t-1}\) (Eq. 8), \(\mathbf{\hat{Z}}_{t-1}^{\prime}\) is obtained without strict depth map constraints, and could thus be treated as free image editing that unleashes the generation capacity of powerful image-based diffusion models. In brief, \(\mathbf{\hat{Z}}_{t-1}\) sacrifices creativity to preserve the temporal consistency, while \(\mathbf{\hat{Z}}_{t-1}^{\prime}\) is just the opposite. Therefore, we leverage the more creative \(\mathbf{\hat{Z}}_{t-1}^{\prime}\) to refine the more temporally consistent \(\mathbf{\hat{Z}}_{t-1}\), pursuing a trade-off between diversity and quality.
The detailed DDIM denoising procedure is illustrated in Algorithm 1, including the parameter optimization step (Lines 4-5). \(\Delta_{\hat{Z}_{t-1}}(\mathcal{L})\) denotes updating the trainable \(\mathbf{\hat{Z}}_{t-1}\) by the gradient descent procedure according to the loss \(\mathcal{L}\), and \(\mathrm{cos}(\cdot,\cdot)\) denotes the cosine similarity computation.
```
Input: DDIM inverted noise \(\mathbf{\hat{Z}}_{T}\), text prompt features \(p\), depth-map features \(\mathbf{M}\), learning rate \(\lambda\)
Output: edited video features \(\mathbf{\hat{Z}}_{0}\)
1 for \(t=T\) to \(1\) do
2   \(\hat{\mathbf{Z}}_{t-1}=\mathrm{DDIM}_{\mathrm{den}}(\hat{\mathbf{Z}}_{t}\mid p,\mathbf{M},t)\) ;
3   \(\hat{\mathbf{Z}}_{t-1}^{\prime}=\mathrm{DDIM}_{\mathrm{den}}(\hat{\mathbf{Z}}_{t}\mid p,t)\) ;
4   \(\mathcal{L}=1-\mathrm{cos}(\hat{\mathbf{Z}}_{t-1},\hat{\mathbf{Z}}_{t-1}^{\prime})\) ;
5   \(\hat{\mathbf{Z}}_{t-1}=\hat{\mathbf{Z}}_{t-1}-\lambda\Delta_{\hat{Z}_{t-1}}(\mathcal{L})\)
6 end for
```
**Algorithm 1** DDIM Denoising Procedure.
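A minimal PyTorch-style sketch of Algorithm 1 is given below; it is only illustrative, with `ddim_den` standing in for one frozen DDIM denoising step (with or without depth guidance, Eqs. 8-9), and only the latent noise being updated by a gradient step on the cosine loss.

```python
import torch
import torch.nn.functional as F

def edit_latents(ddim_den, z_T, p, M, T, lr=0.8):
    """Sketch of Algorithm 1: optimize the latent noise, not the UNet.

    ddim_den(z_t, t, p, depth): one frozen DDIM denoising step; depth=None
    drops the depth-map guidance. Names are placeholders, not the authors' API.
    """
    z_t = z_T
    for t in range(T, 0, -1):
        z_guided = ddim_den(z_t, t, p, depth=M).detach().requires_grad_(True)  # Eq. 8
        z_free = ddim_den(z_t, t, p, depth=None).detach()                      # Eq. 9
        loss = 1.0 - F.cosine_similarity(z_guided.flatten(1),
                                         z_free.flatten(1), dim=-1).mean()
        (grad,) = torch.autograd.grad(loss, z_guided)    # dL / dZ_{t-1}
        z_t = (z_guided - lr * grad).detach()            # gradient step, reused at t-1
    return z_t  # edited video features \hat{Z}_0
```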
**5. Edited Video Decoding**. Ultimately, we feed the **frozen** visual decoder \(\mathcal{D}\) with the generated latent features \(\mathbf{\hat{Z}}_{0}\), obtaining the edited video \(\hat{V}\).
### Temporal Consistency Constraints
As aforementioned, we assume that videos necessitate more temporal constraints to preserve the time consistency. Therefore, we propose two strategies to alleviate temporal distortion and inconsistency problems.
**1. Depth Map Guidance**. Depth maps record visual representations of the distance information, revealing spatial layouts and motion trails of all objects within a video. Therefore, depth maps could be treated as strong prior cues to guide the video editing procedure close to the initial version. Nevertheless, recent video editing methods seldom take advantage of depth maps and neglect the significance of explicitly intervening in the video editing procedure, resulting in intractable temporal distortion and inconsistency problems. Towards this end, we introduce depth maps into the down-sampling pass of the frozen UNet for both DDIM inversion and denoising procedures, forcing the editing process to imitate motion trails and scene transformations of the origin video. In this way, the stability and consistency of the edited video would be improved.
**2. Frame-Align Attention**. We propose the frame-align attention (FAA) to explicitly introduce the temporal information during video editing. As illustrated in Figure 3, a typical UNet comprises a series of "Conv-Attn" blocks to conduct the down-sampling and up-sampling calculation. The conventional attention block (Attn) contains a self-attention (SA) module [22], a cross-attention (CA) module [28], and a feed-forward network (FFN). The computation of \(\mathrm{SA}(Q,K,V)\) and \(\mathrm{CA}(Q,K,V)\) could be formulated as:
\[\mathrm{SA}:Q=W^{Q}z^{i},\;K=W^{K}z^{i},\;V=W^{V}z^{i};\qquad\mathrm{CA}:Q=W^{Q}z^{i},\;K=W^{K}p,\;V=W^{V}p, \tag{10}\]
where \(W^{Q}\), \(W^{K}\), and \(W^{V}\) denote **frozen** projection matrices, \(z^{i}\) is the latent features of the \(i\)-th frame within the video, and \(p\) is the latent features of the text prompt.
Conventional self-attention modules are inherited from image diffusion models, which encode each frame separately and thus seem insufficient for preserving the temporal consistency in video editing. Therefore, we propose the frame-align attention (FAA) to replace \(K\) and \(V\) with the first frame features, forcing models to emphasize both previous and current frames for better temporal encoding. The computation of \(\mathrm{FAA}(Q,K,V)\) could be formulated as:
\[\mathrm{FAA}:Q=W^{Q}z^{i},\;K=W^{K}z^{1},\;V=W^{V}z^{1}. \tag{11}\]
Figure 3: The architecture of the attention block within the UNet. Note that we propose the Frame-Align Attention to improve the temporal consistency.
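A compact sketch of the frame-align attention, following Eq. 11 (queries from the current frame, keys and values from the first frame), is shown below; tensor shapes and names are illustrative, and the single-head form omits multi-head splitting and other implementation details.

```python
import torch

def frame_align_attention(z_frames, W_Q, W_K, W_V):
    """Frame-Align Attention sketch (Eq. 11).

    z_frames: (K, N, d) latent tokens for K frames
    W_Q, W_K, W_V: frozen (d, d) projection matrices
    """
    q = z_frames @ W_Q                          # queries from every frame
    k = (z_frames[0] @ W_K).unsqueeze(0)        # keys from the first frame only
    v = (z_frames[0] @ W_V).unsqueeze(0)        # values from the first frame only
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return torch.softmax(scores, dim=-1) @ v    # (K, N, d)
```

The Sparse-Causal Attention variant compared in Eq. 13 differs only in building the keys and values from the concatenated first and previous frame features.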
## 4 Experiments
### Dataset Construction
Since zero-shot text-based video editing is a novel task, to the best of our knowledge there is no public dataset on which fair performance and efficiency comparisons can be performed. Towards this end, we construct _Zero-shot Video Editing 50_ (dubbed **ZVE-50**) to fill this gap.
**Data Collection**. We collect videos from two resources: DAVIS-2017 [14] and stock-video-footage *. DAVIS-2017 is a competition dataset for the video object segmentation task [24], while stock-video-footage is a public website for free stock video clips and motion graphics. After filtering out videos with similar scenes and styles to avoid repetition and promote diversity, we collect 14 short videos from DAVIS-2017 and 36 from stock-video-footage, resulting in the ZVE-50 dataset.
Footnote *: [https://www.video.net/stock-video-footage/](https://www.video.net/stock-video-footage/)
**Caption Generation**. Then we feed collected videos into BLIP2 [9] to obtain the corresponding captions. Specifically, we generate several candidate captions and select the longest one as the ground-truth text.
**Prompt Generation**. Afterward, we employ GPT-4 + to generate the driven text derived from video captions and our manually made prompts. There are four types of driven text, requiring models to edit the given video by 1) Object Replacement (OR), 2) Object Adding (OA), 3) Style Transfer (ST), and 4) Background Changing (BC). Here we present an example of feeding GPT-4 with the manually written prompt and ground-truth caption to obtain the driven text:
Footnote †: [https://openai.com/gpt-4](https://openai.com/gpt-4)
Q (human): _Here is a sentence. Please replace the object with another object with a similar shape: "a pink lotus flower in the water with green leaves"_
A (GPT-4): _a pink lotus flower floating in a tranquil koi pond with lily pads_
Ultimately, we manually check all videos, captions, and prompt text to ensure the correctness of the ZVE-50 dataset.
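As a rough illustration of how such queries can be assembled before being sent to GPT-4, the sketch below composes an instruction string from a caption and an edit type; only the Object Replacement wording is taken from the example above, while the other templates and the function name are hypothetical placeholders.

```python
# Illustrative templates only; the OR wording follows the example above,
# the other three are hypothetical stand-ins for the manually written prompts.
EDIT_TEMPLATES = {
    "OR": "Here is a sentence. Please replace the object with another object with a similar shape",
    "OA": "Here is a sentence. Please add a plausible new object to the scene",
    "ST": "Here is a sentence. Please restate it in a distinct artistic style",
    "BC": "Here is a sentence. Please change the background of the scene",
}

def build_query(caption: str, edit_type: str) -> str:
    """Compose the instruction that is sent to GPT-4 to obtain a driven text."""
    return f'{EDIT_TEMPLATES[edit_type]}: "{caption}"'
```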
### Experimental Settings
**Implementation Details**. Zero-shot text-based video editing directly takes a given video and outputs its edited version, which differs from previous methods with explicit training or testing procedure. Specifically, we freeze the pre-trained Latent Diffusion Model as our basic model, where the visual encoder \(\mathcal{E}_{I}\), the UNet, and the visual decoder \(\mathcal{D}\) are inherited from [18] with the version of v1.5. We employ MiDas [17] to derive depth maps, and utilize frozen Resnet blocks [6] to extract depth map features \(M\). The text encoder \(\mathcal{E}_{p}\) is the pre-trained CLIP text encoder [16].
During video editing, following [23] and [15], we uniformly sample 8 frames at the resolution of \(512\times 512\) from each video, and conduct DDIM inversion and denoising steps 50 (\(T\)) times. The learning rate \(\lambda\) is 0.8. It takes about **83 seconds** to edit a video on an A40 GPU.
**Evaluation Metrics.**
We employ two metrics, _i.e_., Temporal Consistency (TC) and Prompt Consistency (PC), to thoroughly evaluate the quality of edited videos: 1) Following [5], we first extract CLIP embedding of all frames within the edited video, and calculate the average cosine similarity between all pairs of neighborhood frames to derive the Temporal Consistency score. 2) Following [15], we utilize Text-Video CLIP Score to evaluate the Prompt Consistency between the edited video \(\hat{V}\) (\(K\) frames) and the driven text \(p\), which could be formulated as:
\[\mathrm{CLIP}(\hat{V},p)=\frac{1}{K}\sum\nolimits_{k=1}^{K}\mathrm{CLIP}(\hat {v}^{k},p). \tag{12}\]
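Both metrics reduce to cosine similarities over CLIP embeddings. The sketch below assumes the frame embeddings (shape (K, d)) and the text embedding (shape (d,)) have already been extracted, treats "neighborhood frames" as consecutive pairs, and omits any extra scaling that particular CLIP-score implementations may apply.

```python
import torch
import torch.nn.functional as F

def temporal_consistency(frame_embs):
    """Average cosine similarity between CLIP embeddings of consecutive frames."""
    sims = F.cosine_similarity(frame_embs[:-1], frame_embs[1:], dim=-1)
    return sims.mean()

def prompt_consistency(frame_embs, text_emb):
    """Text-Video CLIP score (Eq. 12): mean frame-to-prompt similarity."""
    sims = F.cosine_similarity(frame_embs, text_emb.unsqueeze(0), dim=-1)
    return sims.mean()
```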
\begin{table}
\begin{tabular}{c|c|c|c|c c c c|c c c c c} \hline \hline \multirow{2}{*}{No.} & \multirow{2}{*}{Model} & \multirow{2}{*}{DMG} & \multirow{2}{*}{Attn} & \multicolumn{4}{c|}{Temporal Consistency} & \multicolumn{4}{c}{Prompt Consistency} \\ \cline{4-14} & & & OR & OA & ST & BC & AVG & OR & OA & ST & BC & AVG \\ \hline \hline \multicolumn{14}{c}{_Performance comparisons between the proposed EVE and FateZero._} \\ \hline A1 & FateZero & - & STSA & 95.53 & 95.64 & 95.84 & 96.62 & 95.91 & 28.92 & 29.43 & 29.24 & 28.90 & 29.12 \\ A2 & EVE & \(\surd\) & FAA & **96.41** & **96.65** & **96.43** & **96.70** & **96.54** & **30.39** & **31.01** & **31.27** & **28.94** & **30.40** \\ \hline \multicolumn{14}{c}{_Ablation study of two proposed temporal consistency constraints within EVE._} \\ \hline B1 & & - & SA & 92.59 & 92.94 & 92.07 & 96.30 & 93.48 & 25.39 & 26.43 & 27.38 & 28.88 & 27.02 \\ B2 & & - & FAA & 95.26 & 95.20 & 95.16 & 95.68 & 95.33 & 27.77 & 29.37 & 29.11 & 29.71 & 28.99 \\ B3 & EVE & \(\surd\) & SA & 94.57 & 94.02 & 94.88 & 95.45 & 94.73 & **30.58** & **31.12** & 31.16 & 28.45 & 30.33 \\ B4 & & \(\surd\) & SCA & 95.74 & 96.18 & 96.13 & 96.62 & 96.17 & 30.26 & 31.01 & 31.00 & 28.91 & 30.29 \\ B5 & & \(\surd\) & FAA & **96.41** & **96.65** & **96.43** & **96.70** & **96.54** & 30.39 & 31.01 & **31.27** & **28.94** & **30.40** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance comparisons between our EVE with FateZero, and ablation study of two proposed temporal consistency constraints within EVE. We report the detailed results of four video editing missions and their average performance (underlined), where _OR = Object Replacement, OA = Object Adding, ST = Style Transfer_, and _BC = Background Changing_. All experiments are conducted on one A40 GPU under the same setting. “DMG” denotes with/without the depth map guidance.
### Performance and Efficiency Comparisons
As aforementioned, zero-shot text-based video editing is a novel task without public datasets and widely-employed baselines. Thus, it is intractable to conduct a fair performance comparison with other methods. Therefore, we compare the video editing efficiency of our EVE with the tuning-based Tune-A-Video [23] and the zero-shot FateZero [15]. We also compare the zero-shot video editing performance between our EVE and FateZero in two quantitative metrics.
Regarding efficiency comparisons, as illustrated in Table 2, compared with the tuning-based Tune-A-Video that takes about 30 minutes to generate an edited video, zero-shot video editing methods are much more efficient as they shorten the time to less than 5 minutes. Moreover, compared with the baseline FateZero, the proposed EVE only costs about 1/3 of the total time (83 _vs._ 247 seconds) to edit a video, which is more time-efficient and user-friendly.
Regarding performance comparisons, as illustrated in Table 1 (A1 _vs._ A2), we observe that our proposed EVE outperforms the baseline FateZero in all four tasks on the constructed ZVE-50 dataset, achieving an average improvement of +0.63\(\%\) on the temporal consistency and +1.28\(\%\) on the prompt consistency. It indicates that EVE is an efficient and robust video editing method, which improves the temporal consistency of the generated video.
### Ablation Study
Based on the argument that video editing necessitates more temporal constraints to preserve the temporal consistency, we propose two constraints to alleviate temporal distortion and inconsistency problems. We conduct several ablation studies to verify their effectiveness on our ZVE-50 dataset.
As illustrated in Table 1, we have three observations:
1) Depth maps are strong generative priors that prevent the edited video from temporal distortion and inconsistency. Comparing B2 (without DMG) with B5, we witness an obvious performance decay on both temporal and prompt consistency, indicating the indispensability of the proposed depth map guidance strategy.
2) The proposed Frame-Align Attention (FAA) reinforces the temporal encoding to improve the consistency of edited videos. Comparing B3 (without FAA) with B5, methods equipped with FAA outperform conventional ones with SA by a large margin, especially on the metric of temporal consistency.
3) We also compare our FAA with the Sparse-Causal Attention (SCA) mechanism proposed by Tune-A-Video. SCA calculates attentions among current frames and the previous neighborhood ones, which could be formulated as:
\[\mathrm{SCA}:Q=W^{Q}\dot{z}^{i},K=W^{K}[\dot{z}^{1};\dot{z}^{i-1}],V=W^{V}[ \dot{z}^{1};\dot{z}^{i-1}], \tag{13}\]
where \([\cdot;\cdot]\) denotes the concatenation operation. We implement SCA based upon our backbone with the same setting. Comparing B4 (with SCA) with B5, our FAA outperforms SCA on both temporal and prompt consistency in all four tasks, proving the advantages of the proposed FAA strategy.
### Visualization Results and Applications
As illustrated in Figure 6, our EVE supports four types of applications towards zero-shot text-based video editing:
1) Object Replacement (OR). OR replaces an object with another one in the given video. _E.g._, "\(man\to woman\)".
2) Object Adding (OA). OA adds a new object to the original video. _E.g._, "\(man\to man\)_with glasses_".
3) Style Transfer (ST). ST transfers the original video into different styles. _E.g._, "_style \(\to\) Van Gogh style_".
4) Background Changing (BC). BC changes the video background. _E.g._, "_background \(\to\) under stars_".
## 5 Conclusion
We present EVE, a robust and efficient zero-shot text-based video editing method, to tackle the dilemma between the considerable fine-tuning cost and the unsatisfactory generation performance. Motivated by the observation that videos necessitate more constraints to preserve the temporal consistency, we introduce depth maps and two temporal consistency constraints to guide the video editing procedure. In this way, the proposed EVE achieves a satisfactory trade-off between performance and efficiency. Moreover, we construct and benchmark ZVE-50, a public video editing dataset that provides a fair comparison for future researchers.
## 6 Future Work
In the future, we aim to further improve the quality of edited videos, narrowing the performance gap between tuning-based video editing methods and zero-shot ones. _E.g._, introducing the triplet attention mechanism [29] into the attention block of UNets to promote the temporal stability; and generating pseudo labels by recording attention maps of UNets during the DDIM inversion procedure, which helps to build a knowledge distillation mechanism [7] in the following denoising step.
\begin{table}
\begin{tabular}{c|c|c|c} \hline Model & Tune-A-Video & FateZero & EVE \\ \hline Time & \(\sim\) 30 minutes & 247.6 seconds & **83.1** seconds \\ \hline \end{tabular}
\end{table}
Table 2: Efficiency comparisons between our EVE with tuning-based Tune-A-Video and zero-shot FateZero. All experiments are conducted on one A40 GPU.
|
2302.02217
|
Tailoring magnetic hysteresis of Fe-Ni permalloy by additive
manufacturing: Multiphysics-multiscale simulations of process-property
relationships
|
Designing the microstructure of Fe-Ni permalloy by additive manufacturing
(AM) opens new avenues to tailor the materials' magnetic properties. Yet,
AM-produced parts suffer from spatially inhomogeneous thermal-mechanical and
magnetic responses, which are less investigated in terms of process simulation
and modeling schemes. Here we present a powder-resolved multiphysics-multiscale
simulation scheme for describing magnetic hysteresis in materials produced via
AM. The underlying physical processes are explicitly considered, including the
coupled thermal-structural evolution, chemical order-disorder transitions, and
associated thermo-elasto-plastic behaviors. The residual stress is identified
as the key thread in connecting the physical processes and in-process phenomena
across scales. By employing this scheme, we investigate the dependence of the
fusion zone size, the residual stress and plastic strain, and the magnetic
hysteresis of AM-produced Fe21.5Ni78.5 permalloy on beam power and scan speed.
Simulation results also suggest a phenomenological relation between magnetic
coercivity and average residual stress, which can guide the magnetic hysteresis
design of soft magnetic materials by choosing appropriate AM-process
parameters.
|
Yangyiwei Yang, Timileyin David Oyedeji, Xiandong Zhou, Karsten Albe, Bai-Xiang Xu
|
2023-02-04T18:39:41Z
|
http://arxiv.org/abs/2302.02217v1
|
Tailoring magnetic hysteresis of Fe-Ni permalloy by additive manufacturing: Multiphysics-multiscale simulations of process-property relationships
###### Abstract
Designing the microstructure of Fe-Ni permalloy by additive manufacturing (AM) opens new avenues to tailor the materials' magnetic properties. Yet, AM-produced parts suffer from spatially inhomogeneous thermal-mechanical and magnetic responses, which are less investigated in terms of process simulation and modeling schemes. Here we present a powder-resolved multiphysics-multiscale simulation scheme for describing magnetic hysteresis in materials produced via AM. The underlying physical processes are explicitly considered, including the coupled thermal-structural evolution, chemical order-disorder transitions, and associated thermo-elasto-plastic behaviors. The residual stress is identified as the key thread in connecting the physical processes and in-process phenomena across scales. By employing this scheme, we investigate the dependence of the fusion zone size, the residual stress and plastic strain, and the magnetic hysteresis of AM-produced Fe\({}_{21.5}\)Ni\({}_{78.5}\) permalloy on beam power and scan speed. Simulation results also suggest a phenomenological relation between magnetic coercivity and average residual stress, which can guide the magnetic hysteresis design of soft magnetic materials by choosing appropriate AM-process parameters.
_Keywords--_ additive manufacturing, selective laser sintering, multiphysics-multiscale simulation, phase-field model, microstructure evolution, soft magnetic, magnetic hysteresis, permalloy
## 1 Introduction
The Fe-Ni permalloy has been widely studied in recent decades owing to its extraordinary magnetic permeability, low coercivity, high saturation magnetization, mechanical strength, and magneto-electric characteristics. The material has been widely employed in conventional electromagnetic devices, such as sensors and actuators, transformers, electrical motors, and magnetoelectric inductive elements. Fe-Ni-based permalloys modified with additives are also promising candidate materials for multiple novel applications, such as wind turbines, all-electric vehicles, rapid power-conversion electronics, electrocatalysts, and magnetic refrigeration [1, 2, 3, 4].
Due to the increasing importance of additive manufacturing (AM) technologies, the possibilities of designing soft magnetic materials by AM have been explored in a number of studies [5, 6, 7, 8, 9, 10, 11, 12]. However, due to the delicate interplay
of process conditions and resulting properties, there are several open questions that need to be answered in order to obtain AM-produced Fe-Ni permalloy parts with the desired property profile. Magnetic properties of the Fe-Ni system depend on the chemical composition, as depicted in Fig. 1a. Fe\({}_{21.5}\)Ni\({}_{78.5}\) is typically selected targeting the chemical-ordered low-temperature FCC phase (also known as awaruite, L1\({}_{2}\), or \(\gamma^{\prime}\) phase; see the phase diagram in Fig. S1a), which possesses a minimized coercivity and peaked magnetic permeability [13, 8]. The main problem is to increase the generation of \(\gamma^{\prime}\) phase using AM, as the phase transition kinetics from the chemical-disordered high-temperature FCC phase (also known as austenite, A1, or \(\gamma\) phase) to the chemical-ordered \(\gamma^{\prime}\) phases is extremely restricted [14]. On the laboratory timescale, growing the \(\gamma^{\prime}\) phase into considerable size requires annealing times on the order of days [15, 16, 17, 18, 19]. Due to the rapid heating and cooling periods, only the \(\gamma\) phase exists in AM-processed parts [9]. Combining in-situ alloying with AM methods allows stabilizing the \(\gamma^{\prime}\) phase in printed parts [8], but the restricted kinetics still limits the growth of a long-range ordered phase [14]. Another question concerns the influences of associated phenomena on magnetic hysteresis. Although there are studies on the effects of crystallographic texture and orientation [11, 20], a pivotal discussion regarding interactions among residual stress, microstructures, and physical processes leading to the magnetic hysteresis behavior is still missing.
Recently, it has been shown that soft magnetic properties can be designed by controlling magneto-elastic coupling [21, 22, 23]. Along these lines, the residual stress caused by AM and the underlying phase transitions could be key for tuning the coercivity in an AM-processed Fe-Ni permalloy. Micromagnetic simulations by Balakrishna et al. [22] presented that magneto-elastic coupling plays an important role in governing magnetic hysteresis, as the existence of the pre-stress shifts the minimized coercivity from the composition Fe\({}_{25}\)Ni\({}_{75}\) to Fe\({}_{21.5}\)Ni\({}_{78.5}\), while the magneto-crystalline anisotropy is zero at Fe\({}_{25}\)Ni\({}_{75}\) but non-zero at Fe\({}_{21.5}\)Ni\({}_{78.5}\). Based on a vast number of calculations, the dimensionless constant \((C_{11}-C_{12})\lambda_{100}^{2}/2K_{\rm u}\approx 81\) was proposed as a condition for low coercivity along the \(\langle 100\rangle\) crystalline direction for cubic materials (incl. Fe-Ni permalloy) [23]. Here, \(C_{11}\) and \(C_{12}\) are the components of the stiffness tensor, \(\lambda_{100}\) is the magnetostrictive constant, and \(K_{\rm u}\) is the magneto-crystalline constant. Nevertheless, the influence of the magnitude and the states of the residual stress were not comprehensively analyzed. Yi et al. discussed magneto-elastic coupling in the context of AM-processed Fe-Ni permalloy for the first time [21]. The simulations were performed with positive, zero, and negative magnetostrictive constants under varying beam power. The results showed that the coercivity of the Fe-Ni permalloy with both positive and negative magnetostrictive constants rises with increasing beam power. In contrast, the permalloy with zero magnetostrictive constant showed no dependence on the beam power. This demonstrates the necessity of magneto-elastic coupling in tuning coercivity in AM-produced permalloy and illustrates the potential of tailoring properties of permalloys via controlling the residual stress during AM.
Understanding the residual stress in AM and its interactions with other physical processes, such as thermal and mass transfer, grain coarsening, and phase transition, is by no means a task accomplished at one stroke. Taking the popular selective laser melting/sintering (SLM/SLS) method as an instance, the temperature gradient mechanism (TGM) explains the generation of residual stress by considering the heating mode and the cooling mode [24, 25, 26]. The heating mode presents a counter-bending with respect to the building direction (BD) of newly fused layers (Fig. 1b). This is because the thermal expansion in an upper overheated region gets restricted by the lower old layer/substrate. Plastic strain can also be generated due to the activated plasticity of the material and can compensate for the local stress around the heat-affected zone. The cooling mode, in contrast, presents a bending towards the BD of the newly fused layer due to the thermal contraction between the fusion zone and the old layer/substrate.
It should be noted that TGM only provides a phenomenological picture by employing idealized homogeneous layers. In practical SLM/SLS, the varying morphology and porosity of the powder bed create inhomogeneity not only in the temperature field but also in the on-site thermal history on the mesoscale (10-100 \(\upmu\)m), inciting varying degrees of thermal expansion and eventually leading to the development of thermal stress to various degrees. Stochastic inter-particle voids and lack-of-fusion pores also create evolving inhomogeneity in material properties on the mesoscale [27, 28], leading to shifted local conditions for developing the residual stress as interpreted by TGM. In other words, mesoscopic inhomogeneity and coupled thermo-structural evolution should act as a long-range factor in residual stress development. On the other hand, due to the relatively smaller lattice parameter, the continuous growth of the \(\gamma^{\prime}\) phase would also result in increasing misfit stress between itself and the chemical-disordered \(\gamma\) matrix [19, 18] (as also presented in Fig. 1c). Therefore, the nanoscopic solid-state phase transition also contributes to the development of residual stress as a short-range fluctuation, which has almost no effect on the mesoscopic phenomena yet still influences the local magnetic behavior [22]. To sum up, the residual stress from AM processes that eventually affects the magnetization reversal via magneto-elastic coupling should already reflect such long-range
(morphology and morphology-induced chronological-spatial thermal inhomogeneity) and short-range factors (misfit-induced fluctuation). This is the central challenge that this work addresses.
In this work, we developed a powder-resolved multiphysics-multiscale simulation scheme to investigate the hysteresis tailoring of Fe-Ni permalloy by AM under a scenario close to practical experiments. This means that the underlying physical processes, including the coupled thermal-structural evolution, chemical order-disorder transitions, and associated thermo-elasto-plastic behaviors, are explicitly considered and bridged by accounting for their chronological-spatial differences. The influences of processing parameters (notably the beam power and scan speed) are analyzed and discussed with respect to distinct aspects, including the size of the fusion zone, the development of residual stress and accumulated plastic strain, the \(\gamma/\gamma^{\prime}\) transition under the residual stress, and the resulting magnetic coercivity of manufactured parts. It is anticipated that the presented work could provide transferable insights into selecting processing parameters and optimizing routines for producing permalloy using AM, and deliver a comprehensive understanding of tailoring the hysteresis of soft magnetic materials in unconventional processing.
## 2 Results
### Multiphysics-multiscale simulation scheme
In this work, we consider SLS as the AM approach due to its relatively low energy input as compared to other methods, like SLM. SLS allows us to obtain a stable fusion zone and thus to gain better control of the residual stress development since we don't need to consider melting and evaporating processes and the associated effects, such as the keyholing and Marangoni convection. The microstructure of SLS-processed parts is porous, which allows us to explore the effects of lack-of-fusion pores on the development of residual stress and plastic strain. Based on an overall consideration of all possible phenomena involved in the SLS of the Fe-Ni permalloy, two chronological-spatial scales are integrated in this work:
1. On the mesoscale, with the characteristic length of several 100 \(\upmu\)m, powders are fused/sintered around the laser spot, creating the fusion zone. Featured phenomena such as partial/full melting, necking, and shrinkage among powders can be observed. High gradients in the temperature field are also expected due to laser scanning and rapid cooling of the post-fusion region. By choosing a typical scan speed of 100 \(\mathrm{mm\,s^{-1}}\), this stage lasts only 10 ms.
2. On the nanoscale with a characteristic length well below 1 \(\upmu\)m, the chemical order-disorder (\(\gamma/\gamma^{\prime}\)) transition can be observed once the on-site temperature is below the transition temperature, as presented in Fig. 0(c). Owing to the difference in thermodynamic stability, redistribution of the chemical constituents by inter-diffusion between \(\gamma\) and \(\gamma^{\prime}\) phases is coupled to the phase transition. Due to the extremely restricted kinetics, it would cost several hundred hours of annealing to have the \(\gamma^{\prime}\) phase formation in a mesoscopic size [15, 29, 18].
We explicitly consider a three-stage processing route consisting of an SLS, a cooling, and an annealing stage. As shown in the inset of Fig. 2, the domain temperature will rise from a pre-heating temperature (\(T_{0}\)) during the SLS stage. After that, the whole powder bed would gradually cool down to \(T_{0}\). Finally, the processed powder bed enters the annealing stage at \(T_{0}\) where the \(\gamma/\gamma^{\prime}\) transition continues. The cooling stage lasts three times longer than the SLS stage, and the sequential annealing stage takes far longer than the two other stages. Taking a 500 \(\upmu\)m scan section with a typical scan speed of 100 \(\mathrm{mm\,s^{-1}}\) as an example, the SLS and cooling stages would last 5 and 20 ms, respectively, and the annealing time is on the order of 100 hours. Remarkably, the inhomogeneous and time-varying temperature field in the mesoscopic powder bed can be treated as uniform and nearly constant for the nanoscopic \(\gamma/\gamma^{\prime}\) transition, as the local heating and cooling stages induced by laser scan are negligible compared to the time required for the \(\gamma/\gamma^{\prime}\) transition. Nonetheless, the long-term mechanical response (notably the residual stress) remains after the first two stages. It would further influence the \(\gamma/\gamma^{\prime}\) transition during the annealing stage and the resultant magnetic hysteresis behavior by electro-magnetic coupling [30, 21, 22].
The simulations are arranged in a subsequent scheme to recapitulate the aforementioned characteristics on different scales while balancing the computational cost-efficiency, as shown in Fig. 2. Accepting that heat transfer is only strongly coupled with microstructure evolution (driven by diffusion and underlying grain growth) but weakly coupled with mechanical response during the SLS-process stage, we employ the non-isothermal phase-field model proposed in our former work [27] to simulate the coupled thermo-structural evolution, and perform the subsequent thermo-elasto-plastic calculations based on the resulting transient mesoscopic structure (hereinafter called mesostructure) and temperature field from the SLS simulations. In other
words, mechanical stress and strain are developed under the quasi-static microstructure and temperature field. This is based on the fact that the thermo-mechanical coupling strength is negligible for most metals [31], unlike the strong inter-coupling among mass as well as heat transfer and grain growth [32, 33]. From a kinetic point of view, the propagation of elastic waves is generally faster than thermal conduction and diffusion-based mechanisms, like grain coarsening and solid-state phase transition. Next, taking the nanoscopic domains that are sufficiently small and can be regarded as "homogenized points" on the mesostructure, we transfer the historical quantities on the sampled coordinates, notably the temperature and stress histories, to the subdomains as the transient uniform fields and perform the non-isothermal phase-field simulations of the \(\gamma/\gamma^{\prime}\) transitions. This also means that mesoscopic temperature gradients are disregarded in the nanoscopic simulations in this work. Finally, we connect the nanoscopic Ni concentration and stress field to the magnetic properties, incl. the saturation magnetization \(M_{\text{s}}(X_{\text{Ni}})\), magneto-crystalline anisotropic strength \(K_{\text{u}}(X_{\text{Ni}})\), and magnetostriction constants \(\lambda_{100}(X_{\text{Ni}})\) and \(\lambda_{111}(X_{\text{Ni}})\), and perform the magneto-elastic coupled micromagnetic simulations for the local hysteresis. Both non-isothermal phase-field and thermo-elasto-plastic models are numerically implemented by the finite element method (FEM), which allows handling the geometric complexity and adaptive meshing at considerable numerical accuracy. The micromagnetic models, on the other hand, are implemented by the finite difference method (FDM) to allow for GPU-accelerated high-throughput calculations [34]. Simulation domains are collectively illustrated in Fig. S2. Apart from the main workflow, the proposed scheme also involves other methods, such as the discrete element method (DEM) and CALculation of PHAse Diagrams (CALPHAD) approach, to deliver information such as the powder size (\(R_{i}\)) and center (\(O_{i}\)) distributions and the thermodynamic/kinetic parameters that required in the simulations. Details regarding the modeling and the simulation setup are explicitly given in the _Method_ section.
### SLS single scan simulations and coupled thermal-microstructural evolution
Here we present the results of SLS single-scan simulations of a Fe\({}_{21.5}\)Ni\({}_{78.5}\) powder bed in an Argon atmosphere. A powder bed with an average thickness of \(\bar{h}=25\) um is placed on a substrate with the same composition and a thickness of \(240\) um. The powder size distribution is presented in Fig. S1b. The simulation domain has a geometry of \(250\times 500\times 300\) um. The melting point of Fe\({}_{21.5}\)Ni\({}_{78.5}\) is \(T_{\text{M}}=1709\) K, and the initial temperature of the powder bed is set as the pre-heating/annealing temperature \(T_{0}=0.351T_{\text{M}}=600\) K. The temperature at the substrate bottom is set as \(T_{0}\) throughout the simulations. \(D_{\text{FWE2}}=200\) um (i.e., the full width at \(1/e^{2}\)) is adopted as the nominal diameter of the laser spot, within which around \(86.5\%\) of the power is concentrated. The full width at half maximum intensity (FWHM) is then calculated as \(D_{\text{FWHM}}=0.588D_{\text{FWE2}}=117.6\) um, characterizing \(50\%\) power concentration within the spot.
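The conversion between the two beam-diameter conventions follows from the Gaussian intensity profile; a minimal check with the nominal spot size quoted above is:

```python
import numpy as np

# FWHM of a Gaussian spot from its 1/e^2 diameter: D_FWHM = sqrt(ln2/2) * D_FWE2 ~ 0.588 * D_FWE2
D_FWE2 = 200e-6                           # m, full width at 1/e^2 (from the text)
factor = np.sqrt(np.log(2.0) / 2.0)       # ~0.5887 for a Gaussian intensity profile
D_FWHM = factor * D_FWE2
print(f"D_FWHM = {D_FWHM*1e6:.1f} um")    # ~117.7 um
```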
Fig. 3a shows the evolution of simulated microstructure for a single scan of \(P=30\) W and \(v=100\) mm s\({}^{-1}\). In the overheated region, particles may be fully/partially melted. The tendency to reduce the total surface energy leads to the motion of the localized melt flowing from convex to concave points, which contributes to the fusion of the powders. In regions with \(T\leq T_{\text{M}}\), no melting occurs. However, the temperature of the particles is sufficiently high to induce diffusion, evidenced by the formation of necking between adjacent particles. Since the local temperature is well above the \(\gamma/\gamma^{\prime}\) transition temperature during the SLS processes, there is no ordering at this stage.
The temperature profiles of the powder bed for different beam powers and scan speeds are presented in Fig. 3b, where the temperature field strongly depends on the particle morphology, as the isotherms are concentrated around the surface concaves and the sintering necks among particles. Apart from this, one can observe relatively dense isotherms at the front and the bottom of the overheated region, indicating a large temperature gradient. While the laser spot is moving, this temperature gradient becomes smaller, as the isotherms tend to be sparser. This indicates a fast heating process followed by slow cooling. Comparing the beam spot for different processing parameters presented in Fig. 3b shows that increasing the beam power and/or decreasing the scan speed enhances the heat accumulation at the beam spot, resulting in an overheated region of increasing size. For the same scan speed \(v=100\) mm s\({}^{-1}\), increasing the beam power from \(P=30\) W (Fig. 3a) to \(35\) W (Fig. 3b) leads to a significantly larger overheated region. On the other hand, for the same power of \(P=30\) W, increasing the scan speed from \(v=100\) mm s\({}^{-1}\) (Fig. 3a) to \(150\) mm s\({}^{-1}\) (Fig. 3b) leads to a smaller overheated region.
### Development of stress and plastic strain during SLS single scan
In order to analyze the stress evolution during SLS, we use isotropic hardening plasticity to describe Fe\({}_{21.5}\)Ni\({}_{78.5}\) with temperature-dependent mechanical properties, including thermal expansion coefficient \(\alpha\), Young's modulus \(E\), yield stress \(\sigma_{y}\), and hardening
tangent modulus \(E_{\text{t}}\). Spatial interpolation of the mechanical properties according to the order parameter \(\rho\), which is \(\rho=1\) in the materials and \(\rho=0\) in the atmosphere/pores, is also performed to consider structural inhomogeneities due to pore formation. Details are described in the section on _Methods_. The domain-average quantities are defined as
\[\sigma_{\text{e}}^{\text{D}}=\frac{\int_{\Omega}\rho\,\sigma_{\text{e}}\,\text{d}\Omega}{\int_{\Omega}\rho\,\text{d}\Omega},\qquad p_{\text{e}}^{\text{D}}=\frac{\int_{\Omega}\rho\,p_{\text{e}}\,\text{d}\Omega}{\int_{\Omega}\rho\,\text{d}\Omega},\qquad T^{\text{D}}=\frac{\int_{\Omega}\rho\,T\,\text{d}\Omega}{\int_{\Omega}\rho\,\text{d}\Omega}, \tag{1}\]
where \(\Omega\) is the simulation domain volume, and \(\rho\) is the order parameter indicating the substance. \(\sigma_{\text{e}}\) is the von Mises stress and \(p_{\text{e}}\) is the accumulated (effective) plastic strain. To eliminate the boundary effects, which lead to heat accumulation on the boundary that intersects the scan direction, a subdomain with a geometry of \(250\times 250\times 250\) um is selected from the center of the processed domain, onto which the transient \(T\) and \(\rho\) fields are mapped, as shown in Fig. 2.
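A discrete analogue of the \(\rho\)-weighted averages in Eq. (1) can be written as the following sketch on hypothetical voxelized fields:

```python
import numpy as np

# Rho-weighted volume averages, the discrete analogue of Eq. (1).
# All arrays below are hypothetical placeholders for voxelized simulation fields.
rng   = np.random.default_rng(0)
rho   = rng.uniform(0.0, 1.0, size=(32, 32, 32))    # substance indicator
sig_e = rng.uniform(0.0, 400e6, size=rho.shape)     # von Mises stress, Pa
p_e   = rng.uniform(0.0, 1e-2, size=rho.shape)      # accumulated plastic strain
T     = rng.uniform(600.0, 1700.0, size=rho.shape)  # temperature, K

def domain_average(field, rho, dV=1.0):
    """Substance-weighted volume average: sum(rho*field*dV)/sum(rho*dV)."""
    return np.sum(rho * field * dV) / np.sum(rho * dV)

print(domain_average(sig_e, rho), domain_average(p_e, rho), domain_average(T, rho))
```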
Fig. 4a presents the evolution of the domain-average von Mises stress during the SLS and cooling stages. \(\sigma_{\text{e}}\) develops with the temperature rise due to the generation of laser-induced heat in the powder bed. However, once the laser spot moves into the domain, followed by the overheated region, \(\sigma_{\text{e}}^{\text{D}}\) drops even as \(T^{\text{D}}\) continues rising to its maximum. This is because the points inside the overheated region lose their stiffness, as the material is fully/partially melted, thereby presenting zero stress (Fig. 4b1). The surroundings of the overheated region also present relatively low stress due to the sufficient reduction of stiffness at high temperatures. When the laser spot moves out of the domain, the thermal stress starts to develop along with the cooling of the domain (Fig. 4b2-4). The stress around the concave morphologies on the powder bed, incl. concaves on the surface and sintering necks among particles, rises faster than that around the traction-free convex morphologies on the surface and the unfused powders away from the fusion zone. This may be attributed to the locally high temperature gradient around the concave morphologies during the SLS process and, thereby, the strong thermal traction. At the end of the cooling, the convex morphologies and unfused powders have relatively lower stress developed, while locally concentrated stress can be observed around the concave morphologies on the powder bed, like the surface concaves and sintering necks among particles (Fig. 4b4). Due to the convergence of \(\sigma_{\text{e}}\) vs. time in the cooling stage, the stress at the end of the cooling stage (\(t=20\) ms) will be regarded as the residual stress in the following discussions.
The development of the accumulated plastic strain \(p_{\text{e}}\) presents an overall increasing tendency vs. time during the SLS and cooling stages, as shown in Fig. 4c. To emulate the effects of full/partial melting, we reset \(p_{\text{e}}\) in the overheated region (\(T>T_{\text{M}}\)). This reset does not, however, influence the accumulation of \(p_{\text{e}}\) outside of the overheated region (Fig. 4d1), in contrast to \(\sigma_{\text{e}}\), which suffers reduction not only inside but also outside of the overheated region due to the loss of stiffness at high temperature. As a result, the growth of \(p_{\text{e}}^{\text{D}}\) only slows down when the laser spot moves into the domain, rather than decreasing as \(\sigma_{\text{e}}^{\text{D}}(t)\) does. The continuous accumulation of \(p_{\text{e}}\) also results in the distinctively concentrated \(p_{\text{e}}\) at the fusion zone's outer boundary (Fig. 4d2-d4), owing to the high temperature gradient at the front and bottom of the overheated region during the SLS, where the existing high thermal stress locally activates plastic deformation and thereby contributes to the rise of \(p_{\text{e}}\). The same reason explains the concentrated \(p_{\text{e}}\) around the pores and concave morphologies, like sintering necks, near the fusion zone. In contrast, unfused powders and the substrate away from the fusion zone present nearly no accumulation of \(p_{\text{e}}\), as the onsite thermal stress, due to the relatively low local temperature, is not high enough to initiate plastification of the material.
### Nanoscopic \(\gamma/\gamma^{\prime}\) transition with residual stress
Before sampling many points inside the fusion zone for studying the subsequent \(\gamma/\gamma^{\prime}\) transition and carrying out micromagnetic simulations, we first examined five selected points from the middle section of the SLS simulation domain, considering the repeatability along the \(x\)-direction (SD). Profiles of \(\sigma_{\text{e}}\) and \(p_{\text{e}}\) on this selected middle section are presented in Fig. 5a. These five points are taken from the profiling paths along the \(z\)-direction (BD) and the \(y\)-direction, and \(\sigma_{\text{e}}\) and \(p_{\text{e}}\) along these two paths are explicitly presented in Fig. 5b1-b2. \(\sigma_{\text{e}}\) gradually rises along the \(z\)-profiling path as the gap between the normal stresses widens (specifically between \(\sigma_{xx}\) and \(\sigma_{zz}\), and between \(\sigma_{yy}\) and \(\sigma_{zz}\), since there is almost no difference between \(\sigma_{xx}\) and \(\sigma_{yy}\)). \(\sigma_{\text{e}}\) reaches a peak around the fusion zone boundary (FZB) and then decreases. Notably, there is a reversal of \(\sigma_{zz}\) from positive (tension) to negative (compression) across the FZB. Similarly, \(p_{\text{e}}\) increases to a peak around the FZB and then decreases along the \(z\)-profiling path, and reversed components of \(\epsilon_{\text{pl}}\) are observed across the FZB. This implies a bending of the SLS-processed mesostructure caused by the thermal contraction between the fusion zone and the substrate, where the high temperature gradient is present. Along the \(y\)-profiling path, both \(\sigma_{\rm e}\) and \(p_{\rm e}\) present no monotonic tendencies, being influenced by the local morphologies, yet both still reach peaks around the FZB.
The Lamé stress ellipsoids are illustrated in Fig. 5c to visualize the stress states of the selected points, with the directions of the principal stresses denoted. Points near the surface and in the fusion zone (P\({}_{1}\), P\({}_{2}\), P\({}_{4}\), and P\({}_{5}\)) are under tensile stress states, while point P\({}_{3}\) near the FZB has one negative (compressive) principal stress along BD due to the bending caused by the thermal contraction in the fusion zone. The Ni concentration and the nanoscopic stress redistribution due to the \(\gamma/\gamma^{\prime}\) transition are simulated in the middle section perpendicular to SD as well (Fig. S2a\({}_{3}\)). This is also coherent with the diffusion-controlled 2D growth of the \(\gamma^{\prime}\) phase experimentally examined by the Johnson-Mehl-Avrami-Kolmogorov (JMAK) theory [29]. We initiate the \(\gamma^{\prime}\) nuclei randomly but with a minimum spacing of 100 nm, according to the experimental observation in Fig. 1c, using Poisson disk sampling [35]. A relatively long annealing time of 1200 h is adopted to obtain sufficient \(\gamma^{\prime}\) phase formation, as presented in Fig. 5d. The Ni concentration (\(X_{\rm Ni}\)) at the centers of the grown \(\gamma^{\prime}\) phase is relatively low, close to the equilibrium value 0.764 at \(T_{\gamma/\gamma^{\prime}}=766\) K. With the growth of the \(\gamma^{\prime}\) phase, Ni accumulates at the interface due to the relatively large \(\gamma/\gamma^{\prime}\) interface mobility compared with the inter-diffusive mobility of the Ni species, agreeing with the diffusion-controlled growth examined experimentally [29]. Among the points, P\({}_{3}\), with one principal compressive stress, has relatively more \(\gamma^{\prime}\) grown after annealing. This is because the growing \(\gamma^{\prime}\) phase, with relatively smaller lattice parameters (in other words, inducing a shrinkage eigenstrain inside the \(\gamma^{\prime}\) phase), is mechanically preferred under compressive stress. As P\({}_{2}\) and P\({}_{5}\) have similar stress states, they show similar \(\gamma^{\prime}\) phase formation. P\({}_{1}\) has relatively less \(\gamma^{\prime}\) phase grown, as it has the highest tensile normal stresses (\(\sigma_{xx}\), \(\sigma_{yy}\), and \(\sigma_{zz}\)) among the points, as shown in Fig. 5b\({}_{1}\).
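A simple dart-throwing variant of Poisson disk sampling (the actual sampler of Ref. [35] may differ) that enforces the 100 nm minimum spacing could look as follows; the domain size and trial count are illustrative assumptions:

```python
import numpy as np

# Dart-throwing Poisson disk sampling: accept a candidate nucleus only if it is
# at least r_min away from all previously accepted nuclei.
def poisson_disk_2d(box, r_min, n_trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(n_trials):
        p = rng.uniform(0.0, box, size=2)
        if all(np.linalg.norm(p - q) >= r_min for q in pts):
            pts.append(p)
    return np.array(pts)

nuclei = poisson_disk_2d(box=1.0e-6, r_min=100e-9)   # 1 um x 1 um illustrative domain
print(f"{len(nuclei)} gamma' nuclei placed with >= 100 nm spacing")
```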
### Magnetic hysteresis behavior of stressed nanostructures
The nanoscopic distributions of stress and Ni concentration are imported from the results of the \(\gamma/\gamma^{\prime}\) transition, in which both the long-range and short-range factors are embodied. The magneto-elastic coupling is implemented by considering an extra term \(f_{\rm em}\) in the magnetic free energy density along with the exchange, magneto-crystalline anisotropy, magnetostatic, and Zeeman contributions [30, 36, 37], as described in the section on _Methods_. To consider contributions from normal and shearing stresses, a homogeneous in-plane configuration for the normalized magnetization \(\mathbf{m}\) is oriented at an angle of \(\vartheta=45^{\circ}\) relative to the crystalline orientation \(\mathbf{u}\), which is assumed to be along the \(z\)-direction. Meanwhile, an effective coupling field \(\mathbf{B}_{\rm em}=-\partial f_{\rm em}/\partial(M_{\rm s}\mathbf{m})\) is calculated for analyzing the magneto-elastic coupling effects from a vectorial perspective (Fig. 5e). It should be noticed that, within the range of segregated \(X_{\rm Ni}\) from 0.781 to 0.810 as shown in Fig. 5d, \(\lambda_{100}\) ranges from 1.274 \(\times 10^{-5}\) to 5.249 \(\times 10^{-6}\), presenting a positive shearing magnetostriction. On the other hand, \(\lambda_{111}\) inside the \(\gamma^{\prime}\) phase (\(X_{\rm Ni}=0.781\sim 0.785\)) and in the \(\gamma\) matrix (\(X_{\rm Ni}=0.785\)) has positive values, ranging from 2.41 \(\times 10^{-6}\) to 1.96 \(\times 10^{-6}\) as \(X_{\rm Ni}\) shifts from 0.781 to 0.785. On the \(\gamma/\gamma^{\prime}\) interface with \(X_{\rm Ni}=0.810\), however, a negative value of \(\lambda_{111}=-8.429\) \(\times 10^{-7}\) is obtained. This implies that there is a negative normal magnetostriction on the \(\gamma/\gamma^{\prime}\) interfaces and a positive normal magnetostriction in the bulk of the \(\gamma\) and \(\gamma^{\prime}\) phases. Due to the existing shrinkage eigenstrain, there are locally high \(f_{\rm em}\) contributions inside the \(\gamma^{\prime}\) phase compared to the \(\gamma\) matrix, which causes strong magneto-elastic coupling effects. Among all points P\({}_{1}\)-P\({}_{5}\), P\({}_{1}\) has comparably lower magneto-elastic coupling in both \(\gamma^{\prime}\) and \(\gamma\) phases. Owing to similar stress states, P\({}_{2}\), P\({}_{4}\), and P\({}_{5}\) have similar profiles of \(f_{\rm em}\) and \(\mathbf{B}_{\rm em}\), where P\({}_{5}\) has slightly stronger coupling effects. Remarkably, P\({}_{2}\), P\({}_{4}\), and P\({}_{5}\) all have the local \(\mathbf{B}_{\rm em}\) lying at an angle of about 135\({}^{\circ}\), as emphasized by the dash-dotted circle in Fig. 5e. As for P\({}_{3}\), however, the angle between \(\mathbf{m}\) and \(\mathbf{B}_{\rm em}\) further increases to 142-165\({}^{\circ}\), together with a larger magnitude (\(|\mathbf{B}_{\rm em}|\)) compared to the other points, demonstrating enhanced reversal effects on \(\mathbf{m}\).
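For illustration, a hedged sketch of such a coupling term is given below, assuming the standard cubic magnetoelastic energy density; the exact form of \(f_{\rm em}\) adopted in the _Methods_ section may differ, and all numbers here are purely illustrative:

```python
import numpy as np

# Assumed cubic magnetoelastic energy density (a common textbook form):
#   f_em = -3/2 lam100 * sum_i sig_ii m_i^2 - 3 lam111 * sum_{i<j} sig_ij m_i m_j
# and the effective coupling field B_em = -d f_em / d(M_s m).
def f_em(m, sig, lam100, lam111):
    normal = sig[0, 0]*m[0]**2 + sig[1, 1]*m[1]**2 + sig[2, 2]*m[2]**2
    shear  = sig[0, 1]*m[0]*m[1] + sig[1, 2]*m[1]*m[2] + sig[0, 2]*m[0]*m[2]
    return -1.5*lam100*normal - 3.0*lam111*shear

def B_em(m, sig, lam100, lam111, M_s):
    g = np.array([
        3.0*lam100*sig[0, 0]*m[0] + 3.0*lam111*(sig[0, 1]*m[1] + sig[0, 2]*m[2]),
        3.0*lam100*sig[1, 1]*m[1] + 3.0*lam111*(sig[0, 1]*m[0] + sig[1, 2]*m[2]),
        3.0*lam100*sig[2, 2]*m[2] + 3.0*lam111*(sig[1, 2]*m[1] + sig[0, 2]*m[0]),
    ])
    return g / M_s          # tesla, since -d f_em/d m divided by M_s

m   = np.array([np.sin(np.pi/4), 0.0, np.cos(np.pi/4)])   # 45 deg to the z-axis (u)
sig = np.diag([200e6, 150e6, -50e6])                        # Pa, illustrative stress state
print(f_em(m, sig, 1.0e-5, 2.0e-6))
print(B_em(m, sig, 1.0e-5, 2.0e-6, M_s=8.0e5))
```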
The hysteresis curves of the nanostructures also reflect the \(X_{\rm Ni}\)-dependence of \(K_{\rm u}\), \(\lambda_{100}\) and \(\lambda_{111}\). For comparison, the hysteresis curves simulated on a stress-free reference with a homogeneous \(X_{\rm Ni}=0.785\) are also plotted. In order to take numerical fluctuations into account, ten cycles of the hysteresis were examined for each nanostructure/reference, with the averaged one presented in Fig. 5f. Notice that \(K_{\rm u}\) ranges from \(-0.137\) to \(-0.364\) kJ m\({}^{-3}\) according to Fig. 1a, which can be regarded as an easy-plane anisotropy, as the magnetization prefers to orient in the plane perpendicular to BD. Comparing the selected points, P\({}_{2}\) and P\({}_{5}\) have similar coercivities, 0.55 mT for P\({}_{2}\) and 0.58 mT for P\({}_{5}\), as these two points have similar stress states. P\({}_{4}\) and P\({}_{1}\) have coercivities of 0.26 mT and 0.09 mT, respectively. However, the coercivity at P\({}_{3}\) is infinitesimal. This is phenomenologically due to the strong coupling field \(\mathbf{B}_{\rm em}\), which reverses \(\mathbf{m}\) at a relatively low external field, as shown in Fig. 5e. As the coupling field \(\mathbf{B}_{\rm em}\) is directly related to the stress state, the relatively strong shear stress implied by the Lamé stress ellipsoid at P\({}_{3}\) (Fig. 5c) may be one of the reasons for the infinitesimal coercivity. This should be further examined in future studies.
## 3 Discussion
In the following, we discuss the relation between the fusion zone geometry and the processing parameters, notably the beam power \(P\) and the scan speed \(v\). It should be noticed that the size of the fusion zone is directly linked to the overheated region, which varies with the different combinations of \(P\) and \(v\), as shown in Fig. 3b. The usage of the conserved OP \(\rho\) in representing the powder bed morphologies and the coupled kinetics between \(\rho\) and the local temperature allows us to simulate the formation of the fusion zone in a way close to the realistic setup of SLS. Here the indicator \(\xi\) is utilized to mark the fusion zone; it is initialized as zero and irreversibly turns to one once the temperature exceeds \(T_{\rm M}\) [38]. Two characteristic sizes, namely the fusion zone width \(b\) and the fusion zone depth \(d\), are defined by the maximum width and depth of the fusion zone, as shown in the inset of Fig. 6. Fig. 6a and Fig. 6b present the maps of \(b\) and \(d\) vs. \(P\) and \(v\), with the isolines indicating identical volumetric specific energy input \(U_{\rm V}\), which is calculated as
\[U_{\rm V}=\frac{\eta P}{\bar{h}\,w\,v}, \tag{2}\]
where the width of the laser scan track \(w\) takes \(D_{\rm FWHM}=117.6\) um and the average thickness of the powder bed is \(\bar{h}=25\) um. Since 50% of the total power is concentrated in the spot with \(D_{\rm FWHM}\), the efficiency takes \(\eta=0.5\). According to the observed geometry of the fusion zone, the maps can be divided into three regions: \(P\) and \(v\) located in region (R1) result in a continuous fusion zone, as shown in Fig. 6c1, c3-c6. The depth of the fusion zone in (R1) normally penetrates into the substrate, implying the formation of an overheated region of considerable size, within which melting-resolidification takes the dominant role. \(P\) and \(v\) located in region (R2) generate small and discontinuous fusion zones, with two typical geometries shown in Fig. 6c2 and c7. Their limited size implies the typical partial melting and liquid-state sintering mechanism, as the melt flow is highly localized and can only help the bonding among a subset of powder particles. It is worth noting that these discontinuous fusion zones should be attributed to the thermal inhomogeneity induced by the local stochastic morphology rather than to mechanisms like the Plateau-Rayleigh instability and balling, where a significant melting phenomenon is required [39, 40, 41]. In the region labeled as (R3), no fusion zone is generated under the chosen \(P\) and \(v\), and the solid-state sintering process remains dominant in the powder bed.
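For the reference parameter set \(P=30\) W and \(v=100\) mm s\({}^{-1}\), Eq. (2) evaluates as in the following sketch (all values taken from the text):

```python
# Volumetric specific energy input of Eq. (2) for one parameter combination.
eta   = 0.5           # absorption/concentration efficiency within D_FWHM
P     = 30.0          # W, beam power
v     = 100e-3        # m/s, scan speed
w     = 117.6e-6      # m, track width taken as D_FWHM
h_bar = 25e-6         # m, average powder-bed thickness

U_V = eta * P / (h_bar * w * v)          # J/m^3
print(f"U_V = {U_V/1e9:.1f} J/mm^3")     # ~51 J/mm^3
```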
Fig. 7a and b present the maps of average residual stress \(\bar{\sigma}_{\rm e}\) and plastic strain \(\bar{p}_{\rm e}\) inside the fusion zone vs. \(P\) and \(v\), with the specific energy input \(U_{\rm V}\). The average residual stress and plastic strain inside the fusion zone are calculated with the fusion zone indicator \(\xi\) from the relations
\[\bar{\sigma}_{\rm e}=\frac{\int_{\Omega}\xi\,\sigma_{\rm e}\,\mathrm{d}\Omega}{\int_{\Omega}\xi\,\mathrm{d}\Omega},\qquad\bar{p}_{\rm e}=\frac{\int_{\Omega}\xi\,p_{\rm e}\,\mathrm{d}\Omega}{\int_{\Omega}\xi\,\mathrm{d}\Omega}. \tag{3}\]
Since the focus is on the properties inside the fusion zone, only the results with \(P\) and \(v\) located in regions (R1) and (R2), i.e., producing continuous and discontinuous fusion zones, respectively, are selected and discussed. Generally, \(\bar{\sigma}_{\rm e}\) increases along the direction of increasing specific energy input \(U_{\rm V}\), which also enlarges the fusion zone. In other words, an enlarged fusion zone experiences a larger thermal contraction mismatch with the substrate, leading to a rise of the residual stress inside the fusion zone (compare Fig. 7c1-c3 and c1, c4-c5). It is worth noting that \(\bar{\sigma}_{\rm e}\) presents a rapid increase for \(U_{\rm V}\) below \(120\) J mm\({}^{-3}\), as shown in Fig. S3. When \(U_{\rm V}>60\) J mm\({}^{-3}\), there is almost no further increase of \(\bar{\sigma}_{\rm e}\), which is reflected by the sparse contours beyond the isoline \(U_{\rm V}=60\) J mm\({}^{-3}\). This implies a saturation of the residual stress in the fusion zone when \(U_{\rm V}\) is sufficiently large, even though the fusion zone continues to enlarge with a further increase of \(U_{\rm V}\).
The map of \(\bar{p}_{\rm e}\) is non-monotonic in \(P\), which differs from the monotonic dependence of \(\bar{\sigma}_{\rm e}\) on \(U_{\rm V}\): \(\bar{p}_{\rm e}\) reaches a minimum at \(P=30\) W for every selected \(v\). This may reflect the competition between the enlargement of the fusion zone and the accumulation of plastic strain inside it. In the low-power range, the accumulation of plastic strain is slower than the growth of the fusion zone, as \(\bar{p}_{\rm e}\) decreases with increasing \(P\). When \(P>30\) W, the accumulation of plastic strain is faster than the growth of the fusion zone, as \(\bar{p}_{\rm e}\) increases with increasing \(P\). Interestingly, such a tendency is not observed when reducing \(v\) at fixed \(P\), as \(\bar{p}_{\rm e}\) always grows monotonically with decreasing \(v\) at every selected \(P\).
Moreover, \(\sigma_{\rm e}\) rises drastically across the former boundary between the powder bed and the substrate for all selected combinations of \(P\) and \(v\), as shown in Fig. 7c1-c5. This is due to the different mechanical responses of the porous powder bed and the homogeneous substrate to the thermal stress formed together with the overheated zone. As the size of the fusion zone enlarges, the high concentration of \(\sigma_{\rm e}\) extends further into the interior of the substrate, as the thermal contraction between the fusion zone and the substrate is enhanced. Meanwhile, a relatively low concentration of \(p_{\rm e}\) is observed inside the fusion zone, while a relatively higher concentration is located at the fusion zone's outer boundary, reflecting the continuing accumulation of \(p_{\rm e}\) in these regions, as shown in Fig. 7d\({}_{1}\)-d\({}_{5}\).
Many points located on the mid-section of the fusion zones were sampled, considering their repeatability along the \(x\)-direction (SD), as shown in Fig. S4. This also eliminates boundary effects and distractions from quantities outside the fusion zone. Nanoscopic \(\gamma/\gamma^{\prime}\) transition and hysteresis simulations were subsequently performed on each point. The average coercivity of the fusion zone \(\bar{H}_{\rm c}\) is then calculated directly as the point-wise average of the resulting local hysteresis. At least three hysteresis cycles were performed on each point to reduce fluctuations due to the numerical scheme. In Fig. 8a, we present the map of the average coercivity \(\bar{H}_{\rm c}\) with respect to \(P\) and \(v\). The resulting coercivities under all examined \(P\) and \(v\) lie in the experimentally measured range (from 0.06 to 4 mT). In the map, the lower-right region shows a smaller coercivity, and the upper-left region shows a higher coercivity. Increasing \(P\) from 27.5 to 35 W raises \(\bar{H}_{\rm c}\) from 0.35 to 0.41 mT (by 17%) when \(v=100\) mm s\({}^{-1}\), and decreasing \(v\) from 125 to 50 mm s\({}^{-1}\) raises \(\bar{H}_{\rm c}\) from 0.34 to 0.44 mT (by 29%) when \(P=30\) W. This is mainly due to the increasing fraction of regions with high local coercivity in the fusion zone for increasing \(P\) and decreasing \(v\) (compare Fig. 8c\({}_{1}\), c\({}_{2}\), c\({}_{3}\), and c\({}_{1}\), c\({}_{4}\), c\({}_{5}\), respectively). It is also evident that the points with high local \(H_{\rm c}\) also possess a high local volume fraction of the \(\gamma^{\prime}\) phase \(\Psi_{\gamma^{\prime}}\) (compare Fig. 8c\({}_{1}\)-c\({}_{2}\) with d\({}_{1}\)-d\({}_{5}\)). This implies an enhanced magneto-elastic coupling effect at high \(\Psi_{\gamma^{\prime}}\), as discussed in Fig. 5e. However, the effect of the local residual stress on the pattern of the high local \(H_{\rm c}\) region should also be stressed, as the points with low local \(H_{\rm c}\) appear where the local \(\sigma_{\rm e}\) changes drastically, e.g., around the former boundary between the powder bed and the substrate (compare Fig. 8c\({}_{1}\)-c\({}_{2}\) with e\({}_{1}\)-e\({}_{5}\), especially c\({}_{3}\) with e\({}_{3}\) and c\({}_{4}\) with e\({}_{4}\)). Moreover, the origin of the "islands" of low local \(H_{\rm c}\) located in regions of high local \(\sigma_{\rm e}\) and \(\Psi_{\gamma^{\prime}}\) remains unclear. They may be subject to a stress state similar to that of P\({}_{3}\) in Fig. 5, where nearly zero coercivity is obtained. Nonetheless, a data-driven investigation should be conducted in upcoming work to connect the local stress state to the coercivity.
To examine the dependence of \(\bar{H}_{\rm c}\) from the phenomenological aspect, we first performed a nonlinear regression analysis of \(\bar{H}_{\rm c}\) on the specific energy input \(U_{\rm V}\). The results are presented in Fig. 8b. Notably, \(\bar{H}_{\rm c}\) relates to \(U_{\rm V}\) by an allometric scaling rule, i.e., \(\bar{H}_{\rm c}=C_{H}(U_{\rm V})^{I_{H}}\) with \(C_{H}\) and \(I_{H}\) as the parameters. The analysis gives a correlation coefficient \(R^{2}=86.54\%\), with relatively large uncertainty located in the low and high \(U_{\rm V}\) regions. The regressed \(I_{H}=0.31\pm 0.03\) also implies a diminishing scaling of \(\bar{H}_{\rm c}\) by \(U_{\rm V}\), as the increment of \(\bar{H}_{\rm c}\) decreases with increasing \(U_{\rm V}\). However, similar to the case of \(\bar{\sigma}_{\rm e}\), such a simple scaling rule between the specific energy input and the coercivity might be challenged, since \(U_{\rm V}\) may not uniquely determine \(\bar{H}_{\rm c}\) for the SLS-processed part, as the isolines of \(U_{\rm V}\) in Fig. 8a evidently intersect with the contours of \(\bar{H}_{\rm c}\).
Regression analysis of \(\bar{H}_{\rm c}\) on \(\bar{\sigma}_{\rm e}\) was also performed. An exponential growth rule was chosen based on the tendency of \(\bar{H}_{\rm c}(\bar{\sigma}_{\rm e})\), i.e., \(\bar{H}_{\rm c}=A_{\sigma}\exp(\frac{\bar{\sigma}_{\rm e}}{S_{\sigma}})+H_{0}\) with \(A_{\sigma}\), \(S_{\sigma}\) and \(H_{0}\) as the parameters. Here \(A_{\sigma}\) is adopted as the growth pre-factor and \(S_{\sigma}\) as the stress scale. When \(\bar{\sigma}_{\rm e}=0\), we have \(\bar{H}_{\rm c}|_{\bar{\sigma}_{\rm e}=0}=A_{\sigma}+H_{0}\approx H_{0}\), meaning that \(H_{0}\) can be regarded as the stress-free coercivity. The analysis gives a relatively higher correlation coefficient \(R^{2}=91.31\%\) cf. the one on \(U_{\rm V}\), with relatively large uncertainty located in the low \(U_{\rm V}\) region. Compared to the homogeneous stress-free reference in Fig. 5f, which is 0.45 mT, this \(H_{0}\) is around 24% smaller, owing to the contributions from the infinitesimal-coercivity points. It also shows a rapid growth of \(\bar{H}_{\rm c}\) beyond \(\bar{\sigma}_{\rm e}=206\) MPa, with around a 40% increment, demonstrating the evident effect of increasing residual stress on \(\bar{H}_{\rm c}\).
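Both phenomenological fits can be reproduced, for instance, with scipy's `curve_fit`; the arrays below are illustrative placeholders rather than the simulated data of this work:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative placeholder data (NOT the simulation results of this work).
U_V   = np.array([30., 40., 50., 60., 80., 100.])       # J/mm^3
sig_e = np.array([150., 180., 200., 210., 215., 218.])  # MPa
H_c   = np.array([0.30, 0.34, 0.37, 0.40, 0.43, 0.45])  # mT

def allometric(u, C_H, I_H):
    # H_c = C_H * U_V**I_H, the allometric scaling rule discussed in the text
    return C_H * u**I_H

def exponential(s, A_sig, S_sig, H_0):
    # H_c = A_sig * exp(sigma_e / S_sig) + H_0, the exponential growth rule
    return A_sig * np.exp(s / S_sig) + H_0

popt_u, _ = curve_fit(allometric,  U_V,   H_c, p0=[0.1, 0.3],          maxfev=10000)
popt_s, _ = curve_fit(exponential, sig_e, H_c, p0=[1e-3, 50.0, 0.3],   maxfev=10000)
print("allometric  C_H, I_H    :", popt_u)
print("exponential A, S, H_0   :", popt_s)
```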
## 4 Conclusion
In summary, the processing-property relationship in tailoring the magnetic hysteresis of Fe\({}_{21.5}\)Ni\({}_{78.5}\) has been demonstrated in this work by conducting multiphysics-multiscale simulations. The residual stress is unveiled to be the key thread, since it readily carries both long-range (morphology and morphology-induced chronological-spatial thermal inhomogeneity) and short-range (misfit-induced fluctuations) information after the processes. The influences of beam power and scan speed have been investigated with respect to distinctive phenomena, including the geometry of the fusion zones, the residual stress and accumulated plastic strain, and the resultant coercivity of the manufactured parts. The following conclusions can be drawn from the present work:
1. The simulated mesoscopic residual stress states are coherent with the TGM interpretation. Further details beyond the TGM interpretation, like the concentrated stress around the concave morphologies (surface concaves, sintering necks, etc.), are also delivered. The accumulated plastic strain is evidently concentrated at the fusion zone's outer boundary.
2. Nanoscopic Ni segregation at the \(\gamma/\gamma^{\prime}\) interface due to the diffusion-controlled \(\gamma/\gamma^{\prime}\) transitions is observed with local composition Fe\({}_{19}\)Ni\({}_{81}\), which has comparably smaller saturation magnetization and stronger easy-plane magnetocrystalline
anisotropy. Notably, the interface also locally presents negative normal magnetostriction (\(\lambda_{111}=-8.429\times 10^{-7}\)) and positive shearing magnetostriction (\(\lambda_{100}=5.249\times 10^{-6}\)), while both the \(\gamma^{\prime}\) phase and \(\gamma\) matrix present positive normal and shearing magnetostriction.
3. Large magneto-elastic coupling energy is observed inside the \(\gamma^{\prime}\) phase, with the corresponding effective field imposing rotating effects on the magnetization. These effects vary point-wise according to the residual stress states and the \(\gamma^{\prime}\) phase formation, and eventually lead to different local coercivities. Remarkably, the point around the bottom of the fusion zone is found to have nearly zero coercivity, which may be attributed to the on-site stress state with a principal compressive stress along the building direction and two tensile ones, i.e., implying a relatively significant shear stress.
4. The relation between the average residual stress and the coercivity of the fusion zone is examined to follow the exponential growth rule with a correlation coefficient of 91.31%. The stress-free coercivity derived from the exponential law is 0.34 mT, and a rapid growth of the average coercivity is observed when the average residual stress exceeds 206 MPa, implying a potential threshold of the average residual stress for restricting the coercivity of the manufactured parts around the stress-free value. On the other hand, average residual stress beyond this threshold can be exploited to effectively increase the resultant coercivity of the manufactured permalloy parts.
Despite the present findings, several points should be further examined and discussed in future works:
1. The conditions deriving the near-zero coercivity should be extensively investigated in the sense of residual stress states rather than its effective value, noting that residual stress is a 2\({}^{\text{nd}}\)-order tensor. Magneto-elastic coupled micromagnetic simulations should be performed under the classified residual stress states to rationalize the factors leading to the vanishing of coercivity as in the present findings.
2. The present findings are only examined at relatively low specific energy input and, correspondingly, low generated residual stress, as SLS is chosen in this work. It is anticipated to conduct simulations with relatively high energy input, as in SLM, and to examine the influence of magneto-elastic coupling on the magnetic hysteresis for cases with comparably higher residual stress. Influences on the residual stress development and, eventually, the coercivity of manufactured permalloy from multilayer and multitrack AM strategies should also be examined.
## 5 Method
### Thermodynamic framework
In order to describe the microstructure of an SLS-manufactured Fe-Ni alloy, a conserved order parameter (OP) \(\rho\) is employed to represent the substance and atmosphere/pores, and a set of non-conserved OPs \(\{\phi_{\chi}^{\varphi}\}\) is employed to represent the grains, with the superscript \(\varphi=\gamma,\gamma^{\prime}\) representing the phases and the subscript \(\chi=1,\,2,\,\,\ldots,\,N\) representing the orientations, extended from our former works [27, 32]. Counting the thermal, chemical, and mechanical contributions, the temperature field \(T\), the strain field \(\epsilon\), and the sets of local chemical molar fractions \(\{X_{A}^{\varphi}\}\) of the chemical constituents \(A=\text{Fe,Ni}\) are considered. On the other hand, as a ferromagnetic material below the Curie temperature \(T_{\text{C}}=880\) K for the composition Fe\({}_{21.5}\)Ni\({}_{78.5}\), the thermodynamic contribution due to the existing spontaneous magnetization \(\mathbf{m}\) is also counted. The framework of the free energy density functional of the system is then formulated as follows
\[\mathcal{F}=\int_{\Omega}\Big[f_{\text{ch}}+\underbrace{f_{\text{loc}}+f_{\text{grad}}}_{f_{\text{intf}}}+f_{\text{el}}+f_{\text{mag}}\Big]\,\text{d}\Omega, \tag{4}\]
where \(f_{\text{ch}}\) represents the contributions from the chemical constituents. \(f_{\text{loc}}\) and \(f_{\text{grad}}\), which together constitute \(f_{\text{intf}}\), present the contributions from the surface and interfaces (incl. grain boundaries and phase boundaries) [42]. \(f_{\text{el}}\) is the contribution from the elastic deformation, and \(f_{\text{mag}}\) is the contribution from the spontaneous magnetization and magnetic-coupled effects.
It is worth noting that this uniform thermodynamic framework does not imply that a single vast inter-coupled problem with all underlying physics should be solved interactively and simultaneously across all involved scales. As sufficiently elaborated in the subsection _Multiphysics-multiscale simulation scheme_, it is more practical and effective to conduct the multiphysics-multiscale
simulations in a subsequent scheme and concentrate on rationalizing and bridging the physical quantities and processes among problems and scales. In that sense, the free energy density functional, originated from Eq. (4), should be sufficiently simplified regarding the distinctiveness of each problem at the corresponding scale. This will be explicitly introduced in the following sections.
### Mesoscopic processing simulations
Here we consider the SLS processing of a mesoscopic powder bed by using \(\rho\) to differentiate pore and substance and \(\{\phi_{\chi}^{\varphi}\}\) to differentiate the polycrystalline orientations. According to the high-temperature phase diagram of the Fe-Ni system [43], the \(\gamma\) phase exists within a relatively large temperature range (from \(T_{\rm M}=1709\) K down to the transition starting temperature \(T_{\gamma/\gamma^{\prime}}=766\) K) for the composition Fe\({}_{21.5}\)Ni\({}_{78.5}\). On the other hand, since the SLS and cooling stages only last a relatively infinitesimal time (on the order of 10 ms) compared to the following annealing stage (more than 10 h), there is almost no chance for the \(\gamma^{\prime}\) phase to grow to mesoscopic size. In this regard, we treat all existing polycrystals during the SLS stage as the \(\gamma\) phase. Considering Fe-Ni as a binary system, where the constraints \(X_{\rm Ni}+X_{\rm Fe}=1\) and \(\phi_{\chi}^{\gamma^{\prime}}+\phi_{\chi}^{\gamma}=\rho\) always hold, we then only take the OP set \(\{\phi_{\chi}^{\gamma}\}\) (\(\chi=1,2,...,N\)) as well as \(\rho\) for the mesoscopic simulations, due to the absence of the \(\gamma^{\prime}\) phase on the mesostructures. The profiles of \(\rho\) and \(\{\phi_{\chi}^{\gamma}\}\) across the surface and the grain boundary between two adjacent \(\gamma\)-grains are illustrated in Fig. 9b. We also take the simplified notation \(X=X_{\rm Ni}\) in this subsection as the independent concentration indicator, while \(X_{\rm Fe}=1-X\).
Due to the co-existence of the substance (\(\gamma\)-grains with Ni composition \(X_{0}=0.785\)) and the pores/atmosphere, the chemical free energy density should be formulated as
\[f_{\rm ch}(T,\rho)=h_{\rm ss}(\rho)f_{\rm ch}^{\gamma}(T,X^{\gamma}=X_{0})+h_{ \rm at}(\rho)f_{\rm ch}^{\rm at}(T), \tag{5}\]
where \(h_{\rm ss}\) and \(h_{\rm at}\) are monotonic interpolation functions with subscripts "ss" and "at" representing the substance and pore/atmosphere and are assumed to have the polynomial forms as
\[h_{\rm ss}(\rho)=\rho^{3}\left(10-15\rho+6\rho^{2}\right),\quad h_{\rm at}( \rho)=1-\rho^{3}\left(10-15\rho+6\rho^{2}\right).\]
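These interpolation functions are smoothstep-type polynomials; a quick numerical check of their limiting behavior (illustrative only) is:

```python
import numpy as np

# h_ss(rho) = rho^3 (10 - 15 rho + 6 rho^2) runs smoothly from 0 to 1 with zero
# slope at both ends, so properties blend without kinks across the diffuse surface.
h_ss = lambda rho: rho**3 * (10 - 15*rho + 6*rho**2)
h_at = lambda rho: 1.0 - h_ss(rho)

rho = np.linspace(0.0, 1.0, 5)
print(h_ss(rho))                 # monotonic from 0.0 to 1.0
print(h_ss(rho) + h_at(rho))     # partition of unity: all ones
```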
The temperature-dependent chemical free energy \(f_{\rm ch}^{\gamma}\) is modeled by the CALPHAD approach
\[f_{\rm ch}^{\gamma}=f_{\rm ref}^{\gamma}+f_{\rm id}^{\gamma}+f_{\rm mix}^{ \gamma}+f_{\rm mm}^{\gamma}, \tag{6}\]
with
\[f_{\rm ref}^{\gamma}(T,X^{\gamma}) =X^{\gamma}f_{\rm Ni}(T)+(1-X^{\gamma})f_{\rm Fe}(T),\] \[f_{\rm id}^{\gamma}(T,X^{\gamma}) =\frac{\mathcal{R}T}{V_{\rm m}^{\rm sys}}\left[X^{\gamma}\ln X^{ \gamma}+(1-X^{\gamma})\ln(1-X^{\gamma})\right],\] \[f_{\rm mix}^{\gamma}(T,X^{\gamma}) =X^{\gamma}(1-X^{\gamma})L^{\gamma},\] \[f_{\rm mm}^{\gamma}(T,X^{\gamma}) =\frac{\mathcal{R}T}{V_{\rm m}^{\rm sys}}\ln(\beta^{\gamma}+1) \mathds{P}\left(\frac{T}{T_{\rm C}^{\gamma}}\right),\]
where \(f_{\rm ref}^{\gamma}\) is the term corresponding to the mechanical mixture of the chemical constituents (in this case, Fe and Ni), \(f_{\rm id}^{\gamma}\) is the contribution from the configurational entropy for an ideal mixture, \(f_{\rm mix}^{\gamma}\) is the excess contribution due to mixing, and \(f_{\rm mm}^{\gamma}\) is the contribution due to the magnetic moment. The parameters fed into Eq. (6), including the atom magnetic moment \(\beta^{\gamma}\), the Curie temperature \(T_{\rm C}^{\gamma}\), and the interaction coefficient \(L^{\gamma}\), are described in the way of Redlich-Kister polynomials [44], which are generally formulated for a binary system as \(p^{\theta}=X_{A}p_{A}^{\theta}+X_{B}p_{B}^{\theta}+X_{A}X_{B}\sum_{n}p_{A,B}^{\theta,n}(X_{A}-X_{B})^{n}\) with the temperature-dependent parameters \(p_{A}^{\theta}\), \(p_{B}^{\theta}\) and \(p_{A,B}^{\theta,n}\) for optimization. \(\mathds{P}(T/T_{\rm C}^{\gamma})\) represents the Inden polynomial, obtained by expanding the magnetic specific heat into a power series of the normalized temperature \(T/T_{\rm C}^{\gamma}\) [45, 46]. \(\mathcal{R}\) is the ideal gas constant. \(V_{\rm m}^{\rm sys}\) is the molar volume of the system. All the thermodynamic parameters for the CALPHAD approach are obtained from Ref. [43], while the molar volume of the system is obtained from the database TCFE8 of the commercial software Thermo-Calc® [47].
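For reference, a minimal evaluation of such a Redlich-Kister expansion reads as follows; the coefficient values are hypothetical and do not reproduce the assessed Fe-Ni parameters of Ref. [43]:

```python
# Redlich-Kister expansion of a binary parameter:
#   p = X_A p_A + X_B p_B + X_A X_B sum_n p_AB[n] (X_A - X_B)^n
def redlich_kister(X_A, p_A, p_B, p_AB):
    X_B = 1.0 - X_A
    excess = sum(p_n * (X_A - X_B)**n for n, p_n in enumerate(p_AB))
    return X_A*p_A + X_B*p_B + X_A*X_B*excess

# Hypothetical coefficients, evaluated at the nominal composition X_Ni = 0.785.
print(redlich_kister(0.785, p_A=-1000.0, p_B=-2000.0, p_AB=[-500.0, 100.0]))
```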
Since the variation of the Ni composition is negligible between \(T_{\rm M}\) and \(T_{\gamma/\gamma^{\prime}}\), we pursue a simple but robust way of implementing \(f_{\rm ch}\) under a drastically varying \(T\) during the SLS stage. Taking \(T_{\rm M}\) as the reference temperature, Eq. (5) is then re-written
as
\[f_{\rm ch}(T,\rho)=c_{\rm r}\left[(T-T_{\rm M})-T\ln\frac{T}{T_{\rm M}}\right]+f_{ \rm ref}^{T_{\rm M}}-h_{\rm ml}\frac{T-T_{\rm M}}{T_{\rm M}}\mathcal{L}, \tag{7}\]
where \(f_{\rm ref}^{T_{\rm M}}\) is a reference chemical free energy density at \(T_{\rm M}\), which can be omitted in the following calculations. \(c_{\rm r}\) is a relative specific heat landscape, i.e., \(c_{\rm r}(\rho,T)=h_{\rm ss}(\rho)c_{\rm v}^{\gamma}(T)+h_{\rm at}(\rho)c_{\rm v}^{\rm at}(T)\), with \(c_{\rm v}^{\gamma}\) and \(c_{\rm v}^{\rm at}\) the volumetric specific heats of the \(\gamma\) grains and the pore/atmosphere, respectively. Notably, \(c_{\rm v}^{\gamma}\) can be thermodynamically calculated as follows at a fixed pressure \(p_{0}\) and composition \(X_{0}\).
\[c_{\rm v}^{\gamma}(T)=-T\left(\frac{\partial^{2}f_{\rm ch}^{\gamma}}{\partial T ^{2}}\right)_{p_{0},X_{0}}. \tag{8}\]
It should be noticed that the \(c_{\rm v}^{\gamma}\) obtained by Eq. (8) has a discontinuous point at \(T_{\rm C}\), which is due to the 2nd-order Curie transition, as shown in Fig. S5a. \(\mathcal{L}\) is the latent heat due to the partial/full melting, which is mapped by the interpolation function \(h_{\rm ml}\). Here \(h_{\rm ml}\) adopts a sigmoid form with a finite temperature band \(\Delta_{T}\)
\[h_{\rm ml}=\frac{1}{2}\left[1+\tanh\frac{2(T-T_{\rm M})}{\Delta_{T}}\right].\]
which approaches unity once \(T\) exceeds \(T_{\rm M}\) and is smooth enough to ease the drastic change in \(f_{\rm ch}\).
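Both ingredients can be checked numerically, e.g., by evaluating Eq. (8) with central differences on a hypothetical \(f_{\rm ch}^{\gamma}(T)\) and by sampling the sigmoid \(h_{\rm ml}\); the band width \(\Delta_{T}\) below is an assumed value:

```python
import numpy as np

T_M, Delta_T = 1709.0, 50.0     # K; Delta_T is an assumed band width for illustration

def c_v(f_ch, T, dT=1.0):
    # Eq. (8): c_v = -T d^2 f_ch / dT^2, evaluated by central differences
    return -T * (f_ch(T + dT) - 2.0*f_ch(T) + f_ch(T - dT)) / dT**2

def h_ml(T):
    # sigmoid melt interpolation with a finite temperature band Delta_T
    return 0.5 * (1.0 + np.tanh(2.0*(T - T_M)/Delta_T))

f_demo = lambda T: -4.5e3 * T * np.log(T)           # hypothetical f_ch(T)
print(c_v(f_demo, 1000.0))                          # ~4.5e3 (analytically -T f'' = 4.5e3)
print(h_ml(np.array([T_M - 100, T_M, T_M + 100])))  # -> [~0, 0.5, ~1]
```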
On the other hand, to explain the free energy landscape across the surface and \(\gamma/\gamma\) interface (or \(\gamma\) grain boundary) under varying temperatures, we adopt the non-isothermal multi-well Landau polynomial and gradient terms from our former works [27, 32], i.e.,
\[f_{\rm loc}(T,\rho,\{\phi_{\chi}^{\gamma}\})= \underline{W}_{\rm sf}(T)\left[\rho^{2}(1-\rho)^{2}\right]+\underline{W}_{\gamma/\gamma}(T)\left\{\rho^{2}+6(1-\rho)\sum_{\chi}(\phi_{\chi}^{\gamma})^{2}\right.\] \[\left.-4(2-\rho)\sum_{\chi}(\phi_{\chi}^{\gamma})^{3}+3\left[\sum_{\chi}(\phi_{\chi}^{\gamma})^{2}\right]^{2}\right\}, \tag{9}\] \[f_{\rm grad}(T,\nabla\rho,\{\nabla\phi_{\chi}^{\gamma}\})=\frac{1}{2}\left[\underline{\kappa}_{\rm sf}(T)|\nabla\rho|^{2}+\sum_{\chi}\underline{\kappa}_{\gamma/\gamma}(T)|\nabla\phi_{\chi}^{\gamma}|^{2}\right],\]
with
\[\underline{W}_{\rm sf}(T)=W_{\rm sf}\tau_{\rm sf}(T), \underline{\kappa}_{\rm sf}(T)=\kappa_{\rm sf}\tau_{\rm sf}(T),\] \[\underline{W}_{\gamma/\gamma}(T)=W_{\gamma/\gamma}\tau_{\gamma/ \gamma}(T), \underline{\kappa}_{\gamma/\gamma}(T)=\kappa_{\gamma/\gamma}\tau_{ \gamma/\gamma}(T),\]
\(W_{\rm sf}\), \(W_{\gamma/\gamma}\), \(\kappa_{\rm sf}\), and \(\kappa_{\gamma/\gamma}\) are temperature-independent parameters obtained from the surface and \(\gamma/\gamma\) interface energies \(\Gamma_{\rm sf}\), \(\Gamma_{\gamma/\gamma}\) and the diffuse-interface width \(\ell_{\gamma/\gamma}\), and \(\tau_{\rm sf}(T)\) and \(\tau_{\gamma/\gamma}(T)\) are the dimensionless tendencies inherited from the temperature dependency of \(\Gamma_{\rm sf}\) and \(\Gamma_{\gamma/\gamma}\), i.e.,
\[\Gamma_{\rm sf}(T)=\frac{\sqrt{2}}{6}\tau_{\rm sf}(T)\sqrt{(W_{ \rm sf}+7W_{\gamma/\gamma})(\kappa_{\rm sf}+\kappa_{\gamma/\gamma})},\] \[\Gamma_{\gamma/\gamma}(T)=\frac{2\sqrt{3}}{3}\tau_{\gamma/\gamma}(T )\sqrt{W_{\gamma/\gamma}\kappa_{\gamma/\gamma}}, \tag{10a}\] \[\ell_{\gamma/\gamma}\approx\frac{2\sqrt{3}}{3}\sqrt{\frac{\kappa_ {\gamma/\gamma}}{W_{\gamma/\gamma}}},\]
along with the constraint among the parameters for having the same profile of \(\rho\) and \(\phi_{\chi}^{\gamma}\) across the surface [27], i.e.,
\[\frac{W_{\rm sf}+W_{\gamma/\gamma}}{\kappa_{\rm sf}}=\frac{6W_{\gamma/\gamma}}{\kappa_{\gamma/\gamma}}. \tag{10b}\]
In this work, we set \(\ell_{\gamma/\gamma}=2\) um; the temperature-dependent \(\Gamma_{\rm sf}\) and \(\Gamma_{\gamma/\gamma}\) are presented in Fig. S5c. The total free energy density landscape at the stress-free condition (\(f_{\rm tot}=f_{\rm ch}+f_{\rm intf}\)) is illustrated in Fig. 9c. We can tell that the term \(f_{\rm ch}\) modifies the
relative thermodynamic stability of the substance by shifting the free energy minima via temperature changes. In contrast, \(\gamma\) grains at the same temperature do not show a difference in stability until the on-site temperature of one is changed.
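Given an interface energy and a diffuse-interface width, the Landau parameters follow by inverting Eq. (10a); a minimal sketch assuming \(\tau_{\gamma/\gamma}=1\) and an illustrative value of \(\Gamma_{\gamma/\gamma}\) is:

```python
import numpy as np

# Invert Eq. (10a) for the grain-boundary terms (tau = 1 assumed):
#   Gamma = (2*sqrt(3)/3) * sqrt(W*kappa),   ell ~ (2*sqrt(3)/3) * sqrt(kappa/W)
# => W*kappa = 3 Gamma^2 / 4,  kappa/W = 3 ell^2 / 4  =>  W = Gamma/ell, kappa = 3 Gamma ell / 4.
def landau_params(Gamma, ell):
    Wk  = 0.75 * Gamma**2        # product  W * kappa
    k_W = 0.75 * ell**2          # ratio    kappa / W
    W     = np.sqrt(Wk / k_W)
    kappa = np.sqrt(Wk * k_W)
    return W, kappa

# ell = 2 um from the text; Gamma is an assumed illustrative value (J/m^2).
W, kappa = landau_params(Gamma=0.5, ell=2e-6)
print(f"W = {W:.3e} J/m^3, kappa = {kappa:.3e} J/m")
```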
The governing equations for the coupled thermo-structural evolution are formulated as follows [27, 28]
\[\frac{\partial\rho}{\partial t}=\nabla\cdot\mathbf{M}\cdot\nabla \frac{\delta\mathcal{F}}{\delta\rho}, \tag{11a}\] \[\frac{\partial\phi_{X}^{\gamma}}{\partial t}=-L\,\frac{\delta \mathcal{F}}{\delta\phi_{X}^{\gamma}},\] (11b) \[c_{\mathrm{r}}\left(\frac{\partial T}{\partial t}-\mathbf{v} \cdot\nabla T\right)=\nabla\cdot\mathbf{K}\cdot\nabla T+q_{\mathbf{v}}, \tag{11c}\]
where Eq. (11a) is the Cahn-Hilliard equation with the mobility tensor \(\mathbf{M}\) specifically considering various mass transfer paths, incl. the mobilities for the mass transfer through the substance (ss), atmosphere (at), surface (sf) and grain boundary (gb). As elaborated in our former work [27], the localized melt flow driven by the local curvature is also modeled by one effective surface mobility \(M_{\mathrm{ml}}^{\mathrm{eff}}\). \(\mathbf{M}\) is then formulated as [32]
\[\mathbf{M}=h_{\mathrm{ss}}M_{\mathrm{ss}}\mathbf{I}+h_{\mathrm{at}}M_{ \mathrm{at}}\mathbf{I}+h_{\mathrm{sf}}M_{\mathrm{sf}}\mathbf{T}_{\mathrm{sf} }+h_{\mathrm{gb}}M_{\mathrm{gb}}\mathbf{T}_{\mathrm{gb}}+h_{\mathrm{ml}}\left( T\right)M_{\mathrm{ml}}^{\mathrm{eff}}\mathbf{T}, \tag{12}\]
with the 2\({}^{\mathrm{nd}}\)-order identity tensor \(\mathbf{I}\) and projection tensors \(\mathbf{T}_{\mathrm{sf}}\) and \(\mathbf{T}_{\mathrm{gb}}\) for surface and grain boundary, respectively [32, 48]. The \(T\)-dependent values for \(M_{\mathrm{sf}}\), \(M_{\mathrm{gb}}\), \(M_{\mathrm{ss}}\), and \(M_{\mathrm{ml}}^{\mathrm{eff}}\) are presented in Fig. S5d. The interpolation functions on the surface and \(\gamma\) grain boundaries are defined as
\[h_{\mathrm{sf}}=16\rho^{2}(1-\rho)^{2},\qquad h_{\mathrm{gb}}=16\sum_{i\neq j}(\phi_{i}^{\gamma}\phi_{j}^{\gamma})^{2}.\]
Eq. (11b) is the Allen-Cahn equation with the scalar mobility \(L\), which is derived from the \(\gamma\) grain boundary mobility \(G_{\gamma/\gamma}\) as [49, 50]
\[L=\frac{G_{\gamma/\gamma}\,\Gamma_{\gamma/\gamma}}{\kappa_{\gamma/\gamma}}, \tag{13}\]
which is also presented in Fig. S5d.
Eq. (11c) is the heat transfer equation that considers the laser-induced thermal effect as a volumetric heat source \(q_{\mathbf{v}}\).
\[q_{\mathbf{v}}(\mathbf{r},t)=Pp_{xy}[\mathbf{r}_{\mathrm{O}}(\mathbf{v},t)] \frac{\mathrm{d}a}{\mathrm{d}z},\]
in which \(p_{xy}\) indicates the in-plane Gaussian distribution with a moving center \(\mathbf{r}_{\mathrm{O}}(\mathbf{v},t)\). \(P\) is the beam power and \(\mathbf{v}\) is the scan velocity with its magnitude \(v=|\mathbf{v}|\) as the scan speed. The absorptivity profile function along depth \(\mathrm{d}a/\mathrm{d}z\) is calculated based on Refs. [27, 51]. The phase-dependent thermal conductivity tensor is formulated in a form considering the continuity of the thermal flux along the normal/tangential direction of the surface [48, 52], i.e.,
\[\mathbf{K}=K_{\perp}\mathbf{N}_{\mathrm{sf}}+K_{\parallel}\mathbf{T}_{ \mathrm{sf}} \tag{14}\]
with
\[K_{\perp}=h_{\mathrm{ss}}K_{\mathrm{ss}}+h_{\mathrm{at}}K_{\mathrm{at}},\qquad K _{\parallel}=\frac{K_{\mathrm{ss}}K_{\mathrm{at}}}{h_{\mathrm{ss}}K_{\mathrm{at} }+h_{\mathrm{at}}K_{\mathrm{ss}}},\]
where \(K_{\mathrm{ss}}\) and \(K_{\mathrm{at}}\) are the thermal conductivities of the substance and the pore/atmosphere, respectively. \(\mathbf{N}_{\mathrm{sf}}\) is the 2\({}^{\mathrm{nd}}\)-order normal tensor of the surface [48, 33]. Thermal resistances on the surface and the \(\gamma/\gamma\) interface are disregarded and will be addressed in upcoming works. While the temperature-dependent \(K_{\mathrm{ss}}\) takes a linear form in this work (Fig. S5b), \(K_{\mathrm{at}}\) specifically considers the radiation contribution via the pore/atmosphere and is formulated as
\[K_{\mathrm{at}}=K_{0}+4FT^{3}\sigma_{\mathrm{B}}\ell_{\mathrm{rad}}, \tag{15}\]
where \(K_{0}\approx 0.06\)\(\mathrm{W}\,\mathrm{m}^{-1}\,\mathrm{K}^{-1}\) is the thermal conductivity of the Argon gas, \(F=1/3\) is the Damköhler view factor [53], and \(\sigma_{\mathrm{B}}=5.67\times 10^{-8}\)\(\mathrm{W}\,\mathrm{m}^{-2}\,\mathrm{K}^{-4}\) is the Stefan-Boltzmann constant. \(\ell_{\mathrm{rad}}\) is the effective radiation path between particles, which usually takes the average diameter of the powders [54].
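A quick evaluation of Eq. (15), with an assumed effective radiation path on the order of the average powder diameter, illustrates how weak the gas-phase conduction remains even near \(T_{\rm M}\):

```python
# Radiation-enhanced gas conductivity of Eq. (15); ell_rad is an assumed
# illustrative value for the average powder diameter.
K_0     = 0.06        # W/(m K), Argon
F       = 1.0/3.0     # Damkoehler view factor
sigma_B = 5.67e-8     # W/(m^2 K^4), Stefan-Boltzmann constant
ell_rad = 34e-6       # m, assumed average powder diameter

for T in (600.0, 1200.0, 1709.0):
    K_at = K_0 + 4.0*F*T**3*sigma_B*ell_rad
    print(f"T = {T:6.0f} K  ->  K_at = {K_at:.3f} W/(m K)")
```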
As the boundary conditions (BC), no mass transfer is allowed across any boundary of the mesoscopic domain, which is achieved by setting a zero Neumann BC on \(\rho\). The temperature at the bottom of the substrate mesh (\(z_{\mathrm{min}}\)) is fixed at \(T_{0}\) via a Dirichlet BC on \(T\). Heat transfer is allowed only via the pore/atmosphere, achieved by a combined convection-radiation BC (masked by \(h_{\mathrm{at}}\)), as illustrated in Fig. S2b\({}_{1}\).
### Mesoscopic thermo-elasto-plastic simulations
As elaborated in the subsection _Multiphysics-multiscale simulation scheme_ and our former work [28], the subsequent thermo-elasto-plastic simulation was carried out for the calculation of the thermal stress and deformation of the mesostructures from the non-isothermal phase-field simulations of SLS processing. The transient temperature field and substance field \(\rho\) are imported into the quasi-static elasto-plastic model as the thermal load and the phase indicator for interpolating mechanical properties. Adopting small deformation and quasi-static assumptions, the mechanical equilibrium reads
\[\nabla\cdot\mathbf{\sigma}=\mathbf{0}, \tag{16}\]
where \(\mathbf{\sigma}\) is the 2\({}^{\text{nd}}\)-order stress tensor. The top boundary is set to be traction-free, and the other boundaries adopt rigid support BCs, which only restrict the displacement component in the normal direction of the boundary (Fig. S2b\({}_{2}\)).
We adopt the Voigt-Taylor interpolation scheme (VTS), in which the total stress is interpolated according to the amounts of substance and pore/atmosphere across the surface, i.e., \(\mathbf{\sigma}=h_{\text{ss}}\mathbf{\sigma}^{\text{ss}}+h_{\text{at}}\mathbf{\sigma}^{\text{at}}\), while assuming identical strain among the phases [55, 56, 57]. In this regard, the stress can eventually be formulated by the linear constitutive equation
\[\mathbf{\sigma}=\mathbf{C}(\rho,T):\left(\mathbf{\epsilon}-\mathbf{\epsilon}_{\text{th}}-\mathbf{ \epsilon}_{\text{pl}}\right), \tag{17}\]
where the 4\({}^{\text{th}}\)-order elastic tensor is interpolated between the substance one \(\mathbf{C}_{\text{ss}}\) and the pore/atmosphere one \(\mathbf{C}_{\text{at}}\), i.e.,

\[\mathbf{C}(\rho,T)=h_{\text{ss}}(\rho)\,\mathbf{C}_{\text{ss}}(T)+h_{\text{at}}(\rho)\,\mathbf{C}_{\text{at}}. \tag{18}\]

In this work, isotropic mechanical properties are considered. \(\mathbf{C}_{\text{ss}}\) is calculated from the Young's modulus \(E(T)\) and the Poisson's ratio \(\nu\), whereas \(\mathbf{C}_{\text{at}}\) is assigned a sufficiently small value to guarantee numerical convergence. The thermal eigenstrain \(\mathbf{\epsilon}_{\text{th}}\) is calculated using the interpolated coefficient of thermal expansion \(\alpha(\rho,T)\), i.e.,

\[\mathbf{\epsilon}_{\text{th}}=\alpha(\rho,T)\left(T-T_{0}\right)\mathbf{I}.\]
### Nanoscopic \(\gamma/\gamma^{\prime}\) transition simulations

Here, the subscript \(\chi\), which indicates the different grain orientations, is dropped in the following discussions. The magnetic contribution \(f_{\rm mag}\) is also dropped, since only the magnetic-field-free \(\gamma/\gamma^{\prime}\) transition is within the scope of this work. The profiles of \(\phi^{\gamma}\) and \(\phi^{\gamma^{\prime}}\) across the \(\gamma/\gamma^{\prime}\) interface are illustrated in Fig. 9b. Similarly, we take simplified notations in this subsection, such as \(\phi=\phi^{\gamma^{\prime}}\) and \(X=X_{\rm Ni}\) as the independent phase and concentration indicators, while \(X_{\rm Fe}=1-X\) and \(\phi^{\gamma}=1-\phi\).
Due to the co-existence of both \(\gamma\) and \(\gamma^{\prime}\) once the temperature is below \(T_{\gamma/\gamma^{\prime}}=766\) K, the chemical free energy density should be formulated as
\[f_{\rm ch}(T,\phi,\{X^{\varphi}\})=h_{\gamma}(\phi)f_{\rm ch}^{\gamma}(T,X^{ \gamma})+h_{\gamma^{\prime}}(\phi)f_{\rm ch}^{\gamma^{\prime}}(T,X^{\gamma^{ \prime}}), \tag{22}\]
where \(h_{\gamma}\) and \(h_{\gamma^{\prime}}\) are monotonic interpolation functions and can adopt the polynomial formulation as
\[h_{\gamma^{\prime}}(\phi)=\phi^{3}\left(10-15\phi+6\phi^{2}\right),\quad h_{\gamma}(\phi)=1-\phi^{3}\left(10-15\phi+6\phi^{2}\right).\]
Similarly the elastic contribution is formulated as [60, 61]
\[f_{\rm el}(T,\{\epsilon_{\rm el}^{\varphi}\})=h_{\gamma}(\phi)f_{\rm el}^{ \gamma}(T,\epsilon_{\rm el}^{\gamma})+h_{\gamma^{\prime}}(\phi)f_{\rm el}^{ \gamma^{\prime}}(T,\epsilon_{\rm el}^{\gamma^{\prime}}), \tag{23}\]
where
\[f_{\rm el}^{\varphi}(T,\epsilon_{\rm el})=\frac{1}{2}\sigma^{\varphi}: \epsilon_{\rm el}^{\varphi}.\]
On top of the CALPHAD modeling of the free energy density of the chemically disordered \(\gamma\) phase, the four-sublattice model is employed to describe the phase with chemical ordering. The model takes the element fractions \(Y_{i}^{(s)}\) (\(i={\rm Fe}\) or Ni indicating the chemical constituents, \(s=1,2,3,4\) indicating the sublattice site, see inset of Fig. 9c) in each sublattice as the inner degrees of freedom, representing
\[f_{\rm ch}^{\gamma^{\prime}}=f_{\rm ch}^{\gamma}+\{f_{4{\rm sl}}-f_{4{\rm sl} }(Y^{(s)}=X^{\gamma^{\prime}})\}_{\rm min}, \tag{24}\]
with
\[\begin{split} f_{4{\rm sl}}(T,Y_{i}^{(s)})&=\sum_{i,j,k,l}Y_{i}^{(1)}Y_{j}^{(2)}Y_{k}^{(3)}Y_{l}^{(4)}\,f_{ijkl}^{\rm FCC}+\frac{\mathcal{R}T}{4V_{\rm m}^{\rm sys}}\sum_{s,i}Y_{i}^{(s)}\ln Y_{i}^{(s)}\\ &+\sum_{s}\left[Y_{i}^{(s)}\left(1-Y_{i}^{(s)}\right)\sum_{j,k,l}Y_{j}^{(u)}Y_{k}^{(v)}Y_{l}^{(w)}\,\frac{L_{(s)ijkl}^{\rm FCC}}{V_{\rm m}^{\rm sys}}\right],\end{split} \tag{25}\]
where \(f_{ijkl}^{\rm FCC}\) are the free energies of the stoichiometric compounds with only one constituent (\(i,j,k,l=\)Ni or Fe) occupied on each site [43]. \(L_{(s)ijkl}^{\rm FCC}\) is the interaction parameter, corresponding to the mixing of constituents on the \(s\)-th site while the others (\(u,v,w\neq s\)) are occupied with the fractions \(Y_{j}^{(u)}\), \(Y_{k}^{(v)}\) and \(Y_{l}^{(w)}\). Note that the constraints \(\sum_{s}Y_{i}^{(s)}=X_{i}\) and \(Y_{\rm Ni}^{(s)}+Y_{\rm Fe}^{(s)}=1\) should be applied to guarantee the conservation of atoms. It is also worth noting that, due to the thermodynamic equivalence of the four sublattice sites, the equivalence of \(f_{ijkl}^{\rm FCC}\) and \(L_{(s)ijkl}^{\rm FCC}\) regarding the combination of sublattice constituents must be considered, as explicitly explained in Ref. [43]. All the thermodynamic parameters for the CALPHAD approach are obtained from Ref. [43]. In Fig. S8, we present the calculated \(f_{\rm ch}^{\gamma}\) and \(f_{\rm ch}^{\gamma^{\prime}}\) from \(T_{\gamma/\gamma^{\prime}}\) down to the pre-heating temperature \(T_{0}=600\) K, with the varying equilibrium concentrations \(X_{\rm e}^{\gamma}\) and \(X_{\rm e}^{\gamma^{\prime}}\), the site element fractions \(Y_{i}^{(s)}\), and the calculated phase fractions \(\psi^{\gamma}\) and \(\psi^{\gamma^{\prime}}\) at equilibrium.
On the other hand, since there is only the \(\gamma/\gamma^{\prime}\) coherent interface, the non-isothermal local and gradient free energy density are then formulated in the typical double-well fashion, i.e.,
\[f_{\rm loc}(T,\phi)=12\underline{W}_{\gamma/\gamma^{\prime}}(T)\phi^{2}(1-\phi )^{2},\qquad f_{\rm grad}(T,\nabla\phi)=\frac{1}{2}\underline{\kappa}_{\gamma/ \gamma^{\prime}}(T)|\nabla\phi|^{2} \tag{26}\]
with
\[\underline{W}_{\gamma/\gamma^{\prime}}(T)=W_{\gamma/\gamma^{\prime}}\tau_{\gamma/\gamma^{\prime}}(T),\qquad\underline{\kappa}_{\gamma/\gamma^{\prime}}(T)=\kappa_{\gamma/\gamma^{\prime}}\tau_{\gamma/\gamma^{\prime}}(T),\]
adapting the same non-isothermal form as the one used in the SLS simulations with the dimensionless temperature tendency \(\tau_{\gamma/\gamma^{\prime}}(T)\). The temperature-independent parameters \(W_{\gamma/\gamma^{\prime}}\) and \(\kappa_{\gamma/\gamma^{\prime}}\) are obtained from the interface energy \(\Gamma_{\gamma/\gamma^{\prime}}\) and diffuse-interface width \(\ell_{\gamma/\gamma^{\prime}}\), i.e.,
\[\Gamma_{\gamma/\gamma^{\prime}}(T)=\frac{\sqrt{6}}{3}\tau_{\gamma/\gamma^{ \prime}}(T)\sqrt{W_{\gamma/\gamma^{\prime}}\kappa_{\gamma/\gamma^{\prime}}}, \qquad\ell_{\gamma/\gamma^{\prime}}\approx\frac{\sqrt{6}}{3}\sqrt{\frac{\kappa_{ \gamma/\gamma^{\prime}}}{W_{\gamma/\gamma^{\prime}}}}, \tag{27}\]
noting that the relation for \(\ell_{\gamma/\gamma^{\prime}}\) here corresponds to the case where the adjusting parameter is taken as two in Eq. (53) of Ref. [62]. In this work, we tentatively take \(\tau_{\gamma/\gamma^{\prime}}\) as one and estimate \(\Gamma_{\gamma/\gamma^{\prime}}=0.025\,\mathrm{J}\,\mathrm{m}^{-2}\), which is a commonly estimated value for coherent interfaces and lies within the experimental range of 0.008 to 0.080 \(\mathrm{J}\,\mathrm{m}^{-2}\) for Ni-base alloys [63]. The diffuse-interface width \(\ell_{\gamma/\gamma^{\prime}}\) is set as \(5\) nm. The total free energy at the stress-free condition (\(f_{\mathrm{tot}}=f_{\mathrm{ch}}+f_{\mathrm{loc}}\)) is illustrated in Fig. 9d, where the free energy density path obeying the mixing rule across the \(\gamma/\gamma^{\prime}\) interface between the two equilibrium phases (i.e., with \(X_{\mathrm{e}}^{\gamma}\) and \(X_{\mathrm{e}}^{\gamma^{\prime}}\)) is also illustrated.
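As a quick numerical check, the temperature-independent parameters can be recovered by inverting the two relations in Eq. (27); a minimal sketch with the values quoted above (\(\tau_{\gamma/\gamma^{\prime}}=1\), \(\Gamma_{\gamma/\gamma^{\prime}}=0.025\,\mathrm{J\,m^{-2}}\), \(\ell_{\gamma/\gamma^{\prime}}=5\) nm):

```python
gamma_int = 0.025   # interface energy Gamma_{gamma/gamma'}, J/m^2
ell = 5e-9          # diffuse-interface width, m
tau = 1.0           # dimensionless temperature tendency, taken as one here

# Eq. (27): Gamma = (sqrt(6)/3) * tau * sqrt(W * kappa),  ell = (sqrt(6)/3) * sqrt(kappa / W)
# dividing and multiplying the two relations gives
W = gamma_int / (tau * ell)            # = 5.0e6 J/m^3
kappa = 1.5 * gamma_int * ell / tau    # = 1.875e-10 J/m
print(f"W = {W:.3e} J/m^3, kappa = {kappa:.3e} J/m")
```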
The governing equations of the nanoscopic \(\gamma/\gamma^{\prime}\) transition are formulated as follows [62, 60]
\[X=h_{\gamma}X^{\gamma}+h_{\gamma^{\prime}}X^{\gamma^{\prime}}, \tag{28a}\] \[\frac{\mathrm{d}f_{\mathrm{ch}}^{\gamma}}{\mathrm{d}X^{\gamma}}= \frac{\mathrm{d}f_{\mathrm{ch}}^{\gamma^{\prime}}}{\mathrm{d}X^{\gamma^{ \prime}}},\] (28b) \[\frac{\partial X}{\partial t}=\nabla\cdot M_{X}\nabla\,\frac{ \delta\mathcal{F}}{\delta X},\] (28c) \[\frac{\partial\phi}{\partial t}=-M_{\phi}\,\frac{\delta\mathcal{ F}}{\delta\phi},\] (28d) \[\nabla\cdot\mathbf{\sigma}=\mathbf{0}. \tag{28e}\]
Notably, Eq. (28a) embodies the mixing rule of the local Ni concentration \(X\) from the phase concentrations \(X^{\gamma}\) and \(X^{\gamma^{\prime}}\), treating the \(\gamma/\gamma^{\prime}\) interface as a two-phase mixture with \(\phi\) as the local phase fraction. This detaches the chemical contribution to the interface energy from the local one and thereby allows the diffuse-interface width to be rescaled. Eq. (28b) is the constraint on the phase concentrations \(X^{\gamma}\) and \(X^{\gamma^{\prime}}\) that yields the maximum driving force for interface migration, as briefly elaborated in Fig. S9. In return, the drag effect might be eliminated along with the vanishing driving force for trans-interface diffusion [64, 65, 66, 67], which should be evaluated and discussed specifically for the \(\gamma/\gamma^{\prime}\) transition in the Fe-Ni system. The diffusive mobility \(M_{X}\) here is formulated directly from the atom mobilities \(M_{\mathrm{Fe}}\) and \(M_{\mathrm{Ni}}\) in the FCC lattice, considering the inter-diffusion phenomena [68], i.e.,
\[M_{X}(X,T)=V_{\mathrm{m}}^{\mathrm{sys}}X(1-X)\left[(1-X)M_{\mathrm{Fe}}(T)+ XM_{\mathrm{Ni}}(T)\right], \tag{29}\]
and the interface migration mobility \(M_{\phi}\) is derived by considering the thin-interface limit of the model and the interface migration rate that was originally derived by Turnbull [65, 50], i.e.,
\[M_{\phi}(T)=\frac{2}{3}\frac{V_{\mathrm{m}}^{\mathrm{sys}}\Phi_{\gamma^{ \prime}/\gamma}}{\mathrm{Ch}|\mathbf{b}|^{2}}\left[(1-X_{\gamma^{\prime}/\gamma})M _{\mathrm{Fe}}(T)+X_{\gamma^{\prime}/\gamma}M_{\mathrm{Ni}}(T)\right], \tag{30}\]
where the dimensionless Cahn number \(\mathrm{Ch}=\ell_{\gamma/\gamma^{\prime}}/\hat{\ell}_{\gamma/\gamma^{\prime}}\) characterizes the degree of rescaling of the diffuse-interface width \(\ell_{\gamma/\gamma^{\prime}}\) from the realistic interface width \(\hat{\ell}_{\gamma/\gamma^{\prime}}\), estimated as \(\hat{\ell}_{\gamma/\gamma^{\prime}}=5\bar{a}\) with \(\bar{a}\) the average lattice parameter of the \(\gamma\) and \(\gamma^{\prime}\) phases based on experimental observations [69]. The length of the Burgers vector is also calculated from \(\bar{a}\) as \(|\mathbf{b}|=\bar{a}/\sqrt{2}\). \(\Phi_{\gamma^{\prime}/\gamma}\) is a newly defined thermodynamic factor evaluated at the estimated interface concentration \(X_{\gamma^{\prime}/\gamma}\), given as
\[\Phi_{\gamma^{\prime}/\gamma}=X_{\gamma^{\prime}/\gamma}(1-X_{ \gamma^{\prime}/\gamma})\frac{V_{\mathrm{m}}^{\mathrm{sys}}}{\mathcal{R}T}\, \left.\frac{\partial^{2}f_{\mathrm{ch}}}{\partial X^{2}}\right|_{\gamma^{ \prime}/\gamma},\] \[X_{\gamma^{\prime}/\gamma}=\frac{X_{\mathrm{e}}^{\gamma}/A_{ \mathrm{m}}^{\gamma}+X_{\mathrm{e}}^{\gamma^{\prime}}/A_{\mathrm{m}}^{\gamma^{ \prime}}}{1/A_{\mathrm{m}}^{\gamma}+1/A_{\mathrm{m}}^{\gamma^{\prime}}}\]
with the equilibrium concentrations \(X_{\mathrm{e}}^{\gamma}\) and \(X_{\mathrm{e}}^{\gamma^{\prime}}\) as well as the molar areas of the phases \(A_{\mathrm{m}}^{\gamma}\) and \(A_{\mathrm{m}}^{\gamma^{\prime}}\). The detailed derivations are shown in Supplementary Note 2. The temperature-dependent atom mobilities \(M_{\mathrm{Ni}}(T)\) and \(M_{\mathrm{Fe}}(T)\) are obtained from the mobility database MOBFE3 of the commercial software Thermo-Calc® [47].
It should be highlighted that, in this work, a temperature-dependent dimensionless calibration factor \(\omega(T)\) is additionally associated with the atom mobilities and is calibrated against the experimentally measured time evolution of the \(\gamma/\gamma^{\prime}\) transition at various temperatures. The calibrated atom mobility then reads (with \(A=\mathrm{Fe},\mathrm{Ni}\))
\[M_{A}^{*}(T)=\omega(T)M_{A}(T). \tag{31}\]
Based on the Arrhenius temperature dependence of \(M_{A}(T)\) and \(M_{A}^{*}(T)\) [68, 70], this \(\omega(T)\) is postulated to follow an Arrhenius relation as well, i.e., \(\omega(T)=\omega_{0}\exp(-Q_{\omega}/\mathcal{R}T)\) with the pre-factor \(\omega_{0}\) and the activation energy \(Q_{\omega}\). We implemented a simple calibration algorithm that iteratively performs a regression on \(\omega\), treated as the time-scaling factor of the simulated transient volume fraction of the \(\gamma^{\prime}\) phase, i.e., \(\Psi_{\gamma^{\prime}}(\omega t)\), with respect to the experimental measurements obtained from [29], as shown in Fig. 10a. The initial condition of the \(\gamma^{\prime}\) nuclei was generated using Poisson disk sampling [35] with the prescribed minimum nuclei distance according to the observation shown in Fig. 1b. The calibrated \(\omega(T)\) indeed shows consistency with the Arrhenius relation, confirming our postulate.
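A minimal sketch of one regression step of this calibration is given below: for a fixed temperature, the factor \(\omega\) that best maps the simulated curve \(\Psi_{\gamma^{\prime}}(\omega t)\) onto the measured one is found by least squares, and the calibrated values at several temperatures are then fitted to the Arrhenius form. The arrays `t_sim`, `psi_sim`, `t_exp`, `psi_exp` are placeholders for the simulated and experimental transformation curves; the re-simulation loop with the updated mobility (repeated until \(\omega\) changes by less than 0.5%, cf. Fig. 10a) is not shown.

```python
import numpy as np
from scipy.optimize import minimize_scalar

R_GAS = 8.314  # gas constant, J/(mol K)

def calibrate_omega(t_sim, psi_sim, t_exp, psi_exp, bounds=(0.1, 100.0)):
    """Least-squares time-scaling factor omega matching Psi(omega * t) to the measured data."""
    def misfit(omega):
        psi_scaled = np.interp(omega * t_exp, t_sim, psi_sim)  # simulated curve at rescaled times
        return float(np.mean((psi_scaled - psi_exp) ** 2))
    return minimize_scalar(misfit, bounds=bounds, method="bounded").x

def fit_arrhenius(temperatures, omegas):
    """Fit omega(T) = omega_0 * exp(-Q_omega / (R*T)) via linear regression of ln(omega) vs 1/T."""
    slope, intercept = np.polyfit(1.0 / np.asarray(temperatures), np.log(omegas), 1)
    return np.exp(intercept), -slope * R_GAS   # (omega_0, Q_omega)
```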
As for the momentum balance in Eq. (28e), we have to explicitly consider both long-range (morphology and morphology-induced spatio-temporal thermal inhomogeneity) and short-range (misfit-induced fluctuation) factors of the mechanical response on the current scale. In that sense, the stress is considered in the following form
\[\mathbf{\sigma}=\mathbf{\sigma}_{\text{ms}}+\tilde{\mathbf{\sigma}}, \tag{32}\]
where \(\mathbf{\sigma}_{\text{ms}}\) comes from the mesoscale and \(\tilde{\mathbf{\sigma}}\) arises from the misfit of the growing \(\gamma^{\prime}\) phase. Assuming the stiffness tensor \(\mathbf{C}_{\text{ss}}\) has no difference between the two phases, we then take a uniform elastic strain that contributes the mesoscopic stress, i.e., \(\mathbf{\sigma}_{\text{ms}}=\mathbf{C}_{\text{ss}}:\mathbf{\varepsilon}_{\text{el}}^{\text{ms}}\). The constitutive relation can then be represented as
\[\mathbf{\sigma} =\mathbf{C}_{\text{ss}}:\left(\mathbf{\varepsilon}_{\text{el}}+\mathbf{ \varepsilon}_{\text{el}}^{\text{ms}}\right) \tag{33}\] \[=\mathbf{C}_{\text{ss}}:\left[\left(\mathbf{\varepsilon}-h_{\gamma^{ \prime}}\mathbf{\varepsilon}_{\text{mis}}^{\gamma^{\prime}}\mathbf{I}\right)+\mathbf{ \varepsilon}_{\text{el}}^{\text{ms}}\right],\]
where \(\mathbf{\varepsilon}\) is the total strain calculated on the nanoscopic domain, and \(\mathbf{\varepsilon}_{\text{mis}}^{\gamma^{\prime}}\) is the misfit strain induced by the growing \(\gamma^{\prime}\) phase. \(\mathbf{\varepsilon}_{\text{mis}}^{\gamma^{\prime}}\) is the relative difference between the lattice parameters of the \(\gamma\) and \(\gamma^{\prime}\) phases, i.e., \(\mathbf{\varepsilon}_{\text{mis}}^{\gamma^{\prime}}=(a_{\gamma^{\prime}}-a_{\gamma})/a_{\gamma}\) with \(a_{\gamma}\) and \(a_{\gamma^{\prime}}\) obtained from the temperature-dependent molar volumes \(V_{\text{m}}^{\gamma}\) and \(V_{\text{m}}^{\gamma^{\prime}}\), respectively. This is presented in Fig. S7b. At \(T_{0}=600\) K, \(\mathbf{\varepsilon}_{\text{mis}}^{\gamma^{\prime}}=-1.32\times 10^{-3}\). Alongside \(\mathbf{\varepsilon}_{\text{el}}^{\text{ms}}\) treated as an eigenstrain, periodic displacement BCs are applied to the nanoscopic domain, as shown in Fig. S2b\({}_{3}\).
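A compact sketch of how the nanoscopic stress of Eq. (33) is assembled is given below, using Voigt notation for a cubic stiffness tensor; the elastic constants, total strain, and mapped mesoscopic elastic strain are placeholder values for illustration, while the misfit strain uses the value quoted above for 600 K.

```python
import numpy as np

def cubic_stiffness(C11, C12, C44):
    """Cubic elastic stiffness tensor in 6x6 Voigt notation (Pa)."""
    C = np.full((3, 3), C12)
    np.fill_diagonal(C, C11)
    return np.block([[C, np.zeros((3, 3))],
                     [np.zeros((3, 3)), C44 * np.eye(3)]])

def nanoscopic_stress(C, eps_total, h_gp, eps_mis, eps_el_ms):
    """Eq. (33): sigma = C_ss : [(eps - h_gamma' * eps_mis * I) + eps_el_ms], in Voigt vectors."""
    eigen = np.array([eps_mis, eps_mis, eps_mis, 0.0, 0.0, 0.0])  # isotropic misfit eigenstrain
    return C @ (eps_total - h_gp * eigen + eps_el_ms)

# placeholder inputs (assumed values, for illustration only)
C = cubic_stiffness(C11=246e9, C12=147e9, C44=125e9)
eps_total = np.array([1.0e-3, 5.0e-4, 0.0, 0.0, 0.0, 2.0e-4])   # total strain from the FE solution
eps_el_ms = np.array([4.0e-4, 4.0e-4, 4.0e-4, 0.0, 0.0, 0.0])   # mapped mesoscopic elastic strain
sigma = nanoscopic_stress(C, eps_total, h_gp=1.0, eps_mis=-1.32e-3, eps_el_ms=eps_el_ms)
```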
### Micromagnetic hysteresis simulations
Below the Curie temperature, the magnetization of most ferromagnetic materials saturates with a constant magnitude (\(M_{\text{s}}\)). Therefore, in micromagnetics, it is important to work with a normalized, position-dependent magnetization vector \(\mathbf{m}\). This vector field can be physically interpreted as the mean field of the local atomic magnetic moments, yet it is defined on a scale sufficiently small to resolve the magnetization transition across the domain wall. However, the variation of \(\mathbf{m}(\mathbf{r})\) across the \(\gamma/\gamma^{\prime}\) interface is tentatively disregarded, assuming ideal exchange coupling between the two phases. The magnetic properties of the ferromagnetic \(\gamma\) phase are also tentatively assumed to be identical to those of the ferromagnetic \(\gamma^{\prime}\) phase at the same Ni concentration, due to the lack of experimental/theoretical investigations on the magnetic properties of the individual phases. In other words, only the Ni-concentration dependency of the magnetic parameters is explicitly considered in this work, while the exchange constant \(A_{\text{ex}}\) is taken as a constant 13 pJ/m [71]. In that sense, the superscript \(\varphi\), indicating the phase differences, is dropped in the following explanation. We let the orientation \(\mathbf{u}\) of the nanoscopic subdomain align with the \(z\)-direction (BD), and the magnetic free energy density is eventually formulated as
\[f_{\text{mag}}=f_{\text{ex}}+f_{\text{ani}}+f_{\text{ms}}+f_{\text{zm}}+f_{ \text{em}} \tag{34}\]
with
\[f_{\text{ex}}(\nabla\mathbf{m})=A_{\text{ex}}\|\nabla\mathbf{m} \|^{2},\] \[f_{\text{ani}}(\mathbf{m})=-K_{\text{u}}\left(\mathbf{u}\cdot \mathbf{m}\right)^{2},\] \[f_{\text{ms}}(\mathbf{m})=-\frac{1}{2}\mu_{0}M_{\text{s}}\mathbf{ m}\cdot\mathbf{H}_{\text{dm}},\] \[f_{\text{zm}}(\mathbf{m},\mathbf{H}_{\text{ext}})=-\mu_{0}M_{ \text{s}}\mathbf{m}\cdot\mathbf{H}_{\text{ext}},\] \[f_{\text{em}}(\mathbf{m},\mathbf{\sigma})=-\mathbf{\sigma}:\mathbf{\varepsilon }_{\text{em}},\]
and the magnetostrictive strain \(\epsilon_{\text{em}}\) on the cubic basis as follows [36, 37]
\[\epsilon_{\text{em}}=\frac{3}{2}\left[\begin{array}{ccc}\lambda_{100}\left(m_{x }^{2}-\frac{1}{3}\right)&\lambda_{111}m_{x}m_{y}&\lambda_{111}m_{x}m_{z}\\ &\lambda_{100}\left(m_{y}^{2}-\frac{1}{3}\right)&\lambda_{111}m_{y}m_{z}\\ \text{symm.}&\lambda_{100}\left(m_{z}^{2}-\frac{1}{3}\right)\end{array}\right].\]
Here, \(f_{\text{ex}}\) is the exchange contribution, recapitulating the parallel-aligning tendency among neighboring magnetic moments due to the Heisenberg exchange interaction. The squared norm \(\|\nabla\mathbf{m}\|^{2}\) here represents \(\sum_{j}|\nabla m_{j}|^{2}\) with \(j=x,y,z\) and \(\mathbf{m}=[m_{x},m_{y},m_{z}]\). \(f_{\text{ani}}\) represents the contribution due to the magneto-crystalline anisotropy. It provides the energetically preferred orientation to local magnetizations with respect to the crystalline orientation \(\mathbf{u}\), according to the sign of \(K_{\text{u}}\). Defining an orientation angle by \(\vartheta=\arccos\mathbf{u}\cdot\mathbf{m}\), the case \(K_{\text{u}}>0\) leads to two energetic minima at \(\vartheta=0\) and \(\pi\), that is, the magnetization lies along the positive or negative \(\mathbf{u}\) direction with no preference between the two, i.e., the easy-axis anisotropy. When \(K_{\text{u}}<0\), the energy is minimized for \(\vartheta=\pi/2\), meaning that any direction in the plane perpendicular to \(\mathbf{u}\) is thermodynamically preferred, i.e., the easy-plane anisotropy [30], as shown in Fig. 9e. As the resulting \(X_{\text{Ni}}\) varies from 0.781 to 0.810, as presented in Fig. 5, the local \(K_{\text{u}}\) always takes negative values in this work. The magnetostatic term \(f_{\text{ms}}\) counts the energy of each local magnetization under the demagnetizing field created by the surrounding magnetization. The Zeeman term \(f_{\text{zm}}\) counts the energy of each local magnetization under an extrinsic magnetic field \(\mathbf{H}_{\text{ext}}\). \(f_{\text{em}}\) is the contribution due to the magneto-elastic coupling effects.
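As a small worked example of the magneto-elastic term, the sketch below builds the magnetostrictive strain tensor from a unit magnetization vector and evaluates \(f_{\text{em}}=-\mathbf{\sigma}:\mathbf{\varepsilon}_{\text{em}}\); the magnetostriction constants and the residual stress tensor are placeholder inputs (in the actual workflow they come from the Ni-concentration-dependent data of Fig. 1a and the mapped nanoscopic stress, respectively).

```python
import numpy as np

def magnetostrictive_strain(m, lam100, lam111):
    """Magnetostrictive strain tensor on the cubic basis for a unit magnetization vector m."""
    mx, my, mz = m
    return 1.5 * np.array([
        [lam100 * (mx**2 - 1/3), lam111 * mx * my,       lam111 * mx * mz],
        [lam111 * mx * my,       lam100 * (my**2 - 1/3), lam111 * my * mz],
        [lam111 * mx * mz,       lam111 * my * mz,       lam100 * (mz**2 - 1/3)],
    ])

def f_em(sigma, m, lam100, lam111):
    """Magneto-elastic coupling energy density f_em = -sigma : eps_em."""
    return -float(np.tensordot(sigma, magnetostrictive_strain(m, lam100, lam111)))

# illustration with placeholder values
m = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)   # in-plane magnetization at 45 degrees to x
sigma = np.diag([200e6, -50e6, 0.0])           # assumed residual stress state, Pa
print(f_em(sigma, m, lam100=-5e-6, lam111=-2e-6))
```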
To simulate the hysteresis behavior of the structure during a cycling \(\mathbf{H}_{\text{ext}}\), we calculate the magnetization configuration \(\mathbf{m}(\mathbf{r})\) under every incremental \(\mathbf{H}_{\text{ext}}\) change by conducting the constrained optimization of a stationary Landau-Lifshitz-Gilbert equation, which is mathematically formulated as
\[\mathbf{m}\times\frac{\delta\mathcal{F}}{\delta\mathbf{m}}+a_{ \text{d}}\mathbf{m}\times\left(\mathbf{m}\times\frac{\delta\mathcal{F}}{\delta \mathbf{m}}\right)=\mathbf{0},\] (35) subject to \[|\mathbf{m}|=1,\]
where \(a_{\text{d}}\) is the damping coefficient, taken as \(a_{\text{d}}=0.02\) [72]. This also means that the magnetic hysteresis is evaluated under the quasi-static condition. The simulation domains with the FD grids have the same construction as the FE meshes used in the \(\gamma/\gamma^{\prime}\) transition simulations, to ease the mapping of quantities between them. Periodic BCs were applied on the boundaries perpendicular to the \(z\)-direction by the macro-geometry approach [73], while Neumann BCs were applied on the other boundaries [34].
It is also worth noting that the magneto-elastic coupling constants \(B_{1}\) and \(B_{2}\), related to the magnetostriction constants by \(\lambda_{100}=-2B_{1}/[3(C_{11}-C_{12})]\) and \(\lambda_{111}=-B_{2}/(3C_{44})\) [74, 36], are implemented in the package MuMax\({}^{3}\). The elastic strain field that contributes the residual stress, i.e., \(\sigma=\mathbf{C}_{\text{ss}}:\epsilon_{\text{el}}\), is mapped from the nanoscopic \(\gamma/\gamma^{\prime}\) transition results.
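For completeness, inverting these relations gives the coupling constants that are passed to MuMax\({}^{3}\); a short sketch with assumed magnetostriction and cubic elastic constants:

```python
def coupling_constants(lam100, lam111, C11, C12, C44):
    """Invert lambda_100 = -2 B1 / [3 (C11 - C12)] and lambda_111 = -B2 / (3 C44)."""
    B1 = -1.5 * lam100 * (C11 - C12)   # J/m^3
    B2 = -3.0 * lam111 * C44           # J/m^3
    return B1, B2

# assumed placeholder values for illustration only
B1, B2 = coupling_constants(lam100=-5e-6, lam111=-2e-6, C11=246e9, C12=147e9, C44=125e9)
```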
### Implementations and parallel computations
Both the non-isothermal phase-field and thermo-elasto-plastic models are numerically implemented by the finite element method within the program NIsoS [27, 32], developed by the authors based on the MOOSE framework (Idaho National Laboratory, ID, USA) [75, 76]. 8-node hexahedral Lagrangian elements were chosen to mesh the geometry. A transient solver with the preconditioned Jacobian-free Newton-Krylov method (PJFNK) was employed in both models. Each simulation was executed with 96 AVX512 processors and 3.6 GB of RAM per processor based on MPI parallelization. The associated CALPHAD calculations were conducted with the open-source package PyCALPHAD [77], and the thermodynamic data intercommunication was carried out by customized Python and C++ codes. The DEM-based powder bed generation was conducted with the open-source package YADE [27, 78].
For the SLS simulations, the Cahn-Hilliard equation in Eq. (11a) was solved in a split form, and the constraint on the order parameters was enforced by the penalty method. To reduce computation costs, h-adaptive meshing and time-stepping schemes were used. The initial structured mesh is presented in Fig. S2a1. The additive Schwarz method (ASM) preconditioner with an incomplete LU-decomposition sub-preconditioner was employed for the parallel computation of the vast linear system, seeking a balance between memory consumption per core and computation speed [79]. The backward Euler method was employed for the time differentials. Due to the usage of h-adaptive meshes, the computational costs vary from case to case. The peak DOF number is on the order of 10,000,000 for both the nonlinear system and the auxiliary system. The peak computational consumption is on the order of 10,000 CPU core-hours. More details about the FEM implementation are shown in the supplementary information of Ref. [27].
For the thermo-elasto-plastic simulations, a static structured mesh (Fig. S2a2) was utilized to avoid the hanging nodes generated by the h-adaptive meshing scheme. In that sense, the transient fields \(T\) and \(\rho\) of each calculation step were uni-directionally mapped from the non-isothermal phase-field results (with h-adaptive meshes) onto the static meshes. This is achieved by the MOOSE-embedded SolutionUserObject class and associated functions. The parallel algebraic multigrid preconditioner BoomerAMG was utilized together with the Eisenstat-Walker (EW) method to determine the convergence of the linear system. It is worth noting that, without the EW method, an oscillating residual of the nonlinear iterations would appear in this work. The DOF number of each simulation is on the order of 1,000,000 for the nonlinear system and 10,000,000 for the auxiliary system. The computational consumption is on the order of 1,000 CPU core-hours.
For the \(\gamma\)/\(\gamma^{\prime}\) transition simulations, a static uniform mesh (Fig. S2a3) was utilized, and the second-order backward Euler method was employed for the time differentials. The additive Schwarz method (ASM) preconditioner with a complete LU-decomposition sub-preconditioner was employed for the parallel computation. The simulations were performed in a high-throughput fashion with 100\(\sim\)1,000 transition simulations as a batch for one set of processing parameters. The DOF number of each simulation is on the order of 1,000,000 for the nonlinear system and 10,000,000 for the auxiliary system. The computational consumption of each simulation is 500 CPU core-hours on average.
The micromagnetic simulations were carried out by the FDM-based steepest conjugate gradient (SCG) solver to optimize Eq. (35) in the open-sourced package MuMax\({}^{3}\)[34] with numerical details elaborated in Ref. [80]. The high-throughput GPU-parallel computations were performed with 100\(\sim\)1000 micromagnetic simulations as a batch.
## Data Availability
The authors declare that the data supporting the findings of this study are available within the paper. Source codes of the MOOSE-based application NIsoS and related utilities are curated in the online repository bitbucket.org/mfm_tuda/nisos.git. The simulation results, statistics and metadata are curated in the online dataset (DOI: xx.xxxx/zenodo.xxxxxx).
## Acknowledgements
The authors acknowledge the financial support of the German Science Foundation (DFG) in the framework of the Collaborative Research Centres Transregio 270 (CRC-TRR 270, project number 405553726, sub-projects A06, B07, Z-INF) and 361 (CRC-TRR 361, project number 492661287, sub-project A05), the Research Training Group 2561 (GRK 2561, project number 413956820, sub-project A4), and the Priority Programs 2256 (SPP 2256, project number 441153493) and 2122 (SPP 2122, project number 493889809). The authors also greatly appreciate the access to the Lichtenberg High-Performance Computer and the technical support from the HHLR, Technische Universität Darmstadt, as well as the GPU cluster from the CRC-TRR 270 sub-project Z-INF. Y. Yang also thanks the Master's student Akinola Ayodeji Clement for helping with the SLS and thermo-elasto-plastic simulations.
## Competing Interests
The authors declare no competing financial or non-financial interests.
## Author Contributions
Conceptualization: B.-X.X. and Y.Y.; methodology: Y.Y. and B.-X.X.; software: Y.Y. and X.Z.; validation: T.D.O. and Y.Y.; investigation: Y.Y. and T.D.O.; formal analysis: Y.Y. and T.D.O.; resources, Y.Y. and K.A.; data curation, Y.Y.; writing--original draft preparation, Y.Y. and T.D.O.; writing--review and editing, Y.Y., T.D.O., X.Z., K.A. and B.-X.X.; visualization, Y.Y.; supervision,
B.-X.X.; consultation and discussion, K.A.; funding acquisition, B.-X.X. All authors have read and agreed to the published version of the manuscript.
Figure 1: (a) Ni concentration dependent magnetic properties, incl. magneto-crystalline anisotropic constant \(K_{\mathrm{u}}\), magnetostriction constants \(\lambda_{100}\) and \(\lambda_{111}\), saturation magnetization \(M_{\mathrm{s}}\) and initial relative permeability \(\mu_{\mathrm{r}}\), modified with permission from Balakrishna et al. [22]. (b) Schematics of temperature gradient mechanism (TGM) in explaining the generation of residual stress in heating and cooling modes, where the tensile \(\sigma_{\mathrm{(t)}}\) and compressive \(\sigma_{\mathrm{(c)}}\) stress states are denoted. Inset: A bent AM-processed part due to residual stress. Image is reprinted with permission from Takezawa et al. [26] under the terms of the Creative Commons CC-BY 4.0 license. (c) Bright-field image of a Fe-Ni permalloy microstructure after annealing at 723 K for 50 h. Inset: electron diffraction pattern of the microstructure. SEM and electron diffraction images are reprinted with permission from Ustinovshikov et al. [19].
Figure 2: Multiphysics-multiscale simulation scheme proposed in this work, with the workflow and data interaction among the methods illustrated schematically. All the involved quantities are explicitly introduced in the _Method_ section. Notice that here \(\lambda_{1mn}\) represents \(\lambda_{100}\) and \(\lambda_{111}\).
Figure 3: (a\({}_{1}\))-(a\({}_{4}\)) Simulation results of SLS processing of a Fe\({}_{21.5}\)Ni\({}_{78.5}\) powder bed and substrate with beam power 30 W and scan speed 100 mm s\({}^{-1}\) at different time points; Figs. (b\({}_{1}\))-(b\({}_{4}\)) show results with varying beam power and scan speed at the time point when the laser center locates at \(x=310\) μm. Overheated regions, where \(T>T_{M}\), are drawn with a continuous color map, while areas with \(T\leq T_{M}\) are shown as isotherms. The laser spot characterized by \(D_{\rm L}\) and \(D_{\rm FWHM}\) is also indicated.
Figure 4: (a) Development of simulated von Mises stress (\(\sigma_{\rm e}^{\rm D}\)) and temperature (\(\bar{T}^{\rm D}\)) in domain average vs. time with the profile of \(\sigma_{\rm e}\) at the denoted states shown in (b\({}_{1}\))-(b\({}_{4}\)). (c) Development of simulated accumulated plastic strain in domain average (\(\bar{p}_{\rm e}^{\rm D}\)) vs. time with the profile of \(p_{\rm e}\) at the denoted states shown in (d\({}_{1}\))-(d\({}_{4}\)).
Figure 5: Profiles of (a) the von Mises residual stress \(\sigma_{\rm e}\) and the accumulated plastic strain \(p_{\rm e}\) on the middle section perpendicular to the \(x\)-direction (SD). The selected points P\({}_{1}\)-P\({}_{5}\) are also indicated. The components and effective values of (b\({}_{1}\)) the residual stress and (b\({}_{2}\)) the plastic strain along the z- and y-profiling paths as indicated in (a). (c) The Lame’s stress ellipsoids, representing the stress state at P\({}_{1}\)-P\({}_{5}\), with the principle stresses denoted and colored (marine: tension; red: compression). (d) Ni concentration profile after annealing for 1200 h at \(T_{0}=600\)\(K\). (e) the magneto-elastic coupling energy \(f_{\rm em}\) calculated under a homogeneous in-plane \(\mathbf{m}\) configuration with \(\vartheta=45^{\circ}\) w.r.t. the easy axis \(\mathbf{u}\). The corresponding effective coupling field \(\mathbf{B}_{\rm em}\) is also illustrated. (f) The average hysteresis loop of 10 cycles each performed on the nanostructures (NS) at P\({}_{1}\)-P\({}_{5}\). The reference curve (Ref.) is performed on the stress-free homogeneous nanostructure with only \(\gamma^{\prime}\) phase and the composition Fe\({}_{21.5}\)Ni\({}_{78.5}\).
Figure 6: Contour maps of (a) the width \(b\) and (b) the normalized depth \(d\) of the fusion zone w.r.t. the beam power and the scan speed. The dotted lines represent isolines of specific energy input \(U_{\mathrm{V}}\). (c\({}_{1}\))-(c\({}_{5}\)) Fusion zone geometries on the powder bed with different processing parameters. Inset: \(b\) and \(d\) measured from the fusion zone geometries.
Figure 7: Contour maps of (a) average residual stress \(\sigma_{\rm e}\) and (b) average plastic strain \(\bar{p}_{\rm e}\) in the fusion zone w.r.t. the beam power and the scan speed. The dotted lines represent isolines of different specific energy inputs \(U_{\rm V}\). Profiles of (c\({}_{1}\))-(c\({}_{5}\)) residual stress and (d\({}_{1}\))-(d\({}_{5}\)) plastic strain are plotted for different processing parameters. The boundary of the fusion zone is also indicated.
Figure 8: (a) Contour map of the average coercivity \(\bar{H}_{\rm c}\) in the fusion zone w.r.t. the beam power and the scan speed. The dotted lines represent isolines of different specific energy inputs \(U_{\rm V}\). (b) Nonlinear regression of \(\bar{H}_{\rm c}\) on the specific energy input \(U_{\rm V}\) and the average residual stress \(\bar{\sigma}_{\rm e}\), with the regression parameters indicated correspondingly. It shows that \(\bar{H}_{\rm c}\) follows a scaling rule on \(U_{\rm V}\) and an exponential growth law on \(\bar{\sigma}_{\rm e}\), with correlation coefficients \(R^{2}=86.54\%\) and \(91.31\%\), respectively. Profiles of (c\({}_{1}\))-(c\({}_{5}\)) the local coercivity; (d\({}_{1}\))-(d\({}_{5}\)) the local volume fraction of the \(\gamma^{\prime}\) phase; and (e\({}_{1}\))-(e\({}_{5}\)) the local residual stress on the mid-section of the fusion zone for different processing parameters.
Figure 9: (a) Schematic of the physical processes during SLS, i.e., localized melting, multiple mass transfer paths, and grain boundary migration. Here, the bulk diffusion GB-N represents the path from grain boundary to neck through the bulk, while SF-N presents the path from the surface to neck through the bulk. (b) Profiles of different OPs across corresponding phases with the corresponding scales. (c) The landscape of mesoscopic stress-free \(f_{\text{tot}}\) across the surface and the \(\gamma/\gamma\) grain boundaries at various temperatures. (d) The landscape of nanoscopic stress-free \(f_{\text{tot}}\) across the \(\gamma/\gamma^{\prime}\) coherent interface at 600 K. Transition path, \(f_{\text{ch}}^{\gamma}\), \(f_{\text{ch}}^{\gamma^{\prime}}\), and the energy band of the four-sublattice states \(\{\Delta f_{\text{4s}}\}=\{f_{\text{4s}}(Y^{(s)})-f_{\text{4s}}(Y^{(s)}=X^{ \gamma^{\prime}})\}\) are also denoted. Inset: sublattice sites for the \(L_{12}\) ordering of Fe-Ni. (e\({}_{1}\))-(e\({}_{2}\)) Energy surface of magneto-crystalline anisotropy energy, which alters w.r.t. the sign of anisotropy constant \(K_{\text{u}}\), i.e., (e\({}_{1}\)) the easy-axis anisotropy when \(K_{\text{u}}>0\) and (e\({}_{2}\)) the easy-plane anisotropy when \(K_{\text{u}}<0\).
Figure 10: (a) Workflow for the calibration of the atom mobility by iteratively performing regression on the factor \(\omega\) from \(\Psi_{\gamma^{\prime}}(\omega t)\) w.r.t. the experimentally measured \(\Psi_{\gamma^{\prime}}^{\text{exp}}(t)\). The \(\omega\) that differs by less than 0.5% from the last iteration is identified as the converged value for the current temperature. (b) Simulated \(\Psi_{\gamma^{\prime}}(t)\) at 784 K with the mobilities before (\(\omega=1\)) and after (\(\omega=3\)) calibration, cf. \(\Psi_{\gamma^{\prime}}^{\text{exp}}(t)\) from [29], with the nanostructures shown at the corresponding time points. (c) Simulated \(\Psi_{\gamma^{\prime}}(t)\) vs \(\Psi_{\gamma^{\prime}}^{\text{exp}}(t)\) at different temperatures. Inset: Regression of the calibrated \(\omega(T)\) to the Arrhenius equation, which shows consistency. Notice that both experiments and simulations are performed with the composition Fe\({}_{25}\)Ni\({}_{75}\).
|
2306.07698
|
Public-Key Encryption with Quantum Keys
|
In the framework of Impagliazzo's five worlds, a distinction is often made
between two worlds, one where public-key encryption exists (Cryptomania), and
one in which only one-way functions exist (MiniCrypt). However, the boundaries
between these worlds can change when quantum information is taken into account.
Recent work has shown that quantum variants of oblivious transfer and
multi-party computation, both primitives that are classically in Cryptomania,
can be constructed from one-way functions, placing them in the realm of quantum
MiniCrypt (the so-called MiniQCrypt). This naturally raises the following
question: Is it possible to construct a quantum variant of public-key
encryption, which is at the heart of Cryptomania, from one-way functions or
potentially weaker assumptions?
In this work, we initiate the formal study of the notion of quantum
public-key encryption (qPKE), i.e., public-key encryption where keys are
allowed to be quantum states. We propose new definitions of security and
several constructions of qPKE based on the existence of one-way functions
(OWF), or even weaker assumptions, such as pseudorandom function-like states
(PRFS) and pseudorandom function-like states with proof of destruction
(PRFSPD). Finally, to give a tight characterization of this primitive, we show
that computational assumptions are necessary to build quantum public-key
encryption. That is, we give a self-contained proof that no quantum public-key
encryption scheme can provide information-theoretic security.
|
Khashayar Barooti, Alex B. Grilo, Loïs Huguenin-Dumittan, Giulio Malavolta, Or Sattath, Quoc-Huy Vu, Michael Walter
|
2023-06-13T11:32:28Z
|
http://arxiv.org/abs/2306.07698v2
|
# Public-Key Encryption with Quantum Keys
###### Abstract
In the framework of Impagliazzo's five worlds, a distinction is often made between two worlds, one where public-key encryption exists (Cryptomania), and one in which only one-way functions exist (MiniCrypt). However, the boundaries between these worlds can change when quantum information is taken into account. Recent work has shown that quantum variants of oblivious transfer and multi-party computation, both primitives that are classically in Cryptomania, can be constructed from one-way functions, placing them in the realm of quantum MiniCrypt (the so-called MiniQCrypt). This naturally raises the following question: _Is it possible to construct a quantum variant of public-key encryption, which is at the heart of Cryptomania, from one-way functions or potentially weaker assumptions?_
In this work, we initiate the formal study of the notion of quantum public-key encryption (qPKE), i.e., public-key encryption where keys are allowed to be quantum states. We propose new definitions of security and several constructions of qPKE based on the existence of one-way functions (OWF), or even weaker assumptions, such as pseudorandom function-like states (PRFS) and pseudorandom function-like states with proof of destruction (PRFSPD). Finally, to give a tight characterization of this primitive, we show that computational assumptions are necessary to build quantum public-key encryption. That is, we give a self-contained proof that no quantum public-key encryption scheme can provide information-theoretic security.
## 1 Introduction
The use of quantum resources to enable cryptographic tasks under weaker assumptions than classically needed (or even _unconditionally_) was actually among the first concrete proposals of quantum computing, with the seminal quantum money protocol of Wiesner [26] and the key-exchange protocol of Bennett and Brassard [1]. Ever since, the field of quantum cryptography has seen a surge of primitives that leverage quantum information to perform tasks that classically require stronger assumptions, or are downright impossible. Recent works [1, 2] have shown that there exist quantum protocols for oblivious transfer, and therefore arbitrary multi-party computation (MPC), based solely on the existence of one-way functions (OWF) [1, 2], or pseudorandom states (PRS) [10], which potentially entail even weaker computational assumptions [11, 2]. It is well-known that, classically, oblivious transfer and MPC are "Cryptomania" objects, i.e., they can only be constructed from more structured assumptions that imply public-key encryption (PKE). Thus, the above results seem to challenge the boundary between Cryptomania and MiniCrypt, in the presence of quantum information. Motivated by this state of affairs, in this work we investigate the notion of _PKE itself_, the heart of Cryptomania, through the lenses of quantum computing. That is, we ask the following question:
_Does public-key encryption (PKE) belong to MiniQCrypt?_
Known results around this question are mostly negative: It is known that PKE cannot be constructed in a black-box manner from OWFs [14], and this result has been recently re-proven in the more challenging setting where the encryption or decryption algorithms are quantum [1]. However, a tantalizing possibility left open by these works is to realize PKE schemes from OWFs (or weaker assumptions), where public-key or ciphertexts are quantum states.
### Our results
In this work we initiate the systematic study of quantum public-key encryption (qPKE), i.e., public-key encryption where public-keys and ciphertexts are allowed to be quantum states. We break down our contributions as follows.
_1. Definitions._ We provide a general definitional framework for qPKE, where both the public-key and ciphertext might be general quantum states. In the classical setting, there is no need to provide oracle access to the encryption algorithm, since the public-key can be used to implement it. In contrast, if the public-key is a quantum state, it might be measured during the encryption procedure, and the ciphertexts might depend on the measurement outcome. In fact, this is the approach taken in some of our constructions. This motivates a stronger security definition, similar to the classical counterpart, in which the adversary gets additional access to an encryption oracle that uses the same quantum public-key that is used during the challenge phase. We define IND-CPA-EO (respectively, IND-CCA-EO) security by adding the encryption oracle (EO) to the standard IND-CPA (respectively, IND-CCA) security game.
_2. Constructions._ With our new security definition at hand, we propose three protocols for implementing qPKE from OWF and potentially weaker assumptions, each with its own different advantages and disadvantages. More concretely, we show the existence of:
1. A qPKE scheme with quantum public-keys and classical ciphertexts that is IND-CCA-EO6 secure, based on post-quantum OWF, in Section 4.1. Footnote 6: Throughout this paper, unless explicitly specified, by IND-CCA we refer to the notion of adaptive IND-CCA2 security.
2. A qPKE scheme with quantum public-key and quantum ciphertext that is IND-CCA1 secure, based on pseudo-random function-like states (PRFS) with super-logarithmic input-size7, in Section 4.2. Since this scheme is not EO secure, each quantum public-key enables the encryption of a single message. Footnote 7: Note that PRS implies PRFS with logarithmic size inputs, but no such implication is known for super-logarithmic inputs.
3. A qPKE scheme with quantum public-key and classical ciphertext that is IND-CPA-EO secure based on pseudo-random function-like states with proof of destruction (PRFSPDs), in Section 5.
We wish to remark that it has been recently shown that OWF imply PRFS with super-logarithmic input-size [1] and PRFSPDs [2]. Therefore, the security of the second and third protocols is based on a potentially weaker cryptographic assumption than the first one. Furthermore, PRFS with super-logarithmic input-size are _oracle separated_ from one-way functions [15]; therefore, our second result shows a black-box separation between a certain form of quantum public-key encryption and one-way functions. On the other hand, for the other two constructions, even if the
public-key is a quantum state, the ciphertexts are classical and, furthermore, one quantum public-key can be used to encrypt multiple messages. The first protocol is much simpler to describe and understand since it only uses standard (classical) cryptographic objects. Moreover, we show that this scheme guarantees the notion of adaptive CCA2 security and is the only scheme that achieves perfect correctness.
_3. Lower Bounds._ To complete the picture, we demonstrate that _information-theoretically secure_ qPKE does not exist. Due to the public-keys being quantum states, this implication is much less obvious than for the classical case. In fact, some of the existing constructions of qPKE [10] have been conjectured to be unconditionally secure, a conjecture that we invalidate in this work. While this general statement follows by known implications in the literature (see Section 6 for more details), in this work we present a self-contained proof of this fact, borrowing techniques from shadow tomography, which we consider to be of independent interest.
### Technical overview
In this section, we provide a technical overview of our results. In Section 1.2.1, we explain the challenges and choices to define qPKE and its security definition. In Section 1.2.2, we present 3 instantiations of qPKE, each based on a different assumption and with different security guarantees. Ultimately, Section 1.2.3 is dedicated to the impossibility of information-theoretically secure qPKE and a high-level overview of the proof technique.
#### 1.2.1 Definitions of qPKE
In order to consider public-key encryption schemes with quantum public-keys, we need to revisit the traditional security definitions. In the case of quantum public-keys, there are several immediate issues that require revision.
The first issue is related to the access the adversary is given to the public-key. In the classical-key case (even with quantum ciphertexts), the adversary is given the classical public-key pk. Given a single quantum public-key, one cannot create an arbitrary number of copies of the quantum public-key, due to no-cloning. Hence, to naturally extend notions such as IND-CPA security, we provide multiple copies of the quantum public-key to the adversary (by means of oracle access to the quantum public-key generation algorithm).
The second issue concerns the quantum public-key's _reusability_. Classically, one can use the public-key to encrypt multiple messages. With quantum public-keys, this might not be the case: the quantum public-key might be consumed during the encryption. In a non-reusable scheme, the user needs a fresh quantum public-key for every plaintext they wish to encrypt. This is not only a theoretical concern: in the PRFS-based construction (see Section 4.2), part of the quantum public-key is sent as the (quantum) ciphertext, so clearly, this construction is _not_ reusable.
Thirdly, it could be the case that in a reusable scheme, each encryption call changes the public-key state \(\rho_{\textit{qpk}}\) in an irreversible way. Hence, we make a syntactic change: \(\mathsf{Enc}(\rho_{\textit{qpk}},m)\) outputs \((\rho^{\prime}_{\textit{qpk}},c)\), where \(c\) is used as the ciphertext and \(\rho^{\prime}_{\textit{qpk}}\) is used as the key to encrypt the next message. Note that in this scenario the updated public-key is not publicly available anymore and is only held by the party who performed the encryption.
Lastly, the syntactic change mentioned above also has security effects. Recall that classically, there is no need to give the adversary access to an encryption oracle, since the adversary can generate
encryptions on their own. Alas, with quantum public-keys, the distribution of ciphertexts might depend on the changes that were made to the quantum public-key by the challenger whenever the key is used to encrypt several messages. Therefore, for reusable schemes, we define two new security notions, denoted CPA-EO and CCA-EO, that are similar to CPA and CCA but where the adversary is given access to an encryption oracle (EO). We note that there are several works considering notions of chosen-ciphertext security in the quantum setting, since it is not clear how to prevent the adversary from querying the challenge ciphertext if it contains a quantum state. However, we only consider CCA-security for schemes with classical ciphertexts, and therefore this issue does not appear in this work.
Pure vs Mixed States.We mention explicitly that we require our public-keys to be _pure states_. This is motivated by the following concern: there is no general method to authenticate quantum states. One proposal to ensure that the certificate authority (CA) is sending the correct state is to distribute various copies of the keys to different CAs and test whether they are all sending the same state [14]. This ensures that, as long as at least one CA is honest, the user will reject a malformed key with some constant probability. However, this argument crucially relies on the public-key being a pure state (in which case comparison can be implemented with a SWAP-test). On the other hand, if the public-key was a mixed state, there would be no way to run the above test without false positives.
We also mention that, if mixed states are allowed, then there is a trivial construction of qPKE from any given symmetric encryption scheme (\(\mathsf{SKE.key-gen},\mathsf{SKE.Enc},\mathsf{SKE.Dec}\)), as also observed in [13, Theorem C.6], which we describe in the following. To generate the keys, we use the output of \(\mathsf{SKE.key-gen}\) as the secret-key and use it to create the uniform mixture
\[\frac{1}{2^{n}}\sum_{x\in\{0,1\}^{n}}|x\rangle\langle x|\otimes|\mathsf{Enc}_{ sk}(x)\rangle\langle\mathsf{Enc}_{sk}(x)| \tag{1}\]
as the public-key. The ciphertext corresponding to a message \(m\) is given by \((\mathsf{Enc}_{x}(m),\mathsf{Enc}_{sk}(x))\). To decrypt, the decryptor would first recover \(x\) by decrypting the second element in the ciphertext using \(sk\), and then recover \(m\) by decrypting the first item using \(x\) as the secret key.
#### 1.2.2 Constructions for qPKE
As previously mentioned, we propose in this work three schemes for qPKE, based on three different assumptions, each providing a different security guarantee.
qPKE from OWF.Our first instantiation of qPKE is based on the existence of post-quantum OWFs. For this construction, we aim for the strong security notion of indistinguishability against adaptive chosen ciphertext attacks with encryption oracle referred to as IND-CCA-EO. We start with a simple bit-encryption construction that provides IND-CCA security and we discuss how one can modify the scheme to encrypt multi-bit messages and also provide EO security.
Our first scheme assumes the existence of a _quantum-secure pseudorandom function (PRF)_, which can be built from quantum-secure one-way functions [16]. Given a PRF ensemble \(\{f_{k}\}_{k}\), the public key consists of a pair of pure quantum states \(\textit{qpk}=(|\textit{qpk}_{0}\rangle,|\textit{qpk}_{1}\rangle)\) and the secret key consists of a pair of bit-strings \(\mathsf{dk}=(\mathsf{dk}_{0},\mathsf{dk}_{1})\) such that, for all \(b\in\{0,1\}\),
\[|\textit{qpk}_{b}\rangle=\frac{1}{\sqrt{2^{n}}}\sum_{x\in\{0,1\}^{n}}|x,f_{ \mathsf{dk}_{b}}(x)\rangle,\]
where \(f_{k}\) denotes the quantum-secure PRF keyed by \(k\). To encrypt a bit \(b\), one simply measures all qubits of \(|\textit{qpk}_{b}\rangle\) in the computational basis. The result takes the form \((x,f_{\mathsf{dk}_{b}}(x))\) for some uniformly random \(x\in\{0,1\}^{n}\) and this is returned as the ciphertext, i.e., \((\mathpzc{qc}_{0},\mathpzc{qc}_{1})=(x,f_{\mathsf{dk}_{b}}(x))\).
To decrypt a ciphertext \((\mathpzc{qc}_{0},\mathpzc{qc}_{1})\), we apply both \(f_{\mathsf{dk}_{0}}\) and \(f_{\mathsf{dk}_{1}}\) to \(\mathpzc{qc}_{0}\) and return the value of \(b\in\{0,1\}\) such that \(f_{\mathsf{dk}_{b}}(\mathpzc{qc}_{0})=\mathpzc{qc}_{1}\). In case this does happen for neither or both of the keys, the decryption aborts.
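As a purely classical illustration of this bit-encryption scheme, the sketch below emulates the measurement of \(|\textit{qpk}_{b}\rangle\) by sampling a uniformly random \(x\) and evaluating the PRF on it; HMAC-SHA256 stands in for the quantum-secure PRF \(f_{k}\), and the encryptor is handed \(\mathsf{dk}\) only to emulate the measurement outcome (in the actual scheme the pair \((x,f_{\mathsf{dk}_{b}}(x))\) is obtained from the quantum public-key without any knowledge of \(\mathsf{dk}\)). Both choices are assumptions made only for the sake of the example.

```python
import hmac, hashlib, secrets

def prf(key: bytes, x: bytes) -> bytes:
    """Stand-in for the quantum-secure PRF f_k (HMAC-SHA256, illustrative only)."""
    return hmac.new(key, x, hashlib.sha256).digest()

def keygen():
    # dk = (dk_0, dk_1); the quantum public-keys |qpk_0>, |qpk_1> have no classical representation
    return secrets.token_bytes(32), secrets.token_bytes(32)

def encrypt(dk, b: int, n: int = 16):
    # measuring |qpk_b> in the computational basis yields a uniformly random x and f_{dk_b}(x)
    x = secrets.token_bytes(n)
    return x, prf(dk[b], x)

def decrypt(dk, ct):
    x, y = ct
    matches = [b for b in (0, 1) if hmac.compare_digest(prf(dk[b], x), y)]
    return matches[0] if len(matches) == 1 else None   # abort on zero or two matches

dk = keygen()
assert decrypt(dk, encrypt(dk, 1)) == 1
```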
The IND-CCA security of this simple bit-encryption scheme can be proven with a hybrid argument (see Appendix A). However, there are a few caveats to the scheme that can be pointed out. First, the scheme is not reusable. It is easy to see that after using a public-key for one encryption, the public-key state collapses, meaning that all subsequent encryption calls are derandomized. This means that if the same public-key is reused, the scheme cannot even guarantee IND-CPA security, as the encryption becomes deterministic.
The second issue is lifting this CCA-secure bit-encryption scheme to a many-bit CCA-secure encryption scheme. Note that although not trivial, as proven by Myers and Shelat [14], classically it is possible to construct CCA-secure many-bit encryption from CCA-secure bit-encryption. However, the argument cannot be extended to qPKE in a generic way. The main issue is that the construction from [14], similar to the Fujisaki-Okamoto transform, derandomizes the encryption procedure for some fixed random coins. Later these fixed random coins are encrypted and attached to the ciphertext, so that the decryptor can re-encrypt the plaintext to make sure they were handed the correct randomness. Looking at our construction, it is quite clear that it is not possible to derandomize the encryption procedure as the randomness is a consequence of the measurement.
Let us show how the same approach can be modified to circumvent the issues mentioned. Our main observation is that we can use public-keys of the form mentioned before for a key agreement stage and then use the agreed key to encrypt many-bit messages with a symmetric-key encryption scheme (SKE). Let us elaborate. Let \(\{f_{k}\}_{k}\) be a PRF family and \((\mathsf{SE.Enc},\mathsf{SE.Dec})\) be a symmetric-key encryption scheme. Note that quantum-secure one-way functions imply a quantum-secure PRF [13], and post-quantum IND-CCA symmetric encryption [1]8. Consider the following scheme: the secret key \(\mathsf{dk}\) is a uniformly random key for the PRF, and for a fixed \(\mathsf{dk}\), the quantum public-key state is
Footnote 8: IND-CCA SKE can be built from an IND-CPA SKE and a MAC using the encrypt-then-MAC paradigm.
\[|\textit{qpk}_{\mathsf{dk}}\rangle=\frac{1}{\sqrt{2^{\lambda}}}\sum_{x\in\{0,1\}^{\lambda}}|x\rangle|f_{\mathsf{dk}}(x)\rangle. \tag{2}\]
The encryption algorithm will then measure \(|\textit{qpk}_{\mathsf{dk}}\rangle\) in the computational basis, leading to the outcome \((x^{*},y^{*}=f_{\mathsf{dk}}(x^{*}))\). The ciphertext of a message \(m\) is given by \((x^{*},\mathsf{SE.Enc}(y^{*},m))\). To decrypt a ciphertext \((\hat{x},\hat{c})\), we first compute \(\hat{y}=f_{\mathsf{dk}}(\hat{x})\) and return \(\hat{m}=\mathsf{SE.Dec}(\hat{y},\hat{c})\).
We emphasize that this scheme is reusable since it allows the encryption of many messages using the same measurement outcome \((x^{*},f_{\mathsf{dk}}(x^{*}))\). Using a hybrid argument, it can be shown that if the underlying SKE guarantees IND-CCA security, this construction fulfills our strongest security notion, i.e. IND-CCA-EO security. A formal description of the scheme, along with a security proof can be found in Section 4.1.
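To illustrate the hybrid structure of this reusable scheme, the sketch below emulates the single measurement of \(|\textit{qpk}_{\mathsf{dk}}\rangle\) by sampling \((x^{*},y^{*}=f_{\mathsf{dk}}(x^{*}))\) and then encrypts arbitrarily many messages under \(y^{*}\). HMAC-SHA256 stands in for the PRF and a toy encrypt-then-MAC construction stands in for the IND-CCA SKE; both are illustrative assumptions rather than the concrete instantiations used in the paper.

```python
import hmac, hashlib, secrets

def prf(key, x):
    return hmac.new(key, x, hashlib.sha256).digest()

def se_enc(k, m):
    """Toy encrypt-then-MAC stand-in for SE.Enc (messages up to 32 bytes, illustrative only)."""
    nonce = secrets.token_bytes(16)
    stream = hashlib.sha256(k + nonce).digest()
    c = bytes(a ^ b for a, b in zip(m, stream))
    tag = hmac.new(k, nonce + c, hashlib.sha256).digest()
    return nonce, c, tag

def se_dec(k, ct):
    nonce, c, tag = ct
    if not hmac.compare_digest(tag, hmac.new(k, nonce + c, hashlib.sha256).digest()):
        return None
    return bytes(a ^ b for a, b in zip(c, hashlib.sha256(k + nonce).digest()))

dk = secrets.token_bytes(32)         # secret decryption key (PRF key)
x_star = secrets.token_bytes(16)     # emulated measurement outcome of |qpk_dk>
y_star = prf(dk, x_star)             # key material held by the encryptor after the measurement

def encrypt(m: bytes):               # ciphertext (x*, SE.Enc(y*, m)); reusable for many messages
    return x_star, se_enc(y_star, m)

def decrypt(ct):                     # decryptor recomputes y = f_dk(x) and runs SE.Dec
    x, c = ct
    return se_dec(prf(dk, x), c)

assert decrypt(encrypt(b"hello")) == b"hello"
```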
QPKE from PRFS.The second construction we present in this paper is an IND-CCA1 secure public-key scheme based on the existence of pseudorandom function-like state generators. Our
approach is based on first constructing bit-encryption; how to lift that restriction is discussed in Section 4.2. The ciphertexts generated by our scheme are quantum states, and since the public-keys of this construction are not reusable, we do not consider the notion of EO security. A family of states \(\{|\psi_{k,x}\rangle\}_{k,x}\) is pseudo-random function-like [1] if
1. There is a quantum polynomial-time algorithm \(\mathsf{Gen}\) such that \[\mathsf{Gen}(k,\sum_{x}\alpha_{x}|x\rangle)=\sum_{x}\alpha_{x}|x\rangle|\psi_{ k,x}\rangle,\text{ and}\]
2. No \(\mathsf{QPT}\) adversary can distinguish \((|\psi_{1}\rangle,...,|\psi_{\ell}\rangle)\) from \((|\phi_{1}\rangle,...,|\phi_{\ell}\rangle)\), where \(|\psi_{i}\rangle=\sum_{x}\alpha_{x}^{i}|x\rangle|\psi_{k,x}\rangle\), \(|\phi_{i}\rangle=\sum_{x}\alpha_{x}^{i}|x\rangle|\phi_{x}\rangle\), and \(\{|\phi_{x}\rangle\}_{x}\) are Haar random states and the states \(|\sigma_{i}\rangle=\sum_{x}\alpha_{x}^{i}|x\rangle\) are chosen by the adversary.
We continue by providing a high-level description of the scheme. The key generation algorithm picks a uniform PRFS key \(\mathsf{dk}\) and generates the corresponding public-keys as stated below:
\[\frac{1}{\sqrt{2^{\lambda}}}\sum_{x\in\{0,1\}^{\lambda}}|x\rangle|\psi_{ \mathsf{dk},x}\rangle^{\otimes n}, \tag{3}\]
where \(\{|\psi_{k,x}\rangle\}_{k,x}\) is a PRFS family, the size of the input \(x\) is super-logarithmic in the security parameter and \(n\) is a polynomial in the security parameter.
qPKE from PRFSPD.The third construction is based on pseudorandom function-like states with proofs of destruction (PRFSPDs) and yields an IND-CPA-EO secure scheme with classical ciphertexts.
The encryptor will then measure the first register of \(|\textit{qpk}\rangle\), and the post-measurement state is \(|x^{*}\rangle|\psi_{\mathsf{dk},x^{*}}\rangle.\) The encryptor will then generate a (classical) proof of destruction \(\pi=\mathpzc{Destruct}(|\psi_{\mathsf{dk},x^{*}}\rangle).\) The encryption procedure also picks \(\mathsf{dk}_{1}\) uniformly at random, generates \(|\psi_{\mathsf{dk}_{1},x^{*}}\rangle\), and generates the proof of destruction \(\pi^{\prime}=\mathpzc{Destruct}(|\psi_{\mathsf{dk}_{1},x^{*}}\rangle).\) The corresponding ciphertext for a bit \(b\) is given by \(c=(x^{*},y),\) where
\[y=\begin{cases}\pi^{\prime},&\text{if }b=0\\ \pi,&\text{if }b=1\end{cases}.\]
The decryptor will receive some value \((\hat{x},\hat{y})\) and decrypt the message as \(\hat{b}=\mathpzc{Ver}(\mathsf{dk},\hat{x},\hat{y}).\) The proof of security of the aforementioned construction follows from a hybrid argument reminiscent of the security proofs of the previous schemes (see Section 5). Notice that repeating such a process in parallel trivially gives one-shot security for the encryption of a string \(m\); moreover, such an encryption is classical. Therefore, in order to achieve an IND-CPA-EO secure qPKE scheme, we can actually encrypt a secret key \(\mathsf{sk}\) that is chosen by the encryptor, and send the message encrypted under \(\mathsf{sk}.\) We leave the details of such a construction and its proof of security to Section 5.
#### 1.2.3 Impossibility of Information-Theoretically Secure qPKE
So far, we have established that qPKE can be built from assumptions weaker than the ones required for the classical counterpart, and potentially even weaker than those needed to build secret-key encryption classically. This naturally leads to the question of whether it is possible to build an information-theoretically secure qPKE. In the following, we present a self-contained proof that this is impossible, using techniques from the literature on shadow tomography. Although proving the impossibility for classical PKE is immediate, there are a few challenges when trying to prove a result of a similar flavor for qPKE. Even when considering security against a computationally unbounded adversary, such an adversary still has one limitation, namely, it is only provided with polynomially many copies of the public-key.
The first step of the proof is reducing winning the IND-CPA game to finding a secret-key/public-key pair \((\mathsf{dk},|\textit{qpk}_{\mathsf{dk}}\rangle)\) such that
\[\langle\textit{qpk}^{*}|\textit{qpk}_{\mathsf{dk}}\rangle\approx 1.\]
In other words, we show that if \(|\textit{qpk}_{\mathsf{dk}}\rangle\) is relatively close to \(|\textit{qpk}^{*}\rangle\), there is a good chance that \(\mathsf{dk}\) can decrypt ciphertexts encrypted by \(|\textit{qpk}^{*}\rangle\) correctly. A formal statement and the proof of this argument can be found in Lemma 1.
Given this lemma, the second part of the proof consists in constructing an adversary that takes polynomially many copies of \(|\textit{qpk}^{*}\rangle\) as input and outputs \((\mathsf{dk},|\textit{qpk}_{\mathsf{dk}}\rangle)\) such that \(|\textit{qpk}_{\mathsf{dk}}\rangle\) is relatively close to \(|\textit{qpk}^{*}\rangle.\) The technique to realize this adversary is _shadow tomography_, which provides procedures to estimate the values \(\langle\textit{qpk}_{\mathsf{dk}}|\textit{qpk}^{*}\rangle\) for all \((|\textit{qpk}_{\mathsf{dk}}\rangle,\mathsf{dk})\) pairs. Note that doing this naively, i.e., by SWAP-testing multiple copies of \(|\textit{qpk}^{*}\rangle\) with each \(|\textit{qpk}_{\mathsf{dk}}\rangle,\) would require exponentially many copies of the public-key \(|\textit{qpk}^{*}\rangle.\) The way we circumvent this problem is by using a recent result by Huang, Kueng, and Preskill [10]. Informally, this theorem states that for \(M\) rank-1 projective measurements \(O_{1},\ldots,O_{M}\) and an unknown \(n\)-qubit state \(\rho,\) it is possible to estimate \(\operatorname{Tr}(O_{i}\rho)\) for all \(i,\) up to precision \(\epsilon,\) by only performing \(T=O(\log(M)/\epsilon^{2})\) single-copy random Clifford measurements on \(\rho.\)
Employing this theorem, we show that a computationally unbounded adversary can estimate all the values \(\langle\textit{qpk}_{\mathsf{dk}}|\textit{qpk}^{*}\rangle\) from random Clifford measurements on polynomially many copies of \(|\textit{qpk}^{*}\rangle.\)
Having the estimated values of \(\langle\textit{qpk}_{\mathsf{dk}}|\textit{qpk}^{*}\rangle\), the adversary picks a \(\mathsf{dk}\) such that the estimated value is relatively large and uses this key to decrypt the challenge ciphertext. Now, invoking Lemma 1, we conclude that the probability of this adversary winning the IND-CPA game is significantly more than \(1/2\).
### Related works
The notion of qPKE was already considered in the literature, although without introducing formal security definitions. For instance, Gottesman [10] proposed a candidate construction in an oral presentation, without a formal security analysis. The scheme has quantum public-keys and quantum ciphertexts, and consumes the public-key during encryption. Kawachi et al. [11] proposed a construction of qPKE (with quantum keys and ciphertexts) from a newly introduced hardness assumption, related to the graph automorphism problem. [12] defines and constructs a public-key encryption scheme where the keys, plaintexts and ciphertexts are classical, but the algorithms are quantum (the key-generation uses Shor's algorithm). One of the contributions of this work is to provide a unifying framework for these results, as well as to improve on them in terms of computational assumptions and security guarantees.
In [13], the authors define and provide impossibility results regarding encryption with quantum public-keys. Classically, it is easy to show that a public-key encryption scheme cannot have deterministic ciphertexts; in other words, encryption must use randomness. They show that this is also true for a quantum encryption scheme with quantum public-keys. In [14], a secure encryption scheme with quantum public keys based on the LWE assumption is introduced. That work only shows (passive) indistinguishability security, not IND-CPA security.
In [13, 14], the authors study digital signatures with quantum signatures and, more importantly in the context of this work, with quantum public-keys.
The study of quantum pseudorandomness and its applications has recently experienced rapid advancements. One of the most astonishing aspects is that PRS (pseudorandom states) and some of their variants are believed to be weaker than one-way functions: they are implied by one-way functions, and there exists a black-box separation between them. Nevertheless, it has been demonstrated that these primitives suffice for many applications in Minicrypt and even beyond it. A graph presenting the various notions of quantum pseudorandomness and their applications is available at [https://sattath.github.io/qcrypto-graph/](https://sattath.github.io/qcrypto-graph/).
### Concurrent and subsequent work
This work is a merge of two concurrent and independent works [1, 1], with a unified presentation and more results.
In a concurrent and independent work, Coladangelo [15] proposes a qPKE scheme with a construction that is very different from ours and uses a quantum trapdoor function, a new notion first introduced in that work. Their construction is based on the existence of quantum-secure OWF. However, in their construction each quantum public-key can be used to encrypt only a single message (compared to our construction from OWF, where the public-key can be used to encrypt multiple messages), and the ciphertexts are quantum (whereas our construction from OWF has classical ciphertexts). They do not consider the stronger notion of IND-CCA security.
Our paper has already generated interest in the community: two follow-up works [12, 13] consider a _stronger_ notion of qPKE where the public-key consists of a classical and a quantum part,
and the adversary is allowed to tamper arbitrarily with the quantum part (but not with the classical component).10 The authors provide constructions assuming quantum-secure OWF. While their security definition is stronger, we remark that our approach is more general, as exemplified by the fact that we propose constructions from potentially weaker computational assumptions. In [1], the authors give another solution for the quantum public-key distribution problem using time-dependent signatures, which can be constructed from quantum-secure OWF, but the (classical) verification key needs to be continually updated.
Footnote 10: Because of this stronger security definition, here the notion of public-keys with mixed states is meaningful since there is an alternative procedure to ensure that the key is well-formed (e.g., signing the classical component).
## 2 Preliminaries
### Notation
Throughout this paper, \(\lambda\) denotes the security parameter. The notation \(\mathsf{negl}(\lambda)\) denotes any function \(f\) such that \(f(\lambda)=\lambda^{-\omega(1)}\), and \(\mathsf{poly}(\lambda)\) denotes any function \(f\) such that \(f(\lambda)=\lambda^{\mathcal{O}(1)}\). When sampling a value \(a\) uniformly at random from a set \(\mathcal{U}\), we employ the notation \(a\xleftarrow{\$}\mathcal{U}\); when \(a\) is the output of a (possibly probabilistic or quantum) algorithm \(A\), we write \(a\leftarrow A\). PPT and QPT stand for probabilistic polynomial-time and quantum polynomial-time, respectively.
**Fact 1**.: _Let \(f\colon\{0,1\}^{n}\to\{0,1\}^{m}\) be a function which is efficiently computable by a classical circuit. Then there exists a unitary \(U_{f}\) on \((\mathbb{C}^{2})^{\otimes n+m}\) which is efficiently computable by a quantum circuit (possibly using ancillas) such that, for all \(x\in\{0,1\}^{n}\) and \(y\in\{0,1\}^{m}\),_
\[U_{f}\colon|x\rangle|y\rangle\mapsto|x\rangle|y\oplus f(x)\rangle.\]
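For concreteness, the following toy numpy sketch (an illustration we add here, not part of the formalism above) builds the unitary \(U_{f}\) of Fact 1 as a \(2^{n+m}\)-dimensional permutation matrix for a small example function \(f\) and checks its action on a uniform superposition of inputs.

```python
import numpy as np

n, m = 2, 2
f = lambda x: (3 * x + 1) % 4          # toy function {0,1}^2 -> {0,1}^2, encoded as integers

dim = 2 ** (n + m)
U_f = np.zeros((dim, dim))
for x in range(2 ** n):
    for y in range(2 ** m):
        # |x>|y>  ->  |x>|y XOR f(x)>
        U_f[x * 2 ** m + (y ^ f(x)), x * 2 ** m + y] = 1.0

assert np.allclose(U_f @ U_f.T, np.eye(dim))          # a permutation matrix is unitary

# apply U_f to (sum_x |x>)|0> / sqrt(2^n): f is evaluated coherently on every branch
state = np.zeros(dim)
for x in range(2 ** n):
    state[x * 2 ** m] = 1 / np.sqrt(2 ** n)
out = U_f @ state
for x in range(2 ** n):
    assert np.isclose(out[x * 2 ** m + f(x)], 1 / np.sqrt(2 ** n))
```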
### Quantum-Secure Pseudorandom Functions
Throughout this work, we often refer to a _pseudorandom function_ (PRF), first introduced in [1]. This is a keyed function, denoted \(\mathsf{PRF}\), that can be evaluated in polynomial time and satisfies a certain security property. In this work, we require \(\mathsf{PRF}\) to be _quantum-secure_, which, loosely speaking, means that an adversary with oracle access to \(\mathsf{PRF}\) cannot distinguish it from a truly random function, even given superposition queries. It is known that quantum-secure PRFs can be constructed from any quantum-secure one-way function [12].
**Definition 1** (Quantum-secure PRF).: _We say that a keyed family of functions \(\{f_{k}\}_{k}\) is a quantum-secure pseudorandom function (PRF) ensemble if, for any \(\mathsf{QPT}\) adversary \(\mathcal{A}\), there exists a negligible function \(\mu\) such that_
\[\left|\Pr\left[1\leftarrow\mathcal{A}(1^{\lambda})^{f_{k}}\right]-\Pr\left[1 \leftarrow\mathcal{A}(1^{\lambda})^{f}\right]\right|\leq\mu(\lambda),\]
_where \(k\xleftarrow{\$}\{0,1\}^{\lambda}\), \(f\) is a truly random function, and the oracles can be accessed in superposition, that is, they implement the following unitaries_
\[|x\rangle|z\rangle\stackrel{{ U_{f_{k}}}}{{\longmapsto}}|x \rangle|z\oplus f_{k}(x)\rangle\quad\text{and}\quad|x\rangle|z\rangle \stackrel{{ U_{f}}}{{\longmapsto}}|x\rangle|z\oplus f(x)\rangle,\]
_respectively._
### Post-Quantum IND-CCA Symmetric-Key Encryption
We briefly recall the definition of a symmetric-key encryption scheme (SKE).
**Definition 2**.: _An SKE consists of 2 algorithms with the following syntax:_
1. \(\mathsf{Enc}(\mathsf{sk},\mathsf{pt})\)_: a_ \(\mathsf{PPT}\) _algorithm, which receives a symmetric-key_ \(\mathsf{sk}\in\{0,1\}^{\lambda}\) _and a plaintext_ \(\mathsf{pt}\)_, and outputs a ciphertext_ \(\mathsf{ct}\)_._
2. \(\mathsf{Dec}(\mathsf{sk},\mathsf{ct})\)_: a deterministic polynomial-time algorithm, which takes a symmetric-key_ \(\mathsf{sk}\) _and a ciphertext_ \(\mathsf{ct}\)_, and outputs a plaintext_ \(\mathsf{pt}\)_._
We say that an SKE scheme is perfectly _correct_ if for every plaintext \(\mathsf{pt}\in\{0,1\}^{*}\) and symmetric-key \(\mathsf{sk}\in\{0,1\}^{\lambda}\), \(\mathsf{Dec}(\mathsf{sk},\mathsf{Enc}(\mathsf{sk},\mathsf{pt}))=\mathsf{pt}\).
**Definition 3**.: _An SKE is post-quantum IND-CCA secure if for every \(\mathsf{QPT}\) adversary \(\mathcal{A}\coloneqq(\mathcal{A}_{1},\mathcal{A}_{2})\), there exists a negligible function \(\epsilon\) such that the following holds for all \(\lambda\):_
\[\Pr\left[\tilde{b}=b\;\middle|\;\begin{array}{l}\mathsf{sk}\xleftarrow{\$}\{0,1\}^{\lambda}\\ \mathsf{pt}_{0},\mathsf{pt}_{1}\leftarrow\mathcal{A}_{1}^{\mathsf{Enc}(\mathsf{sk},\cdot),\mathsf{Dec}(\mathsf{sk},\cdot)}(1^{\lambda})\\ b\xleftarrow{\$}\{0,1\}\\ \mathsf{ct}^{*}\leftarrow\mathsf{Enc}(\mathsf{sk},\mathsf{pt}_{b})\\ \tilde{b}\leftarrow\mathcal{A}_{2}^{\mathsf{Enc}(\mathsf{sk},\cdot),\mathsf{Dec}^{*}(\mathsf{sk},\cdot)}(\mathsf{ct}^{*},1^{\lambda})\end{array}\right]\leq 1/2+\epsilon(\lambda),\]
_Where \(\mathsf{Dec}^{*}(\mathsf{sk},\cdot)\) is the same as \(\mathsf{Dec}(\mathsf{sk},\cdot)\) but returns \(\bot\) on input the challenge ciphertext \(\mathsf{ct}^{*}\)._
Note that, as the adversary is not given superposition access to the \(\mathsf{Enc}\) and \(\mathsf{Dec}\) oracles, one can build post-quantum IND-CCA SKE from quantum-secure OWF in the same way as is done classically, using message authentication codes.
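The classical route alluded to above (symmetric encryption plus a message authentication code) can be sketched as follows; this is only an illustrative Python sketch of the syntax in Definition 2 and of the encrypt-then-MAC pattern, built from standard-library primitives we pick for illustration, and we make no claim that it is a vetted or post-quantum IND-CCA implementation.

```python
import hashlib, hmac, secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # PRF-style keystream: SHA-256 in counter mode (an illustrative stand-in for a PRF)
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def Enc(sk: bytes, pt: bytes) -> bytes:
    k_enc, k_mac = sk[:16], sk[16:]
    nonce = secrets.token_bytes(16)
    body = nonce + bytes(a ^ b for a, b in zip(pt, _keystream(k_enc, nonce, len(pt))))
    tag = hmac.new(k_mac, body, hashlib.sha256).digest()   # encrypt-then-MAC
    return body + tag

def Dec(sk: bytes, ct: bytes):
    k_enc, k_mac = sk[:16], sk[16:]
    body, tag = ct[:-32], ct[-32:]
    if not hmac.compare_digest(tag, hmac.new(k_mac, body, hashlib.sha256).digest()):
        return None                                        # reject tampered ciphertexts
    nonce, masked = body[:16], body[16:]
    return bytes(a ^ b for a, b in zip(masked, _keystream(k_enc, nonce, len(masked))))

sk = secrets.token_bytes(32)
assert Dec(sk, Enc(sk, b"hello")) == b"hello"
```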
### Pseudorandom Function-Like State (PRFS) Generators
The notion of pseudorandom function-like states was first introduced by Ananth, Qian and Yuen in [1]. A stronger definition, where the adversary is allowed to make superposition queries to the challenge oracles, was introduced in the follow-up work [1]. We reproduce their definition here:
**Definition 4**: **(Quantum-accessible PRFS generator).** _We say that a QPT algorithm \(G\) is a quantum-accessible secure pseudorandom function-like state generator if for all QPT (non-uniform) distinguishers \(A\) there exists a negligible function \(\epsilon\), such that for all \(\lambda\), the following holds:_
\[\left|\Pr_{k\leftarrow\{0,1\}^{\lambda}}\left[A_{\lambda}^{|\mathcal{O}_{\mathsf{PRFS}}(k,\cdot)\rangle}(\rho_{\lambda})=1\right]-\Pr_{\mathcal{O}_{\mathsf{Haar}}}\left[A_{\lambda}^{|\mathcal{O}_{\mathsf{Haar}}(\cdot)\rangle}(\rho_{\lambda})=1\right]\right|\leq\epsilon(\lambda),\]
_where:_
* \(\mathcal{O}_{\mathsf{PRFS}}(k,\cdot)\)_, on input a \(d\)-qubit register \(\mathbf{X}\), does the following: controlled on the register \(\mathbf{X}\) containing \(x\), it applies an isometry channel that creates and stores \(G_{1^{\lambda}}(k,x)\) in a new register \(\mathbf{Y}\). It outputs the state on the registers \(\mathbf{X}\) and \(\mathbf{Y}\)._
* \(\mathcal{O}_{\mathsf{Haar}}(\cdot)\)_, modeled as a channel, on input a \(d\)-qubit register \(\mathbf{X}\), does the following: controlled on the register \(\mathbf{X}\) containing \(x\), it stores \(|\vartheta_{x}\rangle\langle\vartheta_{x}|\) in a new register \(\mathbf{Y}\), where \(|\vartheta_{x}\rangle\) is sampled from the Haar distribution. It outputs the state on the registers \(\mathbf{X}\) and \(\mathbf{Y}\)._
_Moreover, \(A_{\lambda}\) has superposition access to \(\mathcal{O}_{\mathsf{PRFS}}(k,\cdot)\) and \(\mathcal{O}_{\mathsf{Haar}}(\cdot)\) (denoted using the ket notation)._
_We say that \(G\) is a \((d(\lambda),n(\lambda))\)-QAPRFS generator to succinctly indicate that its input length is \(d(\lambda)\) and its output length is \(n(\lambda)\)._
### Quantum Pseudorandomness with Proofs of Destruction
We import the definition of pseudorandom function-like states with proofs of destruction (PRFSPD) from [1].
**Definition 5** (PRFS generator with proofs of destruction).: _A PRFSPD scheme with key length \(w(\lambda)\), input length \(d(\lambda)\), output length \(n(\lambda)\) and proof length \(c(\lambda)\) is a tuple of QPT algorithms \((\mathsf{Gen},\mathpzc{Destruct},\mathpzc{Ver})\) with the following syntax:_
1. \(|\psi_{k}^{x}\rangle\leftarrow\mathsf{Gen}(k,x)\)_: takes a key_ \(k\in\{0,1\}^{w}\)_, an input string_ \(x\in\{0,1\}^{d(\lambda)}\)_, and outputs an_ \(n\)_-qubit pure state_ \(|\psi_{k}^{x}\rangle\)_._
2. \(p\leftarrow\mathsf{Destruct}(|\phi\rangle)\)_: takes an_ \(n\)_-qubit quantum state_ \(|\phi\rangle\) _as input, and outputs a_ \(c\)_-bit classical string,_ \(p\)_._
3. \(b\leftarrow\mathpzc{Ver}(k,x,p)\)_: takes a key_ \(k\in\{0,1\}^{w}\)_, a_ \(d\)_-bit input string_ \(x\)_, and a_ \(c\)_-bit classical string_ \(p\)_, and outputs a boolean output_ \(b\)_._
_Correctness. A PRFSPD scheme is said to be correct if for every \(x\in\{0,1\}^{d}\),_
\[\Pr_{k\leftarrow\{0,1\}^{w}}[1\leftarrow\mathpzc{Ver}(k,x,p)\mid p\leftarrow\mathsf{Destruct}(|\psi_{k}^{x}\rangle);|\psi_{k}^{x}\rangle\leftarrow\mathsf{Gen}(k,x)]=1\]
Security.
1. **Pseudorandomness:** _A PRFSPD scheme is said to be (adaptively) pseudorandom if for any QPT adversary_ \(\mathcal{A}\)_, and any polynomial_ \(m(\lambda)\)_, there exists a negligible function_ \(\mathsf{negl}(\lambda)\)_, such that_ \[\left|\Pr_{k\leftarrow\{0,1\}^{w}}[\mathcal{A}^{|\mathsf{Gen}(k,\cdot)\rangle}(1^{\lambda})=1]-\Pr_{\forall x\in\{0,1\}^{d},\,|\phi^{x}\rangle\leftarrow\mu_{(\mathbb{C}^{2})^{\otimes n}}}[\mathcal{A}^{|\mathsf{Haar}^{\{|\phi^{x}\rangle\}_{x\in\{0,1\}^{d}}}(\cdot)\rangle}(1^{\lambda})=1]\right|=\mathsf{negl}(\lambda)\] _where, for all_ \(x\in\{0,1\}^{d}\)_,_ \(\mathsf{Haar}^{\{|\phi^{x}\rangle\}_{x\in\{0,1\}^{d}}}(x)\) _outputs_ \(|\phi^{x}\rangle\)_. Here we note that_ \(\mathcal{A}\) _gets quantum access to the oracles._
2. **Unclonability of proofs**_: A PRFSPD scheme satisfies unclonability of proofs if for any QPT adversary_ \(\mathcal{A}\) _in the cloning game (see Game 1), there exists a negligible function_ \(\mathsf{negl}(\lambda)\) _such that_ \[\Pr[\mathsf{Cloning\mbox{-}Exp}_{\lambda}^{\mathcal{A},\mathsf{PRFSPD}}=1]=\mathsf{negl}(\lambda).\]
**Game 1** The cloning experiment \(\mathsf{Cloning\mbox{-}Exp}_{\lambda}^{\mathcal{A},\mathsf{PRFSPD}}\).
```
1:Given input \(1^{\lambda}\), Challenger samples \(k\leftarrow\{0,1\}^{w(\lambda)}\) uniformly at random.
2:Initialize an empty set of variables, \(S\).
3:\(\mathcal{A}\) gets oracle access to \(\mathsf{Gen}(k,\cdot)\) and \(\mathpzc{Ver}(k,\cdot,\cdot)\).
4:for\(\mathsf{Gen}\) query \(x\) made by \(\mathcal{A}\)do
5:if\(\exists\) variable \(t_{x}\in S\)then\(t_{x}=t_{x}+1\).
6:else Create a variable \(t_{x}\) in \(S\), initialized to \(1\).
7:endif
8:endfor
9:\(\mathcal{A}\) outputs \(x,c_{1},c_{2},\ldots,c_{t_{x}+1}\) to the challenger.
10:Challenger rejects if \(c_{i}\)'s are not distinct.
11:for\(i\in[t_{x}+1]\)do\(b_{i}\leftarrow\mathpzc{Ver}(k,x,c_{i})\)
12:endfor
13:Return \(\wedge_{i=1}^{t_{x}+1}b_{i}\).
```
**Definition 6** (Encryption with quantum public keys).: _Encryption with quantum public keys (qPKE) consists of 4 algorithms with the following syntax:_
1. \(\mathsf{dk}\leftarrow\mathpzc{Gen}(1^{\lambda})\)_: a_ QPT _algorithm, which receives the security parameter and outputs a classical decryption key._
2. \(|\mathpzc{qp}\!\!\xi\rangle\leftarrow\mathpzc{QPXGen}(\mathsf{dk})\): a _\(\mathpzc{QPT}\) algorithm, which receives a classical decryption key \(\mathsf{dk}\), and outputs a quantum public key \(|\mathpzc{qp}\!\!\xi\rangle\). In this work, we require that the output is a pure state, and that \(t\) calls to \(\mathpzc{QPXGen}(\mathsf{dk})\) should yield the same state, that is, \(|\mathpzc{qp}\!\!\xi\rangle^{\otimes t}\)._
3. \((\mathpzc{qp}\!\!\xi^{\prime},\mathpzc{qc})\leftarrow\mathpzc{Enc}(\mathpzc{ qp}\!\!\xi,m)\): a _\(\mathpzc{QPT}\) algorithm_, which receives a quantum public key \(\mathpzc{qp}\!\!\xi\) and a plaintext \(m\), and outputs a (possibly classical) ciphertext \(\mathpzc{qc}\) and a recycled public key \(\mathpzc{qp}\!\!\xi^{\prime}\)._
4. \(m\leftarrow\mathpzc{Dec}(\mathsf{dk},\mathpzc{qc})\): a _\(\mathpzc{QPT}\) algorithm_, which uses a decryption key \(\mathsf{dk}\) and a ciphertext \(\mathpzc{qc}\), and outputs a classical plaintext \(m\)._
We say that a qPKE scheme is _correct_ if for every message \(m\in\{0,1\}^{*}\) and any security parameter \(\lambda\in\mathbb{N}\), the following holds:
\[\Pr\left[\mathpzc{Dec}(\mathsf{dk},\mathpzc{qc})=m\;\middle|\;\begin{array}{l}\mathsf{dk}\leftarrow\mathpzc{Gen}(1^{\lambda})\\ |\mathpzc{qp}\!\!\xi\rangle\leftarrow\mathpzc{QPXGen}(\mathsf{dk})\\ (\mathpzc{qp}\!\!\xi^{\prime},\mathpzc{qc})\leftarrow\mathpzc{Enc}(|\mathpzc{qp}\!\!\xi\rangle,m)\end{array}\right]\geq 1-\mathsf{negl}(\lambda).\]
**Game 2** Indistinguishability security with an encryption oracle (IND-ATK-EO) for encryption with quantum public key and classical ciphertext schemes.
```
1: The challenger generates \(\mathsf{dk}\leftarrow\mathpzc{G}\mathpzc{en}(1^{\lambda})\).
2: The adversary gets \(1^{\lambda}\) as an input, and oracle access to \(\mathpzc{QPXGen}(\mathsf{dk})\).
3: The challenger generates \(|\mathpzc{qp}\!\!\xi\rangle\leftarrow\mathpzc{QPXGen}(\mathsf{dk})\). Let \(\mathpzc{qp}\!\!\xi_{1}:=|\mathpzc{qp}\!\!\xi\rangle\).
4: For \(i=1,\ldots,\ell\), the adversary creates a classical message \(m_{i}\) and sends it to the challenger.
5: The challenger computes \((\mathpzc{qc}_{i},\mathpzc{qp}\!\!\xi_{i+1})\leftarrow\mathpzc{Enc}(\mathpzc{qp}\!\!\xi_{i},m_{i})\) and sends \(\mathpzc{qc}_{i}\) to the adversary.
6: During steps (2) to (5), the adversary also gets classical oracle access to an oracle \(\mathcal{O}_{1}\).
7: The adversary sends two messages \(m^{\prime}_{0},m^{\prime}_{1}\) of the same length to the challenger.
8: The challenger samples \(b\in_{R}\{0,1\}\), computes \((\mathpzc{qc}^{*},\mathpzc{qp}\!\!\xi_{\ell+2})\leftarrow\mathpzc{Enc}(\mathpzc{qp}\!\!\xi_{\ell+1},m^{\prime}_{b})\) and sends \(\mathpzc{qc}^{*}\) to the adversary.
9: For \(i=\ell+2,\ldots,\ell^{\prime}\), the adversary creates a classical message \(m_{i}\) and sends it to the challenger.
10: The challenger computes \((\mathpzc{qc}_{i},\mathpzc{qp}\!\!\xi_{i+1})\leftarrow\mathpzc{Enc}(\mathpzc{qp}\!\!\xi_{i},m_{i})\) and sends \(\mathpzc{qc}_{i}\) to the adversary.
11: During steps (9) to (10), the adversary also gets classical oracle access to an oracle \(\mathcal{O}_{2}\). Note that after step (7), the adversary no longer gets access to oracle \(\mathcal{O}_{1}\).
12: The adversary outputs a bit \(b^{\prime}\).
We say that the adversary wins the game (or alternatively, that the outcome of the game is 1) iff \(b=b^{\prime}\).
```
The oracles \(\mathcal{O}_{1}\) and \(\mathcal{O}_{2}\) are defined depending on the attack model ATK as follows:

\[\begin{array}{ccc}\text{ATK}&\text{Oracle }\mathcal{O}_{1}&\text{Oracle }\mathcal{O}_{2}\\ \hline\text{CPA}&\varnothing&\varnothing\\ \text{CCA1}&\mathpzc{Dec}(\mathsf{dk},\cdot)&\varnothing\\ \text{CCA2}&\mathpzc{Dec}(\mathsf{dk},\cdot)&\mathpzc{Dec}^{*}(\mathsf{dk},\cdot)\end{array}\]

Here \(\mathpzc{Dec}^{*}(\mathsf{dk},\cdot)\) is defined as \(\mathpzc{Dec}(\mathsf{dk},\cdot)\), except that it returns \(\bot\) on input the challenge ciphertext \(\mathpzc{qc}^{*}\).
**Definition 7**.: _A qPKE scheme is IND-ATK-EO secure if for every \(\mathsf{QPT}\) adversary, there exists a negligible function \(\epsilon\) such that the probability of winning the IND-ATK-EO security game (Game 2) is at most \(\frac{1}{2}+\epsilon(\lambda)\)._
_Remark 1_.: The definition presented in Definition 7 is stated for the single challenge query setting. Using the standard hybrid argument, it is straightforward to show that single-challenge definitions also imply many-challenge definitions where the adversary can make many challenge queries.
_Remark 2_.: Note that the IND-CCA2-EO definition is only well-defined for schemes with classical ciphertexts. The other two notions are well-defined even for quantum ciphertexts, though we do not use those.
### Security Definitions for qPKE with Quantum Ciphertexts
We now give a definition for qPKE with quantum ciphertexts. In the case of adaptive chosen-ciphertext security, the definition is non-trivial due to no-cloning and the destructiveness of quantum measurements. We note that there are indeed several works considering notions of chosen-ciphertext security in the quantum setting: [1] defines chosen-ciphertext security for quantum symmetric-key encryption (when the message is a quantum state), and [1, 2] define chosen-ciphertext security for classical encryption under superposition attacks. However, extending the technique from [1] to the public-key setting is non-trivial, and we leave this open problem for future work. In this section, we only consider security notions under chosen-plaintext attacks and non-adaptive chosen-ciphertext attacks.
Even though one can define security notions with an encryption oracle for schemes with quantum ciphertexts similarly to Section 3.1, all constructions of qPKE with quantum ciphertexts presented in this work are not reusable, and thus, for the sake of simplicity, we do not present the definition in which the adversary has oracle access to the encryption oracle. We denote these notions as IND-ATK, where ATK is either chosen-plaintext attacks (CPA) or non-adaptive chosen-ciphertext attacks (CCA1).
**Game 3** IND-ATK security game for encryption with quantum public key and quantum ciphertexts schemes.
```
1:The challenger generates \(\mathsf{dk}\leftarrow\mathpzc{Gm}(1^{\lambda})\).
2:The adversary \(\mathcal{A}_{1}\) gets \(1^{\lambda}\) as an input, and oracle access to \(\mathpzc{QPXGen}(\mathsf{dk})\), \(\mathpzc{Enc}(\mathpzc{pk},\cdot)\) and \(\mathcal{O}_{1}\), and sends \(m_{0},m_{1}\) of the same length to the challenger. \(\mathcal{A}_{1}\) also outputs a state \(|\mathsf{st}\rangle\) and sends it to \(\mathcal{A}_{2}\).
3:The challenger samples \(b\in_{R}\{0,1\}\), generates \(|\mathpzc{qpk}\rangle\leftarrow\mathpzc{QPXGen}(\mathsf{dk})\) and sends \(c^{*}\leftarrow\mathpzc{Enc}(|\mathpzc{qpk}\rangle,m_{b})\) to the adversary \(\mathcal{A}_{2}\).
4:\(\mathcal{A}_{2}\) gets oracle access to \(\mathpzc{QPXGen}(\mathsf{dk})\), \(\mathpzc{Enc}(\mathpzc{pk},\cdot)\).
5:The adversary \(\mathcal{A}_{2}\) outputs a bit \(b^{\prime}\).
We say that the adversary wins the game (or alternatively, that the outcome of the game is 1) iff \(b=b^{\prime}\).
The oracle \(\mathcal{O}_{1}\) is defined depending on the level of security as follows.

\[\begin{array}{cc}\text{ATK}&\text{Oracle }\mathcal{O}_{1}\\ \hline\text{CPA}&\varnothing\\ \text{CCA1}&\mathpzc{Dec}(\mathsf{dk},\cdot)\end{array}\]
**Definition 8**.: _A qPKE scheme with quantum ciphertexts is IND-ATK secure if for every QPT adversary \(\mathcal{A}\coloneqq(\mathcal{A}_{1},\mathcal{A}_{2})\), there exists a negligible function \(\epsilon\) such that the probability of winning the IND-ATK security game (Game 3) is at most \(\frac{1}{2}+\epsilon(\lambda)\)._
## 4 Constructions of CCA-Secure qPKE
In this section, we present our qPKE constructions from OWF and PRFS and prove their CCA security. The former (given in Section 4.1) has classical ciphertexts and allows encrypting arbitrarily long messages. The latter (given in Section 4.2) has quantum ciphertexts and only allows encrypting a single-bit message. However, we note that the latter is based on a weaker assumption than the former. Finally, in Section 4.3, we give a remark on the black-box construction of non-malleable qPKE from CPA-secure qPKE using the same classical approach.
### CCA-Secure Many-Bit Encryption from OWF
We start by presenting a simple qPKE construction from OWF and prove that it provides our strongest notion of security, i.e. IND-CCA-EO security. The scheme is formally presented in Construction 1. The ciphertexts produced by the scheme are classical, and the public-keys are reusable. The cryptographic components of our construction are a quantum-secure PRF family \(\{f_{k}\}\) and a post-quantum IND-CCA secure symmetric-key encryption scheme \((\mathsf{SE.Enc},\mathsf{SE.Dec})\), which can both be built from a quantum-secure OWF [22, 23].
**Construction 1** (IND-CCA-EO secure qPKE from OWF).:
* _Assumptions: A family of quantum-secure pseudorandom functions \(\{f_{k}\}_{k}\), and post-quantum IND-CCA SKE_ (\(\mathsf{SE.Enc},\mathsf{SE.Dec}\)).
* \(\mathpzc{Gen}(1^{\lambda})\)__
1. \(\mathsf{dk}\xleftarrow{\$}\{0,1\}^{\lambda}\)__
* \(\mathpzc{QPXGen}(\mathsf{dk})\) _1. Output \(|\mathpzc{qpk}\rangle\leftarrow\frac{1}{\sqrt{2^{\lambda}}}\sum_{x\in\{0,1\}^{\lambda}}|x,f_{\mathsf{dk}}(x)\rangle\)._
* \(\mathcal{Enc}(|\mathpzc{qp}\!\!\!\xi\rangle,m)\) _1. Measure \(|\mathpzc{qp}\!\!\!\xi\rangle\) to obtain classical strings \(x,y\)._ _2. Let \(c_{0}\gets x\) and \(c_{1}\leftarrow\mathsf{SE.Enc}(y,m)\)._ _3. Output \((c_{0},c_{1})\)_
* \(\mathcal{Dec}(\mathsf{dk},(c_{0},c_{1}))\)__ _1. Compute \(y\gets f_{\mathsf{dk}}(c_{0})\)._ _2. Compute \(m\leftarrow\mathsf{SE.Dec}(y,c_{1})\) and return \(m\)._
It can be trivially shown that the scheme achieves perfect correctness if the underlying SKE provides the perfect correctness property.
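To see how the pieces of Construction 1 fit together, note that measuring the public-key \(|\mathpzc{qpk}\rangle\) in the computational basis simply yields a uniformly random \(x\) together with \(y=f_{\mathsf{dk}}(x)\); from that point on the scheme is entirely classical. The Python sketch below mimics this data flow, with HMAC-SHA256 as a hypothetical stand-in for the quantum-secure PRF and an arbitrary symmetric scheme (e.g. the encrypt-then-MAC sketch above) plugged in as \(\mathsf{SE}\). It is only meant to illustrate the structure; in particular, the real sender obtains \((x,y)\) by measuring a copy of the quantum public-key, which we can only simulate here by computing it from \(\mathsf{dk}\).

```python
import hmac, hashlib, secrets

def prf(dk: bytes, x: bytes) -> bytes:
    # hypothetical classical stand-in for the quantum-secure PRF f_dk
    return hmac.new(dk, x, hashlib.sha256).digest()

def Gen(lam: int = 32) -> bytes:
    return secrets.token_bytes(lam)                 # dk <-$ {0,1}^lambda

def measure_qpk(dk: bytes, lam: int = 32):
    # simulates measuring |qpk> = sum_x |x, f_dk(x)>: a uniformly random x and y = f_dk(x)
    x = secrets.token_bytes(lam)
    return x, prf(dk, x)

def Encrypt(xy, m: bytes, SE_Enc):
    x, y = xy                                       # outcome of measuring one copy of |qpk>
    return x, SE_Enc(y, m)                          # ciphertext (c0, c1)

def Decrypt(dk: bytes, c, SE_Dec):
    c0, c1 = c
    return SE_Dec(prf(dk, c0), c1)                  # recompute y from c0, then decrypt

# e.g. with the SKE sketch above: dk = Gen(); xy = measure_qpk(dk)
# assert Decrypt(dk, Encrypt(xy, b"msg", Enc), Dec) == b"msg"
```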
Theorem 1: _Let \(\{f_{k}\}_{k}\) be a quantum-secure PRF and \((\mathsf{SE.Enc},\mathsf{SE.Dec})\) be a post-quantum IND-CCA secure SKE. Then, the qPKE scheme given in Construction 1 is IND-CCA-EO secure._
Proof: We proceed with a sequence of hybrid games, detailed below.
* **Hybrid \(H_{0}\):** This is the IND-CCA-EO game with \(\Pi\), with the challenge ciphertext fixed to \((x^{*},c^{*})=\mathpzc{Enc}(|\mathpzc{qpk}\rangle,m_{0}^{\prime})\).
* **Hybrid \(H_{1}\):** This is identical to \(H_{0}\) except instead of measuring \(|\mathpzc{qp}\!\!\!\xi\rangle\) when the adversary queries the encryption oracle, the challenger measures a copy of \(|\mathpzc{qp}\!\!\!\xi\rangle\) in advance to obtain \((x^{*},y^{*}=f_{\mathsf{dk}}(x^{*}))\) and answers queries to the encryption oracle using \((x^{*},y^{*})\) instead. The decryption oracle still returns \(\bot\) when queried \((x^{*},c^{*})\). This change is only syntactical so the two hybrids are the same from the adversary's view.
The hybrids \(H_{2}\) to \(H_{5}\) have 2 main goals: (i) to decorrelate the encryption/decryption oracles \(\mathpzc{Dec}^{*},\mathpzc{Enc}\) from the public-keys handed to the adversary and (ii) to remove the oracles' dependency on \(\mathsf{dk}\).
* **Hybrid \(H_{2}\):** This is identical to \(H_{1}\), except \((x^{*},y^{*})\) is removed from the copies of \(|\mathpzc{qp}\!\!\xi\rangle\) handed to the adversary. More precisely, the adversary is handed \(|\mathpzc{qp}\!\!\xi^{\prime}\rangle\) of the following form: \[|\mathpzc{qp}\!\!\xi^{\prime}\rangle=\frac{1}{\sqrt{2^{|x^{*}|}-1}}\sum_{x:x\neq x^{*}}|x\rangle|f_{\mathsf{dk}}(x)\rangle\] (6) The decryption oracle still returns \(\bot\) when queried on the challenge ciphertext. Note that \(|\mathpzc{qp}\!\!\xi\rangle\) and \(|\mathpzc{qp}\!\!\xi^{\prime}\rangle\) have \(\mathsf{negl}(\lambda)\) trace distance, so the advantage of distinguishing \(H_{1}\) and \(H_{2}\) is \(\mathsf{negl}(\lambda)\).
* **Hybrid \(H_{3}\):** This (inefficient) hybrid is identical to \(H_{2}\) other than \(f_{\mathsf{dk}}\) being replaced with a truly random function \(f\), i.e. the public-keys are changed to: \[|\mathpzc{qp}\!\!\xi^{\prime}\rangle=\frac{1}{\sqrt{2^{|x^{*}|}-1}}\sum_{x:x\neq x^{*}}|x\rangle|f(x)\rangle\] (7) The encryption and decryption oracles can be simulated by oracle access to \(f\). The decryption oracle returns \(\bot\) when queried \((x^{*},c^{*})\). The indistinguishability of \(H_{3}\) and \(H_{2}\) follows directly from the pseudorandomness property of \(\{f_{k}\}_{k}\).
* **Hybrid \(H_{4}\):** This hybrid is identical to \(H_{3}\) other than \(y^{*}\) being sampled uniformly at random. Upon querying \((c_{0},c_{1})\) to the decryption oracle, if \(c_{0}\neq x^{*}\), the oracle computes \(y=f(c_{0})\) and returns \(m=\mathsf{SE.Dec}(y,c_{1})\). In case \(c_{0}=x^{*}\) and \(c_{1}\neq c^{*}\), the decryption oracle returns \(m=\mathsf{SE.Dec}(y^{*},c_{1})\). On \((x^{*},c^{*})\) the oracle returns \(\bot\). The encryption oracle returns \((x^{*},\mathsf{SE.Enc}(y^{*},m))\) when queried on \(m\). As \(x^{*}\) does not appear in any of the public-keys, this change is only syntactical.
* **Hybrid \(H_{5}\):** This hybrid reverts the changes of \(H_{3}\), i.e. \(\mathsf{dk}^{\prime}\) is sampled uniformly at random and the public-keys are changed as follows: \[|\mathpzc{qpk}^{\prime}\rangle=\frac{1}{\sqrt{2^{|x^{*}|}-1}}\sum_{x:x\neq x^{*}}|x\rangle|f_{\mathsf{dk}^{\prime}}(x)\rangle\] (8) With this change, on query \((c_{0},c_{1})\) if \(c_{0}\neq x^{*}\), the decryption oracle computes \(y=f_{\mathsf{dk}^{\prime}}(c_{0})\) and returns \(m=\mathsf{SE.Dec}(y,c_{1})\). In case \(c_{0}=x^{*}\), the decryption oracle simply returns \(m=\mathsf{SE.Dec}(y^{*},c_{1})\) when \(c_{1}\neq c^{*}\) and \(\bot\) otherwise. The encryption oracle is unchanged from \(H_{4}\). The indistinguishability of \(H_{4}\) and \(H_{5}\) follows from the pseudorandomness of \(\{f_{k}\}_{k}\) and the fact that \(|\mathpzc{qpk}^{\prime}\rangle\) and \((x^{*},y^{*})\) are decorrelated. The hybrid is efficient again.
The next step is to remove the dependency of the encryption and decryption oracles on \(y^{*}\). This is done by querying the encryption and decryption oracles of the SKE.
* **Hybrid \(H_{6}\):** Let \(\mathsf{SE.OEnc}\) and \(\mathsf{SE.ODec}^{*}\) be two oracles implementing the encryption and decryption procedures of \(\mathsf{SE}\) with the key \(y^{*}\). \(\mathsf{SE.ODec}^{*}\) returns \(\bot\) when queried on the challenge ciphertext \(c^{*}\). In this hybrid, we syntactically change the encryption and decryption oracles using these two oracles. To implement the encryption oracle, on query \(m\) we simply query \(\mathsf{SE.OEnc}\) on message \(m\) and return \((x^{*},\mathsf{SE.OEnc}(m))\). To simulate the decryption oracle, on query \((c_{0},c_{1})\) we act the same as in \(H_{5}\) when \(c_{0}\neq x^{*}\), but on queries of form \((x^{*},c)\) we query \(\mathsf{SE.ODec}^{*}\) on \(c\) and return \(\mathsf{SE.ODec}^{*}(c)\). Due to the definition of \(\mathsf{OEnc}\) and \(\mathsf{ODec}^{*}\) these changes are also just syntactical. Note that although \(\mathsf{SE.ODec}^{*}\) always returns \(\bot\) on \(c^{*}\), it is only queried when \(c_{0}=x^{*}\), i.e. to cause this event the decryption oracle should be queried on the challenge ciphertext \((x^{*},c^{*})\).
* **Hybrid \(H_{7}\):** We provide the adversary with \(x^{*},\mathsf{SE.OEnc},\mathsf{SE.ODec}^{*}\), instead of access to the encryption and decryption oracle. Note that the adversary can implement the encryption and decryption oracles themselves by having access to \(x^{*},\mathsf{SE.OEnc},\mathsf{SE.ODec}^{*}\) and sampling a uniform \(\mathsf{dk}^{\prime}\) themselves and vice versa (\(\mathsf{SE.ODec}^{*}\) can be queried on \(c\) by querying the decryption oracle \((x^{*},c)\) and \(\mathsf{SE.OEnc}\) can be queried on \(m\) by querying the encryption oracle on \(m\)). This demonstrates that the hybrids are only syntactically different and hence are indistinguishable.
* **Hybrid \(H_{8}\):** This hybrid is identical to \(H_{7}\) with the only difference that the challenge ciphertext is swapped with \((x^{*},\mathsf{SE.OEnc}(0))\). Now notice that any adversary that can distinguish \(H_{8}\) from \(H_{7}\) can effectively break the IND-CCA security of \(\mathsf{SE}\). Hence, the indistinguishability of the two hybrids follows directly from the IND-CCA security of \(\mathsf{SE}\). Following the same exact hybrids for challenge ciphertext \(\mathpzc{Enc}(|\mathpzc{qpk}\rangle,m^{\prime}_{1})\) we can deduce that the scheme is IND-CCA-EO secure.
### CCA1-Secure Many-Bit Encryption from PRFS
We continue by presenting a CCA1-secure bit-encryption from PRFS. Extending this scheme to polynomially many bits is discussed at the end of this section, see Remark 3. The description of the scheme is given below in Construction 2.
**Construction 2**: **(IND-CCA1 secure qPKE from PRFS).**__
* _Assumptions:_ _A PRFS family_ \(\{|\psi_{\mathsf{dk},x}\rangle\}_{\mathsf{dk},x}\) _with super-logarithmic input size. Let_ \(n\coloneqq n(\lambda)\) _be a polynomial in_ \(\lambda\)_._
* \(\mathcal{G\!en}(1^{\lambda})\) _1. Output_ \(\mathsf{dk}\leftarrow_{R}\{0,1\}^{\lambda}\)_._
* \(\mathcal{Q\!P\!K\!Gen}(\mathsf{dk})\)__ _1. Output_ \(|\mathcal{q\!P\!\!\xi}\rangle\leftarrow\sum_{x}|x\rangle_{R}|\psi_{\mathsf{dk},x}\rangle_{S}^{\otimes n}\)_, where_ \(x\in\{0,1\}^{\omega(\log\lambda)}\)_._
* \(\mathcal{F\!nc}(|\mathcal{q\!P\!\!\xi}\rangle,m)\) _for_ \(m\in\{0,1\}\)__ _1. Measure the_ \(R\) _registers of_ \(|\mathcal{q\!P\!\!\xi}\rangle\) _to obtain a classical string_ \(x\)_. Let_ \(|\phi\rangle\coloneqq|\psi_{\mathsf{dk},x}\rangle^{\otimes n}\) _denote the residual state._ _2. If_ \(m=0\)_, output the ciphertext as_ \((x,|\phi\rangle)\)_._ _3. Else, sample a uniformly random key_ \(\mathsf{dk}_{1}\)_, and output the ciphertext as_ \((x,|\psi_{\mathsf{dk}_{1},x}\rangle^{\otimes n})\)_._
* \(\mathcal{Dec}(\mathsf{dk},(x,\Psi))\) _1. Compute \(|\psi_{\mathsf{dk},x}\rangle^{\otimes n}\) and perform a SWAP test between \(|\psi_{\mathsf{dk},x}\rangle\) and each subsystem of \(\Psi\) of the same size as \(|\psi_{\mathsf{dk},x}\rangle\). 2. If all \(n\) SWAP tests output \(0\), output \(0\); otherwise output \(1\)._
The correctness of the scheme follows from the fact that the states \(|\psi_{\mathsf{dk}_{1},x}\rangle\) are relatively well spread out for a random choice of \(\mathsf{dk}_{1}\). This is due to the pseudorandomness of the state generator. Note that if, in step 3, instead of picking \(\mathsf{dk}_{1}\) at random and computing \(|\psi_{\mathsf{dk}_{1},x}\rangle\), the encryption algorithm sampled \(|\vartheta\rangle^{\otimes n}\) from the Haar measure, the probability that all \(n\) SWAP tests between \(|\psi_{\mathsf{dk},x}\rangle\) and \(|\vartheta\rangle\) return \(0\) would be roughly \(2^{-n}\). Hence, if the corresponding probability for \(n\) SWAP tests between \(|\psi_{\mathsf{dk}_{1},x}\rangle\) and \(|\psi_{\mathsf{dk},x}\rangle\), for a random choice of \(\mathsf{dk}_{1}\), were more than negligibly larger, a Chernoff bound argument would yield a distinguisher for the PRFS. Hence, for \(n\) polynomial in \(\lambda\), the scheme has negligible correctness error.
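The numbers behind this correctness argument can be made concrete: a single SWAP test between pure states \(|\psi\rangle\) and \(|\phi\rangle\) outputs \(0\) with probability \((1+|\langle\psi|\phi\rangle|^{2})/2\), so \(n\) independent tests all output \(0\) with probability \(((1+|\langle\psi|\phi\rangle|^{2})/2)^{n}\), which is roughly \(2^{-n}\) when the overlap is tiny and exactly \(1\) when the states coincide. A small numpy sketch illustrating these statistics (not an implementation of the scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_state(dim):
    v = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def swap_test_accept_prob(psi, phi):
    # probability that a single SWAP test between |psi> and |phi> outputs 0
    return 0.5 * (1.0 + abs(np.vdot(psi, phi)) ** 2)

dim, n = 2 ** 6, 20
psi, phi = haar_state(dim), haar_state(dim)

p0 = swap_test_accept_prob(psi, phi)
print(f"independent Haar states: single-test accept prob {p0:.4f} (close to 1/2)")
print(f"all {n} tests accept: {p0 ** n:.3e}  vs  2^-n = {2.0 ** -n:.3e}")
print(f"identical states: all {n} tests accept with prob {swap_test_accept_prob(psi, psi) ** n:.3f}")
```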
Theorem 2: _The construction in Construction 2 is IND-CCA1 secure (see Definition 8), assuming \(\{|\psi_{\mathsf{dk},x}\rangle\}_{\mathsf{dk},x}\) is a PRFS with super-logarithmic input size._
Proof: We prove the theorem via a series of hybrids.
* **Hybrid \(H_{0}\).** The original security game as defined in Definition 8.
* **Hybrid \(H_{1}\).** This is identical to hybrid \(H_{0}\), except that instead of measuring \(|\mathcal{q\!P\!\!\xi}\rangle\) when the adversary queries the encryption oracle for the first time, the challenger measures (the \(R\) registers of) this state before providing the copies of \(|\mathcal{q\!P\!\!\xi}\rangle\) to the adversary. Note that by measuring \(|\mathcal{q\!P\!\!\xi}\rangle\) in the computational basis, the challenger obtains a classical uniformly random string \(x^{*}\); let the residual state be \(|\phi^{*}\rangle\coloneqq|\psi_{\mathsf{dk},x^{*}}\rangle^{\otimes n}\). Note that the two operations corresponding to the challenger's measurement of \(|\mathcal{q\!P\!\!\xi}\rangle\) and the creation of the copies of \(|\mathcal{q\!P\!\!\xi}\rangle\) given to the adversary commute. Thus, the distributions of the two hybrids are identical and no adversary can distinguish \(H_{0}\) from \(H_{1}\) with non-zero advantage.
* **Hybrid \(H_{2}\).** This is identical to hybrid \(H_{1}\), except that the challenger samples \(x^{*}\) as in the previous hybrid, and instead of providing \(|\mathcal{q\!P\!\!\xi}\rangle\) to the adversary, it provides \[|\mathcal{q\!P\!\!\xi}^{\prime}\rangle\coloneqq\frac{1}{\sqrt{2^{|x^{*}|}-1}}\sum_{x:x\neq x^{*}}|x\rangle|\psi_{\mathsf{dk},x}\rangle^{\otimes n}.\] Moreover, in the challenge query, the challenger uses \((x^{*},|\phi^{*}\rangle)\) for the encryption of the chosen message \(m\), without measuring a fresh copy of \(|\mathcal{q\!P\!\!\xi}\rangle\) (that is, it skips the first step of the encryption algorithm). We note that this state \(|\mathcal{q\!P\!\!\xi}^{\prime}\rangle\) can be efficiently prepared.
The distinguishing probability of the two hybrids \(H_{1}\) and \(H_{2}\) implies that we can distinguish the quantum states \(|\mathcal{q\!P\!\!\xi}\rangle^{\otimes p}\otimes|x^{*}\rangle\) and \(|\mathcal{q\!P\!\!\xi}^{\prime}\rangle^{\otimes p}\otimes|x^{*}\rangle\) with the same probability; but these two states have \(\mathsf{negl}(\lambda)\) trace distance for any polynomial \(p\). Therefore, any adversary can distinguish \(H_{1}\) and \(H_{2}\) with probability at most \(\mathsf{negl}(\lambda)\).

_Remark 3_.: The single-bit scheme above can be extended to encrypt polynomially many bits in several ways:
* Given a length-restricted CCA1-secure qPKE and a (non-length-restricted) symmetric-key encryption scheme, we can define a hybrid encryption scheme where the qPKE is used to encrypt a random (fixed-length) secret key, which is then used to encrypt an arbitrarily long message (see the sketch after this list). The entire scheme is CPA- (respectively, CCA1-) secure if the symmetric-key encryption has CPA- (respectively, CCA1-) security.
* Finally, we note that the following many-bit symmetric-key encryption scheme can be proven CCA1-secure, using the same proof strategy as in Theorem 2, based on the existence of PRFS alone. Given a secret key \(\mathsf{dk}\), to encrypt a message \(m\in\{0,1\}^{\ell}\), we sample \(\ell\) distinct uniformly random strings \(x_{i}\) and compute \(|\psi_{\mathsf{dk},x_{i}}\rangle^{\otimes n}\). Each bit \(m_{i}\) is then encrypted as \((x_{i},|\psi_{\mathsf{dk},x_{i}}\rangle^{\otimes n})\) if \(m_{i}=0\), or as \((x_{i},|\psi_{\mathsf{dk}^{\prime},x_{i}}\rangle^{\otimes n})\) for a fresh key \(\mathsf{dk}^{\prime}\) if \(m_{i}=1\).
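The hybrid (KEM/DEM-style) composition from the first bullet of Remark 3 can be sketched in a few lines of Python; here `qpke_encrypt`, `qpke_decrypt`, `se_encrypt` and `se_decrypt` are hypothetical callables standing in for the length-restricted qPKE and the symmetric scheme, so this is only a sketch of the composition pattern, not of either building block.

```python
import secrets

def hybrid_encrypt(qpke_encrypt, se_encrypt, message: bytes):
    # encapsulate a fresh fixed-length session key with the length-restricted qPKE,
    # then encrypt the arbitrarily long message under that key with the symmetric scheme
    session_key = secrets.token_bytes(32)
    return qpke_encrypt(session_key), se_encrypt(session_key, message)

def hybrid_decrypt(qpke_decrypt, se_decrypt, ciphertext):
    encapsulated_key, body = ciphertext
    return se_decrypt(qpke_decrypt(encapsulated_key), body)
```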
### Generic Construction of Non-Malleable qPKE
We remark that known implications from the literature can be used to show that IND-CPA secure qPKE _with classical ciphertexts_ implies non-malleable qPKE: the work of [1] shows a black-box compiler from IND-CPA encryption to non-malleable encryption, which also applies to the setting of quantum public-keys. The only subtlety is that the compiler assumes the existence of a one-time signature scheme to sign the ciphertext. In [14, 15] it is shown that one-time signatures (with quantum verification keys) exist assuming one-way state generators, which in turn are implied by qPKE. Combining the implications of these two works, we obtain a generic construction of non-malleable qPKE from any IND-CPA secure one.
## 5 IND-CPA-EO secure qPKE from PRFSPD
In this section, we propose a construction of qPKE from pseudorandom function-like states with proofs of destruction. The construction is reusable, has classical ciphertexts, and is IND-CPA-EO secure.
We first import the following result that builds _symmetric_-key encryption from PRFSPD.
Proposition 1 ([1]): _If quantum-secure PRFSPD exists, then there exists a quantum CPA symmetric encryption with classical ciphertexts._
We give the formal construction of a many-bit reusable encryption scheme from PRFSPD in Construction 3.
**Construction 3** (IND-CPA-EO secure qPKE from PRFSPD).:

* _Assumptions: A PRFSPD family \(\{|\psi_{\mathsf{dk},x}\rangle\}_{\mathsf{dk},x}\) and a quantum symmetric encryption scheme with classical ciphertexts \((\mathsf{Enc},\mathsf{Dec})\)._
* \(\mathpzc{Gen}(1^{\lambda})\) _1. Let \(\mathsf{dk}_{0,i}\leftarrow_{R}\{0,1\}^{\lambda}\) for all \(i\in[1,\lambda]\). 2. Output \(\mathsf{dk}\leftarrow\{\mathsf{dk}_{0,i}\}_{i\in[1,\lambda]}\)._
* \(\mathpzc{QPXGen}(\mathsf{dk})\) _1. Output \(|\mathpzc{qp}\!\!\xi\rangle\leftarrow\bigotimes_{i\in[\lambda]}\sum_{x_{i}}|x_{i}\rangle|\psi_{\mathsf{dk}_{0,i},x_{i}}\rangle\) (up to normalization)._
* \(\mathpzc{Enc}(|\mathpzc{qp}\!\!\xi\rangle,m)\) _1. Interpret \(|\mathpzc{qp}\!\!\xi\rangle\) as \(|\mathpzc{qp}\!\!\xi_{1}\rangle\otimes\cdots\otimes|\mathpzc{qp}\!\!\xi_{\lambda}\rangle\)._
2. _Measure the left registers of_ \(|\mathpzc{qp}\!\!\xi_{i}\rangle\) _to obtain classical strings_ \(x_{i}\)_. Denote the post-measurement states as_ \(|\psi_{i}^{\prime}\rangle\)_._
3. _Set_ \(y_{i}\leftarrow\mathpzc{Destruct}(|\psi_{i}^{\prime}\rangle)\)_._
4. _For_ \(i\in[1,\lambda]\)_, pick_ \(\mathsf{dk}_{1,i}\leftarrow\{0,1\}^{\lambda}\) _and compute_ \(|\psi_{\mathsf{dk}_{1,i},x_{i}}\rangle\)_._
5. _Set_ \(y_{i}^{\prime}\leftarrow\mathpzc{Destruct}(|\psi_{\mathsf{dk}_{1,i},x_{i}} \rangle)\) _for all_ \(i\in[\lambda]\)_._
6. _Pick a uniformly random key_ \(k\leftarrow\{0,1\}^{\lambda}\)_._
7. _Set_ \(\tilde{y}_{i}=\begin{cases}y_{i}^{\prime}&\text{,if }k_{i}=0\\ y_{i}&\text{,if }k_{i}=1\end{cases}\)_._
8. _Output_ \((\mathsf{Enc}(k,m),((x_{i},\tilde{y}_{i}))_{i})\) _as ciphertext and_ \((k,((x_{i},\tilde{y}_{i}))_{i})\) _as the recycled public-key._
* \(\mathpzc{Dec}(\mathsf{dk},c)\)__
1. _Interpret_ \(c\) _as_ \((c^{\prime},((x_{i},\tilde{y}_{i}))_{i})\)_._
2. _Let_ \(k_{i}^{\prime}=\mathpzc{Ver}(\mathsf{dk}_{0,i},x_{i},\tilde{y}_{i})\) _and let_ \(k^{\prime}=k_{1}^{\prime}\ldots k_{\lambda}^{\prime}\)_._
3. _Output_ \(\mathsf{Dec}(k^{\prime},c^{\prime})\)_._
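The way Construction 3 transports the symmetric key \(k\) is, at its core, a per-bit encoding: bit \(k_{i}=1\) is encoded by a valid proof of destruction \(y_{i}\) for \((\mathsf{dk}_{0,i},x_{i})\), while bit \(k_{i}=0\) is encoded by a proof \(y^{\prime}_{i}\) generated under an unrelated fresh key, which \(\mathpzc{Ver}\) rejects. The Python sketch below mocks \(\mathpzc{Destruct}/\mathpzc{Ver}\) with an HMAC tag, a purely hypothetical classical stand-in with none of the quantum unclonability guarantees, just to make the encoding and decoding logic explicit.

```python
import hmac, hashlib, secrets

LAM = 16  # toy security parameter: number of key bits transported

def destruct(dk: bytes, x: bytes) -> bytes:
    # hypothetical classical stand-in: the "proof" for (dk, x) is a MAC tag
    return hmac.new(dk, x, hashlib.sha256).digest()

def ver(dk: bytes, x: bytes, proof: bytes) -> int:
    return int(hmac.compare_digest(proof, destruct(dk, x)))

dk0 = [secrets.token_bytes(32) for _ in range(LAM)]        # decryption key {dk_{0,i}}
k = [secrets.randbits(1) for _ in range(LAM)]              # fresh symmetric key bits

# encryption side: encode each bit of k into a proof that Ver accepts iff k_i = 1
encoded = []
for i in range(LAM):
    x_i = secrets.token_bytes(16)                          # models the measured x_i
    if k[i] == 1:
        y_tilde = destruct(dk0[i], x_i)                    # valid proof under dk_{0,i}
    else:
        y_tilde = destruct(secrets.token_bytes(32), x_i)   # proof under a fresh key dk_{1,i}
    encoded.append((x_i, y_tilde))

# decryption side: recover k bit by bit with Ver
k_prime = [ver(dk0[i], x_i, y_tilde) for i, (x_i, y_tilde) in enumerate(encoded)]
assert k_prime == k
```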
The correctness of our scheme relies on the pseudorandomness and unclonability-of-proofs properties of the PRFSPD. The proof of correctness is similar to that of Construction 2. Next, we show that this construction achieves IND-CPA-EO security in Theorem 3.
Theorem 3: _If quantum-secure PRFSPD with super-logarithmic input size exists, then there exists a public-key encryption scheme with quantum public-keys and classical ciphertexts which is IND-CPA-EO secure._
Proof: Our construction is given in Construction 3. It uses a PRFSPD family \(\{|\psi_{\mathsf{dk},x}\rangle\}_{\mathsf{dk},x}\) and a quantum symmetric encryption scheme with classical ciphers \(\{\mathsf{Enc},\mathsf{Dec}\}\). We prove the security of our scheme through a series of hybrids.
* **Hybrid \(H_{0}\).** The original security game as defined in Definition 7.
* **Hybrid \(H_{1}\).** This is identical to hybrid \(H_{0}\), except that instead of measuring \(|\mathpzc{qp}\!\!\xi_{i}\rangle\) (for all \(i\in[\lambda]\)) when the adversary queries the encryption oracle for the first time, the challenger measures the left register of each \(|\mathpzc{qp}\!\!\xi_{i}\rangle\) before providing the copies of \(|\mathpzc{qp}\!\!\xi\rangle\) to the adversary. Note that by measuring \(|\mathpzc{qp}\!\!\xi_{i}\rangle\) in the computational basis, the challenger obtains a classical uniformly random string \(x_{i}^{*}\). Note that the two operations corresponding to the challenger's measurement of \(|\mathpzc{qp}\!\!\xi\rangle\) and the creation of the copies of \(|\mathpzc{qp}\!\!\xi\rangle\) given to the adversary commute. Thus, the distributions of the two hybrids are identical and no adversary can distinguish \(H_{0}\) from \(H_{1}\) with non-zero advantage.
* **Hybrid \(H_{2}\).** This is identical to hybrid \(H_{1}\), except that the challenger samples \(x_{i}^{*}\) as in the previous hybrid, and instead of providing \(|\mathpzc{qp}\!\!\xi\rangle\) to the adversary, it provides \[|\mathpzc{qp}\!\!\xi^{\prime}\rangle\coloneqq\bigotimes_{i\in[\lambda]}\frac{1}{\sqrt{2^{|x_{i}^{*}|}-1}}\sum_{x_{i}:x_{i}\neq x_{i}^{*}}|x_{i}\rangle|\psi_{\mathsf{dk}_{0,i},x_{i}}\rangle.\] Moreover, in the challenge query, the challenger uses \((x_{i}^{*},|\psi_{\mathsf{dk}_{0,i},x_{i}^{*}}\rangle)\) for all \(i\in[\lambda]\) for the encryption of the chosen message \(m\), without measuring a fresh copy of \(|\mathpzc{qp}\!\!\xi\rangle\) (that is, it skips the first step of the encryption algorithm). We note that this state \(|\mathpzc{qp}\!\!\xi^{\prime}\rangle\) can be efficiently prepared. The distinguishing probability of the two hybrids \(H_{1}\) and \(H_{2}\) implies that we can distinguish the following quantum states \(|\mathpzc{qp}\!\!\xi\rangle^{\otimes p}\otimes\bigotimes_{i\in[\lambda]}|x_{i}^{*}\rangle\) and \(|\mathpzc{qp}\!\!\xi^{\prime}\rangle^{\otimes p}\otimes\bigotimes_{i\in[\lambda]}|x_{i}^{*}\rangle\) with the same
probability, but these two quantum states have \(\mathsf{negl}(\lambda)\) trace-distance for any polynomial \(p\). Therefore, any adversary can only distinguish \(H_{1}\) and \(H_{2}\) with success probability at most \(\mathsf{negl}(\lambda)\).
* **Hybrid \(H_{2,i}\) for \(i\in[0,\lambda]\).** We define a series of (inefficient) hybrids \(H_{2,i}\), in which \(H_{2,0}\coloneqq H_{2}\), and we denote \(H_{2,\lambda}\coloneqq H_{3}\). Each \(H_{2,i+1}\) is identical to \(H_{2,i}\), except that the challenger uses a Haar oracle \(\mathcal{O}_{\mathsf{Haar}_{i}}\) in place of \(|\psi_{\mathsf{dk}_{0,i},\cdot}\rangle\). In particular, the quantum public key in the hybrid \(H_{2,i}\) is computed as: \[|\mathpzc{qp}\xi^{\prime}\rangle\leftarrow\bigotimes_{j=1}^{i}\sum_{x_{j}:x_{j}\neq x_{j}^{*}}|x_{j}\rangle\otimes|\vartheta_{x_{j}}\rangle\otimes\bigotimes_{j=i+1}^{\lambda}\sum_{x_{j}:x_{j}\neq x_{j}^{*}}|x_{j}\rangle|\psi_{\mathsf{dk}_{0,j},x_{j}}\rangle,\] where each \(|\vartheta_{x_{j}}\rangle\) is an output of \(\mathcal{O}_{\mathsf{Haar}_{j}}\) on input \(x_{j}\). For the challenge encryption query, the challenger uses \((x_{j}^{*},|\vartheta_{x_{j}^{*}}\rangle)\) for all \(j\in[1,i]\), and \((x_{j}^{*},|\psi_{\mathsf{dk}_{0,j},x_{j}^{*}}\rangle)\) for all \(j\in[i+1,\lambda]\). By the pseudorandomness property of \(|\psi_{\mathsf{dk}_{0,i},\cdot}\rangle\), we have that \(H_{2,i}\) and \(H_{2,i+1}\) are computationally indistinguishable.
* **Hybrid \(H_{3,i}\) for \(i\in[0,\lambda]\).** We define a series of (inefficient) hybrids \(H_{3,i}\), in which \(H_{3,0}\coloneqq H_{3}\), and we denote \(H_{3,\lambda}\coloneqq H_{4}\). In each \(H_{3,i+1}\), we revert the changes in \(H_{3,i}\), except that the challenger samples uniformly random keys \(\mathsf{dk}_{i}^{\prime}\) to compute the \(i\)-th component in \(|\mathpzc{qp}\xi^{\prime}\rangle\), except for the one used to encrypt the challenge query. Similar to the previous argument, \(H_{3,i+1}\) and \(H_{3,i}\) are also computationally indistinguishable due to the pseudorandomness property of \(|\psi_{\mathsf{dk}_{i}^{\prime},\cdot}\rangle\).
* **Hybrid \(H_{4,i}\) for \(i\in[0,\lambda]\).** We define a series of (inefficient) hybrids \(H_{4,i}\), in which \(H_{4,0}\coloneqq H_{4}\), and we denote \(H_{4,\lambda}\coloneqq H_{5}\). Each hybrid \(H_{4,i}\) is identical to \(H_{4,i+1}\), except that for the challenge encryption, the challenger does not sample \(\mathsf{dk}_{1,i}\) and compute \(|\psi_{\mathsf{dk}_{1,i},x_{i}^{*}}\rangle\). Instead, the challenger generates \(|\vartheta_{x_{i}^{*}}\rangle\) using a Haar random oracle \(\mathcal{O}_{\mathsf{Haar}_{i}}\) and uses this state to compute \(y_{i}^{\prime}\) (by applying \(\mathpzc{Destruct}\) to \(|\vartheta_{x_{i}^{*}}\rangle\)). By the pseudorandomness of \(|\psi_{\mathsf{dk}_{1,i},\cdot}\rangle\), \(H_{4,i}\) and \(H_{4,i+1}\) are computationally indistinguishable.
* **Hybrid \(H_{6}\).** This hybrid is identical to \(H_{5}\), except that now the challenger sets \(\tilde{y}_{i}=y_{i}\) for all \(i\) for the challenge encryption query. Note that in this hybrid, both \(y_{i}\) and \(y_{i}^{\prime}\) are computed by applying \(\mathpzc{Destruct}\) to a Haar random state, thus they are output of the same distribution. Therefore, \(H_{5}\) and \(H_{6}\) are identical.
* **Hybrid \(H_{6,i}\) for \(i\in[0,\lambda]\).** We define a series of hybrids \(H_{6,i}\), in which \(H_{6,0}\coloneqq H_{6}\), and we denote \(H_{6,\lambda}\coloneqq H_{7}\). Each hybrid \(H_{6,i+1}\) is identical to \(H_{6,i}\), except that now, instead of using a Haar random oracle in the encryption of the challenge query, the challenger samples a fresh key \(\mathsf{dk}_{i}\) and uses this key to compute \(\tilde{y}_{i}\), which is a proof of destruction of the state \(|\psi_{\mathsf{dk}_{i},x_{i}^{*}}\rangle\). By pseudorandomness of \(|\psi_{\mathsf{dk}_{i},\cdot}\rangle\), \(H_{6,i+1}\) and \(H_{6,i}\) are computationally indistinguishable. We also note that the hybrid \(H_{7}\) is now efficient again. In this final hybrid, the secret key \(k\) of the symmetric-key encryption scheme is uniformly random and independent of all the other variables in the hybrid. Thus, we can reduce the adversary's advantage in this hybrid to the security of the symmetric-key encryption scheme, and hence it is negligible.
Overall, we obtain that the advantage of the adversary in the first hybrid \(H_{0}\) is negligible, which concludes the proof.
## 6 Impossibility of Unconditionally Secure qPKE
In the following, we investigate the question on whether qPKE is possible to construct with information-theoretic security, and we give strong bounds against this. First, let us mention that a recent work by Morimae et al. [14] shows that an object called quantum pseudo-one-time pad (QPOTP) implies the existence of efficiently samplable, statistically far but computationally indistinguishable pairs of (mixed) quantum states (EFI pairs). QPOTP is a one-time symmetric encryption with quantum ciphertexts and classical keys, whose key length is shorter than the message length. qPKE immediately implies the existence of QPOTP, by increasing the message length, using bit-by-bit encryption. Since EFI pairs cannot exist information-theoretically, this chain of implications rules out the existence of unconditionally secure qPKE.11
Footnote 11: This observation was pointed out to us by Takashi Yamakawa.
For the sake of completeness, we provide a new and direct proof of the impossibility statement using a shadow tomography argument.
A Proof from Shadow Tomography. In order to prove our impossibility result, we first show that if two public-keys \(|\mathpzc{qpK}\rangle\) and \(|\mathpzc{qpK}^{*}\rangle\) are close, then when we encrypt a random bit using \(|\mathpzc{qpK}^{*}\rangle\), the probability of decrypting correctly with \(\mathsf{dk}\) is high, where \(\mathsf{dk}\) is the secret-key corresponding to \(|\mathpzc{qpK}\rangle\).
Lemma 1: _Let \(\lambda\) be the security parameter and \(\Gamma=(\mathpzc{Gen},\mathpzc{QPKGen},\mathpzc{Enc},\mathpzc{Dec})\) be a qPKE. Let \(\mathsf{dk}^{*},|\mathpzc{qpK}^{*}\rangle\) be a fixed pair of honestly generated keys and for all decryption keys \(\mathsf{dk}\) define \(p_{\mathsf{dk}}\) to be:_
\[p_{\mathsf{dk}}=\Pr\left[\mathpzc{Dec}(\mathsf{dk},\mathpzc{qc})=\mathpzc{ pt}\left|\begin{array}{c}\mathpzc{pt}\stackrel{{\$}}{{\leftarrow}}\{0,1\}\\ (\mathpzc{qc},\cdot)\leftarrow\mathpzc{Enc}(\mathpzc{qpK}^{*},\mathpzc{pt}) \end{array}\right.\right]\]
_and let \(|\mathpzc{qpK}_{\mathsf{dk}}\rangle\leftarrow\mathpzc{QPKGen}(\mathsf{dk})\). For all \(\mathsf{dk}\), if \(\left|\langle\mathpzc{qpK}^{*}|\mathpzc{qpK}_{\mathsf{dk}}\rangle\right|\geq 1-\epsilon\), then \(p_{\mathsf{dk}}\geq 1-\sqrt{3\epsilon}\)._
Proof: Let \(U_{\mathpzc{Enc}}\) be the purified implementation of the encryption procedures, i.e. given the state \(|\mathpzc{qpK}^{*}\rangle|b\rangle|0\rangle\), \(U_{\mathpzc{Enc}}\) computes the state computed by \(\mathpzc{Enc}\) prior to the measurement. We argue that for any \(|\mathpzc{qpK}_{\mathsf{dk}}\rangle\) which is close to \(|\mathpzc{qpK}^{*}\rangle\), the purified ciphertexts generated by the two keys are also close. For any bit \(b\), the purified ciphertext are defined as \(\tilde{\mathpzc{qc}}_{b}=U_{\mathpzc{Enc}}|\mathpzc{qpK}^{*}\rangle|b\rangle|0 \rangle\langle 0|\langle b|\langle\mathpzc{qpK}^{*}|U^{\dagger}_{\mathpzc{Enc}}\) and \(\tilde{\mathpzc{qc}}_{b}^{\prime}=U_{\mathpzc{Enc}}|\mathpzc{qpK}_{\mathsf{ dk}}\rangle|b\rangle|0\rangle\langle 0|\langle b|\langle\mathpzc{qpK}_{\mathsf{dk}}|U^{ \dagger}_{\mathpzc{Enc}}\). We refer to these as purified ciphertexts. Now we can show,
\[\mathrm{Tr}(\tilde{\mathpzc{qc}}_{b}\tilde{\mathpzc{qc}}_{b}^{ \prime\dagger}) =\mathrm{Tr}(U_{\mathpzc{Enc}}\langle\mathpzc{qpK}^{*}|\mathpzc{ qpK}_{\mathsf{dk}}\rangle|\mathpzc{qpK}^{*}\rangle\langle\mathpzc{qpK}_{ \mathsf{dk}}|U^{\dagger}_{\mathpzc{Enc}}) \tag{9}\] \[=\left|\langle\mathpzc{qpK}^{*}|\mathpzc{qpK}_{\mathsf{dk}} \rangle\right|^{2}\geq(1-\epsilon)^{2} \tag{10}\]
The transition from Equation (9) to Equation (10) follows from the trace-preserving property of unitaries. Let \(\{\Pi^{b}_{\mathsf{dk}}\}_{\mathsf{dk}}\) be the POVM corresponding to decrypting a purified ciphertext with key \(\mathsf{dk}\), i.e. the probability of a purified ciphertext \(\mathpzc{qc}\) being decrypted to \(b\) by \(\mathsf{dk}\) is given by \(\mathrm{Tr}(\Pi^{b}_{\mathsf{dk}}\mathpzc{qc})\). Now the term \(p_{\mathsf{dk}}\) can be rewritten as follows:
\[p_{\mathsf{dk}}=\frac{1}{2}[\mathrm{Tr}(\Pi^{0}_{\mathsf{dk}}\tilde{\mathpzc{qc }}_{0})+\mathrm{Tr}(\Pi^{1}_{\mathsf{dk}}\tilde{\mathpzc{qc}}_{1})] \tag{11}\]
Note that \(\mathrm{Tr}(\Pi^{0}_{\mathsf{dk}}\tilde{\mathpzc{qc}}^{\prime}_{0})=\mathrm{Tr}(\Pi^{1}_{\mathsf{dk}}\tilde{\mathpzc{qc}}^{\prime}_{1})=1-\mathsf{negl}(\lambda)\), as we assumed that \(\Gamma\) has negligible correctness error. We can now bound \(p_{\mathsf{dk}}\) as follows,
\[p_{\mathsf{dk}} =\frac{1}{2}[\mathrm{Tr}(\Pi^{0}_{\mathsf{dk}}\tilde{\mathpzc{qc}}_{0})+\mathrm{Tr}(\Pi^{1}_{\mathsf{dk}}\tilde{\mathpzc{qc}}_{1})] \tag{12}\] \[\geq 1-\mathsf{negl}(\lambda)-\frac{1}{2}[\mathrm{Tr}(|\Pi^{0}_{\mathsf{dk}}(\tilde{\mathpzc{qc}}_{0}-\tilde{\mathpzc{qc}}^{\prime}_{0})|)+\mathrm{Tr}(|\Pi^{1}_{\mathsf{dk}}(\tilde{\mathpzc{qc}}_{1}-\tilde{\mathpzc{qc}}^{\prime}_{1})|)]\] (13) \[\geq 1-\mathsf{negl}(\lambda)-\frac{1}{2}[\mathrm{Tr}(|\tilde{\mathpzc{qc}}_{0}-\tilde{\mathpzc{qc}}^{\prime}_{0}|)+\mathrm{Tr}(|\tilde{\mathpzc{qc}}_{1}-\tilde{\mathpzc{qc}}^{\prime}_{1}|)]\] (14) \[=1-\mathsf{negl}(\lambda)-\frac{1}{2}[\sqrt{1-\mathrm{Tr}(\tilde{\mathpzc{qc}}_{0}\tilde{\mathpzc{qc}}^{\prime\dagger}_{0})}+\sqrt{1-\mathrm{Tr}(\tilde{\mathpzc{qc}}_{1}\tilde{\mathpzc{qc}}^{\prime\dagger}_{1})}]\] (15) \[\geq 1-\mathsf{negl}(\lambda)-\sqrt{2\epsilon}\geq 1-\sqrt{3\epsilon} \tag{16}\]
The transition from Equation (14) to Equation (15) is due to \(\tilde{\mathpzc{qc}}_{b}\) and \(\tilde{\mathpzc{qc}}^{\prime}_{b}\) being pure states. This concludes the proof of the lemma.
Given Lemma 1, one can reduce the adversary's task in the IND-CPA game to finding a decryption key \(\mathsf{dk}\) such that the state \(|\mathpzc{qpK}_{\mathsf{dk}}\rangle\leftarrow\mathpzc{QPKGen}(\mathsf{dk})\) is close to \(|\mathpzc{qpK}^{*}\rangle\) in inner product distance. The main technique we use to realize this subroutine of the adversary is shadow tomography, introduced by Aaronson et al. [1]. At the core of our proof is the following theorem by Huang, Kueng, and Preskill [10].
Theorem 4 (Theorem 1 and S16 [10]).: _Let \(O_{1},\ldots,O_{M}\) be \(M\) fixed observables and let \(\rho\) be an unknown \(n\)-qubit state. Given \(T=O(\log(M/\delta)/\epsilon^{2}\times\max_{i}\mathrm{Tr}(O_{i}^{2}))\) copies of \(\rho\), there exists a quantum algorithm that performs measurements in random Clifford basis on each copy and outputs \(\tilde{p}_{1},\ldots,\tilde{p}_{M}\) such that, with probability at least \(1-\delta\)_
\[\forall i,|\tilde{p}_{i}-\mathrm{Tr}(O_{i}\rho)|\leq\epsilon\]
At a high level, the theorem states that the outcomes of polynomially many random Clifford measurements on a state, i.e. a polynomial number of classical shadows, are enough to reconstruct an estimate of the statistics obtained by measuring an exponential number of observables. Note that the post-processing required to reconstruct the \(\tilde{p}_{i}\) values is often inefficient; however, for our purpose, i.e. proving the impossibility of an information-theoretically secure quantum PKE, the efficiency of the procedure is not a concern. Using Theorem 4, we are able to prove the impossibility statement.
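As an illustration of the classical-shadow idea behind Theorem 4, the following toy sketch estimates single-qubit observables from randomised Pauli-basis measurements. This is the simpler single-qubit Pauli variant of the estimator in [10], not the random-Clifford protocol invoked above, and the example state and shot count are ours, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Sdg = np.diag([1, -1j])          # S^dagger
ROT = [H, H @ Sdg, I2]           # rotate the X-, Y- or Z-basis into the Z basis

def classical_shadow(rho, shots):
    """Average of the snapshots 3 U^dag |b><b| U - I, an unbiased estimator of rho."""
    est = np.zeros((2, 2), dtype=complex)
    for _ in range(shots):
        U = ROT[rng.integers(3)]                          # random Pauli measurement basis
        probs = np.real(np.diag(U @ rho @ U.conj().T))    # Born-rule outcome probabilities
        b = rng.choice(2, p=probs / probs.sum())
        ket = I2[:, [b]]
        est += 3 * U.conj().T @ (ket @ ket.conj().T) @ U - I2
    return est / shots

# unknown state |psi> = cos(pi/8)|0> + sin(pi/8)|1>, so <X> = <Z> = 1/sqrt(2)
psi = np.array([[np.cos(np.pi / 8)], [np.sin(np.pi / 8)]], dtype=complex)
rho = psi @ psi.conj().T
shadow = classical_shadow(rho, shots=20000)
for name, O in (("X", X), ("Z", Z)):
    print(name, round(np.real(np.trace(O @ shadow)), 3), round(np.real(np.trace(O @ rho)), 3))
```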
Theorem 5.: _For any security parameter \(\lambda\) and qPKE \(\Gamma=(\mathpzc{Gen},\mathpzc{QPKGen},\mathpzc{Enc},\mathpzc{Dec})\) there exists a polynomial \(m\) and a computationally unbounded adversary \(\mathcal{A}\) who can win the IND-CPA game with significant advantage, given only \(m(\lambda)\) copies of the public-key._
Remark 4.: In fact, our attack allows us to recover the secret key with high probability, and thus it also breaks the one-wayness security of qPKE (a weaker security notion than IND-CPA). Our theorem therefore shows a generic impossibility of unconditionally secure qPKE.
Proof.: Let us describe the adversary given \(m\) copies of the public-key \(|\mathpzc{qp}\mathpzc{K}^{*}\rangle\) alongside a challenge ciphertext \(\mathpzc{qc}\). We set the value of \(m\) later in the proof. For a value \(N\), we define the following rank \(1\) projection ensemble \(\{\Pi^{1}_{\mathsf{dk}}=|\mathpzc{qp}\mathpzc{K}_{\mathsf{dk}}\rangle\langle \mathpzc{qp}\mathpzc{K}_{\mathsf{dk}}|^{\otimes N}\}_{\mathsf{dk}\leftarrow \mathpzc{Gen}(1^{\lambda})}\). The adversary tries to find a decryption
key \(\mathsf{dk}\) such that \(\operatorname{Tr}\big(\Pi^{1}_{\mathsf{dk}}\,(|\mathpzc{qpK}^{*}\rangle\langle\mathpzc{qpK}^{*}|)^{\otimes N}\big)\) is close to one, by running the shadow-tomography procedure of Theorem 4 on its \(m\) copies of the public-key; by Lemma 1, such a key then decrypts the challenge ciphertext correctly with high probability.
## Acknowledgments
The authors wish to thank Prabhanjan Ananth and Umesh Vazirani for related discussions, and Takashi Yamakawa for pointing out a simple argument to rule out the existence of information-theoretically secure qPKE. The argument is replicated here with his permission.
ABG and QHV are supported by ANR JCJC TCS-NISQ ANR-22-CE47-0004, and by the PEPR integrated project EPiQ ANR-22-PETQ-0007 part of Plan France 2030. GM was partially funded by the German Federal Ministry of Education and Research (BMBF) in the course of the 6GEM research hub under grant number 16KISK038 and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2092 CASA - 390781972. OS was supported by the Israeli Science Foundation (ISF) grant No. 682/18 and 2137/19, and by the Cyber Security Research Center at Ben-Gurion University. KB and LH are supported by the Swiss National Science Foundation (SNSF) through the project grant 192364 on Post Quantum Cryptography.
OS has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 756482). MW acknowledges support by the European Union (ERC, SYMOPTIC, 101040907), by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2092 CASA - 390781972, by the BMBF through project QuBRA, and by the Dutch Research Council (NWO grant OCENW.KLEIN.267). Views and opinions expressed are those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
|
2307.08682
|
Implementation of a perception system for autonomous vehicles using a
detection-segmentation network in SoC FPGA
|
Perception and control systems for autonomous vehicles are an active area of
scientific and industrial research. These solutions should be characterised by
high efficiency in recognising obstacles and other environmental elements in
different road conditions, real-time capability, and energy efficiency.
Achieving such functionality requires an appropriate algorithm and a suitable
computing platform. In this paper, we have used the MultiTaskV3
detection-segmentation network as the basis for a perception system that can
perform both functionalities within a single architecture. It was appropriately
trained, quantised, and implemented on the AMD Xilinx Kria KV260 Vision AI
embedded platform. By using this device, it was possible to parallelise and
accelerate the computations. Furthermore, the whole system consumes relatively
little power compared to a CPU-based implementation (an average of 5 watts,
compared to the minimum of 55 watts for weaker CPUs, and the small size (119mm
x 140mm x 36mm) of the platform allows it to be used in devices where the
amount of space available is limited. It also achieves an accuracy higher than
97% of the mAP (mean average precision) for object detection and above 90% of
the mIoU (mean intersection over union) for image segmentation. The article
also details the design of the Mecanum wheel vehicle, which was used to test
the proposed solution in a mock-up city.
|
Maciej Baczmanski, Mateusz Wasala, Tomasz Kryjak
|
2023-07-17T17:44:18Z
|
http://arxiv.org/abs/2307.08682v1
|
# Implementation of a perception system for autonomous vehicles using a detection-segmentation network in SoC FPGA
###### Abstract
Perception and control systems for autonomous vehicles are an active area of scientific and industrial research. These solutions should be characterised by high efficiency in recognising obstacles and other environmental elements in different road conditions, real-time capability, and energy efficiency. Achieving such functionality requires an appropriate algorithm and a suitable computing platform. In this paper, we have used the MultiTaskV3 detection-segmentation network as the basis for a perception system that can perform both functionalities within a single architecture. It was appropriately trained, quantised, and implemented on the AMD Xilinx Kria KV260 Vision AI embedded platform. By using this device, it was possible to parallelise and accelerate the computations. Furthermore, the whole system consumes relatively little power compared to a CPU-based implementation (an average of 5 watts, compared to the minimum of 55 watts for weaker CPUs), and the small size (119mm x 140mm x 36mm) of the platform allows it to be used in devices where the amount of space available is limited. It also achieves an accuracy higher than 97% of mAP (mean average precision) for object detection and above 90% of mIoU (mean intersection over union) score for image segmentation. The article also details the design of the Mecanum wheel vehicle, which was used to test the proposed solution in a mock-up city.
Keywords:detection-segmentation neural network, perception, embedded AI, SoC FPGA, eGPU, Vitis AI, Mecanum wheel vehicle
## 1 Introduction
Today, we are witnessing the rapid development of advanced mobile robotics, including autonomous cars and drones (unmanned aerial vehicles, UAV). This would not be possible without advances in the implementation of perception and control systems, including the use of deep neural networks (DNN). DNNs make
it possible to achieve high accuracy, but memory and computational complexity remain significant challenges. In order to meet the requirements of mobile platforms, i.e. low latency and low energy consumption, it becomes necessary to use specialised hardware platforms such as SoC FPGAs (System on Chip Field Programmable Gate Arrays) or eGPUs (embedded Graphic Processing Units). These solutions also have the advantage of relatively small size and weight. It is also worth noting that a major challenge is the reliability analysis of network-based solutions, including their explainability [1]. This is of particular importance when traffic safety, for example, depends on DNN detections or control.
In perception systems, two basic tasks can be roughly distinguished: object detection and segmentation (semantic and instance). Object detection is the marking of objects belonging to the considered classes (e.g. cars, pedestrians, cyclists, traffic signs, etc.) in the image with bounding boxes or sometimes binary masks. Semantic segmentation involves assigning to each pixel a label that tells what object it belongs to (e.g. drivable area, horizontal road sign, vegetation, buildings, persistent, or sky). Instance segmentation, on the other hand, allows different labels to be given to pixels belonging to two separate objects of the same class (e.g. two pedestrians). It should be noted that object detection is a simpler and thus computationally less complex task. A typical solution using DNNs is the YOLO (You Only Look Once) family of algorithms [2]. In contrast, segmentation, especially instance segmentation aimed at obtaining similar information, is much more complex, requiring both longer training and inference. U-Nets [3] are typically used for semantic segmentation and Mask R-CNN-based [4] solutions for instance segmentation.
For autonomous vehicle perception systems, the tasks of detection and segmentation appear together. For objects such as pedestrians, vehicles, bicycles, vertical road signs, or traffic lights, the use of detection is sufficient. However, for the detection of drivable area or horizontal road signs (including pedestrian crossings), it is better to use segmentation. Hence, detection-segmentation networks have been proposed in the literature, which combine the advantages of both approaches and, at the same time, thanks to a common backbone (encoder), are characterised by lower computational complexity and an easier learning process than instance segmentation approaches. A detection-segmentation network, in addition to the aforementioned backbone, consists of a segmentation head and several detection heads. Examples of such networks are YOLOP [5], HybridNets [6] and MultiTask V3 [7] discussed in Section 2.
Taking into account the properties of the detection segmentation networks discussed above, we decided to use this solution as the basis for the perception system of our autonomous vehicle model. We used the MultiTask V3 network, which we implemented and deployed on two embedded platforms: SoC FPGA Kria KV260 and an eGPU (NVIDIA Jetson Nano and Xavier NX). The experiments performed showed that detection-segmentation networks represent a good compromise between accuracy, performance, and power consumption. We also discussed the design of the Mecanum wheeled vehicle used. To the best of our knowledge, this is the first paper that discusses the hardware implementation of
a perception system based on a detection-segmentation network implemented in an SoC FPGA, the results of which were applied to the control of an autonomous vehicle model.
The remainder of this paper is structured as follows. In Section 2 we discuss the relevant prior works on detection-segmentation networks and DNNs acceleration on SoC FPGA. Section 3 discusses the methods used, including the hardware implementation of the considered DNNs, and the design of the autonomous vehicle model. The results obtained are summarised in Section 4. The paper ends with conclusions and a discussion of possible future research.
## 2 Previous work
Three types of deep neural networks can be distinguished in current vision systems: detection, segmentation, and detection-segmentation. As mentioned in the introduction, detection-segmentation networks represent a compromise between the accuracy of instance segmentation and the speed of simple detection and are therefore an interesting solution for autonomous vehicle perception systems. Several architectures of detection-segmentation networks have been proposed in the literature.
The first is YOLOP [5]. It allows object detection and segmentation of drivable area and horizontal road markings. It consists of a common encoder and 3 separate decoders (one for detection and two for segmentation). It has been trained and evaluated on the popular _BDD100k_ dataset [8]. The second is HybridNets [6], which is very similar to YOLOP in terms of functionality. It consists of 4 components: encoder (EfficientNet V2 architecture), neck, detection head (inspired by YOLOv4), and segmentation head. The _BDD100k_ dataset was also used for training and evaluation. The third architecture, used in this work, is the MultiTask V3 [7] proposed by AMD Xilinx. It is worth noting that it is included in the Vitis AI library as a demonstrator of its capabilities, but to our knowledge, it has not been described in a scientific publication. Details of its construction are presented in Section 3.1. Unlike YOLOP and HybridNets, it also includes a depth estimation module. However, it has not been evaluated on a publicly available dataset.
The topic of hardware acceleration of deep neural networks, especially for embedded computing, is the subject of intense academic and industrial research due to its very high practical importance. A whole spectrum of solutions is encountered, from dedicated chips for AI acceleration (e.g. Intel Neural Compute Stick, Google Coral, Tesla FSD Chip), through programmable SoC FPGAs to eGPU platforms. A detailed overview of the solutions is beyond the scope of this article, and we refer interested readers, for example, to the review [9] or the work [10].
In this work, we have chosen to use an SoC FPGA platform and also run the selected network on an eGPU platform for comparison. Reprogrammable devices have been a proven platform for implementing vision algorithms for years, which was the main reason for our choice. In addition, they tend to have lower power
consumption than eGPUs. Of the available detection-segmentation networks, we chose MultiTask V3 for two reasons. First, from our previous experiments, it had the highest efficiency and relatively low computational complexity for our scenario. Second, it was well-prepared by AMD Xilinx for acceleration in SoC FPGAs, which facilitated its use in the target perception and control system.
## 3 Implementation of the perception and control system
The starting point for our research was the FPT'22 [11] competition, the aim of which is to create a model of an autonomous vehicle capable of driving according to the road traffic rules in a mock-up city. Figure 1(a) shows the mock-up city used. It is equipped with horizontal markings (traffic lanes, pedestrian crossings), traffic lights, figures imitating pedestrians, and various objects (obstacles) to be avoided on the road. Thanks to this test environment, it is possible to evaluate the perception and control system of an autonomous vehicle. The research presented can be divided into four phases: the design and construction of an autonomous vehicle equipped with Mecanum wheels, the design of electronics and assembly equipment, the implementation of the perception and control algorithm on the AMD Xilinx Kria KV260 platform, and the programming of a low-level algorithm to control the motors for the Mecanum wheels. The most important part of the work is the implementation of the perception and control system. It uses a detection-segmentation deep convolutional neural network architecture that is parallelised, quantised, and accelerated on an embedded SoC FPGA platform. On the other hand, the Mecanum wheels allow for precise manoeuvring, and the detection-segmentation network provides the necessary information about obstacles and other elements of the environment. In addition, the PID controller implemented in the motor controllers ensures stable driving, which is essential for the safety of the vehicle.
Figure 1: The mock-up of a city made by us (a) and the model of an autonomous vehicle (b) with Mecanum wheels and all equipment.
### Detection-segmentation network in SoC FPGA
MultiTask V3 is a deep convolutional neural network, designed by the developers of Vitis AI (AMD Xilinx) as part of an open source library made available for the development process [7]1. Its architecture is shown in Figure 2 and allows the simultaneous execution of five tasks: detection, three types of segmentation, and depth estimation (not used in this work).
Footnote 1: MultiTask V3 has not been described in a published scientific paper.
The segmentation part of the architecture is divided into three branches. Each branch can focus on a different task, such as segmenting detected objects, lanes (drivable area), or road markings. This approach makes it easier to prepare training sets, as these can be separated from each other, allowing a pixel to be classified in more than one class (e.g. a road marking should still be detected as a lane). The additional use of detection means that an in-depth analysis of detected objects (e.g. in terms of shape or occupied area in the image) is optional and performed only in special cases. The MultiTask V3 network architecture consists of several elements. First, the input image is transferred to the _Backbone_ segment, which is used for feature extraction. This is based on the ResNet-18 convolutional neural network. Then, thanks to the use of encoders and convolutional layers, the _Neck_ segment allows further feature extraction and the combination of low-level and high-level features. The features obtained are transferred to the appropriate branches: _Detection_, _Depth_, and _Segmentation_ heads. In them, again, thanks to the use of convolution, activation operations, and normalisation, the corresponding result tensors are generated.
Figure 2: Scheme of the MultiTask V3 deep neural network, showing layers of neurons grouped into sections. An input image is processed within successive layers to extract features. The features are used to generate output data: detections, segmentation, and also a depth map.
Due to the specificity of the project and the high complexity of the training set for depth estimation, the _Depth_ head training was not considered. For the remaining branches, three training sets were prepared, one common for object detection and segmentation and two for drivable area segmentation and road markings. The data for the training sets were obtained from recordings made on a city mock-up, which made it possible to prepare them strictly for the assumed task. 250 photos were obtained for the set containing the detected objects and 500 photos for the set showing the drivable area. The images were then manually labelled using the LabelMe software. The generated datasets were converted into a format compatible with the framework used to train the network. The framework is open source, based on Python, uses the PyTorch libraries, and is published in the Vitis AI libraries. As the software was written for older versions of the libraries and Python, corrections had to be made in order for the code to run properly. Once the modifications had been made, the software was launched using the prepared datasets. The model was trained using the GTX 1060 M GPU on sets split 80/20 between training and validation. The training was stopped after 450 epochs if there was no improvement in network performance.
The next step was to quantise the network model so that it could be run on an embedded SoC FPGA platform. This was done using the software described above. The quantisation is based on the vai_p_pytorch API provided by AMD Xilinx. Finally, the model was compiled into an architecture-compatible format using the vai_c_xir program, also provided by AMD Xilinx.
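For orientation, a minimal sketch of this quantise-and-export step is given below. The module path and call names (`pytorch_nndct.apis.torch_quantizer`, `quant_model`, `export_quant_config`) reflect our understanding of the Vitis AI PyTorch flow, and the checkpoint name, input shape, and calibration data are placeholders rather than the authors' code.

```python
# Hedged sketch of post-training quantisation with the Vitis AI PyTorch tools.
# Assumptions: pytorch_nndct is available (it ships with Vitis AI), the full
# floating-point MultiTask V3 model object was saved to the checkpoint, and
# random tensors stand in for real calibration frames from the city mock-up.
import torch
from pytorch_nndct.apis import torch_quantizer

model = torch.load("multitaskv3_float.pth", map_location="cpu").eval()  # placeholder name
dummy = torch.randn(1, 3, 320, 512)            # network input: 512x320 RGB frames

quantizer = torch_quantizer("calib", model, (dummy,))
quant_model = quantizer.quant_model
with torch.no_grad():
    for _ in range(8):                         # calibration passes (real images in practice)
        quant_model(torch.randn(1, 3, 320, 512))
quantizer.export_quant_config()
# A second run with quant_mode="test" evaluates the quantised model and exports the
# .xmodel, which vai_c_xir then compiles for the KV260 DPU, as described above.
```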
The final detection-segmentation model has been launched on the Kria KV260 SoC FPGA platform [12]. Kria is designed for the development of advanced image processing applications, allowing the acceleration of neural networks thanks to the use of a DPU (Deep Processing Unit). The platform's operating system is Ubuntu, with PYNQ software installed, which allows a program to be created in Python in Jupyter notebooks using the DPU overlay. In addition, by using the WiFi USB adapter and modifying the operating system's network settings, it is possible to communicate with the platform via SSH (Secure Shell) and through the created Jupyter Notebook server, allowing the algorithm to be executed and its operation to be analysed in real-time. This communication also makes it possible to continuously monitor the consumption of resources and the performance of the algorithm. Thanks to the libraries used, it is possible to collect image frames from a connected USB camera with a resolution of \(512\times 320\) pixels, convert them into the network input tensor, and then analyse the output tensors using methods from the OpenCV library. The implemented algorithm imports the necessary libraries and defines data pre-processing and processing functions.
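A minimal sketch of this capture-and-infer path on the Kria board is shown below. The `pynq_dpu`/VART names (`DpuOverlay`, `load_model`, `execute_async`, `wait`) and the overlay/model file names are assumptions based on the PYNQ-DPU flow described above, not the authors' code; the input normalisation is also a placeholder.

```python
import cv2
import numpy as np
from pynq_dpu import DpuOverlay   # assumed package/class names

overlay = DpuOverlay("dpu.bit")                 # hypothetical overlay file name
overlay.load_model("multitaskv3.xmodel")        # hypothetical compiled model name
dpu = overlay.runner

in_tensor = dpu.get_input_tensors()[0]
out_tensors = dpu.get_output_tensors()
in_buf = [np.empty(tuple(in_tensor.dims), dtype=np.float32, order="C")]
out_bufs = [np.empty(tuple(t.dims), dtype=np.float32, order="C") for t in out_tensors]

cap = cv2.VideoCapture(0)                       # USB camera
ok, frame = cap.read()
frame = cv2.cvtColor(cv2.resize(frame, (512, 320)), cv2.COLOR_BGR2RGB)
in_buf[0][0] = frame.astype(np.float32) / 255.0 # placeholder normalisation

job_id = dpu.execute_async(in_buf, out_bufs)
dpu.wait(job_id)
# out_bufs now hold the detection and segmentation output tensors, which are
# post-processed with OpenCV/NumPy as described in the text.
```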
### Vehicle control algorithm
The algorithm captures the last frame from the USB camera, pre-processes it (size, colour space), and converts it into tensors, which are then fed into the MultiTaskv3 neural network. The network returns tensors which are then converted into masks: segmentation of detected objects, segmentation of drivable area, segmentation of road markings, and bounding boxes of detected objects.
The received data is then analysed: first, it is checked whether a pedestrian or an obstacle is present in the ROI (region of interest), which is defined as a short distance in front of the vehicle. In the case of a pedestrian, the vehicle should stop, and in the case of an obstacle, the overtaking manoeuvre should be initiated. The lines are then checked. The detection of a continuous cross-line marking triggers a vehicle stop. Based on the sideline, it is possible to determine the trajectory of movement. If the sideline is not in the ROI (on the left side of the image), the segmentation of the drivable area allows checking whether the vehicle is at an intersection or in a curve, which means it needs to turn. Based on the results of the analysis, a trajectory is determined and transmitted to the Arduino microcontroller, which controls the motors. The loop then returns to the initial step and continues indefinitely.
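To make the decision flow concrete, a self-contained toy sketch of two of the mask-based checks (ROI occupancy and a rough lane-keeping setpoint) follows; the ROI size, threshold, and gain are assumed values for illustration, not the parameters used by the authors.

```python
import numpy as np

def roi_occupied(mask, rows=80, thresh=0.05):
    """True if the bottom `rows` rows of a binary mask (the area just in front of
    the vehicle) contain more than `thresh` foreground pixels (assumed values)."""
    return mask[-rows:, :].mean() > thresh

def lane_keeping(lane_mask, gain=2.0):
    """Rough proportional steering from the drivable-area mask: compare the lane
    pixel mass in the left and right image halves (the gain is an assumed value)."""
    w = lane_mask.shape[1]
    left = lane_mask[:, : w // 2].sum()
    right = lane_mask[:, w // 2:].sum()
    return gain * (right - left) / max(left + right, 1)

# dummy 320x512 masks standing in for the network outputs
pedestrian_mask = np.zeros((320, 512), dtype=np.uint8)
lane_mask = np.zeros((320, 512), dtype=np.uint8)
lane_mask[:, 300:] = 1                       # drivable area shifted to the right

if roi_occupied(pedestrian_mask):
    v_x, omega = 0.0, 0.0                    # pedestrian in the ROI: stop
else:
    v_x, omega = 0.3, lane_keeping(lane_mask)
print(v_x, omega)                            # setpoints to be sent to the Arduino over UART
```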
### Hardware setup
The electronics project consisted of placing the Arduino Nano Every microcontroller, based on the ATMega4809, on the breadboard, allowing the use of hardware interrupts on any pin. The microcontroller is directly connected to the motor encoders and four Pololu DRV8838 motor controllers, which allow control using the PWM (Pulse-width Modulation) signal. The power section consists of a LiPo package and step-down converters: 12V for the FPGA platform and 6V for the motors. The microcontroller communicates and is powered via a USB connection to the FPGA platform. The motor control was programmed on the microcontroller in the language provided by Arduino, based on C++. The program receives the set values from the FPGA platform through the UART protocol in the format \(V_{x},V_{y},\omega\), where \(V_{x}\) is the longitudinal velocity vector, \(V_{y}\) is the transverse velocity vector, and \(\omega\) is the given angular velocity of rotation relative to the geometric centre of the vehicle. From the above values, the angular velocity setpoints for each of the motors are determined. The rotation of each wheel changes the signals on the encoder connected to it. Using hardware interrupts, it is possible to determine the angle that each of the motors has turned, which is counted in the counter assigned to it, and stored in the cache. An interrupt timer has been implemented in the program, which calls a handler function exactly every 0.1 seconds. This function retrieves the current counter reading and compares it with the previous one. This is used to determine the angular velocity, the previous values of which are also stored and differentiated for the purposes of the PID (Proportional Integral Derivative) controller. Then, for each motor, the setpoint for the given speed, the control error, and its derivative are determined, which makes it possible to determine the P and D terms of the PID controller. The values obtained are used to determine the duty cycle of the PWM signal sent to the motor controllers. The program runs in an infinite loop, and in asynchronous mode, the microcontroller is constantly waiting for a new reference to be sent.
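The mapping from the received setpoints \((V_{x},V_{y},\omega)\) to the four wheel speeds is the standard Mecanum inverse kinematics; the sketch below illustrates it together with a single P-D update. The wheel radius follows from the 80 mm wheels described below, while the half wheelbase/track values, the gains, and the sign convention for the rollers are assumptions.

```python
import numpy as np

R_WHEEL = 0.04        # 80 mm diameter Mecanum wheels -> 0.04 m radius
LX, LY = 0.08, 0.07   # half wheelbase and half track width [m] (assumed values)

def mecanum_wheel_speeds(v_x, v_y, omega):
    """Inverse kinematics for an X-configuration Mecanum platform: map the body
    velocities (v_x forward, v_y sideways, omega yaw) to the four wheel angular
    velocities [rad/s]; the signs follow one common convention and depend on the
    roller orientation of each wheel."""
    k = LX + LY
    return np.array([
        v_x - v_y - k * omega,   # front left
        v_x + v_y + k * omega,   # front right
        v_x + v_y - k * omega,   # rear left
        v_x - v_y + k * omega,   # rear right
    ]) / R_WHEEL

def pd_step(setpoint, measured, prev_error, kp=0.8, kd=0.05, dt=0.1):
    """One P-D update of a per-wheel speed loop, evaluated every 0.1 s as in the
    text; the gains kp and kd are assumed placeholder values."""
    error = setpoint - measured
    return kp * error + kd * (error - prev_error) / dt, error

print(mecanum_wheel_speeds(0.3, 0.0, 0.5))   # example body-velocity setpoint
```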
In order to better adapt the vehicle to the dimensions of the city mock-up, all its elements were made using 3D printing technology, such as adapters for the motors to mount the wheels, USB camera holder, base platform adapted to mount the motors, cameras, electronics, power supply, and the main computing
platform. Four Pololu HP micromotors with 150:1 gears and encoders were attached to the base platform, and Mecanum wheels of 80 mm diameter were mounted on their shafts using adapters. On the underside of the platform is a breadboard with electronics to control the motors and a 14.8V nominal LiPo pack. At the top of the chassis is a computer platform and a USB camera mount. Figure 1(b) shows the model of the autonomous vehicle described above.
## 4 Evaluation of the detection-segmentation network
The first experiment was to compare the quality (efficiency, accuracy) of network model inference before and after quantisation. The tests were performed using the libraries provided by AMD Xilinx, discussed earlier. Each branch was evaluated on the test set and the results are summarised in Tables 1, 2, 3 and 4. As can be seen, quantisation resulted in a slight quality decrease (of the order of less than one per cent). This means, therefore, that the model used by the SoC FPGA platform will behave almost identically to the one run on a PC equipped with a graphics card in the environment provided by AMD Xilinx described in Section 3.1.
To test the efficiency and cost-effectiveness of the proposed solution, a series of performance tests were carried out on the Kria KV260 platform. The input to the algorithm was a pre-prepared dataset derived from footage recorded on a mock-up of the city. During operation, the load on the platform's quad-core Cortex-A53 processor clocked at 1.3 GHz, the use of RAM (Random Access Memory) and CMA (Contiguous Memory Allocator), and the power consumption of the SOM (System on Module) platform were checked. The results are shown in Table 5. It is worth noting that the platform makes full use of one CPU core. According to the manufacturer's documentation, it is possible to run the algorithm using multithreading, but this would involve higher power consumption. The results show that the platform consumes only around 5W of power when running, which allows it to be considered energy efficient.
In order to compare the performance of the platform used, the inference time of the MultiTask V3 network and the execution time of one iteration of the algorithm was examined. The same algorithm was then run on the NVIDIA Jetson Nano and NVIDIA Jetson Xavier NX eGPU platforms, using the pre-quantisation model and the PyTorch library to run the network. The results of the algorithm's efficiency on the platforms are shown in Table 6.
Experiments show that the Kria KV260 platform has demonstrated the best performance in its power consumption class. In terms of processing speed, it clearly outperforms the NVIDIA Jetson Nano platform, with the same power consumption. It also runs faster than the NVIDIA Jetson Xavier NX platform in 10W consumption mode. Only when using the 20W consumption mode does the NX platform achieve approximately 0.5 fps (frames per second) more, but at the cost of four times higher power consumption.
The achieved processing speed of almost 5 FPS is sufficient for the algorithm to make a decision in a satisfactory time. However, the results show that the
application of deep neural networks on energy-efficient embedded platforms is still a significant challenge.
To sum up, the best results were obtained on the Kria KV260 SoC FPGA platform, which allows us to obtain satisfactory results in terms of accuracy, efficiency, and power consumption. It should be noted that the currently implemented algorithm is still under development, and the results show that it would be beneficial to focus more on code optimisation and system reconfiguration to utilise all CPU cores. This could slightly increase power consumption, but even 10W of consumption can be considered low for a platform that would be the most important element of an autonomous car. The code used in the experiments described is available at [https://github.com/vision-agh/mt_kria](https://github.com/vision-agh/mt_kria).
| Quantisation state | mAP\({}_{50}\) [%] | mAP\({}_{70}\) [%] | mAP\({}_{75}\) [%] |
|---|---|---|---|
| Before | 99.4 | 99.4 | 97.2 |
| After | 99.3 | 99.3 | 97.0 |

Table 1: Comparison of results for object detection (mAP – mean Average Precision).
| Quantisation state | MIoU [%] | IoU [%] – Background | IoU [%] – Lanes |
|---|---|---|---|
| Before | 90.72 | 99.04 | 82.40 |
| After | 90.69 | 99.04 | 82.33 |

Table 3: Comparison of results for lane segmentation (MIoU – Mean IoU, IoU – Intersection over Union).
## 5 Conclusion
In this paper, we have discussed the implementation of a perception system for autonomous vehicles using a detection-segmentation network deployed in an SoC FPGA. We have presented the process of preparing a custom dataset according to the requirements of the FPT'22 competition and the training of a neural network model. We have also given a detailed description of the construction of a Mecanum wheel-based autonomous vehicle model, focusing on mechanical and electrical aspects. A fully autonomous control algorithm has been implemented and run on the discussed platform, as well as on two eGPUs. Several experiments have been performed, showing the efficiency and low power consumption of the proposed solution, which supports our thesis that the FPGA Kria KV260 using the MultiTask V3 neural network is a suitable solution for autonomous cars and robots with limited space and resources.
In future work, we will first refactor the code to further improve its efficiency. We also plan to test the vehicle model on the mock-up. Secondly, we will try to use the _weakly supervised learning_ and _self-supervised learning_ methods, which, in the case of an atypical, custom dataset, would allow a significant reduction in the labelling process of the learning data. We would also like to consider adding modules for depth estimation and optical flow, as these are often used in autonomous vehicle perception systems.
## Acknowledgements
The work presented in this paper was supported by the programme "Excellence initiative - research university" for the AGH University of Krakow.
| Embedded platform | Power [W] | Speed [fps] | Execution time [s] | Model inference time [s] |
|---|---|---|---|---|
| Kria KV260 | 5 | 4.85 | 0.206 | 0.073 |
| Nvidia Jetson Nano | 5 | 2.07 | 0.483 | 0.223 |
| Nvidia Jetson Xavier NX | 10 | 4.35 | 0.230 | 0.093 |
| Nvidia Jetson Xavier NX | 20 | 5.48 | 0.182 | 0.068 |

Table 6: Comparison of algorithm’s performance on different computing platforms.
| Resource usage | CPU\({}_{0}\) | CPU\({}_{1}\) | CPU\({}_{2}\) | CPU\({}_{3}\) | RAM | CMA | Power |
|---|---|---|---|---|---|---|---|
| Used | 85% | 22% | 3% | 3% | 38% | 6% | 4.95 W |

Table 5: Comparison of resource consumption on the Kria KV260 platform.
|
2304.00956
|
Measurements of the Kr (e, 2e) differential cross section in the
perpendicular plane, from 2 eV to 120 eV above the ionization threshold
|
New (e, 2e) differential cross section measurements from krypton are
presented in the perpendicular plane, where the incident electron beam is
orthogonal to the scattered and ejected electrons that map out a detection
plane. New data were obtained at incident energies from 30 eV to 120 eV above
the ionization potential (IP), the experiment being configured to detect
scattered and ejected electrons with equal energy. The results are compared to
previous measurements from 2 eV to 50 eV above the IP and to calculations from
different models in this energy range. The new experiments confirm the results
from previous measurements. The results are also compared to recent data for
argon acquired under the same kinematic conditions, to highlight similarities
and differences that are observed.
|
Andrew James Murray, Joshua Rogers
|
2023-04-03T13:24:28Z
|
http://arxiv.org/abs/2304.00956v1
|
Measurements of the Kr (e, 2e) differential cross section in the perpendicular plane, from 2 eV to 120 eV above the ionization threshold
###### Abstract
New (e, 2e) differential cross section measurements from krypton are presented in the perpendicular plane, where the incident electron beam is orthogonal to the scattered and ejected electrons that map out a detection plane. New data were obtained at incident energies from 30 eV to 120 eV above the ionization potential (IP), the experiment being configured to detect scattered and ejected electrons with equal energy. The results are compared to previous measurements from 2 eV to 50 eV above the IP and to calculations from different models in this energy range. The new experiments confirm the results from previous measurements. The results are also compared to recent data for argon acquired under the same kinematic conditions, to highlight similarities and differences that are observed.
## I Introduction
In a recent paper [1] the electron-impact ionization differential cross-sections (DCS) from an argon target were detailed, where the scattered and ejected electrons from the interaction were detected in the plane perpendicular to the incident electron momentum \(\mathbf{k_{0}}\), as shown in Fig. 1. The outgoing electrons with momenta \(\mathbf{k_{1}}\) and \(\mathbf{k_{2}}\) were detected in coincidence with equal energy, using hemispherical electron analyzers that swept around the detection plane. In this plane only the mutual angle \(\phi\) has relevance.
The results from this argon study confirmed previous experimental data [2], which found a significant disagreement with the calculations carried out by Whelan and co-workers [3]. This was particularly noticeable around 50 eV above the ionization potential (IP). The models included non-relativistic distorted wave Born approximation (DWBA) calculations including and neglecting post-collisional interactions (\(\pm\)PCI), and a plane wave approximation (PWA). PCI was included using a Gamow factor [4] and also by including a Ward-Macek factor [5]. Both factors provide an approximation to the effects of PCI and their inclusion was seen to overcompensate the effects of electron-electron repulsion under these kinematic conditions. The Ar experimental data were further extended to an incident energy of 200 eV above the ionization potential in [1], allowing future tests of different models.
A comparison with low energy experimental data for Kr as presented in [2] was also carried out in [3]. When compared to the Ar studies, the DWBA-PCI calculation was found to be in better agreement for Kr, particularly at an incident energy 50 eV above the IP. This contrast between the calculations for these different targets hence suggested that an extended survey of the cross section for krypton should also be carried out, both to check the data in [2] and to extend the measurement range to higher energies.
The results of these experiments are presented in this paper, together with a comparison to previous data from [2] and with the calculations in [3]. The energy range of the original study was 2 eV to 50 eV above the IP. The new measurements detailed here extend this range to 120 eV above the IP. This range could not be extended further due to the very low coincidence count rates encountered in this geometry. Selected measurements of the DCS from [2] at incident energies 30, 40 and 50 eV above the IP were repeated and compared to the new measurements. Agreement was found between the two data sets at these energies.
In all of these studies the DCS are not measured on an absolute scale. The mutual angle \(\phi=180^{\circ}\) is therefore selected as a normalization point. This is a common angle for all incident beam angles \(\psi\) since the outgoing electrons are detected in a symmetric geometry (i.e. \(\theta_{1}=\theta_{2}\), see Fig. 1). Normalizing the data at this point allows the experimental results and calculations to then be directly compared over a wide range of geometries, as discussed in [6; 7].
In the present studies the energies of the detected electrons were chosen to be equal, so that \(E_{1}=E_{2}=E\). The incident electron energy was hence set to be \(E_{inc}=2E\) + IP. An (e, 2e) DCS was then determined that depends
Figure 1: The perpendicular plane scattering geometry.
on both the energy and mutual angle \(\phi\).
In the perpendicular plane geometry the ionization reaction is highly sensitive to multiple order scattering processes [8; 9] and so provides a robust test of the more sophisticated theories that include these effects. At the energies investigated here it is also important to include target polarization and post-collisional interactions, which places further demands on the calculations. These effects reduce as the incident energy increases. The extended energy range of the DCS measurements presented here hence should allow their contributions to be considered systematically.
The strength of the DWBA models described in [3; 8] lies in the fact that different interaction processes can be turned 'on' or 'off' by replacing the distorted waves with plane waves. This allows different underlying scattering mechanisms to be considered. In the perpendicular plane these models suggest that peaks found at \(\phi\) = 180\({}^{\circ}\) are due to the momentum of the bound electron matching that of the incident electron, so that both electrons leave the interaction in opposite directions [10]. Madison and co-workers [8] further showed that peaks occurring at \(\phi\) = 180\({}^{\circ}\) will also have a contribution from _triple_ scattering, where the incident electron first scatters elastically into the perpendicular plane, followed by a binary collision with a bound electron. The electron then scatters elastically from the target to emerge at the mutual angle of 180\({}^{\circ}\). Peaks found near \(\phi\) = 90\({}^{\circ}\) and 270\({}^{\circ}\) are considered to arise from elastic scattering of the electron into the perpendicular plane, followed by a binary collision. Note that the DCS at angles \(\phi\)\(\leq\) 90\({}^{\circ}\) are mirrored by the measurements for \(\phi\)\(\geq\) 90\({}^{\circ}\), due to rotational symmetry around the incident electron beam direction.
These semi-classical descriptions of the different collision mechanisms are attractive, as they provide an intuitive explanation of the processes that are involved. They have proven to be successful in describing ionization from lighter targets, however a full quantum calculation is needed to model the DCS for heavier atoms. The additional experimental data presented here are hence important to elucidate a better understanding of the scattering mechanisms that are involved.
These experiments proved to be challenging due to the very low coincidence count rates obtained from this target in the perpendicular plane. The rates varied from around 0.5 Hz to less than 0.01 Hz, depending on both the mutual angle and incident energy. The (e, 2e) spectrometer hence was operated under computer control, adopting optimization techniques to eliminate long-term drifts that can occur over the extended periods of time required to accumulate coincidence data. These control and optimization techniques have been described in depth previously [11; 12].
The (e, 2e) spectrometer in Manchester uses an unselected-energy electron gun that produces an electron beam in the energy range from 5 eV to 300 eV with a width of around 0.6 eV. The electron beam enters the interaction region with zero beam angle and a pencil angle of around 2\({}^{\circ}\). The electron analyzers use 3-element electrostatic zoom lenses that direct electrons emerging from the interaction into hemispherical energy selectors. The pass energies of the selectors are adjusted to control the overall resolution of the spectrometer and are typically set to match that of the gun. A gas jet directs atoms into the interaction region (not shown in Fig. 1). The spectrometer is mounted inside a large \(\mu\)-metal enclosed chamber that is evacuated to a base pressure of \(\sim~{}6\) x 10\({}^{-7}\) Torr. When the gas jet is injected the pressure rises to 2 x 10\({}^{-5}\) Torr. Further details of the spectrometer can be found in [1] and the references therein.
These new experiments were performed over a period of 10 months to accumulate the data presented here. Once the gas jet was set to deliver target atoms into the interaction region, the incident electron beam current was adjusted to produce scattered electron count rates in each analyzer that were between 5 and 10 kHz. The analyzers were then set to scan between \(\phi\) = 70\({}^{\circ}\) and 290\({}^{\circ}\), this range being limited by their physical size. The lens voltages were optimized automatically at each angle to maximize the detected electron count rates. Coincidence counts were then accumulated for between 1,500 and 5,000 seconds, depending on the probability of detection of events at that energy and angle. The analyzers were then moved to a new angle and the process repeated.
Once the detection plane had been mapped in one direction, the analyzers moved in the reverse direction. Up to 20 sweeps of the plane were conducted at a given energy. The measurements at each angle were then normalized to a fixed collection time and were averaged, the standard error in the mean providing the quoted uncertainty. These data were then re-normalized to unity at \(\phi\) = 180\({}^{\circ}\), as discussed above.
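The per-angle averaging and re-normalisation described above amounts to a few lines of processing; the sketch below uses synthetic Poisson counts purely for illustration (the real inputs are the time-normalised coincidence counts from up to 20 sweeps of the detection plane).

```python
import numpy as np

rng = np.random.default_rng(1)
phi = np.arange(70, 300, 10)     # mutual angle [deg], 70..290 in 10-degree steps
# synthetic coincidence counts from 20 sweeps, already scaled to a fixed collection time
sweeps = rng.poisson(20, size=(20, phi.size)).astype(float)

mean = sweeps.mean(axis=0)
sem = sweeps.std(axis=0, ddof=1) / np.sqrt(sweeps.shape[0])   # standard error of the mean

i180 = np.argmin(np.abs(phi - 180))
dcs = mean / mean[i180]          # re-normalise the curve to unity at phi = 180 deg
dcs_err = sem / mean[i180]       # simplified: the 180-degree uncertainty is treated separately
print(np.column_stack((phi, dcs.round(3), dcs_err.round(3))))
```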
Figure 2: The binding energy spectrum from Kr, where the outgoing energies were set to \(E_{1}=E_{2}=22.5\) eV and the mutual angle was \(\phi\) = 90\({}^{\circ}\) (\(\theta_{1}=\theta_{2}=45^{\circ}\)). The normalized data are shown for analyzer pass energies set to 15 eV and 7.5 eV respectively. At 7.5 eV the contribution from different core states can just be resolved.
All noble gas targets (apart from helium) have a complete outer valence shell comprised of 6 \(p\)-electrons. The ground state of krypton is the [Ar]\(3d^{10}4s^{2}4p^{6}\)\({}^{1}\)S\({}_{0}\) state, where [Ar] is the closed argon electron configuration. Removal of one of the outer electrons by electron-impact ionization hence leaves the core with five \(p\)-electrons, whose combined angular momentum is non-zero. This leads to two possible ionic core states, which are the 4\(p^{5}\)\({}^{2}\)P\({}_{3/2}\) and 4\(p^{5}\)\({}^{2}\)P\({}_{1/2}\) states. The \({}^{2}\)P\({}_{3/2}\) core has the lowest binding energy of 14.0 eV, whereas the \({}^{2}\)P\({}_{1/2}\) core has a binding energy \(\sim\) 0.63 eV higher [13].
Since the resolution of the incident electron beam is around 0.6 eV it is possible to estimate the relative contribution from the different core states, by reducing the pass energy of the detectors so that the spectrometer resolution is dominated by that of the incident beam. Fig. 2 shows the result of this study, where the outgoing electrons were detected at \(\phi\) = 90\({}^{\circ}\) (\(\theta_{1}\) = \(\theta_{2}\) = 45\({}^{\circ}\)) and were selected to have energies \(E_{1}\) = \(E_{2}\) = 22.5 eV.
The coincidence data in Fig. 2 were obtained at pass energies of 7.5 eV and 15 eV respectively. The higher resolution data at 7.5 eV shows that the contribution from ionization to the 4\(p^{5}\)\({}^{2}\)P\({}_{1/2}\) core is around half that to the 4\(p^{5}\)\({}^{2}\)P\({}_{3/2}\) core under these kinematic conditions. Owing to the very low coincidence counting rates obtained in this geometry, it was however not practical to operate at the higher resolution and therefore all data presented in this paper were taken with a pass energy of 15 eV. Ionization to both channels hence needs to be considered when comparing these results to theory.
## II DCS from 2 to 120 eV above the IP
The measured DCS for ionization of krypton are presented in Fig. 3 for incident energies from 2 eV to 120 eV above the IP, using analyzer pass energies of 15 eV. The gun angle was set to \(\psi\) = 90\({}^{\circ}\) and the mutual angle \(\phi\) was adjusted in steps of 10\({}^{\circ}\). The results from [2] (2010 data) have been reproduced in Fig. 3, allowing comparison between the new measurements and previous data.
This figure shows how the relative cross section evolves from a low incident energy 2 eV above the IP to a high energy 120 eV above the IP. The results from the calculations in [3] are also reproduced where available, allowing comparison between theory and experiment. The DWBA calculation without PCI is the blue curve labelled DWBA-PCI. The calculation that includes PCI using the Gamow factor is the red curve labelled DWBA+PCI and the plane wave calculation is the green curve labelled PWA. Results from the calculations are included for incident energies of (c) 10 eV, (e) 20 eV, (f) 30 eV, (g) 40 eV and (i) 50 eV above the IP.
The DWBA-PCI model produces the closest agreement with the data at the higher energies. Under these kinematic conditions the DCS must be zero at \(\phi\) = 0\({}^{\circ}\) and \(\phi\) = 360\({}^{\circ}\), due to post collisional interactions between outgoing electrons that have equal energy. The importance of this effect is clearly seen in the data. The DWBA-PCI model cannot however reproduce this condition, and so diverges from the data at low and high angles. At the lowest energy of 10 eV above the IP this model fails to emulate the data at all angles, as shown in Fig. 3(c).
Inclusion of PCI using the Gamow factor in the DWBA+PCI calculation is clearly too strong an effect, since this does not reproduce the data well. The Ward Macek factor also produced too large a change compared to experiment and so for clarity is not reproduced here [3]. Inclusion of PCI does however ensure that the cross section is zero at \(\phi\) = 0\({}^{\circ}\) and \(\phi\) = 360\({}^{\circ}\), as is required. The DWBA+PCI model predicts the position of the peaks reasonably well, however the magnitudes of the peaks around \(\phi\) = 90\({}^{\circ}\) and \(\phi\) = 270\({}^{\circ}\) are too small when normalized to the common point at \(\phi\) = 180\({}^{\circ}\).
The plane wave approximation does not agree well with the data at any of the energies shown here. This demonstrates the importance of including the additional complexities of the interaction into the calculations as discussed above, particularly at lower energies. Since the PWA model is expected to become more accurate as the energy increases [14], comparison with the new data at higher energies may show better agreement in the future.
New results at an energy of 45 eV and at energies ranging from 55 eV to 120 eV above the IP were obtained in this study, as shown in Fig. 3(h) and panels (j) - (o). The complete set of data presented in Fig. 3 hence details the evolution of the DCS in the perpendicular plane for Kr from near threshold to high energy. At the lowest observed energy with \(E_{1}\) = \(E_{2}\) = 1 eV the DCS has a relatively broad structure with small peaks occurring at around \(\phi\) = 120\({}^{\circ}\) and \(\phi\) = 240\({}^{\circ}\). This contrasts with measurements from the lighter noble gas targets He and Ne, which both show a large peak at \(\phi\) = 180\({}^{\circ}\) under similar kinematics [15; 16; 17; 18; 19; 20; 21; 22]. Since the energies of the outgoing electrons are equal, the effects of PCI become increasingly important as the energy is lowered. This results in an enhancement of the back-to-back signal where the electrons emerge opposite each other (i.e. at \(\phi\) = 180\({}^{\circ}\)). Near threshold, PCI is expected to dominate over all other collision processes, as described by Wannier [23].
It is interesting to note that for Kr, Ar and Xe the DCS in this low energy regime does not feature a dominant peak at \(\phi\) = 180\({}^{\circ}\) as found for He and Ne and as predicted by the Wannier model. These targets all produce a 'double peak' structure with a local \(minimum\) at this angle, rather than a maximum. Since Ne also has \(p\)-electrons in the valence shell, this difference probably arises from the more complex electronic structure of these heavier targets. Further theoretical investigations in this energy regime are needed to explain these differences.
As the incident energy increases beyond the threshold region the double-peak structure is enhanced until at 15 eV above the IP (Fig. 3(d)) a small central peak starts to emerge. Between 15 eV and 40 eV above the IP a triple peak structure is observed. When \(E_{1}\) = \(E_{2}\) = 10 eV all peaks have similar magnitudes. The side
peaks then rapidly increase in magnitude compared to the central peak, until at 45 eV above the IP the central peak has disappeared. The remaining side peaks then slowly reduce in magnitude as the energy increases, until at 120 eV above the IP the DCS is largely uniform over a broad range of angles. This pattern is similar to that of the noble gases Ar and Xe at higher energies in this geometry [17; 1]. It is not clear why this flattening of the DCS occurs and so further calculations are required to explain these results.
### Comparison between Ar and Kr peak ratios
Given the differences between theory and experiment that are evident for Ar and Kr, it is instructive to consider how the ratio of the cross section measurements varies as the energy changes. Since the peak of the DCS occurs at around \(\phi=90^{\circ}\), the data at this angle (normalized to the common point at \(\phi=180^{\circ}\)) can be compared as the energy changes. This ratio is sensitive to both the uncertainty in the \(90^{\circ}\) measurement and that in the measurement at \(\phi=180^{\circ}\). The latter measurement has the larger uncertainty since the cross section at this angle is very small, with coincidence count rates often being less than 1 count in 100 seconds. Long accumulation times were hence required to reduce these uncertainties, as discussed above.
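Treating the two normalised points as independent, the ratio and its uncertainty follow from standard error propagation, as in the sketch below; the numbers are illustrative only.

```python
import numpy as np

def peak_ratio(d90, s90, d180, s180):
    """Ratio of the DCS at phi = 90 deg to that at phi = 180 deg, with the relative
    uncertainties added in quadrature (assumes independent measurements)."""
    r = d90 / d180
    return r, r * np.sqrt((s90 / d90) ** 2 + (s180 / d180) ** 2)

# illustrative values only: a small, noisy 180-degree point dominates the error
print(peak_ratio(d90=4.0, s90=0.2, d180=1.0, s180=0.15))
```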
Fig. 4 shows the results of this analysis for both Ar and Kr over the range of energies where data were obtained. The results from the three different calculations of [3] are also shown for comparison. The Ar measurements ranged from 5 eV to 200 eV above the IP for this target. The results from Kr were restricted between 2 eV to 120 eV above the IP but for comparison, the two are presented on the same energy scale.
It is interesting to note that the overall structure of these ratio measurements is similar between the targets. A small peak is seen near threshold in both cases, followed by a large rise which peaks at around 50 eV above the IP in each set of measurements. The ratio then slowly decreases, reaching a value of \(\sim\)1.9 for Ar at \(E_{1}=E_{2}=100\) eV and \(\sim\)1.1 for Kr at \(E_{1}=E_{2}=60\) eV. The uncertainties on these ratios are greatest at their peak values, due to the large uncertainty in the measurements at \(\phi=180^{\circ}\). There is, however, a similar structure observed in both sets of data over the same energy range.
Figure 3: Evolution of the DCS of krypton in the perpendicular plane from (a) 2 eV above the ionization potential to (o) 120 eV above the IP. The measurements are normalized to unity at \(\phi=180^{\circ}\). The data shown as black squares are from measurements taken in 2010 [2]. Data shown as filled black circles are new measurements. The calculations of [3] are also shown as solid curves, as discussed in section II.
The theoretical calculations from [3] are only available at energies from 10 eV to 50 eV above the IP and these are shown as curves in Fig. 4 to highlight their variations with energy. As expected, the PWA and DWBA+PCI calculations do not emulate the data well for either target. The DWBA-PCI calculation does not follow the data trend for Ar, however it closely matches the Kr results, apart from at the lowest energies. This agreement may be fortuitous, however these comparisons suggest that the differences between calculations should be investigated further.
## III Summary and conclusions
In this paper the ionization cross sections for Kr have been presented over a range of energies from 2 eV to 120 eV above the ionization potential of the \(4p^{5}\,{}^{2}\)P\({}_{(1/2,3/2)}\) ion states. These states were unresolved in the experiments conducted here. A more detailed analysis of the binding energy spectrum (as shown in Fig. 2) indicates that both states contribute to the cross section and so need to be considered when comparisons to theory are made. All measurements were carried out in the perpendicular plane, the detected electrons being selected to have equal energies. The data are presented on a relative scale, with the DCS at \(\phi=180^{\circ}\) set to unity at each energy.
The measurements are also compared to theoretical calculations from [3] using both DWBA and PWA models. PCI was included via a Gamow factor; however, this was found to produce too large a correction when compared to the data. Since PCI is important in this energy regime, it is essential for a more refined approach to PCI to be included in future models. The PWA model (which is successful at high incident energies) was found to be a poor approximation in this energy range for this target.
The DWBA-PCI model agrees closely with the ratio measurements between \(\phi=90^{\circ}\) and \(\phi=180^{\circ}\) for Kr. This model however does not agree with the Ar measurements, as shown in Fig. 4. This suggests these calculations should be revisited in order to understand this discrepancy.
The DCS measurements at higher energies evolve into a broad flat structure, with the side lobes around \(\phi=90^{\circ}\) and \(270^{\circ}\) reducing in magnitude as the energy increases. The broad, featureless cross section at the highest energy shown here has also been seen in other targets at similar energies, including Ar, Xe and CH\({}_{4}\)[24]. The DCS in this region may hence be controlled by the kinematics, rather than by the structure of the target.
It is hoped that the comprehensive survey of the DCS for Kr presented here will aid in the development and refinement of models of the ionization process, so that they can more accurately describe the interactions that are occurring in this important energy regime.
## IV Acknowledgements
We wish to thank the Engineering and Physical Sciences Research Council (EPSRC) for funding through grants EP/W003864/1 and EP/V027689/1. The data supporting the findings reported in this paper are openly available from the authors through the contact emails given above.
|
2310.02006
|
Markovian master equations for quantum-classical hybrid systems
|
The problem of constructing a consistent quantum-classical hybrid dynamics is
addressed in the case of a quantum component in a separable Hilbert space and a
continuous, finite-dimensional classical component. In the Markovian case, the
problem is formalized by the notion of hybrid dynamical semigroup. A classical
component can be observed without perturbing the system and information on the
quantum component can be extracted, thanks to the quantum-classical
interaction. This point is formalized by showing how to introduce positive
operator valued measures and operations compatible with the hybrid dynamical
semigroup; in this way the notion of hybrid dynamics is connected to quantum
measurements in continuous time. Then, the case of the most general quasi-free
generator is presented and the various quantum-classical interaction terms are
discussed. To be quasi-free means to send, in the Heisenberg description,
hybrid Weyl operators into multiples of Weyl operators; the results on the
structure of quasi-free semigroups were proved in the article arXiv:2307.02611.
Even in the pure quantum case, a quasi-free semigroup is not restricted to have
only a Gaussian structure, but also jump-type terms are allowed. An important
result is that, to have interactions producing a flow of information from the
quantum component to the classical one, suitable dissipative terms must be
present in the generator. Finally, some possibilities are discussed to go
beyond the quasi-free case.
|
Alberto Barchielli
|
2023-10-03T12:24:06Z
|
http://arxiv.org/abs/2310.02006v2
|
# Markovian master equations for quantum-classical hybrid systems
###### Abstract
The problem of constructing a consistent quantum-classical hybrid dynamics is addressed in the case of a quantum component in a separable Hilbert space and a continuous, finite-dimensional classical component. In the Markovian case, the problem is formalized by the notion of _hybrid dynamical semigroup_. A classical component can be observed without perturbing the system and information on the quantum component can be extracted, thanks to the quantum-classical interaction. This point is formalized by showing how to introduce _positive operator valued measures_ and _operations_ compatible with the hybrid dynamical semigroup; in this way the notion of hybrid dynamics is connected to quantum measurements in continuous time. Then, the case of the most general _quasi-free_ generator is presented and the various quantum-classical interaction terms are discussed. To be quasi-free means to send, in the Heisenberg description, hybrid Weyl operators into multiples of Weyl operators; the results on the structure of quasi-free semigroups were proved in Reference [12]. Even in the pure quantum case, a quasi-free semigroup is not restricted to have only a Gaussian structure, but also jump-type terms are allowed. An important result is that, to have interactions producing a flow of information from the quantum component to the classical one, suitable dissipative terms must be present in the generator. Finally, some possibilities are discussed to go beyond the quasi-free case.
Keywords: Quantum-classical hybrid system; quasi-free dynamics; Weyl operators; Levy-Khintchine formula; hybrid dynamical semigroup; quantum measurements.
## 1 Introduction
The search for a consistent formulation of the dynamics for quantum-classical hybrid systems has a long history; the motivations include computational advantages, description of mesoscopic systems, unification of gravity and quantum theory, description of quantum measurements..., see [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] and references therein. Quantum measurements in continuous time can also be interpreted in terms of hybrid systems; in this case the classical component is the monitored signal extracted from the quantum system [2, 3, 13, 14, 15, 16]. A typical realization is given by direct, homodyne and heterodyne detection in quantum optics [17, 18].
A first aim of this paper is to present the results of [12], where the most general hybrid Markovian dynamics has been obtained in the quasi-free case. The notion of quasi-free map means that, in the Heisenberg picture, quantum-classical Weyl operators are mapped to multiples of Weyl operators [11, 19, 20, 21]. A quasi-free dynamics includes not only the Gaussian case, but also contributions of "jump type". A second aim is to discuss the physical meaning of the various terms appearing in the master equation giving the dynamics of the hybrid system. An important point here is that, to have flow of information from the quantum system to the classical one, suitable dissipation terms must be present in the dynamics. As the classical component can be observed without disturbing it, this means that at least some "smooth" state reduction must be present to have significant quantum-classical interactions, allowing extraction of quantum information.
The general setting of the hybrid dynamics is discussed in Sec. 2. The quantum component is described in a separable Hilbert space and the classical one is continuous and finite dimensional; the dynamics of the total system is without memory. This is formalized by the definition of _hybrid dynamical semigroup_; we work in the Heisenberg description, which is more convenient for the study of the generator of this semigroup. Then, it is shown that the possibility of observing the classical subsystem, even in continuous time, allows one to construct _positive operator valued measures_ and _instruments_, typical notions in quantum measurement theory.
In Sec. 3 we ask the maps in the semigroup to be quasi-free. Here, we define the hybrid Weyl operators and we present the results of [12] about the structure of the most general quasi-free hybrid dynamical semigroup. Then, the generator of the dynamical semigroup is explicitly given; we put in evidence that it can be decomposed in a "diffusive-like" component and in a "jump-like" one. We show also how the quantum-classical interaction is made up by different structures.
The knowledge of the generator of the semigroup is equivalent to giving the master equation of the hybrid system. The meaning of the various terms appearing in the quasi-free dynamics is discussed in Sec. 4. The resulting dynamical equation connects a quantum master equation of Lindblad type with a differential equation of Kolmogorov-Fokker-Planck type. A relevant point is the connection between dissipation and the flow of information from the quantum system to the classical one. When we require the dissipative terms to vanish, either in the reduced quantum dynamics or in the reduced classical dynamics, the positivity condition implies the vanishing of the interaction terms responsible for this flow of information.
Comparisons with other approaches and suggestions for possible developments are given in Sec. 5.
## 2 Quantum-classical dynamical semigroup
We assume the quantum component to be represented in a separable complex Hilbert space \(\mathcal{H}\). Moreover, the bounded linear operators are denoted by \(\mathcal{B}(\mathcal{H})\) and the trace class by \(\mathcal{T}(\mathcal{H})\). The unit element in \(\mathcal{B}(\mathcal{H})\) is denoted by \(\mathds{1}\) and the adjoint of the operator \(a\in\mathcal{B}(\mathcal{H})\) is \(a^{\dagger}\). The complex conjugate of \(\alpha\in\mathbb{C}\) is denoted by \(\overline{\alpha}\).
The classical component is a continuous system with _phase space_\(\Xi_{0}:=\mathbb{R}^{s}\). The probability densities are in \(L^{1}(\mathbb{R}^{s})\), whose dual space is \(L^{\infty}(\mathbb{R}^{s})\); the Lebesgue measure is always understood.
The observables of the composed system live in the \(W^{*}\)-algebra \(\mathcal{N}=\mathcal{B}(\mathcal{H})\otimes L^{\infty}(\mathbb{R}^{s})\), and the hybrid states in its predual \(\mathcal{N}_{*}=\mathcal{T}(\mathcal{H})\otimes L^{1}(\mathbb{R}^{s})\). The duality form between \(\mathcal{N}_{*}\) and \(\mathcal{N}\) is given by
\[\langle P|F\rangle=\int_{\mathbb{R}^{s}}\mathrm{d}x\,\operatorname{Tr}\left\{P (x)F(x)\right\},\qquad\forall P\in\mathcal{N}_{*},\qquad\forall F\in\mathcal{ N}; \tag{1}\]
we have used the fact that \(\mathcal{N}_{*}\) is isomorphic to \(L^{1}\big{(}\mathbb{R}^{s};\mathcal{T}(\mathcal{H})\big{)}\) (the Lebesgue integrable functions from \(\mathbb{R}^{s}\) to \(\mathcal{T}(\mathcal{H})\)), and \(\mathcal{N}\) to \(L^{\infty}\big{(}\mathbb{R}^{s};\mathcal{B}(\mathcal{H})\big{)}\) (the bounded functions from \(\mathbb{R}^{s}\) to \(\mathcal{B}(\mathcal{H})\)).
_Remark 1_.: A state \(\hat{\pi}\) is a trace-class valued function \(\hat{\pi}(x)\in\mathcal{T}(\mathcal{H})\), \(x\in\mathbb{R}^{s}\), such that \(\int_{\mathbb{R}^{s}}\mathrm{d}x\,\operatorname{Tr}\bigl{\{}\hat{\pi}(x)\bigr{\}}=1\) and \(\hat{\pi}(x)\geq 0\). It is always possible to decompose a hybrid state as a probability density times a conditional quantum state [2, 3]: (almost everywhere with respect to the Lebesgue measure) we can write
\[\hat{\pi}(x)=p_{\hat{\pi}}(x)\rho_{\hat{\pi}}(x),\qquad p_{\hat{\pi}}(x)=: \operatorname{Tr}\{\hat{\pi}(x)\},\qquad\rho_{\hat{\pi}}(x)\geq 0,\qquad\operatorname{ Tr}\left\{\rho_{\hat{\pi}}(x)\right\}=1. \tag{2}\]
Then, \(p_{\hat{\pi}}(x)\) is a probability density representing the reduced state of the classical component and \(\rho_{\hat{\pi}}(x)\) is the quantum state conditional on the value \(x\) taken by the classical component. Moreover, \(\int_{\mathbb{R}^{s}}\mathrm{d}x\,\hat{\pi}(x)\) is the statistical operator representing the reduced state of the quantum component.
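A minimal numerical sketch of the decomposition (2), with the classical variable discretized on a grid and a qubit as the quantum component; the example state is an arbitrary positive, normalized placeholder:

```python
# A sketch of the decomposition (2) for a toy hybrid state: the classical
# variable is discretized on a grid and the quantum component is a qubit.
# The state below is an arbitrary positive, normalized placeholder.
import numpy as np

x = np.linspace(-3, 3, 201)

# toy hybrid state pi(x): Gaussian classical profile times an x-dependent qubit matrix
weights = np.exp(-x**2)
pi = np.array([w * np.array([[0.5 + 0.3 * np.tanh(xi), 0.1],
                             [0.1, 0.5 - 0.3 * np.tanh(xi)]])
               for w, xi in zip(weights, x)])
pi /= np.trapz(np.trace(pi, axis1=1, axis2=2), x)   # normalize: int dx Tr pi(x) = 1

p = np.trace(pi, axis1=1, axis2=2)                  # classical marginal p(x) = Tr pi(x)
rho_cond = pi / p[:, None, None]                    # conditional quantum states rho(x)
rho_reduced = np.trapz(pi, x, axis=0)               # reduced quantum state: int dx pi(x)

print(np.trapz(p, x), np.trace(rho_reduced))        # both equal 1 up to grid error
```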
To study the dynamical semigroups for the hybrid system it is convenient to work with operators on the \(W^{*}\)-algebra \(\mathcal{N}\), which means to work in the analogous of the Heisenberg picture of the dynamics.
**Definition 1**.: A _hybrid dynamical semigroup_ is a family \(\{\mathcal{T}_{t},\ t\geq 0\}\) of linear bounded operators on \(\mathcal{N}\), such that, \(\forall t,\,s\geq 0\),
a. \(\mathcal{T}_{t}\) is completely positive;
b. \(\mathcal{T}_{0}=\mathrm{Id}\);
c. \(\mathcal{T}_{t}[\openone]=\openone\);
d. \(\mathcal{T}_{t}\) is a normal map on \(\mathcal{N}\);
e. \(\mathcal{T}_{t}\circ\mathcal{T}_{s}=\mathcal{T}_{t+s}\);
f. \(\mathcal{T}_{t}\) is weak\({}^{*}\) continuous in \(t\), i.e. \(\langle P|\mathcal{T}_{t}[F]\rangle\) is continuous in \(t\), \(\forall P\in\mathcal{N}_{*},\ \forall F\in\mathcal{N}\).
Property a. (complete positivity) means that, for any choice of the integer \(N\geq 1\), one has
\[\sum_{i,j=1}^{N}\int_{\mathbb{R}^{s}}\mathrm{d}x\,\langle g_{j}(x)|\mathcal{T }_{t}[F_{j}^{*}F_{i}](x)g_{i}(x)\rangle\geq 0\qquad\forall F_{j}\in L^{\infty} \big{(}\mathbb{R}^{s};\mathcal{B}(\mathcal{H})\big{)},\quad g_{j}\in\mathcal{ H}\otimes L^{2}(\mathbb{R}^{s}),\]
where \(F^{*}\) is the adjoint of \(F\) as an element of \(\mathcal{N}\equiv L^{\infty}\big{(}\mathbb{R}^{s};\mathcal{B}(\mathcal{H})\big{)}\). In property b., \(\mathrm{Id}\) denotes the identity map. Property d. is a suitable regularity requirement [14, 15]; for a positive map it is equivalent to requiring the map \(\mathcal{T}_{t}\) on \(\mathcal{N}\) to be the adjoint of a bounded map \(\mathcal{T}_{t*}\) on \(\mathcal{N}_{*}\).
_Remark 2_.: The dynamics of the quantum-classical system is given by the pre-adjoint semigroup: if \(\hat{\pi}_{0}\) is the hybrid state at time \(0\), then the state at time \(t\) is given by \(\hat{\pi}_{t}=\mathcal{T}_{t*}[\hat{\pi}_{0}]\).
### Instruments and probabilities
In principle, a classical system can be observed without disturbing it. Indeed, as done in [12, Sec. 5], in the pure classical case (\(\mathcal{H}=\mathbb{C}\)) one can obtain from the single-time probabilities, given at all times \(t\geq 0\), also the multi-time probabilities, the transition probabilities, and so on; due to the semigroup requirement one obtains a Markov process [22, Sec. 10]. On the other hand, measurements on a quantum system [23, 24] can be interpreted as involving hybrid systems: a _positive operator valued measure_ is a channel from a quantum system to a classical one, and an _instrument_ is a channel from a quantum system to a hybrid system [11, 2, 4, 10].
**Definition 2**.: An _instrument_\(\mathcal{I}(\cdot)\) with value space \(\mathbb{R}^{s}\) is a function from the \(\sigma\)-algebra of the Borel sets in \(\mathbb{R}^{s}\) to the linear maps on \(\mathcal{B}(\mathcal{H})\) such that
1. (positivity) for every Borel set \(E\subset\mathbb{R}^{s}\), \(\mathcal{I}(E)\) is a completely positive and normal map from \(\mathcal{B}(\mathcal{H})\) into itself;
2. (normalization) \(\mathcal{I}(\mathbb{R}^{s})[\openone]=\openone\);
3. (\(\sigma\)-additivity) we have \(\mathcal{I}\left(\bigcup_{i}E_{i}\right)[a]=\sum_{i}\mathcal{I}(E_{i})[a],\)\(\forall a\in\mathcal{B}(\mathcal{H})\) and for every countable family of Borel disjoint sets \(E_{i}\).
Let us recall that an instrument gives the probabilities and the conditional state after the measurement. The quantity \(\mathcal{I}(\cdot)[\openone]\) is a positive operator valued measure, and it gives only the probabilities.
As in the purely classical case, in the hybrid case too the classical component can be observed in continuous time without disturbance. By analogy with the transition probabilities, it is possible to introduce instruments depending on the initial value of the classical system, some kind of _transition instruments_[12, 25]. Indeed, we can define the family of maps
\[\mathcal{I}_{t}(E|x)[a]=\mathcal{T}_{t}[a\otimes\openone_{E}](x),\qquad \forall a\in\mathcal{B}(\mathcal{H}); \tag{3}\]
\(E\subset\mathbb{R}^{s}\) is a generic Borel set. Definition (3) holds almost everywhere for \(x\in\mathbb{R}^{s}\). By \(\openone_{E}\) we denote the _indicator function_ of a generic set \(E\):
\[\openone_{E}(x)=\begin{cases}1&\text{if}\ \ x\in E,\\ 0&\text{if}\ \ x\notin E.\end{cases}\]
**Proposition 1**.: _Almost everywhere for \(x\in\mathbb{R}^{s}\), equation (3) defines an instrument \(\mathcal{I}_{t}(\cdot|x)\) on the \(\sigma\)-algebra of the Borel sets in \(\mathbb{R}^{s}\). Moreover, the family of instruments (3) enjoys the following composition property:_
\[\mathcal{I}_{t+t^{\prime}}(E|x)=\int_{z\in\mathbb{R}^{s}}\mathcal{I}_{t^{\prime }}(\mathrm{d}z|x)\circ\mathcal{I}_{t}(E|z). \tag{4}\]
The proof of this proposition is given in [12]. The quantity \(\mathcal{I}_{t}(\bullet|x)[\mathds{1}]\) turns out to be a positive operator valued measure, conditional on \(x\). Equation (4) is the quantum analogue of the Chapman-Kolmogorov identity for transition probabilities [22, Sec. 10].
The composition property (4) represents a compatibility condition among the various instruments at different times. Via Kolmogorov's extension theorem [22, Theor. 1.8], this property allows one to represent the classical component as a stochastic process \(X(t)\), whose joint probabilities at the times \(0<t_{1}<t_{2}<\cdots<t_{m}\) (for an initial state \(\hat{\pi}_{0}\)) are given by
\[P[X(t_{1})\in E_{1},X(t_{2})\in E_{2},\ldots,X(t_{m})\in E_{m}|\hat{\pi}_{0}]=\int_{\mathbb{R}^{s}}\mathrm{d}x\,\mathrm{Tr}\bigg{\{}\hat{\pi}_{0}(x)\int_{E_{1}}\mathcal{I}_{t_{1}}(\mathrm{d}x_{1}|x)\\ \circ\int_{E_{2}}\mathcal{I}_{t_{2}-t_{1}}(\mathrm{d}x_{2}|x_{1})\circ\cdots\circ\int_{E_{m}}\mathcal{I}_{t_{m}-t_{m-1}}(\mathrm{d}x_{m}|x_{m-1})[\mathds{1}]\bigg{\}}. \tag{5}\]
## 3 Quasi-free hybrid dynamics
When the dynamical semigroup is restricted to quasi-free maps, its structure can be completely characterized [12]. As said in Sec. 1, quasi-free maps are defined by their action on the Weyl operators; to introduce such operators, the quantum system too is taken to be continuous, with Hilbert space
\[\mathcal{H}=L^{2}(\mathbb{R}^{n}). \tag{6}\]
### Settings
Firstly, we introduce the position and momentum operators \(Q_{j},\,P_{j}\); we also use the vector notation
\[R=\begin{pmatrix}Q\\ P\end{pmatrix},\qquad R_{j}=\begin{cases}Q_{j}&j=1,\ldots,n,\\ P_{j-n}&j=n+1,\ldots,2n.\end{cases} \tag{7}\]
The canonical commutation relations (CCR) take the form
\[[R_{i},R_{j}]=\mathrm{i}\sigma_{ij},\qquad\sigma_{ij}=\begin{cases}1&1\leq i \leq n\quad j=i+n,\\ -1&n+1\leq i\leq 2n\quad j=i-n,\\ 0&\text{otherwise},\end{cases}\]
and the Weyl operators can be written as
\[W_{1}(\zeta)=\exp\left\{\mathrm{i}\zeta\cdot R\right\}\in\mathcal{B}(\mathcal{H}),\qquad\zeta\in\Xi_{1}:=\mathbb{R}^{2n}; \tag{8a}\]
\(\Xi_{1}\) is the _quantum phase space_.
For the classical component, the analogous objects are the Weyl functions:
\[W_{0}(k)\in L^{\infty}(\mathbb{R}^{s}),\qquad W_{0}(k)(x)=\exp\left\{\mathrm{ i}k\cdot x\right\},\quad k,\,x\in\Xi_{0}. \tag{8b}\]
For the hybrid system we can introduce the total phase space \(\Xi\) and the (generalized) Weyl operators \(W(\xi)\):
\[\Xi=\Xi_{1}\oplus\Xi_{0}=\mathbb{R}^{d},\qquad d=2n+s, \tag{8c}\] \[W(\xi)=W_{1}(\zeta)W_{0}(k)\in\mathcal{N},\qquad\xi=\begin{pmatrix} \zeta\\ k\end{pmatrix},\qquad\zeta\in\Xi_{1},\qquad k\in\Xi_{0}.\]
The Weyl operators satisfy the following composition property:
\[W(\xi+\eta)=W(\xi)W(\eta)\exp\left\{\frac{\mathrm{i}}{2}\,\xi^{\mathsf{T}} \sigma\eta\right\}=W(\eta)W(\xi)\exp\left\{-\frac{\mathrm{i}}{2}\,\xi^{ \mathsf{T}}\sigma\eta\right\}. \tag{9}\]
More rigorously, the Weyl operators \(W_{1}\) are defined as projective unitary representations of the translation group \(\Xi_{1}\)[23], or as displacement operators acting on coherent vectors [24]. Then, (9) represents the rigorous version of the CCR [23].
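As a numerical illustration, the composition rule (9) can be checked for a single quantum mode (n = 1, no classical part) on a truncated Fock space; the cutoff and the displacement vectors below are arbitrary choices, and the truncation makes the check approximate, so only low-lying matrix elements are compared:

```python
# A minimal numerical check of the composition rule (9) for a single quantum
# mode (n = 1, no classical part), on a truncated Fock space.
import numpy as np
from scipy.linalg import expm

N = 80
a = np.diag(np.sqrt(np.arange(1, N)), 1)      # annihilation operator
Q = (a + a.T) / np.sqrt(2)
P = (a - a.T) / (1j * np.sqrt(2))
sigma = np.array([[0.0, 1.0], [-1.0, 0.0]])   # symplectic form for n = 1

def W1(zeta):
    """Weyl operator W_1(zeta) = exp(i zeta . R) on the truncated space."""
    return expm(1j * (zeta[0] * Q + zeta[1] * P))

xi, eta = np.array([0.3, -0.2]), np.array([0.1, 0.4])
lhs = W1(xi + eta)
rhs = W1(xi) @ W1(eta) * np.exp(0.5j * xi @ sigma @ eta)
# difference on the low-lying block should be tiny and shrink as N grows
print(np.max(np.abs(lhs[:10, :10] - rhs[:10, :10])))
```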
_Remark 3_ (Characteristic function of a state and Wigner function).: As in the pure quantum case, the states \(\hat{\pi}\in\mathcal{N}_{*}\) are uniquely determined by their characteristic function \(\chi_{\hat{\pi}}(\xi)\)[11, Sec. 2.4] or by their Wigner function \(\mathcal{W}_{\hat{\pi}}(z)\)[24]:
\[\chi_{\hat{\pi}}(\xi)=\int_{\mathbb{R}^{s}}\mathrm{d}x\ \mathrm{Tr}\left\{\hat{\pi}(x)W(\xi)(x)\right\},\qquad\mathcal{W}_{\hat{\pi}}(z)=\frac{1}{(2\pi)^{d}}\int_{\Xi}\mathrm{d}\xi\,\mathrm{e}^{-\mathrm{i}z^{\mathsf{T}}\xi}\,\chi_{\hat{\pi}}(\xi). \tag{10}\]
A function \(\chi:\Xi\to\mathbb{C}\) is the characteristic function of a state [11, 12] if and only if
(1) \(\chi\) is continuous, (2) \(\chi(0)=1\), (3) for every integer \(N\) and every choice of \(\xi_{1},\ldots,\xi_{N},\ \xi_{j}\in\Xi\), the \(N\times N\)-matrix with elements \(\chi(\xi_{k}-\xi_{l})\exp\left\{\frac{\mathrm{i}}{2}\,\xi_{k}^{\mathsf{T}} \sigma\xi_{l}\right\}\) is positive semi-definite, (4) \(\chi\in L^{1}(\Xi)\).
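A minimal sketch of the second formula in (10), recovering the Wigner function from a characteristic function by direct quadrature; the vacuum state of a single mode, with \(\chi(\xi)=\mathrm{e}^{-|\xi|^{2}/4}\) and Wigner function \(\pi^{-1}\mathrm{e}^{-|z|^{2}}\), is used as a known test case, and the grid parameters are arbitrary:

```python
# A sketch of W(z) = (2 pi)^(-d) * integral of exp(-i z.xi) chi(xi) d xi,
# for a single quantum mode (d = 2, no classical part), tested on the vacuum.
import numpy as np

xi = np.linspace(-8, 8, 257)
X1, X2 = np.meshgrid(xi, xi, indexing="ij")
chi = np.exp(-(X1**2 + X2**2) / 4)          # characteristic function of the vacuum

def wigner(z):
    phase = np.exp(-1j * (z[0] * X1 + z[1] * X2))
    val = np.trapz(np.trapz(phase * chi, xi, axis=1), xi, axis=0)
    return (val / (2 * np.pi) ** 2).real

print(wigner([0.0, 0.0]), 1 / np.pi)                 # both ~0.3183
print(wigner([1.0, 0.5]), np.exp(-1.25) / np.pi)     # both ~0.0912
```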
### Quasi-free hybrid dynamical semigroup
**Definition 3**.: A _quasi-free hybrid dynamical semigroup_ is a family of maps \(\{\mathcal{T}_{t},\ t\geq 0\}\) such that Definition 1 holds with \(\mathcal{H}=L^{2}(\mathbb{R}^{n})\) and
g. (quasi-free property) for all \(\xi\in\Xi\) \[\mathcal{T}_{t}[W(\xi)]=f_{t}(\xi)W(S_{t}\xi), \tag{11}\] where \(S_{t}\) is a linear operator from \(\Xi\) to \(\Xi\), and \(f_{t}\) is a continuous function from \(\Xi\) to \(\mathbb{C}\).
The factor \(f_{t}(\xi)\) is the _noise function_ and \(S_{t}\) gives the dynamics on the phase space. The main result of [12] concerns the explicit structure of these objects and the complete characterization of quasi-free hybrid dynamical semigroups.
**Theorem 2**.: \(\{\mathcal{T}_{t},\ t\geq 0\}\) _satisfies Definitions 1 and 3 if and only if \(S_{t}\) and \(f_{t}(\xi)\) have the following structure:_
1. \(S_{t}=\mathrm{e}^{Zt},\ \forall t\geq 0\)_, where_ \(Z\) _is a real_ \(d\times d\)_-matrix;_
2. \(f_{t}(\xi)=\exp\left(\int_{0}^{t}\mathrm{d}\tau\,\psi(S_{\tau}\xi)\right)\)_, where_ \[\psi(\xi)=\mathrm{i}\alpha\cdot\xi-\frac{1}{2}\,\xi\cdot A\xi+\int_{\Xi}\nu( \mathrm{d}\eta)\left(\mathrm{e}^{\mathrm{i}\eta\cdot\xi}-1-\mathrm{i}\mathtt{1 }_{\{|\eta|<1\}}(\eta)\eta\cdot\xi\right),\qquad\forall\xi\in\Xi=\mathbb{R}^{d},\] (12) \(\alpha\in\Xi\)_,_ \(A\) _is a real symmetric_ \(d\times d\)_-matrix with_ \(A\geq 0\)_,_ \(\mathtt{1}_{\{|\eta|<1\}}\) _is the indicator function of the sphere of radius 1,_ \(\nu\) _is a_ \(\sigma\)_-finite measure on_ \(\Xi\)_, such that_ \[\nu(\{0\})=0,\qquad\int_{\{|\eta|<1\}}|\eta|^{2}\,\nu(\mathrm{d}\eta)<+\infty,\qquad\nu(\{|\eta|\geq 1\})<+\infty;\] (13)
3. \(A\pm\mathrm{i}B\geq 0\)_,_ \(B:=\frac{1}{2}\,(\sigma P_{1}Z-Z^{\mathsf{T}}P_{1}\sigma^{\mathsf{T}})\)_._
The super-script \({}^{\mathsf{T}}\) means matrix transposition and \(P_{1}\) is the orthogonal projection on the quantum sector of the phase space:
\[P_{1}\Xi=\Xi_{1},\qquad P_{0}=\mathtt{1}-P_{1},\quad P_{0}\Xi=\Xi_{0}. \tag{14}\]
The structure of \(\psi(\xi)\) is the classical _Levy-Khintchine formula_ and \(\nu\) is known as _Levy measure_. The quantum features appear in the positivity condition (point 3): \(\sigma\) comes from the CCR. The term with the indicator function in the integral has a compensating role and allows for measures \(\nu\) with possible divergences in a neighbourhood of zero. This compensating term can be written in different ways; the quantity \(\psi\) can be left invariant by suitably changing \(\alpha\).
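A minimal sketch of how \(S_{t}\) and the noise function \(f_{t}(\xi)\) of Theorem 2 can be evaluated numerically in the purely Gaussian case (\(\nu=0\)); the matrices \(Z\), \(A\) and the vector \(\alpha\) below are placeholders fixing only the dimension, and the positivity condition of point 3 is not enforced here:

```python
# A sketch of S_t and of the noise function f_t(xi) from Theorem 2 in the
# purely Gaussian case (nu = 0), evaluated by numerical quadrature.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

d = 4                                   # e.g. n = 1 quantum mode, s = 2
rng = np.random.default_rng(0)
Z = rng.normal(size=(d, d))             # generator of the phase-space flow S_t
M = rng.normal(size=(d, d))
A = M @ M.T                             # real, symmetric, positive semidefinite
alpha = rng.normal(size=d)

def psi(xi):
    """Levy-Khintchine exponent (12) with Levy measure nu = 0."""
    return 1j * alpha @ xi - 0.5 * xi @ A @ xi

def f_t(xi, t):
    re = quad(lambda tau: psi(expm(Z * tau) @ xi).real, 0.0, t)[0]
    im = quad(lambda tau: psi(expm(Z * tau) @ xi).imag, 0.0, t)[0]
    return np.exp(re + 1j * im)

xi = np.ones(d)
print(expm(Z * 1.0) @ xi)               # S_t xi at t = 1
print(f_t(xi, 1.0))                     # noise function at t = 1
```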
### The master equation
To better understand the dynamics of the hybrid system, it is useful to consider the master equation satisfied by the hybrid state. According to the discussion in Remark 2, the state at time \(t\) is \(\hat{\pi}_{t}=\mathcal{T}_{t*}[\hat{\pi}_{0}]\); then, if \(\mathcal{K}\) is the generator of \(\mathcal{T}_{t}\) and \(\mathcal{K}_{*}\) its pre-adjoint, the state dynamics is given by the master equation
\[\frac{\mathrm{d}}{\mathrm{d}t}\,\hat{\pi}_{t}=\mathcal{K}_{*}[\hat{\pi}_{t}]. \tag{15}\]
To express the structure of \(\mathcal{K}\) we firstly introduce some notation.
We write the matrices \(A\) and \(Z\) in the block form
\[A=\begin{pmatrix}A^{11}&A^{10}\\ A^{01}&A^{00}\end{pmatrix},\qquad Z=\begin{pmatrix}Z^{11}&Z^{10}\\ Z^{01}&Z^{00}\end{pmatrix}, \tag{16}\]
where \(A^{11}\) is a real, non-negative \(2n\times 2n\) matrix, \(A^{00}\) is a real, non-negative \(s\times s\) matrix, \(A^{10}\) is a real \(2n\times s\) matrix, \(A^{01}={A^{10}}^{\mathsf{T}}\), \(Z^{11}\) is a real \(2n\times 2n\) matrix, \(Z^{00}\) is a real \(s\times s\) matrix, \(Z^{10}\) is a real \(2n\times s\) matrix, \(Z^{01}\) is a real \(s\times 2n\) matrix. We write also the vector \(\alpha\) and the matrix \(B\) in a similar block form:
\[\alpha=\begin{pmatrix}\beta\\ \alpha^{0}\end{pmatrix},\qquad B=\begin{pmatrix}B^{11}&B^{10}\\ B^{01}&0\end{pmatrix}=\frac{1}{2}\begin{pmatrix}\sigma Z^{11}-{Z^{11}}^{ \mathsf{T}}\sigma^{\mathsf{T}}&\sigma Z^{10}\\ -{Z^{10}}^{\mathsf{T}}\sigma^{\mathsf{T}}&0\end{pmatrix}. \tag{17}\]
Finally, we define
\[D:=\frac{1}{2}\left(Z^{11}\sigma+\sigma^{\mathsf{T}}{Z^{11}}^{\mathsf{T}} \right), \tag{18}\]
\[G:=\sigma^{\mathsf{T}}A^{11}\sigma+\frac{\mathrm{i}}{2}\left(\sigma^{\mathsf{ T}}{Z^{11}}^{\mathsf{T}}-Z^{11}\sigma\right),\qquad C:=A^{00},\qquad E:= \sigma^{\mathsf{T}}A^{10}-\frac{\mathrm{i}}{2}\,Z^{10}. \tag{19}\]
Then, the positivity condition 3. of Theorem 2, which is equivalent to \(\left(\sigma^{\mathsf{T}}\otimes\mathds{1}\right)\left(A-\mathrm{i}B\right) \left(\sigma\otimes\mathds{1}\right)\geq 0\), can be written as
\[\begin{pmatrix}G&E\\ E^{\dagger}&C\end{pmatrix}\geq 0. \tag{20}\]
Let us stress that \(Z^{01}\), \(Z^{00}\), and \(D\) are not involved in the positivity condition.
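A minimal sketch of a numerical test of the positivity condition in the block form (20), built from the blocks of \(A\) and \(Z\) via (18)-(19); the toy input is an arbitrary uncoupled example:

```python
# A sketch of a checker for the positivity condition (20): G, E, C are formed
# from the blocks of A and Z as in (19) and the block matrix is tested for
# positive semidefiniteness. The toy input at the bottom is a placeholder.
import numpy as np

def positivity_check(A, Z, n, s, tol=1e-10):
    q = 2 * n
    sigma = np.block([[np.zeros((n, n)), np.eye(n)],
                      [-np.eye(n), np.zeros((n, n))]])
    A11, A10 = A[:q, :q], A[:q, q:]
    Z11, Z10 = Z[:q, :q], Z[:q, q:]
    C = A[q:, q:]                                            # A^00
    G = sigma.T @ A11 @ sigma + 0.5j * (sigma.T @ Z11.T - Z11 @ sigma)
    E = sigma.T @ A10 - 0.5j * Z10
    M = np.block([[G, E], [E.conj().T, C]])                  # Hermitian by construction
    eigs = np.linalg.eigvalsh((M + M.conj().T) / 2)
    return eigs.min() >= -tol

# toy example: n = 1, s = 1, no quantum-classical coupling
A = np.diag([1.0, 1.0, 0.5])
Z = np.zeros((3, 3))
print(positivity_check(A, Z, n=1, s=1))   # True: here G = sigma^T A11 sigma >= 0
```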
By using the notation above and (7), it is easy to check that the generator \(\mathcal{K}\), given in [12, Sec. 3], can be rewritten in the following form, where the quantum-classical interaction terms are highlighted.
**Proposition 3**.: _When \(a\) is in the linear span of the Weyl operators, \(f\) is bounded and twice differentiable, and \(x_{j}f(x)\) (\(j=1,\ldots,s\)) is bounded, the generator \(\mathcal{K}\) of \(\mathcal{T}_{t}\) can be written as_
\[\mathcal{K}[a\otimes f](x)=f(x)\sum_{l=1}^{2}\mathcal{L}_{\mathrm{q}}^{l}[a]+ a\sum_{l=1}^{2}\mathcal{K}_{\mathrm{cl}}^{l}[f](x)+\sum_{l=1}^{4}\mathcal{K}_{ \mathrm{int}}^{l}[a\otimes f](x), \tag{21}\]
\[\mathcal{L}_{\mathrm{q}}^{1}[a]=\sum_{i,j=1}^{N}G_{ij}\left(R_{i}aR_{j}-\frac{ 1}{2}\left\{R_{i}R_{j},a\right\}\right)+\mathrm{i}\left[H_{\mathrm{q}},\,a \right],\qquad H_{\mathrm{q}}=\beta^{\mathsf{T}}\sigma R+\frac{1}{2}\,R^{ \mathsf{T}}DR, \tag{22a}\] \[\mathcal{L}_{\mathrm{q}}^{2}[a]=\int_{\Xi}\nu(\mathrm{d}\eta) \bigg{\{}W_{1}(\sigma\zeta)^{\dagger}aW_{1}(\sigma\zeta)-a-\mathds{1}_{\{| \eta|<1\}}(\eta)\mathrm{i}\left[\zeta^{\mathsf{T}}\sigma R,\,a\right]\bigg{\}},\qquad\eta=\begin{pmatrix}\zeta\\ y\end{pmatrix},\] (22b) \[\mathcal{K}_{\mathrm{cl}}^{1}[f](x)=\sum_{j=1}^{s}\alpha_{j}^{0}\, \frac{\partial f(x)}{\partial x_{j}}+\sum_{i,j=1}^{s}x_{i}Z_{ij}^{00}\,\frac{ \partial f(x)}{\partial x_{j}}+\frac{1}{2}\sum_{i,j=1}^{s}C_{ij}\,\frac{ \partial^{2}f(x)}{\partial x_{i}\partial x_{j}},\] (22c) \[\mathcal{K}_{\mathrm{cl}}^{2}[f](x)=\int_{\Xi}\nu(\mathrm{d}\eta) \bigg{\{}f(x+y)-f(x)-\mathds{1}_{\{|\eta|<1\}}(\eta)\sum_{j=1}^{s}y_{j}\,\frac{ \partial f(x)}{\partial x_{j}}\bigg{\}},\qquad\eta=\begin{pmatrix}\zeta\\ y\end{pmatrix},\] (22d) \[\mathcal{K}_{\mathrm{int}}^{1}[a\otimes f](x)=\mathrm{i}\left[H_{x}, \,a\right]f(x),\qquad H_{x}=x^{\mathsf{T}}Z^{01}\sigma R, \tag{22e}\]
\[\mathcal{K}_{\mathrm{int}}^{2}[a\otimes f](x)=-\sum_{i=1}^{N}\sum_{j=1}^{s}(\, \mathrm{Im}\,E_{ij})\,\{R_{i},a\}\,\frac{\partial f(x)}{\partial x_{j}},\qquad \mathrm{Im}\,E_{ij}=-\frac{1}{2}\,Z_{ij}^{10}, \tag{22f}\] \[\mathcal{K}_{\mathrm{int}}^{3}[a\otimes f](x)=\mathrm{i}\sum_{i=1}^{N}\sum_{j= 1}^{s}(\,\mathrm{Re}\,E_{ij})\,[R_{i},a]\,\frac{\partial f(x)}{\partial x_{j}}, \qquad\mathrm{Re}\,E_{ij}=\left(\sigma^{\intercal}A^{10}\right)_{ij},\] (22g) \[\mathcal{K}_{\mathrm{int}}^{4}[a\otimes f](x)=\int_{\Xi}\nu( \mathrm{d}\eta)\,(f(x+y)-f(x))\left(W_{1}(\sigma\zeta)^{\dagger}aW_{1}(\sigma \zeta)-a\right),\qquad\eta=\begin{pmatrix}\zeta\\ y\end{pmatrix}. \tag{22h}\]
The domain of the generator can be extended by linearity and weak\({}^{*}\)-closure. Equation (11) and Theorem 2 give the explicit form of the action of \(\mathcal{T}_{t}\) on the Weyl operators; then, by linearity and weak\({}^{*}\)-continuity, we obtain the action on the whole \(W^{*}\)-algebra \(\mathcal{N}\). So, the generator of the semigroup is not needed to determine the semigroup, but it is useful for a better understanding of the dynamical behaviour and the physical interactions.
We have also separated the "diffusive" contributions \(\mathcal{L}_{\mathrm{q}}^{1}\), \(\mathcal{K}_{\mathrm{cl}}^{1}\), \(\mathcal{K}_{\mathrm{int}}^{1}\), \(\mathcal{K}_{\mathrm{int}}^{2}\) from the "jump" terms \(\mathcal{L}_{\mathrm{q}}^{2}\), \(\mathcal{K}_{\mathrm{cl}}^{2}\), \(\mathcal{K}_{\mathrm{int}}^{3}\), \(\mathcal{K}_{\mathrm{int}}^{4}\). As written at the end of Sec. 3.2, the compensating term in the jump part can be written in different ways, and this could change this separation.
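A minimal sketch of the "diffusive" quantum part \(\mathcal{L}_{\mathrm{q}}^{1}\) of (22a), evaluated on a truncated Fock space for \(n=1\); the parameters \(G\), \(D\), \(\beta\) are placeholders chosen only so that \(G\) is Hermitian and positive semidefinite:

```python
# A sketch of L_q^1 of (22a) for n = 1 on a truncated Fock space; the
# parameters are illustrative placeholders, not tied to a specific model.
import numpy as np

N = 40
a_op = np.diag(np.sqrt(np.arange(1, N)), 1)
Q = (a_op + a_op.T) / np.sqrt(2)
P = (a_op - a_op.T) / (1j * np.sqrt(2))
R = [Q, P]
sigma = np.array([[0.0, 1.0], [-1.0, 0.0]])

G = np.array([[1.0, 0.5j], [-0.5j, 1.0]])      # Hermitian, positive semidefinite
D = np.array([[0.3, 0.0], [0.0, 0.3]])         # real symmetric
beta = np.array([0.1, -0.2])

H = sum((beta @ sigma)[i] * R[i] for i in range(2)) \
    + 0.5 * sum(D[i, j] * R[i] @ R[j] for i in range(2) for j in range(2))

def L_q1(x):
    out = 1j * (H @ x - x @ H)
    for i in range(2):
        for j in range(2):
            out += G[i, j] * (R[i] @ x @ R[j]
                              - 0.5 * (R[i] @ R[j] @ x + x @ R[i] @ R[j]))
    return out

print(np.max(np.abs(L_q1(np.eye(N)))))         # 0: the generator annihilates the identity
print(np.round(L_q1(Q)[:3, :3], 3))            # action on the position operator
```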
## 4 The structure of the quasi-free dynamics
In the following we illustrate the role of the various terms introduced in Proposition 3.
### Reduced quantum dynamics
As said in Remark 1, the reduced quantum state is
\[\hat{\rho}_{t}=\int_{\mathbb{R}^{s}}\mathrm{d}x\,\hat{\pi}_{t}(x).\]
By using the notation (1) for the duality form, we have
\[\frac{\mathrm{d}}{\mathrm{d}t}\,\,\mathrm{Tr}\,\{\hat{\rho}_{t}a\}=\frac{ \mathrm{d}}{\mathrm{d}t}\langle\hat{\pi}_{t}|a\otimes 1\rangle=\langle\hat{\pi}_{t}| \mathcal{K}[a\otimes 1]\rangle.\]
So, the reduced quantum dynamics is obtained by taking \(f(x)=1\) in the generator (21); then, Eqs. (22) give
\[\mathcal{K}[a\otimes 1](x)=\mathcal{L}_{\mathrm{q}}^{1}[a]+\mathcal{L}_{ \mathrm{q}}^{2}[a]+\mathrm{i}[H_{x},a]. \tag{23}\]
To have an autonomous reduced master equation, no \(x\) dependence can appear in the reduced generator (23); by (22e), we must have \(H_{x}=0\), i.e. \(Z^{01}=0\). When \(Z^{01}\neq 0\), the interaction term \(\mathcal{K}_{\mathrm{int}}^{1}\) can be seen as a random quantum Hamiltonian evolution, because the classical variables \(x_{j}\), \(j=1,\ldots,s\), appear in \(H_{x}\). We can say that \(Z^{01}\) controls the information flow from the classical system to the quantum one.
By construction, \(\mathcal{L}_{\mathrm{q}}^{1}+\mathcal{L}_{\mathrm{q}}^{2}\) is the most general generator of a quasi-free quantum dynamical semigroup. Note that this unbounded generator has, formally, a structure of Lindblad type. This result was obtained in [12], while generators with a non-vanishing "jump" part already appeared in the literature [26, 27].
### Reduced classical dynamics
Now, the classical reduced density and its dynamics are given by
\[p_{t}(x)=\mathrm{Tr}\{\hat{\pi}_{t}(x)\},\qquad\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}^{s}}\mathrm{d}x\,p_{t}(x)f(x)=\frac{\mathrm{d}}{\mathrm{d}t}\langle\hat{\pi}_{t}|\mathds{1}\otimes f\rangle=\langle\hat{\pi}_{t}|\mathcal{K}[\mathds{1}\otimes f]\rangle.\]
By Eqs. (21), (22), we get
\[\mathcal{K}[\mathds{1}\otimes f](x)=\sum_{l=1}^{2}\mathcal{K}_{\mathrm{cl}}^{l}[f](x)+\mathcal{K}_{\mathrm{int}}^{2}[\mathds{1}\otimes f](x),\qquad\mathcal{K}_{\mathrm{int}}^{2}[\mathds{1}\otimes f](x)=\sum_{i=1}^{N}\sum_{j=1}^{s}R_{i}Z_{ij}^{10}\,\frac{\partial f(x)}{\partial x_{j}}. \tag{24}\]
Then, the reduced evolution equation of the classical component is autonomous only when \(Z^{10}=0\). In this case we have
\[\frac{\mathrm{d}}{\mathrm{d}t}\,p_{t}(x)=-\sum_{j=1}^{s}\alpha_{j}^{0}\,\frac{\partial p_{t}(x)}{\partial x_{j}}-\sum_{i,j=1}^{s}Z^{00}_{ij}\,\frac{\partial x_{i}p_{t}(x)}{\partial x_{j}}+\frac{1}{2}\sum_{i,j=1}^{s}C_{ij}\,\frac{\partial^{2}p_{t}(x)}{\partial x_{i}\partial x_{j}}\\ +\int_{\Xi}\nu(\mathrm{d}\eta)\bigg{\{}p_{t}(x-y)-p_{t}(x)+1_{\{|\eta|<1\}}(\eta)\sum_{j=1}^{s}y_{j}\frac{\partial p_{t}(x)}{\partial x_{j}}\bigg{\}},\qquad\eta=\binom{\zeta}{y}\,. \tag{25}\]
This is a version of the Kolmogorov-Fokker-Planck equation [28, Secs. 3.5.2, 3.5.3], giving rise to semigroups of transition probabilities of time-homogeneous Markov processes [28, 22]. For \(C=0\) and \(\nu=0\), one can also obtain the Liouville equation for a system with quadratic Hamiltonian [12, Sec. 5.1.2].
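A minimal sketch of an explicit finite-difference integration of (25) for \(s=1\) and \(\nu=0\) (an Ornstein-Uhlenbeck-type case); the parameter values and the grid are illustrative choices:

```python
# A sketch of (25) for s = 1 and nu = 0:
#   dp/dt = -alpha0 dp/dx - Z00 d(x p)/dx + (C/2) d^2p/dx^2,
# integrated with a simple explicit finite-difference scheme.
import numpy as np

alpha0, Z00, C = 0.0, -1.0, 0.5         # drift toward the origin, diffusion C
x = np.linspace(-5, 5, 401)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / C                     # conservative explicit time step
p = np.exp(-(x - 2.0)**2)                # initial density (unnormalized Gaussian)
p /= np.trapz(p, x)

for _ in range(int(3.0 / dt)):           # evolve up to t = 3
    dpx = np.gradient(p, dx)
    dxp = np.gradient(x * p, dx)
    d2px = np.gradient(dpx, dx)
    p = p + dt * (-alpha0 * dpx - Z00 * dxp + 0.5 * C * d2px)

print(np.trapz(p, x))                    # total probability stays close to 1
print(np.trapz(x * p, x))                # mean relaxes toward 0 for Z00 < 0
```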
### Dissipation and information exchange
In principle a classical system can be observed without disturbing its dynamics. Due to the classical-quantum interaction, by observing the classical component one can gain information on the quantum subsystem without changing the dynamics of the total system. However, this flow of information is possible only when some dissipation is present in the dynamics. The usual statement that "measurements perturb a quantum system" becomes something like "some irreversibility must be present in the quantum dynamics to extract information from a quantum system". In the modern formulation of quantum measurement theory the notions of observables and state reduction have been generalized by the notions of _positive operator valued measures_ (also called _resolution of identity_) and _instruments_ (or _operation_ valued measures) [23, 16, 24], which indeed we have already constructed in Sec. 2.1.
In the following subsections 4.3.1 and 4.3.2 we shall show how some irreversibility and noise sources are needed to have a non-trivial quantum-classical dynamics. We are working only in the quasi-free case, but this point has also been raised in other approaches; see for instance [3, 5].
#### 4.3.1 No dissipation in the quantum system
We consider now the case of no dissipation in the quantum subsystem, in the sense that the reduced quantum dynamics is of purely Hamiltonian type. So, we take \(G=0\) in (22a) to have \(\mathcal{L}^{1}_{\mathrm{q}}\) of purely Hamiltonian type; then, we need \(\mathcal{L}^{2}_{\mathrm{q}}=0\) and we take the measure \(\nu\) concentrated on \(\Xi_{0}\): \(\int_{\Xi}\nu(\mathrm{d}\zeta,\mathrm{d}y)g(\zeta,y)=\int_{\Xi_{0}}\mu( \mathrm{d}y)g(0,y)\). By the positivity condition (20) we get also \(E=0\); then, we have
\[\mathcal{L}^{2}_{\mathrm{q}}=0,\qquad\mathcal{K}^{l}_{\mathrm{int}}=0\,\,\, \text{for}\,\,l=2,3,4.\]
The term \(\mathcal{K}^{1}_{\mathrm{cl}}\) remains unchanged and \(\mathcal{K}^{2}_{\mathrm{cl}}\) becomes
\[\mathcal{K}^{2}_{\mathrm{cl}}[f](x)=\int_{\Xi_{0}}\mu(\mathrm{d}y)\bigg{\{}f(x +y)-f(x)-1_{\{|y|<1\}}(y)\sum_{j=1}^{s}y_{j}\frac{\partial f(x)}{\partial x_{j }}\bigg{\}}.\]
Finally, the total generator (21) reduces to
\[\mathcal{K}[a\otimes f](x)=\mathrm{i}\left[H_{\mathrm{q}}+H_{x},\,a\right]f(x )+a\sum_{l=1}^{2}\mathcal{K}^{l}_{\mathrm{cl}}[f](x).\]
Only a single interaction term survives, a Hamiltonian term which gives a force exerted by the classical system on the quantum one.
Without dissipation in the quantum system, there is no possibility for the classical system to extract information from the quantum component. Some "smooth state reduction" is needed to extract information from a quantum system.
#### 4.3.2 No dissipation in the classical system
A similar situation happens when we ask for no dissipation in the classical component. Let us assume \(C=0\) and \(\mathcal{K}^{2}_{\rm cl}=0\), which gives again \(E=0\); moreover, the measure \(\nu\) turns out to be concentrated on \(\Xi_{1}\):
\[\int_{\Xi}\nu({\rm d}\zeta,{\rm d}y)g(\zeta,y)=\int_{\Xi_{1}}\tilde{\mu}({\rm d }\zeta)g(\zeta,0).\]
Then, we have again \(\mathcal{K}^{l}_{\rm int}=0\) for \(l=2,3,4\). Also in this situation no information can flow from the quantum system to the classical one.
When some quantum information is extracted, its intrinsic probabilistic character introduces a certain degree of uncertainty in the classical signal.
### The interaction terms
The interaction \(\mathcal{K}^{1}_{\rm int}\) (22e) involves the random Hamiltonian \(H_{x}\); it is the only interaction term which appears in the reduced quantum dynamics (23). This term represents a force exerted on the quantum system by the classical one.
On the other side, \(\mathcal{K}^{2}_{\rm int}\) (22f) is the only interaction surviving in the reduced classical dynamics and it represents some action of the quantum system on the classical one. The matrix \(\,{\rm Im}\,E_{ij}\), appearing in this interaction, is involved in the positivity condition (20); we can say that this interaction term injects some quantum uncertainty into the classical output.
The interaction terms \(\mathcal{K}^{3}_{\rm int}\) (22g) and \(\mathcal{K}^{4}_{\rm int}\) (22h) have a peculiar structure, as they vanish either when the reduced classical dynamics is considered (\(a=\mathds{1}\)) or when the reduced quantum dynamics is considered (\(f(x)=1\)). Their effects are visible only in the total joint dynamics and modify the quantum-classical correlations.
In the general theory of measurements in continuous time and in previous examples of hybrid dynamics, some stochastic evolution equations for quantum states have been introduced [3, 5, 13, 16, 18, 24, 25]. They give the _conditional state_, which means the state to be attributed to the quantum system at time \(t\), when the trajectory of the classical system is known up to \(t\). This construction is also known as stochastic unraveling; in this framework, the physical meaning of the interaction terms (mainly of the last two) should become more transparent.
## 5 Possible developments
As already discussed, the notion of hybrid dynamics of a quantum-classical system is connected to measurements in continuous time. In the Markovian case quantum continuous measurements have been formalized in [14, 15] by the notion of _semigroup of probability operators_. Such semigroups are defined by the properties of Definition 1, to which the translation invariance of the classical component is added, in order to represent a signal without an intrinsic dynamics. The translations in the classical component are defined by: \(\forall f\in L^{\infty}(\mathbb{R}^{s})\), \(\mathcal{R}_{z}[f](x)=f(x+z)\) (almost everywhere); we shall identify \(\mathcal{R}_{z}\) and \({\rm Id}\otimes\mathcal{R}_{z}\). Then, a _semigroup of probability operators_ is a hybrid dynamical semigroup, as given in Definition 1, to which the invariance restriction is added:
h. \(\mathcal{R}_{z}\circ\mathcal{T}_{t}=\mathcal{T}_{t}\circ\mathcal{R}_{z}\), \(\forall z\in\mathbb{R}^{s}\), \(\forall t\geq 0\).
In [15] the generator of the most general semigroup of probability operators has been found under a further continuity restriction, which implies that only bounded operators on the quantum component are involved in this generator. The construction of the generator is again based on the Levy-Khintchine formula. Essentially, in the final expression the quantum position and momentum operators \(R_{j}\) (7) are replaced by generic bounded operators on \(\mathcal{H}\). Moreover, the jump part has again an integral structure very similar to the one of the quasi-free case, but now not only unitary operators can appear in this integral. Since the semigroups found in [15] are a sub-class of the hybrid semigroups, one could try to modify them to construct more general non-quasi-free hybrid semigroups.
To better understand how to proceed, it is useful to see what happens in the quasi-free case (Def. 3) by adding the restriction h. By using (11) we have
\[(\mathcal{R}_{z}\circ\mathcal{T}_{t})[W(\xi)](x)=f_{t}(\xi)W_{1}(P_{1}S_{t}\xi) W_{0}(P_{0}S_{t}\xi)(x+z),\]
\[(\mathcal{T}_{t}\circ\mathcal{R}_{z})[W(\xi)](x)=f_{t}(\xi)W_{1}(P_{1}S_{t}\xi) (x)W_{0}(P_{0}S_{t}\xi)(x)\mathrm{e}^{iP_{0}\xi\cdot z};\]
by applying the restriction h. this gives \(P_{0}S_{t}\xi=P_{0}\xi\), \(\forall\xi\in\Xi=\Xi_{1}\oplus\Xi_{0}\). So, by differentiating with respect to \(t\) and using the block structure (16), we get
\[Z^{00}=0,\qquad Z^{01}=0.\]
The same result can be obtained by asking the commutation of classical translations with the various terms (22) of the generator. The restriction \(Z^{00}=0\) affects only the classical dynamics \(\mathcal{K}^{1}_{\mathrm{cl}}\) (22c) and gives the vanishing of the deterministic part of the classical motion. The restriction \(Z^{01}=0\) is equivalent to \(\mathcal{K}^{1}_{\mathrm{int}}=0\), which means the vanishing of the force exerted by the classical system on the quantum one. No other change appears in the generator. It can be checked that the final form of \(\mathcal{K}\) is, formally, in the class of the generators obtained in [15], but with position and momentum operators instead of bounded operators on \(\mathcal{H}\). We see that the effect of adding the translation invariance is to suppress the only two terms where the quantity \(x\) appears: \(H_{x}\) and \(\sum_{i,j=1}^{s}x_{i}Z^{00}_{ij}\frac{\partial f(x)}{\partial x_{j}}\). So, a possibility to get more general hybrid semigroups is to allow for an \(x\)-dependence in the various terms of the generator of [15].
A second way to get other classes of hybrid dynamical semigroups is to use stochastic differential equations in Hilbert spaces, as done in [3, 5, 25]. While the construction of [3, 5] was done with the explicit aim of constructing examples of quantum-classical dynamics, the approach of [25] was developed having in mind the study of a class of stochastic differential equations and the generalization of continuous measurements to non-Markovian cases. In any case, in the Markovian case the expression [25, (4.39)] is found, which indeed modifies the generator of [15] by introducing many \(x\)-dependencies both in the diffusive part and in the jump part. Apart from the restriction that only bounded operators on \(\mathcal{H}\) are allowed, this expression gives rise to a very general master equation for a quantum-classical dynamics.
Finally, a fruitful approach to quantum measurements in continuous time has been through the use of quantum stochastic calculus [17, 18, 23]; this should open new possibilities also to construct quantum-classical dynamical theories.
|
2310.14670
|
Dataset Bias Mitigation in Multiple-Choice Visual Question Answering and
Beyond
|
Vision-language (VL) understanding tasks evaluate models' comprehension of
complex visual scenes through multiple-choice questions. However, we have
identified two dataset biases that models can exploit as shortcuts to resolve
various VL tasks correctly without proper understanding. The first type of
dataset bias is \emph{Unbalanced Matching} bias, where the correct answer
overlaps the question and image more than the incorrect answers. The second
type of dataset bias is \emph{Distractor Similarity} bias, where incorrect
answers are overly dissimilar to the correct answer but significantly similar
to other incorrect answers within the same sample. To address these dataset
biases, we first propose Adversarial Data Synthesis (ADS) to generate synthetic
training and debiased evaluation data. We then introduce Intra-sample
Counterfactual Training (ICT) to assist models in utilizing the synthesized
training data, particularly the counterfactual data, via focusing on
intra-sample differentiation. Extensive experiments demonstrate the
effectiveness of ADS and ICT in consistently improving model performance across
different benchmarks, even in domain-shifted scenarios.
|
Zhecan Wang, Long Chen, Haoxuan You, Keyang Xu, Yicheng He, Wenhao Li, Noel Codella, Kai-Wei Chang, Shih-Fu Chang
|
2023-10-23T08:09:42Z
|
http://arxiv.org/abs/2310.14670v2
|
# Dataset Bias Mitigation in Multiple-Choice Visual Question Answering and Beyond
###### Abstract
Vision-language (VL) understanding tasks evaluate models' comprehension of complex visual scenes through multiple-choice questions. However, we have identified two dataset biases that models can exploit as shortcuts to resolve various VL tasks correctly without proper understanding. The first type of dataset bias is _Unbalanced Matching_ bias, where the correct answer overlaps the question and image more than the incorrect answers. The second type of dataset bias is _Distractor Similarity_ bias, where incorrect answers are overly dissimilar to the correct answer but significantly similar to other incorrect answers within the same sample. To address these dataset biases, we first propose Adversarial Data Synthesis (ADS) to generate synthetic training and debiased evaluation data. We then introduce Intra-sample Counterfactual Training (ICT) to assist models in utilizing the synthesized training data, particularly the counterfactual data, via focusing on intra-sample differentiation. Extensive experiments demonstrate the effectiveness of ADS and ICT in consistently improving model performance across different benchmarks, even in domain-shifted scenarios.
## 1 Introduction
Visual Question Answering (VQA) is a challenging vision-language task that requires reasoning with integrated information from visual and text modalities (Antol et al., 2015; Zellers et al., 2019; Lei et al., 2020; Ren et al., 2015; Lu et al., 2021; Tapaswi et al., 2016; Schwenk et al., 2022). VQA benchmarks (Zellers et al., 2019; Lei et al., 2020; Tapaswi et al., 2016; Lei et al., 2018, 2019) present complex scenarios with multiple entities in a multiple-choice question format, where the model selects the correct answer from multiple long, context-dependent candidate answers.
Previous studies have examined dataset bias in VQA benchmarks with short-phrase answers (VQA-Short) (Chen et al., 2020; Dancette et al., 2021; Gokhale et al., 2020; Gupta et al., 2022; Niu et al., 2021; Ramakrishnan et al., 2018). These benchmarks typically consist of simple questions that can be answered using one or two words, focusing primarily on visual perception. However, as the demand for complex reasoning capabilities has increased, VLU benchmarks incorporating rich text annotation as contextual information have gained popularity, leaning towards the annotation of long answers. In this paper, we uncover dataset biases in **VQA-Long**, which employs long answer formats. We contend that these biases pose greater challenges for mitigation and have a significant impact on the training and evaluation process of supervised models. Specifically, we identify two prominent types of biases in VQA-Long. The first is Unbalanced Matching (UM) bias, characterized by an uneven distribution of matching between the answer choices and the premise (_i.e._, image and question). The correct answer often exhibits a higher n-gram overlap with the question or mentions more objects in the image than the distracting options, which frequently contain unrelated n-grams. The second type, termed Distractor Similarity (DS) bias, occurs when the model can identify the correct answer without considering the question and image. This bias arises when the correct answer distinctly differs from the distractors, which are highly similar amongst themselves.

Figure 1: (a): Prior studies examined dataset bias in the distribution of short-phrase answers (_e.g._, "white" is often the answer when asking about color). (b): Our work investigates the biases in VQA with long answer choices, where the correct answer has more n-grams overlapped with the image and question (orange and green). Meanwhile, the incorrect answers contain more irrelevant n-grams to the scene (blue).
The two biases we have identified are not limited to VQA-Long; they are also present in other VLU benchmarks, including SNLI-VE (Xie et al., 2019) and VLEP (Lei et al., 2020). Capitalizing on these biases, we design a simple algorithm based on heuristic rules without any training. Surprisingly, it yields performance comparable to supervised models: \(66.29\%\) Q2A accuracy on VCR, a long-form VQA problem on visual commonsense reasoning, \(69.77\%\) accuracy on SNLI-VE, and \(48.85\%\) on VLEP. These results raise questions about whether existing models truly comprehend the context or merely rely on shortcuts to answer questions.
Unlike the biases identified in VQA-Short, the dataset biases we identify in these text-rich datasets are significantly harder to remove. They stem from several factors: cross-modal correlations, open-ended text generation, and heavy reliance on human artifacts during annotation. These biases in VQA-Long with rich context are more likely to be text-dependent, causing models to under-utilize visual information and potentially develop false visual dependencies.
In terms of mitigating dataset biases, prior data synthesis approaches (Chen et al., 2020; Dancette et al., 2021; Gokhale et al., 2020; Gupta et al., 2022; Niu et al., 2021; Ramakrishnan et al., 2018) have demonstrated their effectiveness for VQA-Short; however, they are not suitable for VQA-Long. Additionally, some well-known methods (Chen et al., 2020; Liang et al., 2020) disrupt the data distribution through superficial text masking or image occlusions (see Figure 2). To overcome these limitations, we propose a novel Adversarial Data Synthesis (ADS) method to mitigate biases in VQA-Long, addressing the under-utilization of visual information and incorrect visual dependency in models. ADS generates synthetic factual and counterfactual text data using ADS-T and synthesizes images using ADS-I. Specifically, ADS-T generates long sentence answers and distractors directly, while ADS-I generates images that closely resemble real images with minimal disturbance to the data.
Furthermore, previous debiasing methods directly incorporate the synthesized counterfactual questions and images into the models' training input (Figure 2c). However, these methods are not applicable to VQA-Long, as it requires an exact-matched ground-truth long answer and distinct distracting options for constructing a multiple-choice question. To address this limitation, we introduce Intra-sample Counterfactual Training (ICT), which employs a loss function to promote the models' focus on intra-sample disparities among the synthesized factual and counterfactual images. This approach guides models to learn the appropriate visual dependency pertaining to the query.
Through extensive comparisons over VCR, SNLI-VE, VLEP, and even VQA-Short datasets, we empirically demonstrate the effectiveness of our methods in improving model performance under both standard evaluation and domain-shifted scenarios. To further assess models' robustness against dataset bias, we create domain-shifted evaluation benchmarks based on VCR using ADS. These benchmarks are validated by human annotators to guarantee high quality. While our analysis and experiments primarily focus on VCR, SNLI-VE, and VLEP, it is important to note that these dataset biases frequently appear in other VLU tasks with rich contextual information.
Figure 2: A comparison of image-text data. (a) lists the original VCR question, answer choices, and image data. (b) lists our synthesized factual (I+, A+) and counterfactual (I-, A-) image-text data. (c) shows an example of modifying a sample of VCR data using the former solutions (Chen et al., 2020; Liang et al., 2020).

In summary, our contributions are three-fold:

\(\bullet\) We conduct the first comprehensive study on data biases in VQA-Long, uncovering two prevalent biases in VLU benchmarks.
\(\bullet\) We propose a data synthesis method (ADS) and a training strategy (ICT) to address these biases. ADS generates factual and counterfactual image-text data to mitigate biases, while ICT aids models in utilizing this synthesized data during training.
\(\bullet\) We introduce evaluation benchmarks to evaluate the robustness of VL models against dataset biases, establishing a more fair comparison for existing and future models.
## 2 Related Work
**Biases in VL Benchmarks.** Previous studies have predominantly focused on biases in VQA-Short Li et al. (2018); Hudson and Manning (2019); Lu et al. (2021); Ren et al. (2015); Johnson et al. (2017). These benchmarks lack sample-specific candidate options, resulting in VL models being supervised to classify image-question pairs using shared class labels consisting of short answers. In these datasets, researchers have pointed out that models often downplay visual information and instead focus on learning biases in the text Agrawal et al. (2018); Dancette et al. (2021); Zhang et al. (2016); Manjunatha et al. (2019). This is exemplified by models directly learning the shallow mapping between prior question words and shared class labels in the absence of sample-specific contextualized candidate options. Consequently, models develop false visual dependency Cao et al. (2020); Wang et al. (2022) as they may succeed in resolving VQA tasks Selvaraju et al. (2016); Chen et al. (2020); Gupta et al. (2022) utilizing irrelevant visual cues.
However, the biases previously examined may not apply to VQA-Long due to its unique characteristics, such as the absence of shared class labels, the presence of sample-specific candidate options, and the diverse question types. Consequently, it is challenging for models to conclude the question types and determine the most popular answer choices for each question type. Given its difficulty and complex visual scenes, VQA-Long has gained popularity in recent years Zellers et al. (2019); Tapaswi et al. (2016); Schwenk et al. (2022); Zhu et al. (2016); Li et al. (2020), but bias analysis in this specific context has not been explored extensively. While a recent study Ye and Kovashka (2021) briefly addressed bias issues in VCR, it only focused on the exact matching of pronoun words. In contrast, our research delves into a comprehensive analysis of biases across all textual components, cross-modal correlations in the input, and the process of generating distractors. We identify more general bias problems and demonstrate their prevalence.
**Debiasing Methods.** Various approaches have been proposed to counter biases, but they focus only on VQA-Short. They can be categorized into two directions, training strategies and Data Synthesis (DS), and all suffer from various constraints. For instance, training strategies like Gupta et al. (2022); Niu et al. (2021) and DS solutions like Ray et al. (2019); Selvaraju et al. (2020); Ribeiro et al. (2019); Wang et al. (2022) only focus on a single modality. Debiased training like Wang et al. (2022); Niu and Zhang (2021); Zhang et al. (2021) requires either a specific model structure or doubling the models' complexity. Other methods (Chen et al., 2020; Liang et al., 2020) apply occlusion boxes or masking to images or questions and thus drastically disturb the data distribution, leading to nonsensical synthesized answers. Gokhale et al. (2020) tries to improve the synthesized image and text quality but is limited to two specific question types. Most importantly, none of them generalizes to VQA-Long. Only a few works (Wang et al., 2022; Ye and Kovashka, 2021) are related to VQA-Long, but they still fail to identify specific bias issues.
## 3 Bias Analysis in VQA-Long
In this section, we identify and analyze two distinct types of biases that commonly occur in VQA-Long and other VLU benchmarks.
### Unbalanced-Matching Dataset Bias
Inspired by Ye and Kovashka (2021), we conducted a comprehensive analysis of matching n-grams within candidate options against the text premise (question), \(t\), and visual premise (image)1, \(v\). We calculate the percentages of samples, \(C_{c}^{p}\) and \(C_{d}^{p}\), as follows, where the correct (incorrect) answers \(a^{c}\) (\(a^{d}\)) have more matched n-grams (\(n\leq 3\)) against the premise information \(p\in\{v,t\}\) than the other:
Footnote 1: For matching n-grams against the visual premise, we extract object labels from images.
\[O(a,p)=\#\text{ matched n-grams between $a$ and $p$}\] \[C_{c}^{p}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{1}\{O(a_{i}^{c},p_{i} )>\max_{a_{i}^{d}\in A_{i}-a_{i}^{c}}(O(a_{i}^{d},p_{i}))\},\] \[C_{d}^{p}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{1}\{\max_{a_{i}^{d} \in A_{i}-a_{i}^{c}}(O(a_{i}^{d},p_{i}))>O(a_{i}^{c},p_{i})\},\]
where for each sample \(i\), \(A_{i}\) represents all the paired candidate options for sample \(i\), \(a_{i}^{c}\) is the correct answer, \(a_{i}^{d}\) is one of the three distractors
(incorrect answers), and \(p_{i}\) is the premise (either image or question). \(O(a,p)\) is the count of matched n-grams, and N is the total number of samples.
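A minimal sketch of this statistic, computing \(O(a,p)\) by counting shared n-grams (\(n\leq 3\)) and then the fractions \(C_{c}^{p}\) and \(C_{d}^{p}\); the two samples below are invented toy data, not taken from any benchmark:

```python
# A sketch of the Unbalanced-Matching statistic on made-up toy samples.
from collections import Counter

def ngrams(text, n_max=3):
    toks = text.lower().split()
    return Counter(tuple(toks[i:i + n]) for n in range(1, n_max + 1)
                   for i in range(len(toks) - n + 1))

def overlap(answer, premise):
    a, p = ngrams(answer), ngrams(premise)
    return sum((a & p).values())        # O(a, p): matched n-gram count

samples = [  # (premise, correct answer, list of distractors) -- toy data
    ("why is the man holding an umbrella",
     "because the man expects rain",
     ["he is going to sell it", "the dog asked him to", "it is a sunny beach day"]),
    ("what is the woman in red doing",
     "the woman in red is reading a book",
     ["she is flying a kite", "nothing is happening", "the man is eating"]),
]

cc = sum(overlap(c, p) > max(overlap(d, p) for d in ds) for p, c, ds in samples)
cd = sum(max(overlap(d, p) for d in ds) > overlap(c, p) for p, c, ds in samples)
print(cc / len(samples), cd / len(samples))   # C_c^p and C_d^p on the toy data
```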
Our analysis reveals that the _correct answer often has the highest number of matched n-grams against the question and image among candidate answer options_. Specifically, for the Q2A task2 in VCR, \(C_{c}^{t}\) can be as high as \(66.29\%\), which is much higher than the percentage of distractors, \(C_{d}^{t}\), at \(29.16\%\). When using the image as premise, \(C_{c}^{v}\) is \(42.75\%\), which is also higher than \(C_{d}^{v}\), at \(40.23\%\). This unbalance also persists in other VLU benchmarks. For example, \(C_{c}^{t}\) is \(48.85\%\) (higher than \(C_{d}^{t}\), \(36.19\%\)) in VLEP and \(69.77\%\) (higher than \(C_{d}^{t}\), \(45.40\%\)) in SNLI-VE. Besides having fewer n-gram overlaps with the premise, we observed that distractors even contain more n-grams that are irrelevant to the given context (Details in A.8), as shown in Figure 5.
Footnote 2: Results about QA2R and Q2AR tasks are in A.6.
### Distractor Similarity Dataset Bias
Many benchmarks [21, 22, 23, 24] rely on automated Adversarial Matching (AM) for generating distractors, aiming to minimize costs. AM generates distractors by reusing answers from other questions: it selects answers that are related to the given question while dissimilar to the correct answer. However, prior works tend to overly emphasize dissimilarity to the correct answer, and thus irrelevance to the context, when using stylistic models (Details in A.7). Additionally, a significant issue arises from generating distractors without considering visual information in AM [23, 22, 24]. Surprisingly, even in manual annotation settings [23, 22, 24], annotators are tasked with generating distracting candidates (distractors) without access to the image, forcing them to imagine and create excessively dissimilar distractors to avoid ambiguity. Moreover, an insufficient premise can also limit the diversity of the generated distractors. In contrast, correct answers are consistently generated with visual and textual information, ensuring accuracy. Consequently, _the dissimilarity between correct and incorrect answers becomes exaggerated_ due to the different premise information used for their generation.
## 4 Biases Mitigation in VQA-Long
This section introduces (1) Adversarial Data Synthesis (ADS) to synthesize factual and counterfactual data; (2) Intra-sample Counterfactual Training (ICT) method to exploit the data.
### Adversarial Data Synthesis
ADS has two components, ADS-Text (ADS-T) for generating less biased answers and ADS-Image (ADS-I) for images.
#### 4.1.1 Ads-T
ADS-T generates synthesized options for a sample, \(A+\) and \(A-\), to alleviate the dataset biases.
**Multimodal Distractor Generation.** To improve the distractors' diversity and their relevance to the given context and correct answers, we incorporate visual premise information into Adversarial Matching (AM).
For given dataset examples, \(\{(p_{i},a_{i})\}_{i=1}^{K}\), \(p_{i}\) represents the premise (visual or textual), \(a_{i}\) denotes the answer, and \(K\) is the total number of samples. Following AM, we utilize the first term in Eq. (1) to measure the relevance of a candidate answer, \(a_{j}\) from other questions, against the premise, and the second term to measure the similarity between \(a_{i}\) and \(a_{j}\). Both \(S_{\text{t-rel}}\) and \(S_{\text{sim}}\) are approximated by stylistic models5. Further, for every example, \((p_{i},a_{i})\), we can obtain a distractor by performing maximum-weight bipartite matching on a weight matrix \(\mathbf{W}\in\mathbb{R}^{N\times N}\), given by:
Footnote 5: [https://github.com/](https://github.com/)
We further refine answer candidates from Multimodal Distractor Generation. We established specific criteria for quality distractors and hired experienced annotators for iterative refinement (Details in A.3).
Recognizing the improved quality brought by human refinement, we leverage the large pre-trained ChatGPT [14] to mimic the human refinement process. With rich context and 5 human-annotated examples as input, the model can generalize to large-scale annotation5.
Footnote 5: [https://github.com/](https://github.com/)
Standard answer classification training focuses on mapping the given premise information to the correct answer among sample-specific candidate options. For instance, in VCR, models learn the local \(IQ\to A\) mapping, each \((i_{i},q_{i})\) pair against four specific answers, \((a_{i1},...,a_{i4})\); in SNLI-VE, the mapping exists between each image and specific candidate hypotheses. Unfortunately, the former methods do not address situations with sample-specific counterfactual candidate options. To incorporate them in VQA-Long tasks, we propose to use InfoNCE (Van den Oord et al., 2018) to measure the intra-sample contrastive loss, \(\mathcal{L}_{\mathrm{A-ICT}}\), between each \((i_{i},q_{i})\) against \((a_{i},a_{i}+,a_{i}-)\):
\[-\log\frac{\exp\left(\Phi\left(\mathbf{z},\mathbf{z}_{p}\right)/\tau\right)}{\exp\left( \Phi\left(\mathbf{z},\mathbf{z}_{p}\right)/\tau\right)+\exp\left(\Phi\left(\mathbf{z},\mathbf{ z}_{n}\right)/\tau\right)}, \tag{3}\]
where \(\Phi\) measures the cosine distance, \(\tau\) is a temperature hyperparameter, \(\mathbf{z}_{p}\) is the [CLS] token feature for a positive pair, \((I,Q,A)\) or \((I,Q,A+)\), and \(\mathbf{z}_{n}\) is that of a negative pair, \((I,Q,A-)\).
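A hedged PyTorch sketch of this intra-sample InfoNCE term is given below; the tensor shapes and the way the [CLS] features are obtained are placeholders rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def intra_sample_infonce(z, z_pos, z_neg, tau=0.07):
    """z, z_pos, z_neg: [CLS] features of shape (batch, dim) for the anchor,
    a factual pair (I,Q,A) or (I,Q,A+), and the counterfactual pair (I,Q,A-)."""
    sim_pos = F.cosine_similarity(z, z_pos, dim=-1) / tau
    sim_neg = F.cosine_similarity(z, z_neg, dim=-1) / tau
    # -log( exp(sim_pos) / (exp(sim_pos) + exp(sim_neg)) ), averaged over the batch
    return -(sim_pos - torch.logsumexp(torch.stack([sim_pos, sim_neg]), dim=0)).mean()

z, z_p, z_n = torch.randn(4, 768), torch.randn(4, 768), torch.randn(4, 768)
print(intra_sample_infonce(z, z_p, z_n).item())
```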
Synthesized answer candidates can have more balanced matched n-grams and more diverse distractor distributions, requiring a stronger model capacity to distinguish (Figure 3(b)). Therefore, Answer-focus ICT can encourage models to focus on this challenging intra-sample differentiation.
#### 4.2.3 Image-focused ICT
Note that how to directly use the counterfactual images, **I**-, in training through the two aforementioned training losses remains unclear, as we cannot find paired answer choices for **I**-. Previous approaches in VQA v2 (Goyal et al., 2017) require generating new pseudo or soft answer labels (Chen et al., 2020; Gokhale et al., 2020) for **I**-. However, they are not feasible for VQA-Long tasks such as VCR, which require sample-specific sentence answer options.
We address this issue by transforming the \(IQ\to A\) mapping problem into a \(QA\to I\) mapping problem, which we further narrow to the intra-sample pairing between each \((q_{i},a_{i})\) pair and \((i_{i},i_{i}+,i_{i}-)\), similarly utilizing Eq. (3). Existing VL models often underuse visual information, leading to poor visual explainability. By contrasting sample-specific \((i_{i},i_{i}+,i_{i}-)\), we highlight the significance of relevant visual regions to the question-answer pair. This approach, exemplified in Figure 2, promotes recognition of relevant entities, like the "bottle", and fosters the learning of correct visual dependencies linked to the question-answer pair.
Finally, combining \(L_{A-ICT}\) and \(L_{I-ICT}\), the overall objective is:
\[L=\delta_{1}L_{XE}+\delta_{2}\left(L_{A-ICT}+L_{I-ICT}\right), \tag{4}\]
where \(\delta_{1}\) and \(\delta_{2}\) are hyperparameter weights.
## 5 Experiment
**Base models.** Since ADS-ICT is generalizable and can be applied to various models, we evaluated it on several backbones: UNITER\({}_{\text{L}}\)(Chen et al., 2020), VL-BERT\({}_{\text{L}}\)(Su et al., 2019), VILLA\({}_{\text{L}}\)(Gan et al., 2020) and LMH (Clark et al., 2019).
**Datasets.** We conduct bias analysis and validation experiments over **VCR**(Zellers et al., 2019) and **SNLI-VE**(Xie et al., 2019)3. Our identified bias problems also apply to other VLU benchmarks with long candidate options, and our methods can even generalize to VQA-Short datasets like VQA v2 and VQA-CP v2.
Footnote 3: Results over VLEP (Lei et al., 2020) are provided in the appendix.
### Bias Verification
This analysis verifies how the two aforementioned biases affect models' learning and to what extent existing models can exploit these shortcuts to their advantage.
Figure 3: (a) Diagram of Coarse-to-Fine Region Removal. The left part illustrates the training of \(\mathsf{SPL}_{f}\) and the right part showcases the coarse-to-fine inferencing process. Within the triple-pass autoregression strategy, the 1st pass includes a one-time inpainting step with the whole region masked, the 2nd pass includes M inpainting steps with smaller masking over regions from the top left to bottom right iteratively, and the 3rd pass includes N steps with further smaller masked regions. (b) Diagram of all the combinations of (+/-I, Q, +/-A) pairs utilized in training. The pairs from the top block are utilized in QA classification training.
**UM bias.** We train two separate UNITER\({}_{\text{B}}\) models in VCR: a Q-A model taking only questions and answer options as input, and an I-A model taking images and answer options as input. We find that the Q-A model and I-A model can achieve Q2A validation accuracy of \(67.20\%\) and \(59.28\%\), respectively, which is much higher than random guessing. This validates the existence of shallow mappings inside \((\mathrm{Q,A})\) or \((\mathrm{I,A})\). We extract a subset of data where Q-A and I-A models have more than \(90\%\) confidence in predictions and find that \(C_{c}^{t}\) and \(C_{c}^{v}\) become extremely high at \(78.11\%\) and \(64.05\%\).
**DS bias.** Like the hypothesis-only bias identified in text-only benchmarks Belinkov et al. (2019); Stacey et al. (2020), DS bias enables models to attain the ground-truth labels without visual and question inputs. To verify it, we train an Answer-only model, a RoBERTa\({}_{\text{B}}\) Liu et al. (2019) with only the candidate options (both the correct and incorrect answers) as input in VCR, and it achieves \(51.84\%\) Q2A accuracy (\(69\%\) on SNLI-VE, and \(61\%\) on VLEP5). This verifies that the DS bias indeed exists. Secondly, using a common feature space4 Reimers and Gurevych (2019), we find that the average intra-sample similarity score between the correct answer and distractors within a sample is 0.31, whereas the average inter-sample similarity score of every correct answer against its 1000th ranked similar answer candidate (a correct answer from a different question) is 0.34. Moreover, the average intra-sample similarity score among distractors within the same sample is 0.36. This implies that (1) the correct answer can be overly dissimilar to the distractors within the same sample but much more similar to the correct answers of other questions; (2) distractors are also overly similar to each other within the same sample.
Footnote 4: [https://github.com/UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers).
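The following sketch illustrates this similarity probe using the sentence-transformers package cited in the footnote; the specific checkpoint and the toy sentences are our own assumptions, not the ones used in the paper.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # checkpoint choice is an assumption

correct = "the man is pouring a drink from the bottle"
distractors = ["he is tired", "she wants to leave", "it is raining outside"]
other_correct = "the man fills his glass from a bottle"  # correct answer of a different question

emb = model.encode([correct] + distractors + [other_correct], convert_to_tensor=True)
intra = util.cos_sim(emb[0], emb[1:4]).mean().item()  # correct vs. its own distractors
inter = util.cos_sim(emb[0], emb[4]).item()           # correct vs. another correct answer
print(intra, inter)
```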
### Debiased Evaluation
**Debiased Evaluation Setting.** We run the trained I-A, Q-A, and Answer-only models over the VCR validation set and extract samples that meet the following criteria: 1) none of the three models predicts correctly with confidence higher than \(25\%\); 2) the correct and incorrect answer choices have a similar number of matched n-grams. We obtain a subset of approximately 2.7K image-text pairs by filtering with these conditions, and we consider this subset a debiased evaluation set, \(\underline{\text{VCR}}_{\text{Fair}}\), without direct data augmentation. Lastly, we also
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Models & \multicolumn{4}{c}{VCR} & \multicolumn{2}{c}{SNLI-VE} \\ \hline \multirow{2}{*}{Heuristics-Only VL-BERT +Answer Mask} & Val. & Fair & Adv. & Val & Test \\ & 66.29 & 48.70 & 43.93 & 69.77 & 69.30 \\ & 75.51 & 72.84 & 70.46 & 74.66 & 74.71 \\ & 75.89 & 73.48 & 71.59 & 75.09 & 75.21 \\ & 75.67 & 73.14 & 71.10 & & \\ & **76.23** & **73.69** & **72.18** & **75.52** & **75.60** \\ & 76.07 & 73.67 & 71.83 & 75.24 & 75.52 \\ & **76.88** & **75.03** & **73.00** & **75.90** & **75.96** \\ & 75.94 & 73.85 & 71.35 & & \\ & 76.04 & 74.29 & 71.72 & & \\ & **77.33** & **76.12** & **73.72** & **76.27** & **76.33** \\ \hline UNITER & 76.72 & 74.99 & 72.48 & 79.02 & 79.19 \\ & **78.23** & **77.36** & **74.74** & **80.14** & **80.23** \\ \hline VILA & 78.28 & 76.67 & 74.05 & 79.64 & 79.32 \\ & **78.98** & **77.93** & **75.38** & **80.87** & **80.28** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Accuracies (\(\%\)) on VCR (Q2A) and SNLI-VE based on our re-implementation. “Val” indicates the validation set of VCR. \(\star\) indicates training with ICT to utilize counterfactual images. The results of Heuristics-Only are obtained by taking the best performance from a mix of heuristic rules utilizing the two biases, \(e\)._g_., the method always selects the option with the most matching n-grams.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline A+ & A- & A-ICT & I+ & I- & I-ICT & \(VCR_{Std}\) & \(VCR_{Fair}\) \\ \hline & & & & & 75.51 & 72.84 \\ ✓ & & & & & 75.80 & 73.05 \\ ✓ & ✓ & & & & 76.23 & 73.69 \\ ✓ & ✓ & ✓ & & & 76.85 & 74.46 \\ ✓ & ✓ & ✓ & ✓ & & 76.93 & 75.08 \\ ✓ & ✓ & ✓ & ✓ & ✓ & **77.33** & **76.12** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation study on UNITER\({}_{L}\). A-ICT is answer-focused ICT, and I-ICT is image-focused ICT.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Models & \multicolumn{4}{c}{VQA-CP v2 test} \\ \hline \multirow{3}{*}{LMH} & All & Y/N & Num & Other \\ & 52.01 & 72.58 & 31.11 & 46.96 \\ & 58.95 & 84.37 & 49.42 & 48.21 \\ & 59.18 & 86.99 & 49.89 & 47.16 \\ & 53.70 & 74.79 & 34.32 & 47.97 \\ & 59.54 & 86.09 & 54.84 & 46.92 \\
**+ADS-ICT** & **61.03** & **87.94** & **57.02** & **48.29** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Accuracies (\(\%\)) on VQA-CP v2. Our method can generalize to VQA-Short tasks and consistently improves over base models. \(\star\) indicates we only apply ADS-I and I-ICT over the base model.
apply our ADS method on top of this subset so that, on average, for each original \(\{Q,I,A\}\), we can generate four additional types of synthesized data, _i.e._, \(\{I+/-,A+/-\}\). This leads us to obtain around 11K I-Q-A pairs for a domain-shifted evaluation set, \(\underline{\text{VCR}_{\text{Adv}}}\). To ensure the integrity of the data, we hired experienced MTurkers to verify the correctness of every synthesized sample in \(\underline{\text{VCR}_{\text{Adv}}}\)5.
Footnote 5: To save space, more details are presented in the appendix.
**Bias Mitigation.** We re-analyze the UM and DS biases to verify whether the dataset biases are mitigated. From Table 5, we observe that correct answers and distractors have much more similar frequencies of matched n-grams against the premise information in \(\underline{\text{VCR}_{\text{Fair}}}\). This improved balance becomes even more noticeable when ADS is applied in \(\underline{\text{VCR}_{\text{Adv}}}\). Moreover, the similarity between the correct answers and distractors has increased, and the distractors have become more diverse.
### Debiased Training
**Benchmark Comparison.** Results from Table 8 indicate several key observations: (1) ADS-ICT can generalize to various VL models and consistently improves performance across different evaluation settings; (2) adding ADS-ICT brings even larger performance improvements in the domain-shifted and debiased settings.
**Other Datasets.** ADS-ICT generalizes to and improves performance on VLU tasks with long candidate options like SNLI-VE, as in Table 8, and even on VQA-Short tasks like VQA v2, as in Table 4.
**Debiased Method Comparison.** Although former methods do not generalize to VQA-Long tasks, we re-implement former techniques like masking and occlusion boxes, and methods like Chen et al. (2021); Liang et al. (2020)5, as in Table 8. To ensure fairness, we also apply ADS-ICT over VQA v2 and VQA-CP v2 for a thorough comparison, as in Table 4. We observe that ADS-ICT delivers more significant gains on both datasets.
Footnote 5: To save space, more details are presented in the appendix.
**Ablation Study.** Table 2 verifies consistent improvement by adding components of our method. Notably, we find that augmenting counterfactual text data can bring greater improvement than factual ones. This also emphasizes the importance of distractors in VQA-Long tasks.
### Visual Explainability
We quantitatively and qualitatively verify the models' visual explainability (dependency) with and without ADS-ICT. As in Figure 6, the Grad-CAM (Selvaraju et al., 2016) result indicates that the base model ignores the most relevant entity, “person6”. However, after adding ADS-ICT, the model relies more on the relevant regions like “person6” and “person8”. We further calculate the recall accuracy of the model for retrieving the most relevant entities by comparing its attention values against object labels5. As in Table 12, we observe that the recall accuracy is significantly increased with ADS-ICT\({}^{5}\), indicating that the model's visual explainability (dependency) has improved.
Footnote 5: To save space, more details are presented in the appendix.
## 6 Time Consumption
Our coarse-to-fine region removal method is flexible and generalizable, as the number of runs of the image inpainting process can be adjusted depending on the scenario to decrease time consumption. After selecting the region to be removed, our proposed coarse-to-fine region removal is conducted in two main steps: 1) an initial one-pass full region removal/inpainting by \(SPL_{p}\); 2) a triple-pass autoregression region removal/inpainting by \(SPL_{f}\). The triple-pass autoregression strategy
\begin{table}
\begin{tabular}{l c c c} \hline \hline Models & Recall@1 & Recall@2 & Recall@3 \\ \hline VL-BERT\({}_{\text{L}}\) & 46.83 & 59.35 & 67.75 \\ +ADS-ICT & **58.92** & **70.68** & **77.62** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison of recall accuracy(\(\%\)) for recognizing the most question-related visual objects.
Figure 4: An example from VCR and the paired visual Grad-CAM (Selvaraju et al., 2016) result from a fine-tuned VL-BERT\({}_{\text{L}}\)(Su et al., 2019). Based on the image, question, and correct answer, the most relevant entities are “person6” and then everyone on the porch.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \(C_{c}^{\text{c}}\) & \(C_{d}^{\text{c}}\) & \(C_{e}^{\text{c}}\) & \(C_{d}^{\text{c}}\) & \(\text{Sim}_{\text{c,d}}\) & \(\text{Sim}_{\text{d,d}}\) \\ \hline \hline \(\text{VCR}_{\text{Std}}\) & 66.29 & 29.16 & 42.75 & 40.23 & 0.31 & 0.36 \\ \(\text{VCR}_{\text{Fair}}\) & 48.70 & 32.05 & 42.36 & 40.01 & 0.32 & 0.34 \\ \(\text{VCR}_{\text{Adv}}\) & 43.93 & 41.09 & 39.28 & 38.94 & 0.35 & 0.33 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Analysis of UM and DS biases on VCR. \(\text{Sim}_{\text{c,d}}\) indicates the average semantic similarity between the correct answer and three distractors within a sample and \(\text{Sim}_{\text{d,d}}\) indicates the similarity within the distractors only, as in Sec. 5.1.
ensures fine-grained images and avoids superficial artifacts, as addressed in prior works. After the \(1st\) run of full region inpainting by \(SPL_{f}\), we split the region into \(M\) smaller regions evenly and run the inpainting process for each smaller region, respectively. A similar procedure applies to the third pass of \(N\) runs. Triple-pass autoregression is a flexible solution, as both \(M\) and \(N\) can be set to arbitrary positive integers (\(2\leqslant M\leqslant N\)) depending on the situation to decrease the overhead time consumption. Hence, the total number of runs of the inpainting process in the triple-pass strategy is \(1+M+N\).
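As an illustration only, the sketch below mimics the triple-pass schedule described above; the `inpaint` callable stands in for the \(SPL_{f}\) model (not reproduced here), and the even grid split is an assumption about how the region is subdivided.

```python
import math

def split(box, parts):
    """Split (x0, y0, x1, y1) into an even grid and return the first `parts` cells,
    ordered top-left to bottom-right."""
    x0, y0, x1, y1 = box
    k = int(math.ceil(math.sqrt(parts)))
    xs = [x0 + (x1 - x0) * i / k for i in range(k + 1)]
    ys = [y0 + (y1 - y0) * j / k for j in range(k + 1)]
    return [(xs[i], ys[j], xs[i + 1], ys[j + 1]) for j in range(k) for i in range(k)][:parts]

def coarse_to_fine_removal(image, box, inpaint, M=4, N=16):
    image = inpaint(image, [box])        # 1st pass: one run over the whole region
    for sub in split(box, M):            # 2nd pass: M runs over smaller regions
        image = inpaint(image, [sub])
    for sub in split(box, N):            # 3rd pass: N runs over even smaller regions
        image = inpaint(image, [sub])
    return image                         # total inpainting runs: 1 + M + N

dummy_inpaint = lambda img, regions: img  # stand-in for the SPL_f inpainting call
print(coarse_to_fine_removal("img", (0, 0, 224, 224), dummy_inpaint))
```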
Based on Table 7, we have four observations: 1) As in Exp 1, if we do not conduct coarse-to-fine region removal (neither of the two steps is applied), we essentially apply an occlusion box over the selected region. This is the same approach as in prior works and results in a relatively low downstream QA accuracy of 76.07\(\%\); 2) Comparing Exp 1-3, we observe that adding the image inpainting process by \(SPL_{p}\) and \(SPL_{f}\) both consistently improve the downstream performance; 3) Based on Exp 3-5, we can see a trend of performance improvement as we apply more refined region inpainting by increasing the values of \(M\) and \(N\); 4) Our method is tolerant to variations of \(O\), \(M\) and \(N\) and can still achieve noticeable performance improvement even when the overall number of runs is as low as 1, with a time consumption of \(2\times 10^{2}\) ms for one image. In practice, we found that setting \(M\) and \(N\) to 4 and 16, respectively, generally achieves optimal performance while maintaining a reasonable inference time of \(2.2\times 10^{3}\) ms for one sample image. However, as mentioned, the triple-pass autoregression strategy is a flexible and general solution. Thus, \(O\), \(M\) and \(N\) can be adjusted according to the actual situation to decrease time consumption, and our method can still provide similar results.
## 7 Conclusion
This paper analyzed dataset biases and their underlying causes in VQA-Long. Our findings shed light on how these biases can impact the evaluation of VL models and the importance of mitigation. We hope our work will inspire future research in developing rigorous data annotation processes and strategies to mitigate the influence of dataset biases.
### Limitations
First of all, our proposed method, ADS-I, is designed to remove pertinent parts of visual regions to generate synthetic factual images, I+, and irrelevant regions to create I-. We adopted techniques from an existing study (Chen et al., 2020) to accomplish this. However, some noise still persists, which might impact the accuracy of determining the relevant region. A promising next step might involve enhancing the quality of the generated images by addressing these noise issues.
Besides, to ensure the high quality of our constructed debiased evaluation benchmarks, we opted for manual verification, which consequently increased the overall cost of our research study. We anticipate that some cost-efficient yet reliable pre-selection procedure could be developed to mitigate these costs. Additionally, the manual selection process could introduce a certain level of subjectivity into the dataset, which needs to be considered.
### Ethics Statement
ChatGPT is pre-trained on a colossal corpus that is likely to contain racial and gender biases. Therefore, if someone finds our work interesting and would like to use it in a specific environment, we strongly suggest that the user check for potential bias before usage. In addition, it is hard to control the generation of LLMs like ChatGPT. We
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline
**Exp Index** & **\# Runs of \(SPL_{p}\)** & **\# Runs of \(SPL_{f}\) (\(O+M+N\))** & \(1st\) **Pass (\(O\))** & \(2nd\) **Pass (\(M\))** & \(3rd\) **Pass (\(N\))** & **Time Consumption (\(ms\))** & **Accuracy (\%)** \\ \hline
1 & 0 & 0 & 0 & 0 & 0 & \(10^{1}\) & 76.07 \\
2 & 1 & 0 & 0 & 0 & 0 & \(10^{2}\) & 76.18 \\
3 & 1 & 1 & 1 & 0 & 0 & \(2\times 10^{2}\) & 76.30 \\
4 & 1 & 14 & 1 & 3 & 9 & \(1.3\times 10^{3}\) & 76.51 \\
5 & 1 & 21 & 1 & 4 & 16 & \(2.2\times 10^{3}\) & 76.88 \\
6 & 1 & 30 & 1 & 4 & 25 & \(3.5\times 10^{3}\) & 76.87 \\
7 & 1 & 26 & 1 & 9 & 16 & \(3\times 10^{3}\) & 76.88 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Time consumption comparison among variations of our coarse-to-fine region removal method. During this comparison study, the base model is a VL-BERT\({}_{L}\)(Su et al., 2019) running on one NVIDIA TITAN RTX GPU of \(24\) GB. The input image is of size \(224\times 224\).
should be aware of the potential problems caused by incorrect predictions.
## Acknowledgements
This work is supported by DARPA MCS program under Cooperative Agreement N66001-19-2-4032.
|
2305.01975
|
A Survey on Dataset Distillation: Approaches, Applications and Future
Directions
|
Dataset distillation is attracting more attention in machine learning as
training sets continue to grow and the cost of training state-of-the-art models
becomes increasingly high. By synthesizing datasets with high information
density, dataset distillation offers a range of potential applications,
including support for continual learning, neural architecture search, and
privacy protection. Despite recent advances, we lack a holistic understanding
of the approaches and applications. Our survey aims to bridge this gap by first
proposing a taxonomy of dataset distillation, characterizing existing
approaches, and then systematically reviewing the data modalities, and related
applications. In addition, we summarize the challenges and discuss future
directions for this field of research.
|
Jiahui Geng, Zongxiong Chen, Yuandou Wang, Herbert Woisetschlaeger, Sonja Schimmler, Ruben Mayer, Zhiming Zhao, Chunming Rong
|
2023-05-03T08:41:37Z
|
http://arxiv.org/abs/2305.01975v3
|
# A Survey on Dataset Distillation: Approaches, Applications and Future Directions
###### Abstract
Dataset distillation is attracting more attention in machine learning as training sets continue to grow and the cost of training state-of-the-art models becomes increasingly high. By synthesizing datasets with high information density, dataset distillation offers a range of potential applications, including support for continual learning, neural architecture search, and privacy protection. Despite recent advances, we lack a holistic understanding of the approaches and applications. Our survey aims to bridge this gap by first proposing a taxonomy of dataset distillation, characterizing existing approaches, and then systematically reviewing the data modalities, and related applications. In addition, we summarize the challenges and discuss future directions for this field of research.
## 1 Introduction
High-quality and large-scale datasets are crucial for the success of deep learning, not only enabling the development of end-to-end learning systems [1, 1], but also serving as benchmarks to evaluate different machine learning architectures [15, 16]. However, the explosion of deep learning dataset sizes has posed considerable challenges concerning processing, storage, and transfer. Training neural networks often require thousands of iterations on the entire dataset, which consumes significant computational resources and power. Tasks such as hyperparameter optimization [1] and neural architecture search (NAS) [17] are even more resource-intensive. One promising solution is to use smaller datasets with high information density to reduce resource consumption while preserving model performance.
Research in the areas of curriculum learning [14], active learning [15], and core-set selection [2] has shown that it is possible to sample a subset of the original data to train neural networks, resulting in models with competitive performance. This also implies that we can train high-performance models with less effort, while downstream tasks like continual learning (CL) [16, 1] and neural architecture search (NAS) will also benefit. Nevertheless, creating an algorithm-agnostic, efficient, and unbiased small dataset to replace the original is still challenging. For instance, coreset selection is typically an NP-hard problem, making it computationally intractable and difficult to apply to large datasets.
An alternative approach to coreset is dataset distillation, which aims to distill the original data onto a smaller synthetic dataset [20]. Dataset distillation techniques have continued to evolve, with various methods such as gradient matching [18], trajectory matching [19], and kernel ridge regression [21] being proposed to optimize the distilled data, resulting in improved distillation performance in terms of both the accuracy of the trained model on the test set and the generalization capability across different network architectures. However, there remain challenges regarding optimization stability and computational efficiency.
Despite the recent advancements in dataset distillation, a comprehensive overview summarizing its advances and applications is currently not available. This paper aims to fill this gap by presenting a taxonomy of dataset distillation. To our knowledge, it is the first work that provides a systematic categorization of the different methods and techniques used in dataset distillation. The paper mainly makes the following contributions:
* We propose a novel taxonomy of dataset distillation, which can help researchers to better understand the research landscape and find their areas of interest.
* We present existing distillation approaches in detail, discussing their strengths and weaknesses;
* We discuss important challenges in this domain, highlighting promising directions for future research.
The paper is organized as follows. In Section 2, we first present our taxonomy of dataset distillation. Then, we introduce the learning frameworks and common enhancement
methods in Section 3 and Section 4, respectively. Section 5 summarizes the advances in different data modalities. In Section 6, we categorize the related applications according to the dataset distillation properties. Finally, we conclude this paper with future directions in Section 7.
## 2 Taxonomy
### Basics of Dataset Distillation
We begin by introducing the key notations used in this paper. \(\mathcal{D}\) represents a general dataset, \(f_{\theta}\) represents a neural network with parameters \(\theta\), and \(f_{\theta}(x)\) denotes the model's prediction for data point \(x\). The expected loss for dataset \(\mathcal{D}\) in relation to \(\theta\) is defined as
\[\mathcal{L}_{\mathcal{D}}(\theta)=\mathbb{E}_{(x,y)\sim P_{\mathcal{D}}}[\ell (f_{\theta}(x),y)], \tag{1}\]
where \(x\) and \(y\) are the input data and label pair from \(\mathcal{D}\), \(\ell(f_{\theta}(x),y)\) is the given loss value between the prediction and ground truth.
Dataset distillation aims to reduce the size of large-scale training input and label pairs \(\mathcal{T}=\{(x_{i},y_{i})\}_{i=1}^{|\mathcal{T}|}\) by creating smaller synthetic pairs \(\mathcal{S}=\{(\hat{x}_{j},\hat{y}_{j})\}_{j=1}^{|\mathcal{S}|}\), so that models trained on both \(\mathcal{T}\) and \(\mathcal{S}\) can achieve similar performance, which can be formulated as:
\[\mathcal{L}_{\mathcal{T}}(\theta^{\mathcal{S}})\simeq\mathcal{L}_{\mathcal{T} }(\theta^{\mathcal{T}}), \tag{2}\]
where \(\theta^{\mathcal{S}}\) and \(\theta^{\mathcal{T}}\) are the parameters of the models trained on \(\mathcal{S}\) and \(\mathcal{T}\) respectively.
### Taxonomy Explanation
The taxonomy of dataset distillation is illustrated in Figure 1. In this taxonomy, we classify the research about dataset distillation from three perspectives: approaches, data modalities and applications. The approaches can be decomposed into two parts. In the learning framework, we explain how dataset distillation can be modeled, optimized and solved in different ways, such as using meta-learning [1] or surrogate objectives (see Section 3.2). Meta-learning can be further divided into using back-propagation through time and using kernel ridge regression. Surrogate objective can be subdivided into parameter matching and distribution matching. We categorize the common enhancement methods, which can be plugged into a learning framework, mainly into parameterization (see Section 4.1), augmentation (see Section 4.2) and label distillation (see Section 4.3). Existing work can be classified into four types of data: image, audio, text, and graph, based on data modality. Applications can be further divided into three categories: computationally intensive tasks such as continual learning and neural architecture search, privacy protection including dataset construction and federated learning, and model robustness, encompassing data poisoning attacks and improving robustness. Corresponding to our taxonomy, some representative papers, together with their characteristics, have been listed in Table 1. It comprehensively compares learning frameworks, enhancement methods, data modality, and applications.
## 3 Learning Frameworks
According to the learning goals, the current learning frameworks can mainly be divided into two categories: meta-learning methods based on inner model performance and methods using surrogate objectives.
### Meta-Learning
Meta-learning [1] refers to learning about learning, and often refers to machine learning algorithms that learn from the output of other machine learning algorithms. In this problem, the distilled data are treated as hyperparameters and the objective is to optimize the distilled data in a bi-level optimization problem as follows:
\[\mathcal{S}^{*}=\operatorname*{arg\,min}_{\mathcal{S}}\mathcal{L}_{\mathcal{T} }(\theta^{\mathcal{S}})\;\text{s.t.}\;\theta^{\mathcal{S}}=\operatorname*{ arg\,min}_{\theta}\mathcal{L}_{\mathcal{S}}(\theta), \tag{3}\]
where the inner loop, optimizing \(\theta^{\mathcal{S}}\), trains a model on the synthetic dataset until convergence, and the outer loop, optimizing \(\mathcal{S}\), subsequently optimizes the synthetic dataset so that the model has good generalization capability and can perform well on the real dataset. The distilled dataset is optimized using the meta-gradient:
\[\mathcal{S}\leftarrow\mathcal{S}-\alpha\nabla_{\mathcal{S}}\mathcal{L}_{ \mathcal{T}}(\theta^{\mathcal{S}}), \tag{4}\]
where \(\alpha\) is the learning rate for updating the synthetic dataset.
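For concreteness, the following compressed PyTorch sketch (not tied to any particular paper's released code) unrolls a few inner SGD steps on the synthetic set and back-propagates the outer loss on real data into \(\mathcal{S}\), as in Eqs. (3)-(4); the toy linear model and hyperparameters are purely illustrative.

```python
import torch
import torch.nn.functional as F

def distill_step(model_fn, theta, syn_x, syn_y, real_x, real_y,
                 inner_lr=0.01, inner_steps=3, outer_lr=0.1):
    params = [p.clone() for p in theta]
    for _ in range(inner_steps):                           # inner loop on S, kept in the graph
        loss_s = F.cross_entropy(model_fn(syn_x, params), syn_y)
        grads = torch.autograd.grad(loss_s, params, create_graph=True)
        params = [p - inner_lr * g for p, g in zip(params, grads)]
    loss_t = F.cross_entropy(model_fn(real_x, params), real_y)   # outer loss on T
    meta_grad = torch.autograd.grad(loss_t, syn_x)[0]            # Eq. (4): gradient w.r.t. S
    with torch.no_grad():
        syn_x -= outer_lr * meta_grad
    return loss_t.item()

def linear(x, params):            # toy model, purely illustrative
    w, b = params
    return x @ w + b

w = torch.randn(8, 3, requires_grad=True)
b = torch.zeros(3, requires_grad=True)
syn_x = torch.randn(6, 8, requires_grad=True)
syn_y = torch.tensor([0, 1, 2, 0, 1, 2])
real_x, real_y = torch.randn(32, 8), torch.randint(0, 3, (32,))
print(distill_step(linear, [w, b], syn_x, syn_y, real_x, real_y))
```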
Figure 1: Taxonomy of dataset distillation.
Figure 2: Back-Propagation Through Time. The gradient \(\nabla_{\mathcal{S}}\mathcal{L}\) is calculated via back-propagation through time (see orange dashed line). In the figure, we omit the details about the meta-gradients corresponding to the optimizer, i.e., \(\eta\), \(m\), etc. DD [22] and SLDD [1] are the cases when \(T=1\), whereas in AddMem [4], \(T\) reaches up to 200.
**Back-Propagation Through Time (BPTT)**
Computing the meta-gradient \(\nabla_{\mathcal{S}}\mathcal{L}_{\mathcal{T}}(\theta^{\mathcal{S}})\) requires differentiating through inner optimization. When the model is learned in an iterative way, i.e.,
\[\theta_{t+1}=\theta_{t}-\eta\nabla_{\theta_{t}}\ell(f_{\theta_{t}}(\hat{x}),\hat{y}), \tag{5}\]
where \(\eta\) is the learning rate for inner loop and meta-gradient is calculated by back-propagation through time (BPTT):
\[\nabla_{\mathcal{S}}\mathcal{L}_{\mathcal{T}}(\theta^{\mathcal{S}})=\frac{ \partial\mathcal{L}}{\partial\mathcal{S}}=\frac{\partial\mathcal{L}}{\partial \theta_{T}}\Big{(}\sum_{t=0}^{t=T}\frac{\partial\theta_{T}}{\partial\theta_ {t}}\cdot\frac{\partial\theta_{t}}{\partial\mathcal{S}}\Big{)} \tag{6}\]
which is illustrated in Figure 2. It is evident that the computation overhead is high due to the recursive calculation of the meta-gradient using Equation 6.
To make the implementation of Equation 6 feasible, [20] suggest using the Truncated Back-Propagation Through Time (TBPTT) method, which involves unrolling the inner-loop optimization steps as a single step of gradient descent optimization,
\[\hat{x},\hat{\eta}=\operatorname*{arg\,min}_{\hat{x},\hat{\eta}}\ell(f_{ \theta_{1}}(x),y),\ \text{s.t.}\ \theta_{1}=\theta_{0}-\hat{\eta}\nabla_{\theta_{0}} \ell(f_{\theta_{0}}(\hat{x}),\hat{y}), \tag{7}\]
where \(\hat{x}\), \(\hat{y}\) are synthetic dataset and \(\hat{\eta}\) the learning rate for the optimizer.
Deng [2022] further improves the learning framework by incorporating a momentum term and extending the length of unrolled trajectories. Empirical results show that the momentum term can consistently improve performance and that longer unrolled trajectories can lead to better model parameters that produce more efficient gradients for compressed representation learning.
BPTT methods have been criticized for several issues, as noted in Zhou _et al._[2022]: 1) high computational cost and memory overhead; 2) bias in short unrolls; 3) gradients exploding or vanishing in long unrolls; and 4) chaotically conditioned loss landscapes.
**Kernel Ridge Regression (KRR)**
Nguyen _et al._ (2020) transform dataset distillation into a kernel ridge regression (KRR) problem, where the synthetic set is used as the support set and the original set as the target set. Their approach results in a closed-form solution in terms of convex optimization, simplifying the expensive nested optimization into first-order optimization (see Figure 3). They introduce the Kernel-Inducing Point (KIP) algorithm, which utilizes neural tangent kernel (NTK) (Jacot _et al._, 2018) ridge regression to compute the exact outputs of an infinite-width neural network trained on the synthetic set, bypassing the need for gradient and back-propagation computation on any neural network. The KRR loss function for a given kernel and batch data from the synthetic set \((X_{\mathcal{S}},y_{\mathcal{S}})\), evaluated on batch data from the real set \((X_{\mathcal{T}},y_{\mathcal{T}})\), can be formulated as
\[\operatorname*{arg\,min}_{X_{\mathcal{S}},y_{\mathcal{S}}}\frac{1}{2}\|y_{ \mathcal{T}}-K_{X_{\mathcal{T}}X_{\mathcal{S}}}(K_{X_{\mathcal{S}}X_{\mathcal{ S}}}+\lambda I)^{-1}y_{\mathcal{S}}\|^{2}, \tag{8}\]
where \(K_{X_{\mathcal{T}}X_{\mathcal{S}}}\) is the Gram matrix of \(X_{\mathcal{S}}\) and \(X_{\mathcal{T}}\), and \(K_{X_{\mathcal{S}}X_{\mathcal{S}}}\) is the Gram matrix of \(X_{\mathcal{S}}\).
Zhou _et al._ (2022) propose a novel method, neural feature regression with pooling (FRePo), which utilizes a more flexible conjugate kernel with neural features to replace the NTK in KIP (Nguyen _et al._, 2020). This approach breaks the traditional KRR training pipeline down into two components: a feature extractor \(f_{\theta}\) and a linear classifier. When calculating the meta-gradient of \(\mathcal{S}\), FRePo fixes the feature extractor parameters and updates \(\mathcal{S}\) \(T\) times according to Equation 8, where \(T\) is a hyperparameter that helps prevent the support/synthetic dataset from memorizing a specific network. Additionally, a model pool is employed to alleviate overfitting in the distillation process.
\[K^{\theta}_{X_{\mathcal{T}}X_{\mathcal{S}}}=f_{\theta}(X_{\mathcal{T}})f_{ \theta}(X_{\mathcal{S}})^{\top}, \tag{9}\]
\[K^{\theta}_{X_{\mathcal{S}}X_{\mathcal{S}}}=f_{\theta}(X_{\mathcal{S}})f_{ \theta}(X_{\mathcal{S}})^{\top} \tag{10}\]
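A hedged sketch of the KRR objective in Eq. (8) with the neural-feature (conjugate) kernel of Eqs. (9)-(10) is shown below; the feature extractor, ridge coefficient, and data shapes are arbitrary stand-ins rather than the settings used by the cited methods.

```python
import torch
import torch.nn.functional as F

def krr_distillation_loss(f, x_syn, y_syn, x_real, y_real, lam=1e-3):
    phi_s, phi_t = f(x_syn), f(x_real)                       # neural features
    k_ss = phi_s @ phi_s.T                                   # Eq. (10): K_{X_S X_S}
    k_ts = phi_t @ phi_s.T                                   # Eq. (9):  K_{X_T X_S}
    ridge = torch.linalg.solve(k_ss + lam * torch.eye(k_ss.shape[0]), y_syn)
    pred = k_ts @ ridge                                      # closed-form KRR prediction on T
    return 0.5 * ((y_real - pred) ** 2).sum()                # Eq. (8)

f = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 16))
x_syn = torch.randn(10, 8, requires_grad=True)               # distilled inputs
y_syn = torch.randn(10, 3, requires_grad=True)               # (optionally) distilled labels
x_real = torch.randn(64, 8)
y_real = F.one_hot(torch.randint(0, 3, (64,)), num_classes=3).float()
loss = krr_distillation_loss(f, x_syn, y_syn, x_real, y_real)
loss.backward()                                              # gradients flow into x_syn and y_syn
```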
Loo _et al._ (2022) propose to use random feature approximation for distillation (RFAD), which utilizes random feature approximation of the Neural Network Gaussian Process (NNGP) kernel to replace the NTK used in KIP. This approach reduces the computation of the Gram matrix to \(\mathcal{O}(|\mathcal{S}|)\), which is linear with the size of the synthetic set, compared to \(\mathcal{O}(|\mathcal{S}|^{2})\), the complexity of accurately calculating the NTK kernel matrix. They also suggest using cross-entropy loss with Platt scaling (Platt and others, 1999) to provide a more accurate probabilistic interpretation for classification tasks.
### Surrogate Objective
Instead of optimizing directly based on model performance, surrogate objective approaches optimize a proxy objective, such as the parameters or gradients of the model. These approaches assert that the effectiveness of a model trained on a full dataset and a distilled dataset can be inferred from their corresponding parameters and gradients.
### Parameter Matching
In contrast to optimizing directly based on the loss value corresponding to the distilled data, parameter matching aims to make the model approximate the original model in the parameter space, i.e., \(\theta^{\mathcal{S}}\approx\theta^{\mathcal{T}}\). Empirically, the trajectory of the parameters varies with the initial state \(\theta_{0}\). Therefore, the objective of parameter matching should be agnostic to the initialization. When the distances between the model parameters trained on the synthetic dataset and on the real dataset are consistently small, the distilled dataset can be considered a good alternative to the original dataset. Let \(\theta^{\mathcal{S}}(\theta_{0})\), \(\theta^{\mathcal{T}}(\theta_{0})\) denote the trained models from the same initialization \(\theta_{0}\); the objective function can be expressed as:
\[\min_{\mathcal{S}}\mathbb{E}_{\theta_{0}\sim P_{\theta_{0}}}[D(\theta^{ \mathcal{S}}(\theta_{0}),\theta^{\mathcal{T}}(\theta_{0}))], \tag{11}\]
where \(D(\cdot,\cdot)\) is a distance function.
To enable more guided optimization and make incomplete training applicable, DC (Zhao _et al._, 2021) synthesizes images by minimizing the gradient matching loss at each training step \(t\):
\[\min_{\mathcal{S}}\mathbb{E}_{\theta_{0}\sim P_{\theta_{0}}}[\sum_{t=0}^{T-1}D (\nabla_{\theta}\mathcal{L}_{\mathcal{S}}(\theta_{t}),\nabla_{\theta} \mathcal{L}_{\mathcal{T}}(\theta_{t}))] \tag{12}\]
where \(T\) is the hyperparameter for the number of training iterations.
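A minimal sketch of this per-step gradient-matching loss is given below; the layer-wise cosine distance follows common DC-style implementations, but details such as the distance choice and per-class batching may differ from the original.

```python
import torch
import torch.nn.functional as F

def gradient_matching_loss(model, loss_fn, syn_batch, real_batch):
    (xs, ys), (xr, yr) = syn_batch, real_batch
    params = list(model.parameters())
    g_syn = torch.autograd.grad(loss_fn(model(xs), ys), params, create_graph=True)
    g_real = torch.autograd.grad(loss_fn(model(xr), yr), params)
    dist = 0.0
    for gs, gr in zip(g_syn, g_real):        # layer-wise (1 - cosine similarity)
        dist = dist + (1 - F.cosine_similarity(gs.flatten(), gr.detach().flatten(), dim=0))
    return dist

model = torch.nn.Linear(8, 3)
xs = torch.randn(6, 8, requires_grad=True)
ys = torch.randint(0, 3, (6,))
xr, yr = torch.randn(32, 8), torch.randint(0, 3, (32,))
loss = gradient_matching_loss(model, F.cross_entropy, (xs, ys), (xr, yr))
loss.backward()                              # meta-gradient w.r.t. the synthetic images xs
```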
Cazenavette _et al._ (2022) suggest overcoming bias accumulated from one-step gradient by matching training trajectories (MTT). MTT considers the training trajectories \(\theta_{t=0}^{T-1}\) on real data as the expert models, the model \(\hat{\theta}\) trained on the synthetic dataset as the student model. It randomly samples \(\theta_{t}^{\mathcal{T}}\) from the expert model to initialize the student model, and the objective is to make the student model \(\hat{\theta}_{t+N}^{\mathcal{S}}\) approximate the expert model \(\theta_{t+M}^{\mathcal{T}}\) after \(N\) iterations. The optimization objective is given by
\[D=\frac{\|\hat{\theta}_{t+N}^{\mathcal{S}}-\theta_{t+M}^{\mathcal{T}}\|_{2}^{2} }{\|\theta_{t}^{\mathcal{T}}-\theta_{t+M}^{\mathcal{T}}\|_{2}^{2}}, \tag{13}\]
where \(M,N\) are the hyperparameters.
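The normalized trajectory distance itself is a one-liner; the sketch below only restates Eq. (13) on placeholder flattened parameter vectors.

```python
import torch

def mtt_distance(theta_student_end, theta_expert_start, theta_expert_end):
    """Eq. (13): squared L2 gap between the student and the expert target, normalized
    by the length of the expert segment; inputs are flattened parameter vectors."""
    num = (theta_student_end - theta_expert_end).pow(2).sum()
    den = (theta_expert_start - theta_expert_end).pow(2).sum()
    return num / den

t_student = torch.randn(1000)
t_expert_start, t_expert_end = torch.randn(1000), torch.randn(1000)
print(mtt_distance(t_student, t_expert_start, t_expert_end).item())
```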
Parameter matching methods are often criticized for: 1) the high bias they introduce (Wang _et al._, 2022). The synthetic set learned by gradient matching is extremely biased towards
Figure 3: Kernel Ridge Regression. The figure shows the workflow of kernel ridge regression. The details refer to Equation 8 and 9. The key difference is that KIP (Nguyen _et al._, 2020) uses NTK kernel, RFAD (Loo _et al._, 2022) uses Neural Network Gaussian Process (NNGP) kernel. Feature extractor \(f_{\theta}\) in FRePo (Zhou _et al._, 2022) is parameterized during training.
samples with large gradients, which decreases its generalization capability on unseen architectures; 2) expensive bi-level optimization. For example, training 50 images/class using DC [14] requires 500K epochs of updating the network parameters \(\theta_{t}\) and 50K updates of \(\mathcal{S}\); and 3) fragile hyperparameter tuning [14], e.g., how often to update \(\theta_{t}\) and \(\mathcal{S}\) in DC, as well as \(M,N\) in MTT [15], is critical.
**Distribution Matching**
The objective of distribution matching is essentially to learn synthetic samples so that the distribution of the synthetic samples is similar to that of the real samples in the feature space. These methods use an empirical estimate of the maximum mean discrepancy (MMD) as a metric to evaluate the distance between the distributions. Due to the high computational complexity and difficulty in optimization caused by high dimensionality, Zhao and Bilen [20] use different randomly initialized neural networks as feature extractors to reduce the input to a low-dimensional space.
\[\min_{\mathcal{S}}\mathbb{E}_{\theta\sim P_{\theta}}\bigg{\|}\frac{1}{| \mathcal{S}|}\sum_{i=1}^{|\mathcal{S}|}f_{\theta}(\hat{x}_{i})-\frac{1}{| \mathcal{T}|}\sum_{i=1}^{|\mathcal{T}|}f_{\theta}(x_{i})\bigg{\|}^{2}, \tag{14}\]
where \(f_{\theta}\) is parameterized by \(\theta\), and \(\theta\) is sampled from a random distribution \(P_{\theta}\). \(|\mathcal{S}|\) and \(|\mathcal{T}|\) are the cardinality of dataset \(\mathcal{S}\) and \(\mathcal{T}\), respectively.
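The sketch below restates Eq. (14) for a single randomly initialized embedder; in practice the loss is averaged over many random networks and over classes, which is omitted here for brevity.

```python
import torch

def dm_loss(f_theta, x_syn, x_real):
    """Eq. (14) for one embedder: squared distance between mean embeddings."""
    return (f_theta(x_syn).mean(dim=0) - f_theta(x_real).mean(dim=0)).pow(2).sum()

f_theta = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 16))
x_syn = torch.randn(10, 8, requires_grad=True)
x_real = torch.randn(128, 8)
dm_loss(f_theta, x_syn, x_real).backward()   # gradient used to update the synthetic samples
```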
To better capture the whole dataset distribution, [22] propose to use layer-wise feature alignment in CAFE to learn a more comprehensive characterization of the distribution. They also introduce a loss function to improve the discriminative ability of the learned samples. The classification loss is calculated using the feature centers of the real samples and the averaged synthetic samples of each class.
## 4 Common Enhancement Methods
In this section we introduce some techniques that can be integrated into the learning framework presented in the previous section to further enhance distillation performance.
### Parameterization
Dataset parameterization aims to utilize regularity to guide the synthesis. It helps enhance interpretability by learning hidden patterns and allows controlling the diversity of the synthetic data. In [14], the authors propose IT-GAN, a method that uses a pre-trained GAN decoder to increase the informativeness of distilled data. IT-GAN first obtains latent vectors from training samples using GAN Inversion [1]; it then uses the distribution matching algorithm to learn the latent vectors. These vectors can be fed into a pre-trained GAN decoder to induce synthetic images of the original size. In addition, most distillation methods process each synthetic sample independently, ignoring mutual consistency and relationships between samples. Factorization methods are proposed to decompose images into different parts to better capture the correlation between different samples and improve diversity. IDC [17] utilizes a multi-formation function as the decoder to store more information in a single sample. Deng [20] propose to learn matrix-based codes and decoders and use matrix multiplication to generate synthetic datasets. Lee _et al._[20] employ the latent code - decoder mode for factorization. The decoder is designed as an upsampling neural network containing three ConvTranspose2d layers, aiming to restore latent codes compressed in low dimensions into the image pixel space. Liu _et al._[20] propose HaBa, which decomposes the image into two parameter spaces of bases and hallucinators, where the hallucinator is an encoder-transformation-decoder structure. Specifically, the encoder is composed of CNN blocks, followed by an affine transformation with scale \(\sigma\) and a decoder of a symmetric CNN architecture.
### Augmentation
In Zhao Bo [20], the authors propose using differentiable siamese augmentation (DSA) when learning synthetic images, which leads to more informative datasets. DSA is a pluggable technique that includes operators like _scale_, _flip_, _crop_, _rotate_, _color jitters_, and _cutout_. It can be easily integrated into various distillation methods and has been widely used in Zhao and Bilen [20]; Wang _et al._[20]. In Cui _et al._[20], DSA is found to achieve the best performance compared to other data augmentation techniques. However, current augmentation techniques are not suitable for discrete data such as graphs and text.
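The key idea, applying one and the same randomly drawn, differentiable transformation to the real and the synthetic batch, can be sketched as follows; the two toy operations are stand-ins for DSA's full operator set and are not its actual implementation.

```python
import torch

def sample_augmentation(seed):
    """Draw one set of augmentation parameters; the returned closure applies the *same*
    differentiable transform to any batch it receives."""
    g = torch.Generator().manual_seed(seed)
    flip = torch.rand(1, generator=g).item() < 0.5
    scale = 0.8 + 0.4 * torch.rand(1, generator=g).item()
    def aug(x):                       # x: (B, C, H, W); both ops keep gradients flowing
        if flip:
            x = torch.flip(x, dims=[-1])
        return x * scale              # stand-in for a differentiable brightness/scale jitter
    return aug

aug = sample_augmentation(seed=42)    # shared parameters for the "siamese" pair of batches
x_syn = torch.randn(10, 3, 32, 32, requires_grad=True)
x_real = torch.randn(64, 3, 32, 32)
a_syn, a_real = aug(x_syn), aug(x_real)   # feed both into the matching loss
```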
### Label Distillation
Label distillation relaxes the restrictions on labels, allowing them to have richer semantics beyond one-hot vectors. It was first introduced in SLDD [2] and has been shown to improve not only the storage efficiency but also the distillation performance. Their method only requires making the labels in Equation 7 learnable variables. Nguyen _et al._[20] also provide a label learning algorithm based on the closed-form solution in KRR. It is reported that only five distilled images from MNIST enable the model to achieve 92\(\%\) accuracy [2].
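A small sketch of this idea, with the labels treated as free (soft) parameters next to the synthetic inputs, is shown below; the tiny model and the soft cross-entropy are illustrative choices, not SLDD's exact formulation.

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(8, 5)
syn_x = torch.randn(10, 8, requires_grad=True)
syn_label_logits = torch.zeros(10, 5, requires_grad=True)   # learnable, richer than one-hot

soft_labels = F.softmax(syn_label_logits, dim=1)
# soft cross-entropy between model predictions on syn_x and the learnable soft labels
loss = -(soft_labels * F.log_softmax(model(syn_x), dim=1)).sum(dim=1).mean()
loss.backward()                                             # gradients reach syn_x and syn_label_logits
optimizer = torch.optim.Adam([syn_x, syn_label_logits], lr=0.01)
optimizer.step()                                            # both are optimized jointly
```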
## 5 Data Modalities
Dataset distillation, first proposed for images, has been applied to various modalities. In this section, we categorize existing works according to data modality and discuss some of the challenges.
### Image
Most dataset distillation methods to date have been performed on image datasets [22, 23, 24, 14, 15]. These works have constructed benchmarks to facilitate fair comparisons of novel approaches. Images have a continuous real-value domain, which allows direct optimization of synthetic images using deep learning optimizers. We find that experimental datasets become increasingly complex, starting from MNIST, CIFAR10, and SVHN, to more challenging datasets like TinyImageNet and ImageNet [23, 14].
Furthermore, parameterization methods that capitalize on the regularity of images are becoming increasingly prevalent in the field, as evidenced by recent research such as [14, 15].
### Audio
Speech signals also satisfy the regularity condition of a low-rank data subspace, i.e., temporally adjacent signals have similar spectra. Therefore, many parametrization methods [14, 15] designed for image datasets can also be applied in this domain. Both works experiment with the Speech Commands [23] dataset. In detail, they process the waveform data with a short-time Fourier transform to obtain the magnitude spectrogram and use log-scale magnitude spectrograms for the experiments. Their works show that dataset distillation can achieve consistent performance on downstream tasks for speech signals.
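As an illustration of this preprocessing, the sketch below computes a log-scale magnitude spectrogram with a short-time Fourier transform; the window and hop sizes are assumptions, not the settings used in the cited works.

```python
import torch

waveform = torch.randn(1, 16000)                   # 1 s of 16 kHz audio (dummy signal)
spec = torch.stft(waveform, n_fft=512, hop_length=128,
                  window=torch.hann_window(512), return_complex=True)
log_mag = torch.log(spec.abs() + 1e-6)             # log-scale magnitude spectrogram
print(log_mag.shape)                               # (1, n_fft//2 + 1, num_frames)
```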
### Text
The discrete nature of text poses challenges to textual distillation. Sucholutsky and Schonlau (2021) first embed the text into a continuous space using pre-trained GloVe embeddings and pad or truncate all sentences to a pre-determined length. In this way, each sentence can be regarded as a single-channel image of size length \(\times\) embedding dimension. Text distillation also involves finding the nearest embedding in the dictionary for each vector in the optimized matrix, transforming these embeddings into the corresponding words, and finally into the sentence.
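The nearest-embedding decoding step can be sketched as follows; the tiny vocabulary and random vectors stand in for a pre-trained GloVe table.

```python
import torch

vocab = ["the", "movie", "was", "great", "terrible", "plot"]
dictionary = torch.randn(len(vocab), 50)               # stand-in for a GloVe embedding table
distilled_sentence = torch.randn(4, 50)                # optimized "sentence" of 4 word vectors

dists = torch.cdist(distilled_sentence, dictionary)    # (4, |V|) pairwise distances
tokens = [vocab[i] for i in dists.argmin(dim=1).tolist()]
print(" ".join(tokens))                                # decoded (possibly unrelated) words
```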
Current efforts are based on primitive bi-level optimization, which is computationally inefficient. There is a lack of work analyzing factors such as the difficulty of the dataset, sentence length, or cross-architecture generalization. Distilled sentences may consist of unrelated words, which makes it difficult to interpret and further analyze. Exploring ways to leverage regularity and context in text distillation is a promising area of research.
### Graph
Graph data is very common in real life, e.g. social networks, Web relationship analysis, and user-item interaction can all be modeled as graph data containing nodes and edges. Jin _et al._ (2021, 2022) design a strategy to simultaneously compress node features and structural information based on gradient matching. Liu _et al._ (2022) adopt the distribution matching to boost the performance and show that the dataset distillation was significantly efficient and in some datasets they reached \(95\%\) of the original performance by compressing \(99\%\) of the data. Graph distillation is mainly challenged by heterogeneous, abstract, high-level graph representations.
## 6 Applications
Dataset distillation, initially designed for model training acceleration, has shown potential in various applications due to its properties.
### Computationally Intensive Tasks
#### 6.1.1 Continual learning
Continual learning (CL) addresses the problem of catastrophic forgetting by using strategies such as experience replay, which stores representative samples from previous tasks in a buffer to recall knowledge. Dataset distillation, which yields highly compressed representations, is an alternative to traditional sampling methods. There are currently two experimental settings for using distillation in CL. Zhao _et al._ (2021); Zhao Bo (2021) use three handwritten digit recognition datasets, SVHN, MNIST, and USPS, and take EEIL [10] as the baseline for continual learning. In the study of Zhao and Bilen (2021), the experimental setting is changed to incremental class learning on the CIFAR100 dataset. The researchers establish a baseline using the GDumb method [22] and randomly divide the 100 classes into 5 and 10 learning steps, with 20 and 10 classes per step, respectively.
#### 6.1.2 Neural Architecture Search
Neural architecture search (NAS) is known to be expensive as it involves training multiple architectures on the entire training dataset and selecting the best-performing one on the validation set. To address this issue, researchers have proposed using a distilled dataset as a proxy of the entire dataset, which can effectively identify the best network. Related experiments on the CIFAR10 dataset have been reported in DC [15], DSA [15], and DM [15]. These studies construct a search space of 720 ConvNets by varying hyperparameters such as network depth, width, activation function, normalization, and pooling
Figure 4: Surrogate Objective. The figure presents the workflow of parameter matching (left) and distribution matching (right). The key difference between algorithm DC [15] and MTT [10] is that DC uses information from one-step optimization (gradient) while MTT using parameters after several steps. Definition of \(D\) is given as in Equation 11. In distribution matching, the embeddings \(e^{\mathcal{S}}\) and \(e^{\mathcal{T}}\) in DM [15] are extracted from layer output of ConvNet and the \(D\) is maximum mean discrepancy, whereas, \(e^{\mathcal{S}}\) and \(e^{\mathcal{T}}\) in CAFE [15] correspond to layer-wise features and \(D\) is a mean square error.
layers over a uniform grid. The effectiveness of the distilled dataset is evaluated using Spearman's rank correlation coefficient between the validation accuracies obtained with the proxy dataset and with the entire dataset. A higher correlation value indicates that the proxy dataset is more reliable.
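A sketch of this reliability check with toy accuracies is given below; the numbers are invented for illustration only.

```python
from scipy.stats import spearmanr

acc_proxy = [0.52, 0.61, 0.58, 0.70, 0.66]   # accuracies of candidate nets trained on S
acc_full = [0.78, 0.84, 0.80, 0.91, 0.88]    # accuracies of the same nets trained on T
rho, _ = spearmanr(acc_proxy, acc_full)
print(f"Spearman rank correlation: {rho:.2f}")  # closer to 1 means a more reliable proxy
```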
### Privacy
**Dataset Construction**
Machine learning is vulnerable to a variety of privacy attacks, such as membership inference attacks [15], model inversion attacks [14, 17], and gradient inversion attacks [15, 16], where attackers attempt to infer task-independent private information from the target model, and even recover the original training data. Additionally, data collection and publishing raise privacy and copyright concerns. Dong _et al._[13]; Zhou _et al._[13] have shown that models trained on synthetic data are robust to both loss-based and likelihood-based membership inference attacks. To ensure that the distilled samples cannot be inferred from real ones, [15] implemented the KIP\({}_{\rho}\) variant, which randomly initialized a \(\rho\) proportion of each image and kept it unchanged during training. This idea was later followed by RFAD\({}_{\rho}\)[11]. Chen _et al._[13] added a differential privacy (DP) mechanism [10] to the distillation process to provide rigorous privacy guarantees. Since medical data often require strict anonymization before publication, [10] propose using dataset distillation to construct privacy-preserving datasets.
**Federated Learning**
Federated learning (FL) is an emerging technology that enables different clients to collaboratively train a shared model without sharing their local data. It faces challenges such as high bandwidth requirements for uploading large model updates and a lack of strict privacy guarantees. There are several works that propose to combine dataset distillation in FL. Hu _et al._[13]; Xiong _et al._[13] suggest sharing lightweight synthetic datasets instead of sharing model updates, since the distilled dataset size is generally smaller. However, this may introduce bias and increase the computational load, which can negatively impact the performance and efficiency of FL.
### Robustness
**Data Poisoning Attacks**
Distilled data lose fidelity and may not be visually recognizable from their original contents, which makes them vulnerable to data poisoning attacks that are difficult to detect. Studies have shown that a small number of these poisoned samples can significantly reduce the accuracy of a model's predictions on a specific category. Wang _et al._[14] propose a study on data poisoning attacks using dataset distillation. Liu _et al._[13] propose two backdoor attacks on distilled data by injecting triggers into the synthetic data during the distillation process, either in the initial stage or throughout the entire process.
**Improve Model Robustness**
Dataset distillation can also be used as a means of improving its robustness. Researchers have proposed using optimization techniques to learn a robust distilled dataset, such that a classifier trained on this dataset will have improved resistance to adversarial attacks. Tsilivis _et al._[13] have combined the KIP method with adversarial training to enhance the robustness of the distilled dataset. Wu _et al._[13] approached the problem of dataset learning as a tri-level optimization problem to obtain a distilled dataset that minimizes robust error on the data-parameterized classifier.
## 7 Conclusion and Future Directions
In this paper, we present a systematic review of recent advances in dataset distillation. We introduce a novel taxonomy that categorizes existing works from various perspectives. We find that most existing efforts are geared toward image datasets, whereas the handling of discrete text and graph data remains a significant challenge. There is a limited exploration of robustness, and further research is necessary as the technology gains wider adoption. Our study demonstrates the research landscape in this field and suggests directions for future work.
### Computational efficiency
The computational efficiency of dataset distillation is an important consideration, as many current methods for dataset distillation can be computationally expensive, particularly for larger datasets. The goal of dataset distillation is to reduce the size of a dataset while preserving its key features and patterns, but this process often requires complex optimization and clustering algorithms, which can be computationally intensive. Methods like MTT [1], KIP [15], and FRePo [12] can cause GPU memory bottlenecks when the number of images per class (IPC) increases. While the DM [13] approach proposes using distribution matching to avoid model training, and RFAD [11] proposes using NNGP to reduce the computational complexity of kernel ridge regression, the computational efficiency of distillation still requires improvement, particularly for larger datasets.
### Performance degradation on larger IPC
According to Cui _et al._[13], current dataset distillation methods perform well only when the number of images per class (IPC) is relatively small. As the IPC increases, the performance of most distillation methods deteriorates and becomes similar to that of random sampling. Therefore, it is important to explore whether dataset distillation can overcome this limitation and maintain superior performance on larger datasets.
### Weak labels
Currently, research on dataset distillation primarily focuses on classification tasks. However, its potential for more complex tasks, such as image detection and segmentation, named entity recognition, summarization, and machine translation, remains untapped. Exploring the technique's effectiveness on these tasks could provide deeper insights into data characteristics and the inner workings of AI.
## Acknowledgements
This work is partially funded by the European Union's Horizon 2020 Research and Innovation Program through Marie Sklodowska-Curie Grant 860627 (CLoud ARtificial Intelligence For pathologY (CLARIFY) Project).
|
2304.10589
|
Backward uniqueness of 2D and 3D convective Brinkman-Forchheimer
equations and its applications
|
In this work, we consider the two- and three-dimensional convective
Brinkman-Forchheimer (CBF) equations (or damped Navier--Stokes equations) on a
torus $\mathbb{T}^d,$ $d\in\{2,3\}$:
$$ \frac{\partial \boldsymbol{u}}{\partial t}-\mu
\Delta\boldsymbol{u}+(\boldsymbol{u}\cdot\nabla)\boldsymbol{u}+\alpha\boldsymbol{u}+\beta|\boldsymbol{u}|^{r-1}\boldsymbol{u}+\nabla
p=\boldsymbol{f}, \ \nabla\cdot\boldsymbol{u}=0,$$ where $\mu,\alpha,\beta>0$
and $r\in[1,\infty)$ is the absorption exponent. For $d=2,r\in[1,\infty)$ and
$d=3,r\in[3,\infty)$ ($2\beta\mu\geq 1$ for $d=r=3$), we first show the
backward uniqueness of deterministic CBF equations by exploiting the
logarithmic convexity property and the global solvability results available in
the literature. As a direct consequence of the backward uniqueness result, we
first derive the approximate controllability with respect to the initial data
(viewed as a start controller). Secondly, we apply the backward uniqueness
results in the attractor theory to show the zero Lipschitz deviation of the
global attractors for 2D and 3D CBF equations. By an application of
log-Lipschitz regularity, we prove the uniqueness of Lagrangian trajectories in
2D and 3D CBF flows and the continuity of Lagrangian trajectories with respect
to the Eulerian initial data. Finally, we consider the stochastic CBF equations
with a linear multiplicative Gaussian noise. For $d=2,r\in[1,\infty)$ and
$d=3,r\in[3,5]$ ($2\beta\mu\geq 1$ for $d=r=3$), we show the pathwise backward
uniqueness as well as approximate controllability via starter controller
results. In particular, the results obtained in this work hold true for 2D
Navier--Stokes equations.
|
Manil T. Mohan
|
2023-04-20T18:20:59Z
|
http://arxiv.org/abs/2304.10589v1
|
# Backward uniqueness of 2D and 3D convective Brinkman-Forchheimer equations and its applications
###### Abstract.
In this work, we consider the two- and three-dimensional convective Brinkman-Forchheimer (CBF) equations (or damped Navier-Stokes equations) on a torus \(\mathbb{T}^{d}\), \(d\in\{2,3\}\):
\[\frac{\partial\boldsymbol{u}}{\partial t}-\mu\Delta\boldsymbol{u}+(\boldsymbol{ u}\cdot\nabla)\boldsymbol{u}+\alpha\boldsymbol{u}+\beta|\boldsymbol{u}|^{r-1} \boldsymbol{u}+\nabla p=\boldsymbol{f},\ \nabla\cdot\boldsymbol{u}=0,\]
where \(\mu,\alpha,\beta>0\) and \(r\in[1,\infty)\) is the absorption exponent. For \(d=2,r\in[1,\infty)\) and \(d=3,r\in[3,\infty)\) (\(2\beta\mu\geq 1\) for \(d=r=3\)), we first show the backward uniqueness of deterministic CBF equations by exploiting the logarithmic convexity property and the global solvability results available in the literature. As a direct consequence of the backward uniqueness result, we first derive the approximate controllability with respect to the initial data (viewed as a start controller). Secondly, we apply the backward uniqueness results in the attractor theory to show the zero Lipschitz deviation of the global attractors for 2D and 3D CBF equations. By an application of log-Lipschitz regularity, we prove the uniqueness of Lagrangian trajectories in 2D and 3D CBF flows and the continuity of Lagrangian trajectories with respect to the Eulerian initial data. Finally, we consider the stochastic CBF equations with a linear multiplicative Gaussian noise. For \(d=2,r\in[1,\infty)\) and \(d=3,r\in[3,5]\) (\(2\beta\mu\geq 1\) for \(d=r=3\)), we show the pathwise backward uniqueness as well as approximate controllability via starter controller results. In particular, the results obtained in this work hold true for 2D Navier-Stokes equations.
\({}^{1}\)Department of Mathematics, Indian Institute of Technology Roorkee-IIT Roorkee, Haridwar Highway, Roorkee, Uttarakhand 247667, India.
_e-mail:_ [email protected], [email protected].
_Key words:_ Convective Brinkman-Forchheimer equations, backward uniqueness, approximate controllability, Dirichlet's quotient, Lipschitz deviation, Gaussian noise.
Mathematics Subject Classification (2010): Primary 35Q30, 76D05; Secondary 35R60, 37L30, 76B75.
of CBF equations is its amazing connection with the study of attractors and the long-term behavior of infinite-dimensional dynamical systems. Moreover, by using log-Lipschitz regularity, we prove the uniqueness of Lagrangian trajectories in 2D and 3D CBF flows. We cannot solve CBF equations backwards (ill-posed problem), but one can show that the regular solutions enjoy the _backward uniqueness property_.
### The model
The convective Brinkman-Forchheimer (CBF) equations describe the motion of incompressible fluid flows in a saturated porous medium. We consider the following CBF equations in a \(d\)-dimensional torus \(\mathbb{T}^{d}=\left(\mathbb{R}/\mathrm{L}\mathbb{Z}\right)^{d}\) (\(d=2,3\)):
\[\left\{\begin{aligned} \frac{\partial\mathbf{u}}{\partial t}-\mu \Delta\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u}+\alpha\mathbf{u}+\beta|\mathbf{u}|^{r-1}\mathbf{u}+ \nabla p&=\mathbf{f},\ \ \text{in}\ \ \mathbb{T}^{d}\times(0,T],\\ \nabla\cdot\mathbf{u}&=0,\ \ \text{in}\ \ \mathbb{T}^{d} \times(0,T],\\ \mathbf{u}(0)&=\mathbf{u}_{0}\ \ \text{in}\ \ \mathbb{T}^{d}, \end{aligned}\right. \tag{1.1}\]
where \(\mathbf{u}(x,t):\mathbb{T}^{d}\times[0,T]\to\mathbb{R}^{d}\) represents the velocity field at time \(t\) and position \(x\), \(p(x,t):\mathbb{T}^{d}\times[0,T]\to\mathbb{R}\) denotes the pressure field and \(\mathbf{f}(x,t):\mathbb{T}^{d}\times[0,T]\to\mathbb{R}^{d}\) is an external forcing. Moreover, \(\mathbf{u}(\cdot,\cdot)\), \(p(\cdot,\cdot)\) and \(\mathbf{f}(\cdot,\cdot)\) satisfy the following periodicity conditions:
\[\mathbf{u}(x+\mathrm{L}e_{i},\cdot)=\mathbf{u}(x,\cdot),\ p(x+\mathrm{L}e_{i},\cdot)= p(x,\cdot)\ \ \text{and}\ \ \mathbf{f}(x+\mathrm{L}e_{i},\cdot)=\mathbf{f}(x,\cdot), \tag{1.2}\]
for every \(x\in\mathbb{R}^{d}\) and \(i=1,\ldots,d\), where \(\{e_{1},\ldots,e_{d}\}\) is the canonical basis of \(\mathbb{R}^{d}.\) The positive constants \(\mu,\alpha\) and \(\beta\) denote the _Brinkman coefficient_ (effective viscosity), _Darcy_ (permeability of the porous medium) and _Forchheimer_ (proportional to the porosity of the material) coefficients, respectively. The absorption exponent satisfies \(r\in[1,\infty)\), and \(r=3\) is known as the _critical exponent_. The critical homogeneous CBF equations have the same scaling as Navier-Stokes equations (NSE) only when \(\alpha=0\) ([17]). In the literature, the case \(r<3\) is referred to as _subcritical_ and \(r>3\) as _supercritical_ (or fast growing nonlinearities, [21]). For the supercritical case, the diffusion (\(-\Delta\mathbf{u}\)) and damping (\(|\mathbf{u}|^{r-1}\mathbf{u}\)) terms dominate the convective term \((\mathbf{u}\cdot\nabla)\mathbf{u}\), and one can expect global solvability results for the system (1.1) (see [2, 17, 21, 29, 31], etc.). Moreover, the system (1.1) is also referred to as NSE modified by an absorption term ([2]). The above model is accurate when the flow velocity is too large for Darcy's law to be valid alone and, in addition, the porosity is not too small ([29]). If one considers (1.1) with \(\alpha=\beta=0\), then we obtain the classical NSE, which describe the motion of viscous fluid substances, and if \(\alpha,\beta>0\), then it can also be considered as a damped NSE.
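To make the criticality of \(r=3\) explicit, one can record the standard scaling computation (sketched here for the homogeneous case \(\boldsymbol{f}=\boldsymbol{0}\); the notation \(\boldsymbol{u}_{\lambda},p_{\lambda}\) is ours): if \((\boldsymbol{u},p)\) solves (1.1) and \(\boldsymbol{u}_{\lambda}(x,t):=\lambda\boldsymbol{u}(\lambda x,\lambda^{2}t)\), \(p_{\lambda}(x,t):=\lambda^{2}p(\lambda x,\lambda^{2}t)\), then

\[\frac{\partial\boldsymbol{u}_{\lambda}}{\partial t}-\mu\Delta\boldsymbol{u}_{\lambda}+(\boldsymbol{u}_{\lambda}\cdot\nabla)\boldsymbol{u}_{\lambda}+\lambda^{2}\alpha\boldsymbol{u}_{\lambda}+\lambda^{3-r}\beta|\boldsymbol{u}_{\lambda}|^{r-1}\boldsymbol{u}_{\lambda}+\nabla p_{\lambda}=\boldsymbol{0},\]

so the absorption term is invariant under the Navier-Stokes scaling precisely when \(r=3\), while the Darcy term always breaks the scaling unless \(\alpha=0\).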
The authors in [29] considered the system (1.1) with an extra term \(\widetilde{\beta}|\mathbf{u}|^{\widetilde{r}-1}\mathbf{u}\) to model pumping when \(\widetilde{\beta}<0\), in opposition to the damping modeled by the term \(\beta|\mathbf{u}|^{r-1}\mathbf{u}\) when \(\beta>0\) (referred to as the Brinkman-Forchheimer extended Darcy (BFeD) model). For \(\beta>0\) and \(\widetilde{\beta}\in\mathbb{R}\), the existence of weak solutions is obtained by assuming that \(r>\widetilde{r}\geq 1\), and the continuous dependence on the data as well as the existence of strong solutions were established for \(r>3\). As we are working on the torus \(\mathbb{T}^{d}\) and \(\widetilde{r}<r\), by modifying the calculations suitably, the results of this paper hold true for the BFeD model as well.
### Literature survey
A substantial body of literature is available on the global solvability results for the system (1.1) (cf. [2, 17, 18, 21, 29, 31], etc.). The existence of a global weak solution to the system (1.1) is established in [2] and its uniqueness (for \(d=2\), \(r\in[1,\infty)\) and for \(d=3\), \(r\in[3,\infty)\)) in [29, 31], etc. The Brinkman-Forchheimer equations with fast growing nonlinearities are considered in [21], where the authors established the existence of regular dissipative solutions and global attractors for 3D CBF equations for \(r>3\). The authors in [13, 17] proved that all weak solutions of the critical 3D CBF equations satisfy the energy equality in bounded
as well as periodic domains. For the global solvability and random dynamics of stochastic CBF equations, one may refer to [25, 32, 34], etc. As with the 3D NSE, the global solvability of deterministic and stochastic subcritical 3D CBF equations (and of the critical case when \(2\beta\mu<1\)) is still an open problem.
For ordinary differential equations, backward uniqueness is equivalent to forward uniqueness, whereas for stochastic differential equations, backward uniqueness is closely related to the question of existence of a stochastic flow, and the latter implies the former ([7]). As for deterministic partial differential equations (PDEs), in the case of stochastic PDEs the existence of a flow does not imply backward uniqueness. The main applications of backward uniqueness are in the long time behavior of the solutions (cf. [7, 16, 24, 43], etc.) and control theory ([5, 12, 30], etc.). The backward uniqueness as well as unique continuation property for various equations, like NSE, Kuramoto-Sivashinsky equations, nonlinear dissipative Schrodinger equation, etc., are established in [16]. By proving the backward uniqueness result, the asymptotic behavior for large times of solutions of linear stochastic PDEs of parabolic type is investigated in [7]. The backward uniqueness property of the solution to the 3D stochastic magnetohydrodynamic-\(\alpha\) model driven by a linear multiplicative Gaussian noise is studied in [52]. The authors in [5] proved the backward uniqueness of solutions to stochastic semilinear parabolic equations as well as for the tamed NSE driven by a linear multiplicative Gaussian noise. They also provided applications to the approximate controllability of nonlinear stochastic parabolic equations with initial controllers. The backward uniqueness of the 3D NSE and its applications have been explored in [10, 11, 16, 23, 26, 38, 46, 47, 48], etc., and references therein. The authors in [19, 39] proved backward uniqueness results for the 3D NSE of compressible flow and the primitive equations, respectively. In this work, we establish the backward uniqueness result for deterministic and stochastic 2D and 3D CBF equations and investigate their applications in control theory and attractor theory.
### Difficulties, approaches and novelties
We consider the CBF equations (1.1) in a \(d\)-dimensional torus only. In the torus \(\mathbb{T}^{d}\), as well as on the whole space \(\mathbb{R}^{d}\), the Helmholtz-Hodge projection \(\mathcal{P}\) and \(-\Delta\) commute ([45, Theorem 2.22]). Therefore the equality ([17, Lemma 2.1])
\[\int_{\mathbb{T}^{d}}(-\Delta\mathbf{y}(x))\cdot|\mathbf{y}(x)|^{r-1}\mathbf{y }(x)\mathrm{d}x\] \[=\int_{\mathbb{T}^{d}}|\nabla\mathbf{y}(x)|^{2}|\mathbf{y}(x)|^{r-1} \mathrm{d}x+4\Bigg{[}\frac{r-1}{(r+1)^{2}}\Bigg{]}\int_{\mathbb{T}^{d}}|\nabla |\mathbf{y}(x)|^{\frac{r+1}{2}}|^{2}\mathrm{d}x, \tag{1.3}\]
is quite useful in obtaining the regularity results. The above equality may not be useful in domains other than the whole space or a \(d\)-dimensional torus (see [17, 31], etc. for a detailed discussion). For \(\mathbf{x}\in\mathbb{H}\) and \(\mathbf{f}\in\mathrm{L}^{2}(0,T;\mathbb{H})\) (see section 2 below for functional setting), using the above estimate, one can show that the weak solution \(\mathbf{u}\in\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^{2}(0,T;\mathbb{V})\cap \mathrm{L}^{r+1}(0,T;\widehat{\mathbb{L}}^{r+1})\) to the system (2.11) (see below) has the regularity
\[\mathbf{u}\in\mathrm{C}((0,T];\mathbb{V})\cap\mathrm{L}^{2}(\epsilon,T;\mathrm{D} (\mathrm{A}))\cap\mathrm{L}^{r+1}(\epsilon,T;\widehat{\mathbb{L}}^{3(r+1)})\]
for any \(\epsilon>0\) (see (3.16)-(3.18) below). Furthermore, for the supercritical case, if \(\frac{\mathrm{d}\mathbf{f}}{\mathrm{d}t}\in\mathrm{L}^{2}(0,T;\mathbb{V}^{\prime})\), one can show that \(\frac{\mathrm{d}\mathbf{u}}{\mathrm{d}t}\in\mathrm{L}^{\infty}(\epsilon_{1},T; \mathbb{H})\cap\mathrm{L}^{2}(\epsilon_{1},T;\mathbb{V})\) and if \(\frac{\mathrm{d}\mathbf{f}}{\mathrm{d}t}\in\mathrm{L}^{2}(0,T;\mathbb{H})\), then one gets \(\frac{\mathrm{d}\mathbf{u}}{\mathrm{d}t}\in\mathrm{L}^{\infty}(\epsilon_{1},T; \mathbb{V})\), for any \(0<\epsilon<\epsilon_{1}\). Note that in the stochastic case, we are unable to obtain the above regularity on the time derivative, so we must restrict ourselves to \(r\in[3,5]\) in three dimensions.
We point out here that in the deterministic case, the results obtained in this work hold true for bounded domains as well. For \(\mathbf{x}\in\mathbb{H}\) and \(\mathbf{f}\in\mathrm{W}^{1,2}(0,T;\mathbb{H})\) (so that \(\mathbf{f}\in\mathrm{C}([0,T];\mathbb{H})\)),
the authors in [21] established that the weak solution of (2.11) with fast growing nonlinearities (see below) has the regularity \(\boldsymbol{u}\in\mathrm{L}^{\infty}(\epsilon,T;\mathrm{D}(\mathrm{A}))\), for any \(\epsilon>0\) ([21, Theorem 4.2], see (4.33) below). However, in the case of \(\mathbb{T}^{d}\), we obtained the backward uniqueness result without using this regularity result.
To the best of our knowledge, backward uniqueness results for the system (1.1) and its stochastic counterpart have not been considered in the literature. The main objectives of this work are to establish
1. the backward uniqueness property of the system (1.1) by using a logarithmic convexity method, together with its applications, namely (a) the approximate controllability with respect to the initial data of the system (1.1), (b) the zero Lipschitz deviation of the global attractor for the system (1.1), and (c) the uniqueness of Lagrangian trajectories in 2D and 3D CBF flows by using the log-Lipschitz regularity;
2. the pathwise backward uniqueness property of stochastic CBF equations perturbed by a linear multiplicative Gaussian noise and its application to approximate controllability.
We use a logarithmic convexity approach to obtain the backward uniqueness results (cf. [5, 16, 24]). In order to prove the backward uniqueness, for \(\boldsymbol{u}=\boldsymbol{u}_{1}-\boldsymbol{u}_{2}\), several authors have used the ratio (cf. [5, 7, 16, 24, 43])
\[\Lambda(t)=\frac{\langle\mathcal{A}(t)\boldsymbol{u}(t),\boldsymbol{u}(t) \rangle}{\|\boldsymbol{u}(t)\|_{\mathbb{H}}^{2}},\]
for all \(t\in[0,T]\), where \(\mathcal{A}(\cdot)\) is linear and self-adjoint. But in this work, for the deterministic supercritical case, we consider for all \(t\in[0,T]\)
\[\Lambda(t)=\frac{\langle\mathcal{A}(\boldsymbol{u}(t)),\boldsymbol{u}(t) \rangle}{\|\boldsymbol{u}(t)\|_{\mathbb{H}}^{2}},\ \ \text{where}\ \ \mathcal{A}(\boldsymbol{u})=\mu\mathrm{A}\boldsymbol{u}+\alpha\boldsymbol{u}+ \beta[\mathcal{C}(\boldsymbol{u}_{1})-\mathcal{C}(\boldsymbol{u}_{2})], \tag{1.4}\]
which is nonlinear. In the deterministic setting, the monotonicity property of \(\mathcal{C}(\cdot)\) (see (2.6) below) as well as the regularity of the solutions to the system (1.1) (see (3.17), (3.18), (3.23), (3.27), (3.28) and (3.35) below) play a crucial role in obtaining the backward uniqueness results for the case \(d=2,r\in[1,\infty)\), \(d=3,r\in[3,\infty)\) (\(2\beta\mu\geq 1\) for \(d=r=3\)).
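To indicate the role of this quotient (an informal sketch of the logarithmic convexity argument; the precise computation is carried out in Section 3), write the equation satisfied by the difference \(\boldsymbol{u}=\boldsymbol{u}_{1}-\boldsymbol{u}_{2}\) abstractly as \(\frac{\mathrm{d}\boldsymbol{u}}{\mathrm{d}t}+\mathcal{A}(\boldsymbol{u})=\mathcal{R}\), where \(\mathcal{R}\) collects the remaining (convective) terms. Then

\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\log\|\boldsymbol{u}(t)\|_{\mathbb{H}}^{2}=\frac{\big(\frac{\mathrm{d}\boldsymbol{u}}{\mathrm{d}t}(t),\boldsymbol{u}(t)\big)}{\|\boldsymbol{u}(t)\|_{\mathbb{H}}^{2}}=-\Lambda(t)+\frac{(\mathcal{R}(t),\boldsymbol{u}(t))}{\|\boldsymbol{u}(t)\|_{\mathbb{H}}^{2}},\]

and the essence of the method is to show that \(\Lambda(\cdot)\) remains controlled on \([0,T]\), so that \(\log\|\boldsymbol{u}(t)\|_{\mathbb{H}}^{2}\) cannot reach \(-\infty\) in finite time; consequently \(\boldsymbol{u}(T)=\boldsymbol{0}\) forces \(\boldsymbol{u}(t)=\boldsymbol{0}\) for all \(t\in[0,T]\).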
Whereas in the stochastic case, we use the ratio
\[\widehat{\Lambda}(t)=\frac{\langle\widehat{\mathcal{A}}\boldsymbol{v}(t),\boldsymbol{v}(t)\rangle}{\|\boldsymbol{v}(t)\|_{\mathbb{H}}^{2}},\ \ \text{where}\ \ \widehat{\mathcal{A}}\boldsymbol{v}=\mu\mathrm{A}\boldsymbol{v}+\left(\alpha+\frac{\sigma^{2}}{2}\right)\boldsymbol{v},\]
for the noise intensity \(\sigma\in\mathbb{R}\backslash\{0\}\). Note that \(\widehat{\mathcal{A}}\) is a self-adjoint operator. By using this ratio, we are able to obtain the backward uniqueness results only for the case \(d=2,r\in[1,\infty)\) and \(d=3,r\in[3,5]\) (\(2\beta\mu\geq 1\) for \(d=r=3\)). The regularity estimates obtained in (5.3) and (5.4) help us to obtain the desired results. As it is not possible to obtain estimates similar to (3.27), (3.28), and (3.35) on the time derivative in the stochastic case, we are unable to use the ratio given in (1.4) and consequently cannot obtain the desired results for \(d=3\), \(r\in(5,\infty)\).
Approximate controllability is a property of dynamical systems which means that the system can be steered from any initial state to an arbitrary but close enough final state by inputting an appropriate control. It is a weaker form of exact controllability, which requires that the system can be steered from any initial state to an arbitrary final state. A direct consequence of the backward uniqueness result is the _approximate controllability with respect to the initial data, which is viewed as a start controller_ (cf. [5, 28], etc.). Following the works [5, 12], etc., we
prove the approximate controllability results for 2D and 3D deterministic as well as stochastic CBF equations.
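In the present setting this can be phrased as follows (an informal restatement; the precise statements are Theorem 4.3 and Theorem 5.3 below): for every target \(\boldsymbol{u}_{T}\in\mathbb{H}\) and every \(\varepsilon>0\) there exists an initial datum \(\boldsymbol{x}_{\varepsilon}\in\mathbb{H}\), the start controller, such that the corresponding solution satisfies

\[\|\boldsymbol{u}(T;\boldsymbol{x}_{\varepsilon})-\boldsymbol{u}_{T}\|_{\mathbb{H}}<\varepsilon,\]

that is, the set of states reachable at time \(T\) by varying only the initial data is dense in \(\mathbb{H}\).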
The backward uniqueness of CBF equations has an additional application in the realm of attractor theory. The injectivity of the solution semigroup \(\mathrm{S}(\cdot)\) is an immediate consequence of backward uniqueness property (Lemma 4.5). From [25, Theorem 3.5] (see [35] for 2D CBF flows), we know that if the autonomous forcing \(\boldsymbol{f}\in\mathbb{H}\), then the system (2.11) possesses a global attractor \(\mathscr{A}\) in \(\mathbb{H}\). In fact, for the case \(d=2,r\in[1,\infty)\) and \(d=3,r\in[3,\infty)\) (\(2\beta\mu\geq 1\) for \(d=r=3\)), using the procedure followed to prove backward uniqueness, one can establish the following (Theorem 4.6):
\[\|\mathrm{A}^{\frac{1}{2}}(\boldsymbol{u}_{1}-\boldsymbol{u}_{2})\|_{\mathbb{H }}^{2}\leq C_{0}\|\boldsymbol{u}_{1}-\boldsymbol{u}_{2}\|_{\mathbb{H}}^{2}\log \!\left(\frac{M_{0}^{2}}{\|\boldsymbol{u}_{1}-\boldsymbol{u}_{2}\|_{\mathbb{H }}^{2}}\right)\!,\ \ \text{for all}\ \ \boldsymbol{u}_{1},\boldsymbol{u}_{2}\in\mathscr{A},\ \boldsymbol{u}_{1}\neq \boldsymbol{u}_{2}, \tag{1.5}\]
where \(M_{0}\geq 4\sup\limits_{\boldsymbol{x}\in\mathbb{H}}\|\boldsymbol{x}\|_{\mathbb{H }}\) and \(C_{0}\) is a constant. Note that the above result can be used to obtain the 1-log-Lipschitz continuity of \(\mathrm{A}:\mathscr{A}\to\mathbb{H}\) (Corollary 4.7). Using the techniques given in [40, 43], the estimate (1.5) is used to prove the zero Lipschitz deviation for 2D and 3D CBF equations when \(\boldsymbol{f}\in\mathbb{H}\) (Theorem 4.8). Whether (1.5) holds without the factor \(\log\!\left(\frac{M_{0}^{2}}{\|\boldsymbol{u}_{1}-\boldsymbol{u}_{2}\|_{ \mathbb{H}}^{2}}\right)\) is an open problem.
The authors in [8, 9] proved the uniqueness of Lagrangian trajectories for 2D and 3D NSE (local for 3D case) in periodic domains and bounded domains, respectively. The log-Lipschitz regularity of the velocity field and Holder regularity of the flow map of the three-dimensional Navier-Stokes equations with small data in critical spaces is demonstrated in [4]. Given a smooth solution of 2D and 3D NSE, the authors in [9] established the uniqueness of Lagrangian particle trajectories, as well as their continuity with respect to the Eulerian initial data through the abstract results in [9, Theorems 3.2.1 and 3.2.2]. By applying these abstract results, we prove the uniqueness of Lagrangian trajectories in 2D and 3D CBF flows and the continuity of Lagrangian trajectories with respect to the Eulerian initial data (Theorem 4.9).
### Organization of the paper
The rest of the paper is organized as follows: In the next section, we provide the necessary function spaces and operators needed to obtain the main results of this work. The backward uniqueness for 2D and 3D deterministic CBF equations for \(d=2,r\in[1,\infty)\) and \(d=3,r\in[3,\infty)\) (\(2\beta\mu\geq 1\) for \(d=r=3\)) is established in section 3 by using a logarithmic convexity approach (Theorem 3.1). Some applications of the backward uniqueness result are discussed in section 4. As a direct consequence of the backward uniqueness result, we establish the approximate controllability with respect to the initial data in Theorem 4.3. As an application in the attractor theory, we show that the semigroup operator \(\mathrm{S}(\cdot)\) is injective (Lemma 4.5). Moreover, we establish some results on the boundedness of "log-Dirichlet quotients" for differences of solutions of the 2D and 3D CBF equations on the global attractor (Theorem 4.6 and Corollary 4.7). This helps us to prove the zero Lipschitz deviation for the global attractors (Theorem 4.8). The uniqueness as well as continuity with respect to the Eulerian initial data of Lagrangian trajectories in 2D and 3D CBF flows is established in Theorem 4.9. The stochastic CBF equations perturbed by a linear multiplicative Gaussian noise are considered in section 5. By using a suitable transformation, we transform the stochastic CBF equations into a random dynamical system and then prove the pathwise backward uniqueness results for stochastic CBF equations for \(d=2,r\in[1,\infty)\) and \(d=3,r\in[3,5]\) (\(2\beta\mu\geq 1\) for \(d=r=3\)) in Theorem 5.1. The approximate controllability result for the stochastic CBF equations is stated in Theorem 5.3.
## 2. Functional Setting
In this section, we provide the necessary function spaces needed to obtain the main results of this work. We consider the problem (1.1) on a \(d\)-dimensional torus \(\mathbb{T}^{d}=\left(\mathbb{R}/\mathrm{L}\mathbb{Z}\right)^{d}\) (\(d=2,3\)), with periodic boundary conditions and zero-mean value constraint for the functions, that is, \(\int_{\mathbb{T}^{d}}\boldsymbol{u}(x)\mathrm{d}x=\boldsymbol{0}\).
### Function spaces
Let \(\,\hat{\mathbb{C}}_{p}^{\infty}(\mathbb{T}^{d};\mathbb{R}^{d})\) denote the space of all infinitely differentiable functions (\(\mathbb{R}^{d}\)-valued) such that \(\int_{\mathbb{T}^{d}}\boldsymbol{u}(x)\mathrm{d}x=\boldsymbol{0}\) and satisfy the periodic boundary conditions (1.2). The Sobolev space \(\hat{\mathbb{H}}_{p}^{k}(\mathbb{T}^{d}):=\hat{\mathrm{H}}_{p}^{k}(\mathbb{T}^ {d};\mathbb{R}^{d})\) is the completion of \(\hat{\mathbb{C}}_{p}^{\infty}(\mathbb{T}^{d};\mathbb{R}^{d})\) with respect to the \(\mathbb{H}^{s}\) norm \(\|\boldsymbol{u}\|_{\hat{\mathbb{H}}_{p}^{s}}:=\left(\sum_{0\leqslant|\alpha| \leqslant s}\|\mathrm{D}^{\alpha}\boldsymbol{u}\|_{\mathrm{L}^{2}(\mathbb{T}^ {d})}^{2}\right)^{1/2}\). The Sobolev space of periodic functions with zero mean \(\hat{\mathbb{H}}_{p}^{k}(\mathbb{T}^{d})\) is the same as [42, Proposition 5.39]
\[\Bigg{\{}\boldsymbol{u}:\boldsymbol{u}=\sum_{\boldsymbol{k}\in\mathbb{Z}^{d}} \boldsymbol{u}_{\boldsymbol{k}}e^{2\pi i\boldsymbol{k}\cdot\boldsymbol{x}/ \mathrm{L}},\boldsymbol{u}_{\boldsymbol{0}}=\boldsymbol{0},\ \bar{\boldsymbol{u}}_{\boldsymbol{k}}= \boldsymbol{u}_{-\boldsymbol{k}},\ \|\boldsymbol{u}\|_{\hat{\mathbb{H}}_{f}^{s}}:=\sum_{k\in \mathbb{Z}^{d}}|\boldsymbol{k}|^{2s}|\boldsymbol{u}_{\boldsymbol{k}}|^{2}< \infty\Bigg{\}}.\]
From [42, Proposition 5.38], we infer that the norms \(\|\cdot\|_{\hat{\mathbb{H}}_{p}^{s}}\) and \(\|\cdot\|_{\hat{\mathbb{H}}_{f}^{s}}\) are equivalent. Let us define
\[\mathcal{V}:=\{\boldsymbol{u}\in\hat{\mathbb{C}}_{p}^{\infty}(\mathbb{T}^{d} ;\mathbb{R}^{d}):\nabla\cdot\boldsymbol{u}=0\}.\]
The spaces \(\mathbb{H}\), \(\tilde{\mathbb{L}}^{p}\), \(p\in(2,\infty]\) and \(\mathbb{V}\) denote the closure of \(\mathcal{V}\) in the Lebesgue spaces \(\mathrm{L}^{2}(\mathbb{T}^{d};\mathbb{R}^{d})\), \(\mathrm{L}^{p}(\mathbb{T}^{d};\mathbb{R}^{d})\), \(p\in(2,\infty]\) and the Sobolev space \(\mathrm{H}^{1}(\mathbb{T}^{d};\mathbb{R}^{d})\), respectively. The zero mean condition provides the well-known Poincare inequality,
\[\lambda_{1}\|\boldsymbol{u}\|_{\mathbb{H}}^{2}\leqslant\|\boldsymbol{u}\|_{ \mathbb{V}}^{2}, \tag{2.1}\]
where \(\lambda_{1}=\frac{4\pi^{2}}{\mathrm{L}^{2}}\) ([42, Lemma 5.40]). Then, we characterize the spaces \(\mathbb{H}\), \(\tilde{\mathbb{L}}^{p}\) and \(\mathbb{V}\) with the norms
\[\|\boldsymbol{u}\|_{\mathbb{H}}^{2}:=\int_{\mathbb{T}^{d}}|\boldsymbol{u}(x)|^ {2}\mathrm{d}x,\quad\|\boldsymbol{u}\|_{\mathbb{L}^{p}}^{p}=\int_{\mathbb{T}^{ d}}|\boldsymbol{u}(x)|^{p}\mathrm{d}x\ \ \text{and}\ \ \|\boldsymbol{u}\|_{\mathbb{V}}^{2}:=\int_{\mathbb{T}^{d}}|\nabla \boldsymbol{u}(x)|^{2}\mathrm{d}x,\]
respectively. Let \((\cdot,\cdot)\) and \(\langle\cdot,\cdot\rangle\) denote the inner product in the Hilbert space \(\mathbb{H}\) and the induced duality between the spaces \(\mathbb{V}\) and its dual \(\mathbb{V}^{\prime}\) as well as \(\tilde{\mathbb{L}}^{p}\) and its dual \(\tilde{\mathbb{L}}^{p^{\prime}}\), where \(\frac{1}{p}+\frac{1}{p^{\prime}}=1\), respectively. Note that \(\mathbb{H}\) can be identified with its own dual \(\mathbb{H}^{\prime}\).
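For zero-mean fields, the Poincare inequality (2.1) with the stated constant \(\lambda_{1}=\frac{4\pi^{2}}{\mathrm{L}^{2}}\) can be verified directly from the Fourier expansion (a one-line check, up to the common Parseval normalization):

\[\|\boldsymbol{u}\|_{\mathbb{V}}^{2}=\sum_{\boldsymbol{k}\neq\boldsymbol{0}}\frac{4\pi^{2}|\boldsymbol{k}|^{2}}{\mathrm{L}^{2}}|\boldsymbol{u}_{\boldsymbol{k}}|^{2}\geq\frac{4\pi^{2}}{\mathrm{L}^{2}}\sum_{\boldsymbol{k}\neq\boldsymbol{0}}|\boldsymbol{u}_{\boldsymbol{k}}|^{2}=\lambda_{1}\|\boldsymbol{u}\|_{\mathbb{H}}^{2},\]

since the zero mode vanishes and \(|\boldsymbol{k}|\geq 1\) for every remaining mode.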
### Linear operator
Let \(\mathcal{P}_{p}:\mathbb{L}^{p}(\mathbb{T}^{d})\to\tilde{\mathbb{L}}^{p}\), \(p\in[1,\infty)\) be the Helmholtz-Hodge (or Leray) projection (see [3]). Note that \(\mathcal{P}_{p}\) is a bounded linear operator and for \(p=2\), \(\mathcal{P}:=\mathcal{P}_{2}\) is an orthogonal projection ([45, Section 2.1]). We define the Stokes operator
\[\mathrm{A}\boldsymbol{u}:=-\mathcal{P}\Delta\boldsymbol{u},\ \boldsymbol{u}\in \mathrm{D}(\mathrm{A}):=\mathbb{V}\cap\hat{\mathbb{H}}_{p}^{2}(\mathbb{T}^{d}).\]
Note that \(\mathrm{D}(\mathrm{A})\) can also be written as \(\mathrm{D}(\mathrm{A})=\big{\{}\boldsymbol{u}\in\hat{\mathbb{H}}_{p}^{2}( \mathbb{T}^{d}):\nabla\cdot\boldsymbol{u}=0\big{\}}\). On a torus, \(\mathcal{P}\) and \(\Delta\) commutes ([45, Lemma 2.9]), so that \(\mathrm{A}\boldsymbol{u}=-\Delta\boldsymbol{u}\). Moreover, for \(d\leqslant 4\), by Sobolev's inequality, one has \(\mathrm{D}(\mathrm{A})\subset\mathbb{H}^{2}\subset\mathbb{L}^{p}\), for all \(p\in[1,\infty)\). Note that the operator \(\mathrm{A}\) is a non-negative self-adjoint operator in \(\mathbb{H}\) with a compact resolvent and
\[\langle\mathrm{A}\boldsymbol{u},\boldsymbol{u}\rangle=\|\boldsymbol{u}\|_{ \mathbb{V}}^{2},\ \ \text{for all}\ \ \boldsymbol{u}\in\mathbb{V},\ \ \text{so that}\ \ \|\mathrm{A}\boldsymbol{u}\|_{\mathbb{V}^{\prime}}\leqslant\|\boldsymbol{u}\|_{ \mathbb{V}}. \tag{2.2}\]
Since \(\mathrm{A}^{-1}\) is a compact self-adjoint operator in \(\mathbb{H}\), we obtain a complete family of orthonormal eigenfunctions \(\{\boldsymbol{e}_{k}\}_{k=1}^{\infty}\subset\hat{\mathbb{C}}_{p}^{\infty}( \mathbb{T}^{d};\mathbb{R}^{d})\) such that \(\mathrm{A}\boldsymbol{e}_{k}=\lambda_{k}\boldsymbol{e}_{k}\), for \(k=1,2,\ldots\) and \(0<\lambda_{1}\leqslant\lambda_{2}\leqslant\cdots\to\infty\) are the eigenvalues of \(\mathrm{A}\). Note that \(\lambda_{1}=\frac{4\pi^{2}}{\mathrm{L}^{2}}\) is the smallest eigenvalue of
\(\mathrm{A}\) appearing in the Poincare-Wirtinger inequality (2.1). In the sequel, we require the fractional powers of \(\mathrm{A}\) also. For \(\mathbf{u}\in\mathbb{H}\) and \(\alpha>0\), one can define \(\mathrm{A}^{\alpha}\mathbf{u}=\sum_{k=1}^{\infty}\lambda_{k}^{\alpha}\mathbf{u}_{k}\mathbf{e}_{k},\ \mathbf{u}\in\mathrm{D}(\mathrm{A}^{\alpha})\), where \(\mathrm{D}(\mathrm{A}^{\alpha})=\big\{\mathbf{u}\in\mathbb{H}:\sum_{k=1}^{\infty}\lambda_{k}^{2\alpha}|\mathbf{u}_{k}|^{2}<+\infty\big\}.\) Here \(\mathrm{D}(\mathrm{A}^{\alpha})\) is equipped with the norm \(\|\mathrm{A}^{\alpha}\mathbf{u}\|_{\mathbb{H}}=\big(\sum_{k=1}^{\infty}\lambda_{k}^{2\alpha}|\mathbf{u}_{k}|^{2}\big)^{1/2}.\) Note that \(\mathrm{D}(\mathrm{A}^{0})=\mathbb{H}\), \(\mathrm{D}(\mathrm{A}^{\frac{1}{2}})=\mathbb{V}\) and \(\mathrm{D}(\mathrm{A}^{-\frac{1}{2}})=\mathbb{V}^{\prime}\). It is easy to observe that \(\mathrm{D}(\mathrm{A}^{\frac{\alpha}{2}})=\big\{\mathbf{u}\in\hat{\mathbb{H}}_{p}^{\alpha}(\mathbb{T}^{d}):\nabla\cdot\mathbf{u}=0\big\}\) and \(\|\mathrm{A}^{\frac{\alpha}{2}}\mathbf{u}\|_{\mathbb{H}}=C\|\mathbf{u}\|_{\hat{\mathbb{H}}_{p}^{\alpha}}\), for all \(\mathbf{u}\in\mathrm{D}(\mathrm{A}^{\frac{\alpha}{2}})\), \(\alpha\geq 0\) (cf. [42]). Using the Rellich-Kondrachov compactness embedding theorem, we infer that for any \(0\leq s_{1}<s_{2}\), the embedding \(\mathrm{D}(\mathrm{A}^{s_{2}})\hookrightarrow\mathrm{D}(\mathrm{A}^{s_{1}})\) is compact.
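On the torus, the spectrum of \(\mathrm{A}\) can be written down explicitly (recorded here for orientation; this is the standard computation): the eigenfunctions are divergence-free Fourier modes,

\[\mathrm{A}\big(\boldsymbol{a}_{\boldsymbol{k}}e^{2\pi i\boldsymbol{k}\cdot x/\mathrm{L}}\big)=\frac{4\pi^{2}|\boldsymbol{k}|^{2}}{\mathrm{L}^{2}}\,\boldsymbol{a}_{\boldsymbol{k}}e^{2\pi i\boldsymbol{k}\cdot x/\mathrm{L}},\qquad\boldsymbol{a}_{\boldsymbol{k}}\cdot\boldsymbol{k}=0,\ \boldsymbol{k}\in\mathbb{Z}^{d}\setminus\{\boldsymbol{0}\},\]

so that, in particular, \(\lambda_{1}=\frac{4\pi^{2}}{\mathrm{L}^{2}}\) is attained by the lowest nonzero modes, in agreement with (2.1).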
### Bilinear operator
Let us define the trilinear form \(b(\cdot,\cdot,\cdot):\mathbb{V}\times\mathbb{V}\times\mathbb{V}\to\mathbb{R}\) by
\[b(\mathbf{u},\mathbf{v},\mathbf{w})=\int_{\mathbb{T}^{d}}(\mathbf{u}(x)\cdot\nabla)\mathbf{v}(x) \cdot\mathbf{w}(x)\mathrm{d}x=\sum_{i,j=1}^{d}\int_{\mathbb{T}^{d}}\mathbf{u}_{i}(x) \frac{\partial\mathbf{v}_{j}(x)}{\partial x_{i}}\mathbf{w}_{j}(x)\mathrm{d}x.\]
If \(\mathbf{u},\mathbf{v}\) are such that the linear map \(b(\mathbf{u},\mathbf{v},\cdot)\) is continuous on \(\mathbb{V}\), the corresponding element of \(\mathbb{V}^{\prime}\) is denoted by \(\mathcal{B}(\mathbf{u},\mathbf{v})\). We also denote \(\mathcal{B}(\mathbf{u})=\mathcal{B}(\mathbf{u},\mathbf{u})=\mathcal{P}[(\mathbf{u}\cdot\nabla) \mathbf{u}]\). An integration by parts yields
\[\begin{cases}b(\mathbf{u},\mathbf{v},\mathbf{w})=-b(\mathbf{u},\mathbf{w},\mathbf{v}),\ \ \text{for all}\ \ \mathbf{u},\mathbf{v},\mathbf{w}\in\mathbb{V},\\ b(\mathbf{u},\mathbf{v},\mathbf{v})=0,\ \ \text{for all}\ \ \mathbf{u},\mathbf{v}\in\mathbb{V}.\end{cases} \tag{2.3}\]
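For instance, the second identity in (2.3) follows from integration by parts together with the divergence-free condition (a one-line verification; there are no boundary terms on the torus):

\[b(\boldsymbol{u},\boldsymbol{v},\boldsymbol{v})=\sum_{i,j=1}^{d}\int_{\mathbb{T}^{d}}\boldsymbol{u}_{i}(x)\,\frac{\partial}{\partial x_{i}}\Big(\frac{\boldsymbol{v}_{j}(x)^{2}}{2}\Big)\mathrm{d}x=-\frac{1}{2}\int_{\mathbb{T}^{d}}(\nabla\cdot\boldsymbol{u}(x))\,|\boldsymbol{v}(x)|^{2}\,\mathrm{d}x=0,\]

and the first identity is obtained by applying this to \(\boldsymbol{v}+\boldsymbol{w}\).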
The following estimates on the trilinear form \(b(\cdot,\cdot,\cdot)\) is useful in the sequel ([49, Chapter 2, Section 2.3]):
\((i)\) For \(d=2\),
\[|b(\mathbf{u},\mathbf{v},\mathbf{w})|\leq C\begin{cases}\|\mathbf{u}\|_{\mathbb{H}}^{1/2}\|\mathbf{u}\|_{\mathbb{V}}^{1/2}\|\mathbf{v}\|_{\mathbb{V}}\|\mathbf{w}\|_{\mathbb{H}}^{1/2}\|\mathbf{w}\|_{\mathbb{V}}^{1/2},&\text{for all}\ \ \mathbf{u},\mathbf{v},\mathbf{w}\in\mathbb{V},\\ \|\mathbf{u}\|_{\mathbb{H}}^{1/2}\|\mathbf{u}\|_{\mathbb{V}}^{1/2}\|\mathbf{v}\|_{\mathbb{V}}^{1/2}\|\mathrm{A}\mathbf{v}\|_{\mathbb{H}}^{1/2}\|\mathbf{w}\|_{\mathbb{H}},&\text{for all}\ \ \mathbf{u}\in\mathbb{V},\mathbf{v}\in\mathrm{D}(\mathrm{A}),\mathbf{w}\in\mathbb{H},\\ \|\mathbf{u}\|_{\mathbb{H}}^{1/2}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}^{1/2}\|\mathbf{v}\|_{\mathbb{V}}\|\mathbf{w}\|_{\mathbb{H}},&\text{for all}\ \ \mathbf{u}\in\mathrm{D}(\mathrm{A}),\mathbf{v}\in\mathbb{V},\mathbf{w}\in\mathbb{H}.\end{cases} \tag{2.4}\]
\((ii)\) For \(d=3\),
\[|b(\mathbf{u},\mathbf{v},\mathbf{w})|\leq C\begin{cases}\|\mathbf{u}\|_{\mathbb{H}}^{1/4}\|\mathbf{u}\|_{\mathbb{V}}^{3/4}\|\mathbf{v}\|_{\mathbb{V}}\|\mathbf{w}\|_{\mathbb{H}}^{1/4}\|\mathbf{w}\|_{\mathbb{V}}^{3/4},&\text{for all}\ \ \mathbf{u},\mathbf{v},\mathbf{w}\in\mathbb{V},\\ \|\mathbf{u}\|_{\mathbb{V}}\|\mathbf{v}\|_{\mathbb{V}}^{1/2}\|\mathrm{A}\mathbf{v}\|_{\mathbb{H}}^{1/2}\|\mathbf{w}\|_{\mathbb{H}},&\text{for all}\ \ \mathbf{u}\in\mathbb{V},\mathbf{v}\in\mathrm{D}(\mathrm{A}),\mathbf{w}\in\mathbb{H},\\ \|\mathbf{u}\|_{\mathbb{V}}^{1/2}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}^{1/2}\|\mathbf{v}\|_{\mathbb{V}}\|\mathbf{w}\|_{\mathbb{H}},&\text{for all}\ \ \mathbf{u}\in\mathrm{D}(\mathrm{A}),\mathbf{v}\in\mathbb{V},\mathbf{w}\in\mathbb{H}.\end{cases} \tag{2.5}\]
The above estimates can be derived by using Holder's, Sobolev's, Ladyzhenskaya's, Agmon's and Gagliardo-Nirenberg's inequalities.
### Nonlinear operator
Let us now consider the operator \(\mathcal{C}(\mathbf{u}):=\mathcal{P}(|\mathbf{u}|^{r-1}\mathbf{u})\). It is immediate that \(\langle\mathcal{C}(\mathbf{u}),\mathbf{u}\rangle=\|\mathbf{u}\|_{\mathbb{\mathbb{L}}^{r+1}}^ {r+1}\). From [34, Subsection 2.4], we have
\[\langle\mathcal{C}(\mathbf{u})-\mathcal{C}(\mathbf{v}),\mathbf{u}-\mathbf{v}\rangle\geq\frac{1}{2}\||\mathbf{u}|^{\frac{r-1}{2}}(\mathbf{u}-\mathbf{v})\|_{\mathbb{H}}^{2}+\frac{1}{2}\||\mathbf{v}|^{\frac{r-1}{2}}(\mathbf{u}-\mathbf{v})\|_{\mathbb{H}}^{2}\geq\frac{1}{2^{r-1}}\|\mathbf{u}-\mathbf{v}\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\geq 0, \tag{2.6}\]
for \(r\geq 1\). The map \(\mathcal{C}(\cdot):\mathbb{\mathbb{L}}^{r+1}\to\mathbb{\mathbb{L}}^{\frac{r+1}{r}}\) is Gateaux differentiable with Gateaux derivative
\[\mathcal{C}^{\prime}(\mathbf{u})\mathbf{v}=\left\{\begin{array}{ll}\mathcal{P}(\mathbf{v}),&\text{for }r=1,\\ \mathcal{P}(|\mathbf{u}|^{r-1}\mathbf{v})+(r-1)\mathcal{P}\Big(\frac{\mathbf{u}}{|\mathbf{u}|^{3-r}}(\mathbf{u}\cdot\mathbf{v})\Big)\ \text{if }\mathbf{u}\neq\mathbf{0},\quad\mathbf{0}\ \text{if }\mathbf{u}=\mathbf{0},&\text{for }1<r<3,\\ \mathcal{P}(|\mathbf{u}|^{r-1}\mathbf{v})+(r-1)\mathcal{P}\big(|\mathbf{u}|^{r-3}(\mathbf{u}\cdot\mathbf{v})\mathbf{u}\big),&\text{for }r\geq 3,\end{array}\right. \tag{2.7}\]
for all \(\mathbf{u},\mathbf{v}\in\widetilde{\mathbb{L}}^{r+1}\). Moreover, for \(r\geq 3\), \(\mathcal{C}(\cdot)\) is twice Gateaux differentiable with second order Gateaux derivative
\[\mathcal{C}^{\prime\prime}(\mathbf{u})(\mathbf{v}\otimes\mathbf{w})=\left\{\begin{array}{ ll}(r-1)\mathcal{P}\{|\mathbf{u}|^{r-3}[(\mathbf{u}\cdot\mathbf{w})\mathbf{v}+(\mathbf{u}\cdot\mathbf{v})\mathbf{w} +(\mathbf{w}\cdot\mathbf{v})\mathbf{u}]\}\\ +\left\{\begin{array}{ll}(r-1)(r-3)\mathcal{P}\Big{[}\frac{\mathbf{u}}{|\mathbf{u}|^ {5-r}}(\mathbf{u}\cdot\mathbf{v})(\mathbf{u}\cdot\mathbf{w})\Big{]},&\text{for }\mathbf{u}\neq\mathbf{0}, \\ \mathbf{0}&\text{for }\mathbf{u}=\mathbf{0},\\ (r-1)\mathcal{P}\{|\mathbf{u}|^{r-3}[(\mathbf{u}\cdot\mathbf{w})\mathbf{v}+(\mathbf{u}\cdot\mathbf{v}) \mathbf{w}+(\mathbf{w}\cdot\mathbf{v})\mathbf{u}]\}\\ +(r-1)(r-3)\mathcal{P}[|\mathbf{u}|^{r-5}(\mathbf{u}\cdot\mathbf{v})(\mathbf{u}\cdot\mathbf{w}) \mathbf{u}],&\text{for }r\geq 5,\end{array}\right. \tag{2.8}\]
for all \(\mathbf{u},\mathbf{v},\mathbf{w}\in\widetilde{\mathbb{L}}^{r+1}\).
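We also note that the first inequality in (2.6) follows from a pointwise estimate (a sketch of the standard argument from [34]; only the inequality is recorded here):

\[\big(|\boldsymbol{a}|^{r-1}\boldsymbol{a}-|\boldsymbol{b}|^{r-1}\boldsymbol{b}\big)\cdot(\boldsymbol{a}-\boldsymbol{b})\geq\frac{1}{2}\big(|\boldsymbol{a}|^{r-1}+|\boldsymbol{b}|^{r-1}\big)|\boldsymbol{a}-\boldsymbol{b}|^{2},\qquad\boldsymbol{a},\boldsymbol{b}\in\mathbb{R}^{d},\ r\geq 1,\]

applied pointwise to \(\boldsymbol{a}=\boldsymbol{u}(x)\), \(\boldsymbol{b}=\boldsymbol{v}(x)\) and integrated over \(\mathbb{T}^{d}\).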
**Remark 2.1**.: _On a torus (cf. [17, 34]), we have_
\[\|\mathbf{u}\|_{\widetilde{\mathbb{L}}^{3(r+1)}}^{r+1}\leq C\int_{\mathbb{T}^{d}}| \nabla\mathbf{u}(x)|^{2}|\mathbf{u}(x)|^{r-1}\mathrm{d}x, \tag{2.9}\]
_for \(d=3\) and \(r\geq 1\). Moreover, we obtain_
\[\|\mathbf{u}\|_{\widetilde{\mathbb{L}}^{p(r+1)}}^{r+1}=\||\mathbf{u}|^{\frac{r+1}{2}}\|_{\mathrm{L}^{2p}(\mathbb{T}^{d})}^{2}\leq C\int_{\mathbb{T}^{d}}|\nabla|\mathbf{u}(x)|^{\frac{r+1}{2}}|^{2}\mathrm{d}x\leq C\int_{\mathbb{T}^{d}}|\nabla\mathbf{u}(x)|^{2}|\mathbf{u}(x)|^{r-1}\mathrm{d}x, \tag{2.10}\]
_for \(d=2\) and for all \(p\in[2,\infty)\)._
Taking the Helmholtz-Hodge projection onto the system (1.1), we rewrite
\[\left\{\begin{aligned} \frac{\mathrm{d}\mathbf{u}(t)}{ \mathrm{d}t}+\mu\mathrm{A}\mathbf{u}(t)+\mathrm{B}(\mathbf{u}(t))+\alpha\mathbf{u}(t)+ \beta\mathcal{C}(\mathbf{u}(t))&=\mathbf{f},\\ \mathbf{u}(0)&=\mathbf{x},\end{aligned}\right. \tag{2.11}\]
where for simplicity of notation we used \(\mathcal{P}\mathbf{f}\) as \(\mathbf{f}\). For the case \(d=2,\ r\in[1,\infty)\) and \(d=3,\ r\in[3,\infty)\ (2\beta\mu\geq 1\) for \(d=r=3)\), for \(\mathbf{x}\in\mathbb{H}\) and \(\mathbf{f}\in\mathrm{L}^{2}(0,T;\mathbb{V}^{\prime})\), the existence and uniqueness of global Leray-Hopf weak solutions
\[\mathbf{u}\in\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^{2}(0,T;\mathbb{V})\cap \mathrm{L}^{r+1}(0,T;\widetilde{\mathbb{L}}^{r+1})\]
satisfying the energy equality
\[\|\mathbf{u}(t)\|_{\mathbb{H}}^{2}+2\mu\int_{0}^{t}\|\mathbf{u}(s)\|_{ \mathbb{V}}^{2}\mathrm{d}s+2\alpha\int_{0}^{t}\|\mathbf{u}(s)\|_{\mathbb{H}}^{2} \mathrm{d}s+2\beta\int_{0}^{t}\|\mathbf{u}(s)\|_{\mathbb{L}^{r+1}}^{r+1}\mathrm{d}s\] \[=\|\mathbf{x}\|_{\mathbb{H}}^{2}+2\int_{0}^{t}\langle\mathbf{f}(s),\mathbf{u} (s)\rangle\mathrm{d}s, \tag{2.12}\]
for all \(t\in[0,T]\) of the system (2.11) is established in [17, 21, 31], etc. Moreover, for \(\mathbf{f}\in\mathrm{L}^{2}(0,T;\mathbb{H})\), one can obtain the existence of a unique strong solution \(\mathbf{u}\in\mathrm{C}((0,T];\mathbb{V})\cap\mathrm{L}^{2}(\epsilon,T;\mathrm{D}( \mathrm{A}))\cap\mathrm{L}^{r+1}(\epsilon,T;\widetilde{\mathbb{L}}^{3(r+1)})\), for any \(\epsilon>0\) to the system (1.1) (cf. [17]).
## 3. Backward Uniqueness and Applications
In this section, we prove the backward uniqueness result for the system (2.11) for the case \(d=2,\ r\in[1,\infty)\) and \(d=3,\ r\in[3,\infty)\ (2\beta\mu\geq 1\) for \(d=r=3)\) by using a log convexity method. We use the regularity estimates of the system (2.11) to obtain the required results. We follow the works [5, 16, 24, 38, 43], etc. to establish the backward uniqueness results.
**Theorem 3.1** (Backward uniqueness).: _Let \(\mathbf{x}\in\mathbb{H}\), \(\mathbf{f}\in\mathrm{W}^{1,2}(0,T;\mathbb{H})\) and \(\mathbf{u}_{1},\mathbf{u}_{2}\) satisfy the first equation in the system (2.11). If \(\mathbf{u}_{1}(T)=\mathbf{u}_{2}(T)\) in \(\mathbb{H}\), then \(\mathbf{u}_{1}(t)=\mathbf{u}_{2}(t)\) in \(\mathbb{H}\) for all \(t\in[0,T]\)._
Proof.: The proof of this theorem has been divided into the following steps: For completeness, we provide the forward uniqueness result also.
**Step 1:** Energy equality and forward uniqueness: Let us first consider the case \(d=2,3\) and \(r\in(3,\infty)\). Taking the inner product with \(\mathbf{u}(\cdot)\) to the first equation in (2.11) and then integrating from \(0\) to \(T\), we obtain for all \(t\in[0,T]\)
\[\|\mathbf{u}(t)\|_{\mathbb{H}}^{2}+2\mu\int_{0}^{t}\|\mathbf{u}(s)\|_{\mathbb{V}}^{2}\mathrm{d}s+2\alpha\int_{0}^{t}\|\mathbf{u}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s+2\beta\int_{0}^{t}\|\mathbf{u}(s)\|_{\mathbb{L}^{r+1}}^{r+1}\mathrm{d}s\] \[=\|\mathbf{x}\|_{\mathbb{H}}^{2}+2\int_{0}^{t}\langle\mathbf{f}(s),\mathbf{u}(s)\rangle\mathrm{d}s\leqslant\|\mathbf{x}\|_{\mathbb{H}}^{2}+\mu\int_{0}^{t}\|\mathbf{u}(s)\|_{\mathbb{V}}^{2}\mathrm{d}s+\frac{1}{\mu}\int_{0}^{t}\|\mathbf{f}(s)\|_{\mathbb{V}^{\prime}}^{2}\mathrm{d}s.\]
Therefore, we have
\[\|\mathbf{u}(t)\|_{\mathbb{H}}^{2}+\mu\int_{0}^{t}\|\mathbf{u}(s)\|_{\mathbb{V}}^{2}\mathrm{d}s+2\alpha\int_{0}^{t}\|\mathbf{u}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s+2\beta\int_{0}^{t}\|\mathbf{u}(s)\|_{\mathbb{L}^{r+1}}^{r+1}\mathrm{d}s\] \[\leqslant\|\mathbf{x}\|_{\mathbb{H}}^{2}+\frac{1}{\mu}\int_{0}^{T}\|\mathbf{f}(t)\|_{\mathbb{V}^{\prime}}^{2}\mathrm{d}t=K, \tag{3.1}\]
for all \(t\in[0,T]\). One can establish the forward uniqueness of the system (1.1) in the following way: Let \(\mathbf{u}_{1}(\cdot)\) and \(\mathbf{u}_{2}(\cdot)\) be two weak solutions of the system (2.11) with the same initial data and forcing, say \(\mathbf{x}\) and \(\mathbf{f}\), respectively. Then \(\mathbf{u}=\mathbf{u}_{1}-\mathbf{u}_{2}\) satisfies the following system in \(\mathbb{V}^{\prime}+\widetilde{\mathbb{L}}^{\frac{r+1}{r}}\) for a.e. \(t\in[0,T]\)
\[\left\{\begin{aligned}&\frac{\mathrm{d}\mathbf{u}(t)}{\mathrm{d}t}+\mu\mathrm{A}\mathbf{u}(t)+[\mathrm{B}(\mathbf{u}_{1}(t),\mathbf{u}(t))+\mathrm{B}(\mathbf{u}(t),\mathbf{u}_{2}(t))]+\alpha\mathbf{u}(t)+\beta[\mathcal{C}(\mathbf{u}_{1}(t))-\mathcal{C}(\mathbf{u}_{2}(t))]=\mathbf{0},\\&\mathbf{u}(0)=\mathbf{0}.\end{aligned}\right. \tag{3.2}\]
Taking the inner product with \(\mathbf{u}(\cdot)\) and integrating from \(0\) to \(t\), we obtain
\[\|\mathbf{u}(t)\|_{\mathbb{H}}^{2}+2\mu\int_{0}^{t}\|\mathbf{u}(s)\|_{\mathbb{V}}^{2}\mathrm{d}s+2\alpha\int_{0}^{t}\|\mathbf{u}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s+2\beta\int_{0}^{t}\langle\mathcal{C}(\mathbf{u}_{1}(s))-\mathcal{C}(\mathbf{u}_{2}(s)),\mathbf{u}(s)\rangle\mathrm{d}s\] \[=\|\mathbf{u}(0)\|_{\mathbb{H}}^{2}-2\int_{0}^{t}\langle\mathrm{B}(\mathbf{u}(s),\mathbf{u}_{2}(s)),\mathbf{u}(s)\rangle\mathrm{d}s, \tag{3.3}\]
for all \(t\in[0,T]\). Note that \(\langle\mathrm{B}(\mathbf{u}_{1})-\mathrm{B}(\mathbf{u}_{2}),\mathbf{u}\rangle=\langle \mathrm{B}(\mathbf{u},\mathbf{u}_{2}),\mathbf{u}\rangle\), since \(\langle\mathrm{B}(\mathbf{u}_{1},\mathbf{u}),\mathbf{u}\rangle=0\). For \(d=2,3\) and \(r\in(3,\infty)\), using Holder's and Young's inequalities, we estimate \(|\langle\mathrm{B}(\mathbf{u},\mathbf{u}_{2}),\mathbf{u}\rangle|\) as
\[|\langle\mathrm{B}(\mathbf{u},\mathbf{u}),\mathbf{u}_{2}\rangle|\leqslant\|\mathbf{u}\|_{\mathbb{V}}\||\mathbf{u}_{2}||\mathbf{u}|\|_{\mathbb{H}}\leqslant\frac{\mu}{2}\|\mathbf{u}\|_{\mathbb{V}}^{2}+\frac{1}{2\mu}\||\mathbf{u}_{2}||\mathbf{u}|\|_{\mathbb{H}}^{2}. \tag{3.4}\]
We take the term \(\|\mathbf{u}_{2}\mathbf{u}\|_{\mathbb{H}}^{2}\) from (3.4) and use Holder's and Young's inequalities to estimate it as
\[\int_{\mathcal{O}}|\mathbf{u}_{2}(x)|^{2}|\mathbf{u}(x)|^{2}\mathrm{d}x =\int_{\mathcal{O}}|\mathbf{u}_{2}(x)|^{2}|\mathbf{u}(x)|^{\frac{4}{r-1}}| \mathbf{u}(x)|^{\frac{2(r-3)}{r-1}}\mathrm{d}x\] \[\leqslant\left(\int_{\mathcal{O}}|\mathbf{u}_{2}(x)|^{r-1}|\mathbf{u}(x)| ^{2}\mathrm{d}x\right)^{\frac{2}{r-1}}\left(\int_{\mathcal{O}}|\mathbf{u}(x)|^{2} \mathrm{d}x\right)^{\frac{r-3}{r-1}}\] \[\leqslant\frac{\beta\mu}{2}\bigg{(}\int_{\mathcal{O}}|\mathbf{u}_{2}( x)|^{r-1}|\mathbf{u}(x)|^{2}\mathrm{d}x\bigg{)}+\frac{r-3}{r-1}\bigg{(}\frac{4}{\beta\mu(r-1)} \bigg{)}^{\frac{2}{r-3}}\bigg{(}\int_{\mathcal{O}}|\mathbf{u}(x)|^{2}\mathrm{d}x \bigg{)}, \tag{3.5}\]
for \(r>3\). Therefore, from (3.4), we have
\[|\langle\mathrm{B}(\mathbf{u},\mathbf{u}_{2}),\mathbf{u}\rangle|\leq\frac{\mu}{2}\|\mathbf{u}\|_{\mathbb{V}}^{2}+\frac{\beta}{4}\||\mathbf{u}_{2}|^{\frac{r-1}{2}}\mathbf{u}\|_{\mathbb{H}}^{2}+\frac{\vartheta}{2\mu}\|\mathbf{u}\|_{\mathbb{H}}^{2}, \tag{3.6}\]
where \(\vartheta=\frac{r-3}{r-1}\Big{(}\frac{4}{\beta\mu(r-1)}\Big{)}^{\frac{2}{r-3}}\). Moreover, we have
\[\beta\langle\mathcal{C}(\mathbf{u}_{1})-\mathcal{C}(\mathbf{u}_{2}),\mathbf{u}\rangle\geq\frac{\beta}{2}\||\mathbf{u}_{1}|^{\frac{r-1}{2}}\mathbf{u}\|_{\mathbb{H}}^{2}+\frac{\beta}{2}\||\mathbf{u}_{2}|^{\frac{r-1}{2}}\mathbf{u}\|_{\mathbb{H}}^{2}. \tag{3.7}\]
Using (3.6) and (3.7) in (3.3), we deduce
\[\|\mathbf{u}(t)\|_{\mathbb{H}}^{2}+\mu\int_{0}^{t}\|\mathbf{u}(s)\|_{ \mathbb{V}}^{2}\mathrm{d}s+2\alpha\int_{0}^{t}\|\mathbf{u}(s)\|_{\mathbb{H}}^{2} \mathrm{d}s+\frac{\beta}{2^{r}}\int_{0}^{t}\|\mathbf{u}(s)\|_{\mathbb{L}^{r+1}}^{ r+1}\mathrm{d}s\] \[\leq\|\mathbf{u}(0)\|_{\mathbb{H}}^{2}+\frac{\vartheta}{\mu}\int_{0}^ {t}\|\mathbf{u}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s, \tag{3.8}\]
where we have used (2.6) also. An application of Gronwall's inequality in (3.8) yields for all \(t\in[0,T]\)
\[\|\mathbf{u}(t)\|_{\mathbb{H}}^{2}\leq\|\mathbf{u}(0)\|_{\mathbb{H}}^{2}e^{\frac{ \vartheta T}{\mu}}, \tag{3.9}\]
and the forward uniqueness follows. For the case \(d=r=3\), we estimate \(\langle\mathrm{B}(\mathbf{u},\mathbf{u}_{2}),\mathbf{u}\rangle\) as
\[|\langle\mathrm{B}(\mathbf{u},\mathbf{u}_{2}),\mathbf{u}\rangle|\leq\|\mathbf{u}\|_{\mathbb{V}}\||\mathbf{u}_{2}||\mathbf{u}|\|_{\mathbb{H}}\leq\theta\mu\|\mathbf{u}\|_{\mathbb{V}}^{2}+\frac{1}{4\theta\mu}\||\mathbf{u}_{2}||\mathbf{u}|\|_{\mathbb{H}}^{2}, \tag{3.10}\]
for some \(0<\theta\leq 1\). Using (3.7) and (3.10), one can deduce from (3.3) that
\[\|\mathbf{u}(t)\|_{\mathbb{H}}^{2}+(1-\theta)\mu\int_{0}^{t}\|\mathbf{u}( s)\|_{\mathbb{V}}^{2}\mathrm{d}s+2\alpha\int_{0}^{t}\|\mathbf{u}(s)\|_{\mathbb{H}}^{2} \mathrm{d}s\] \[\quad+\left(\beta-\frac{1}{2\theta\mu}\right)\int_{0}^{t}\|\mathbf{u }_{2}(s)|\mathbf{u}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s\leq\|\mathbf{u}(0)\|_{\mathbb{H}} ^{2}, \tag{3.11}\]
for all \(t\in[0,T]\). For \(2\theta\mu\geq 1\), one can obtain the required result. For \(d=2\) and \(r\in[1,3]\), we estimate \(\langle\mathrm{B}(\mathbf{u},\mathbf{u}_{2}),\mathbf{u}\rangle\) using Holder's, Ladyzhenskaya's and Young's inequalities as
\[|\langle\mathrm{B}(\mathbf{u},\mathbf{u}_{2}),\mathbf{u}\rangle| \leq\|\mathbf{u}_{2}\|_{\widetilde{\mathrm{L}}^{4}}\|\mathbf{u}\|_{ \mathbb{V}}\|\mathbf{u}\|_{\widetilde{\mathrm{L}}^{4}}\leq 2^{1/4}\|\mathbf{u}_{2}\|_{ \widetilde{\mathrm{L}}^{4}}\|\mathbf{u}\|_{\mathbb{V}}^{3/2}\|\mathbf{u}\|_{\mathbb{H }}^{1/2}\] \[\leq\frac{\mu}{2}\|\mathbf{u}\|_{\mathbb{V}}^{2}+\frac{27}{16\mu^{3}} \|\mathbf{u}_{2}\|_{\widetilde{\mathrm{L}}^{4}}^{4}\|\mathbf{u}\|_{\mathbb{H}}^{2}. \tag{3.12}\]
Combining (3.7) and (3.12), and substituting it in (3.3), we obtain
\[\|\mathbf{u}(t)\|_{\mathbb{H}}^{2}+\mu\int_{0}^{t}\|\mathbf{u}(s)\|_{ \mathrm{V}}^{2}\mathrm{d}s+2\alpha\int_{0}^{t}\|\mathbf{u}(s)\|_{\mathbb{H}}^{2} \mathrm{d}s+\frac{\beta}{2^{r-2}}\int_{0}^{t}\|\mathbf{u}(s)\|_{\widetilde{ \mathrm{L}}^{r+1}}^{r+1}\mathrm{d}s\] \[\leq\|\mathbf{u}(0)\|_{\mathbb{H}}^{2}+\frac{27}{8\mu^{3}}\int_{0}^{t} \|\mathbf{u}_{2}(s)\|_{\widetilde{\mathrm{L}}^{4}}^{4}\|\mathbf{u}(s)\|_{\mathbb{H}}^{2} \mathrm{d}s, \tag{3.13}\]
for all \(t\in[0,T]\). An application of Gronwall's inequality and then Ladyzhenskaya's inequality in (3.13) yields for all \(t\in[0,T]\)
\[\|\mathbf{u}(t)\|_{\mathbb{H}}^{2}\leq\|\mathbf{u}(0)\|_{\mathbb{H}}^{2}\exp\!\left(\frac{27}{4\mu^{3}}\sup_{t\in[0,T]}\|\mathbf{u}_{2}(t)\|_{\mathbb{H}}^{2}\int_{0}^{T}\|\mathbf{u}_{2}(t)\|_{\mathbb{V}}^{2}\mathrm{d}t\right)\!, \tag{3.14}\]
and the uniqueness follows.
**Step 2:** Further energy estimates: For \(d=2,3\) and \(r\in(3,\infty)\), taking the inner product with \(\Lambda\mathbf{u}(\cdot)\) to the first equation in (2.11) and then integrating from \(\epsilon\) to \(t\), we find
\[\|\mathbf{u}(t)\|_{\mathbb{V}}^{2}+2\mu\int_{\epsilon}^{t}\|\Lambda\mathbf{u }(s)\|_{\mathbb{H}}^{2}\mathrm{d}s+2\alpha\int_{\epsilon}^{t}\|\mathbf{u}(s)\|_{ \mathbb{V}}^{2}\mathrm{d}s\] \[\quad+2\beta\int_{\epsilon}^{t}\|\mathbf{u}(s)|^{\frac{r-1}{2}}| \nabla\mathbf{u}(s)|_{\mathbb{H}}^{2}\mathrm{d}s+8\beta\bigg{[}\frac{(r-1)}{(r+1)^ {2}}\bigg{]}\int_{\epsilon}^{t}\|\nabla|\mathbf{u}(s)|^{\frac{r+1}{2}}\|_{\mathbb{H }}^{2}\mathrm{d}s\] \[=\|\mathbf{u}(\epsilon)\|_{\mathbb{V}}^{2}+\left\{\begin{array}{cl }2\int_{0}^{t}(\mathbf{f}(s),\Lambda\mathbf{u}(s))\mathrm{d}s,&\text{for $d=2$},\\ 2\int_{\epsilon}^{t}(\mathrm{B}(\mathbf{u}(s)),\Lambda\mathbf{u}(s))\mathrm{d}s+2\int_ {0}^{t}(\mathbf{f}(s),\Lambda\mathbf{u}(s))\mathrm{d}s,&\text{for $d=3$},\end{array}\right.\] \[\leqslant\|\mathbf{u}(\epsilon)\|_{\mathbb{V}}^{2}+\left\{\begin{array} []{cl}\mu\int_{\epsilon}^{t}\|\Lambda\mathbf{u}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s+ \frac{1}{\mu}\int_{0}^{t}\|\mathbf{f}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s,&\text{for $d=2$},\\ \mu\int_{\epsilon}^{t}\|\Lambda\mathbf{u}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s+\beta \int_{\epsilon}^{t}\|\mathbf{u}(s)|^{\frac{r-1}{2}}|\nabla\mathbf{u}(s)|\|_{\mathbb{H }}^{2}\mathrm{d}s&\text{for $d=3$},\end{array}\right. \tag{3.15}\]
where we have used the fact that \((\mathrm{B}(\mathbf{u}),\Lambda\mathbf{u})=0\) in \(d=2\) (Lemma 3.1, [49]) and performed a calculation similar to (3.6). Therefore, integrating the above inequality for \(\epsilon\in(0,t)\) and then using (3.1), we have for all \(t\in[\epsilon,T]\),
\[\|\mathbf{u}(t)\|_{\mathbb{V}}^{2}\leqslant\left\{\begin{array}{cl} \frac{K}{\mu t}+\frac{1}{\mu}\int_{0}^{T}\|\mathbf{f}(t)\|_{\mathbb{H}}^{2} \mathrm{d}t,&\text{for $d=2$},\\ \frac{K}{\mu t}+\frac{2K\theta}{\mu^{2}}+\frac{2}{\mu}\int_{0}^{T}\|\mathbf{f}(t) \|_{\mathbb{H}}^{2}\mathrm{d}t,&\text{for $d=3$}.\end{array}\right. \tag{3.16}\]
Therefore, from (3.15), we further deduce
\[\mu\int_{\epsilon}^{t}\|\Lambda\mathbf{u}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s+\beta \int_{\epsilon}^{t}\|\mathbf{u}(s)|^{\frac{r-1}{2}}|\nabla\mathbf{u}(s)|\|_{\mathbb{H }}^{2}\mathrm{d}s\leqslant\left\{\begin{array}{cl}\frac{K}{2\mu t}+\frac{2} {\mu}\int_{0}^{T}\|\mathbf{f}(t)\|_{\mathbb{H}}^{2}\mathrm{d}t,&\text{for $d=2$},\\ \frac{K}{\mu t}+\frac{4K\theta}{\mu^{2}}+\frac{4}{\mu}\int_{0}^{T}\|\mathbf{f}(t) \|_{\mathbb{H}}^{2}\mathrm{d}t,&\text{for $d=3$},\end{array}\right. \tag{3.17}\]
for all \(t\in[\epsilon,T].\) Using Remark 2.1, we obtain from (3.17) that
\[\int_{\epsilon}^{t}\|\mathbf{u}(s)\|_{\mathbb{H}^{3(r+1)}}^{r+1}\mathrm{d}s \leqslant\left\{\begin{array}{cl}C\bigg{(}\frac{K}{2\mu t}+\frac{2}{\mu} \int_{0}^{T}\|\mathbf{f}(t)\|_{\mathbb{H}}^{2}\mathrm{d}t\bigg{)},&\text{for $d=2$},\\ C\bigg{(}\frac{K}{\mu t}+\frac{4K\theta}{\mu^{2}}+\frac{4}{\mu}\int_{0}^{T}\| \mathbf{f}(t)\|_{\mathbb{H}}^{2}\mathrm{d}t\bigg{)},&\text{for $d=3$}.\end{array}\right. \tag{3.18}\]
For \(d=r=3\), we estimate \(|(\mathrm{B}(\mathbf{u}),\Lambda\mathbf{u})|\) as
\[|(\mathrm{B}(\mathbf{u}),\Lambda\mathbf{u})|\leqslant\|\Lambda\mathbf{u}\|_{\mathbb{H}}\| \mathbf{u}\|\nabla\mathbf{u}\|_{\mathbb{H}}\leqslant\frac{\theta\mu}{2}\|\Lambda\mathbf{u} \|_{\mathbb{H}}^{2}+\frac{1}{2\theta\mu}\|\mathbf{u}\|\nabla\mathbf{u}\|_{\mathbb{H}}^{ 2},\]
for some \(0<\theta\leqslant 1\). Thus, from (3.15), we deduce
\[\|\mathbf{u}(t)\|_{\mathbb{V}}^{2}+\mu(1-\theta)\int_{\epsilon}^{t}\| \Lambda\mathbf{u}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s+2\alpha\int_{\epsilon}^{t}\|\mathbf{u }(s)\|_{\mathbb{V}}^{2}\mathrm{d}s+2\bigg{(}\beta-\frac{1}{2\theta\mu}\bigg{)} \int_{\epsilon}^{t}\|\mathbf{u}(s)\|\nabla\mathbf{u}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s\] \[\leqslant\|\mathbf{u}(\epsilon)\|_{\mathbb{V}}^{2}+\frac{1}{\mu}\int_{ 0}^{t}\|\mathbf{f}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s, \tag{3.19}\]
for all \(t\in[\epsilon,T]\) and some \(0<\theta\leqslant 1\). Therefore, for \(2\beta\mu\geqslant 1\), a calculation similar to (3.17) yields
\[\|\mathbf{u}(t)\|_{\mathbb{V}}^{2}+\mu(1-\theta)\int_{\epsilon}^{t}\| \Lambda\mathbf{u}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s+2\bigg{(}\beta-\frac{1}{2\theta \mu}\bigg{)}\int_{\epsilon}^{t}\|\mathbf{u}(s)\|\nabla\mathbf{u}(s)\|_{\mathbb{H}}^{2} \mathrm{d}s\] \[\leqslant\frac{K}{\mu t}+\frac{1}{\mu}\int_{0}^{T}\|\mathbf{f}(t)\|_{ \mathbb{H}}^{2}\mathrm{d}t, \tag{3.20}\]
for all \(t\in[\epsilon,T]\).
Taking the inner product with \(\frac{\mathrm{d}\boldsymbol{u}}{\mathrm{d}t}\) to the first equation in (2.11), we obtain for a.e. \(t\in[\epsilon,T]\)
\[\begin{split}&\left\|\frac{\mathrm{d}\boldsymbol{u}(t)}{\mathrm{d}t }\right\|_{\mathbb{H}}^{2}+\frac{\mu}{2}\frac{\mathrm{d}}{\mathrm{d}t}\| \boldsymbol{u}(t)\|_{\mathbb{V}}^{2}+\frac{\alpha}{2}\frac{\mathrm{d}}{ \mathrm{d}t}\|\boldsymbol{u}(t)\|_{\mathbb{H}}^{2}+\frac{\beta}{r+1}\frac{ \mathrm{d}}{\mathrm{d}t}\|\boldsymbol{u}(t)\|_{\mathbb{L}^{r+1}}^{\tau+1}\\ &=-\bigg{(}\mathrm{B}(\boldsymbol{u}(t)),\frac{\mathrm{d} \boldsymbol{u}(t)}{\mathrm{d}t}\bigg{)}+\bigg{(}\boldsymbol{f}(t),\frac{ \mathrm{d}\boldsymbol{u}(t)}{\mathrm{d}t}\bigg{)}\\ &\leqslant\left\|\frac{\mathrm{d}\boldsymbol{u}(t)}{\mathrm{d}t} \right\|_{\mathbb{H}}\left\|\|\boldsymbol{u}(t)\|\nabla\boldsymbol{u}(t)\| \right\|_{\mathbb{H}}+\|\boldsymbol{f}(t)\|_{\mathbb{H}}\bigg{\|}\frac{ \mathrm{d}\boldsymbol{u}(t)}{\mathrm{d}t}\bigg{\|}_{\mathbb{H}}\\ &\leqslant\frac{1}{2}\bigg{\|}\frac{\mathrm{d}\boldsymbol{u}(t)}{ \mathrm{d}t}\bigg{\|}_{\mathbb{H}}^{2}+\|\boldsymbol{u}(t)\|\nabla\boldsymbol{ u}(t)\|_{\mathbb{H}}^{2}+\|\boldsymbol{f}(t)\|_{\mathbb{H}}^{2}\\ &\leqslant\frac{1}{2}\bigg{\|}\frac{\mathrm{d}\boldsymbol{u}(t)} {\mathrm{d}t}\bigg{\|}_{\mathbb{H}}^{2}+\frac{\beta\mu}{2}\|\boldsymbol{u}(t) \|^{\frac{r-1}{2}}|\nabla\boldsymbol{u}(t)|\|_{\mathbb{H}}^{2}+\vartheta\| \boldsymbol{u}(t)\|_{\mathbb{V}}^{2}.\end{split}\]
Integrating the above inequality from \(\epsilon\) to \(t\), we deduce by using Ladyzhenskaya's inequality as
\[\begin{split}&\int_{\epsilon}^{t}\!\!\left\|\frac{\mathrm{d} \boldsymbol{u}(s)}{\mathrm{d}t}\right\|_{\mathbb{H}}^{2}\!\!\mathrm{d}s+\mu\| \boldsymbol{u}(t)\|_{\mathbb{V}}^{2}+\alpha\|\boldsymbol{u}(t)\|_{\mathbb{H}} ^{2}+\frac{2\beta}{r+1}\|\boldsymbol{u}(t)\|_{\mathbb{L}^{r+1}}^{\tau+1}\\ &\leqslant\mu\|\boldsymbol{u}(\epsilon)\|_{\mathbb{V}}^{2}+ \alpha\|\boldsymbol{u}(\epsilon)\|_{\mathbb{H}}^{2}+\frac{2\beta}{r+1}\| \boldsymbol{u}(\epsilon)\|_{\mathbb{L}^{r+1}}^{\tau+1}+\beta\mu\int_{\epsilon }^{t}\|\boldsymbol{u}(s)|^{\frac{r-1}{2}}|\nabla\boldsymbol{u}(s)|\|_{ \mathbb{H}}^{2}\mathrm{d}s+2\vartheta\int_{0}^{t}\!\|\boldsymbol{u}(s)\|_{ \mathbb{V}}^{2}\mathrm{d}s\\ &\leqslant\frac{2\beta}{r+1}\|\boldsymbol{u}(\epsilon)\|_{ \mathbb{L}^{r+1}}^{r+1}+\left\{\begin{array}{ll}\bigg{(}\alpha+\frac{ \vartheta}{\mu}+\frac{1}{t}\bigg{)}K+3\int_{0}^{T}\|\boldsymbol{f}(t)\|_{ \mathbb{H}}^{2}\mathrm{d}t,&\text{ for }d=2,\\ \bigg{(}\alpha+\frac{4\vartheta}{\mu}+\frac{1}{t}\bigg{)}K+6\int_{0}^{T}\| \boldsymbol{f}(t)\|_{\mathbb{H}}^{2}\mathrm{d}t,&\text{ for }d=3,\end{array}\right.\end{split} \tag{3.21}\]
where we have used (3.1) and (3.17). Integrating the above inequality over \(\epsilon\in(0,t)\) results to
\[\begin{split}&\int_{\epsilon}^{t}s\bigg{\|}\frac{\mathrm{d} \boldsymbol{u}(s)}{\mathrm{d}t}\bigg{\|}_{\mathbb{H}}^{2}\!\!\mathrm{d}s\\ &\leqslant\frac{2\beta}{r+1}\int_{0}^{t}\|\boldsymbol{u}( \epsilon)\|_{\mathbb{L}^{r+1}}^{\tau+1}\mathrm{d}\epsilon+\left\{\begin{array}[ ]{ll}\bigg{(}\alpha+\frac{2\vartheta}{\mu}+\frac{2}{t}\bigg{)}Kt+3\int_{0}^{T }\|\boldsymbol{f}(t)\|_{\mathbb{H}}^{2}\mathrm{d}t,&\text{ for }d=2,\\ \bigg{(}\alpha+\frac{8\vartheta}{\mu}+\frac{2}{t}\bigg{)}Kt+6\int_{0}^{T}\| \boldsymbol{f}(t)\|_{\mathbb{H}}^{2}\mathrm{d}t,&\text{ for }d=3,\end{array}\right.\end{split} \tag{3.22}\]
Since \(\int_{\epsilon}^{t}\!\!\left\|\frac{\mathrm{d}\boldsymbol{u}(s)}{\mathrm{d}t} \right\|_{\mathbb{H}}^{2}\!\!\mathrm{d}s\leqslant\int_{\epsilon}^{t}s\big{\|} \frac{\mathrm{d}\boldsymbol{u}(s)}{\mathrm{d}t}\big{\|}_{\mathbb{H}}^{2}\! \mathrm{d}s\), from (3.22), we also have
(3.23)
so that \(\frac{\mathrm{d}\boldsymbol{u}}{\mathrm{d}t}\in\mathrm{L}^{2}(\epsilon,T;\mathbb{ H})\) for any \(\epsilon>0\). Therefore, one can conclude that \(\boldsymbol{u}\in\mathrm{C}((0,T];\mathbb{V})\cap\mathrm{L}^{2}(\epsilon,T; \mathrm{D}(\mathrm{A}))\cap\mathrm{L}^{r+1}(\epsilon,T;\widehat{\mathbb{L}}^{ 3(r+1)})\) for any \(\epsilon>0\).
Let us define \(\boldsymbol{v}=\frac{\mathrm{d}\boldsymbol{u}}{\mathrm{d}t}\). Differentiating (2.11) with respect to \(t\), we find that \(\boldsymbol{v}(\cdot)\) satisfies
\[\frac{\mathrm{d}\boldsymbol{v}(t)}{\mathrm{d}t}+\mu\mathrm{A}\boldsymbol{v}(t) +\mathrm{B}(\boldsymbol{u}(t),\boldsymbol{v}(t))+\mathrm{B}(\boldsymbol{v}(t), \boldsymbol{u}(t))+\alpha\boldsymbol{v}(t)+\beta\mathcal{C}^{\prime}( \boldsymbol{u}(t))\boldsymbol{v}(t)=\boldsymbol{f}_{t}, \tag{3.24}\]
for a.e. \(t\in[\epsilon,T]\). Taking the inner product with \(\mathbf{v}(\cdot)\) in the above equation, we obtain for a.e. \(t\in[\epsilon,T]\)
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\mathbf{v}(t)\|_{\mathbb{H}} ^{2}+\mu\|\mathbf{v}(t)\|_{\mathbb{V}}^{2}+\alpha\|\mathbf{v}(t)\|_{\mathbb{H}}^{2}+ \beta\|\|\mathbf{u}(t)|^{\frac{r-1}{2}}|\mathbf{v}(t)\|_{\mathbb{H}}^{2}+(r-1)\beta\| \mathbf{u}(t)|^{\frac{r-3}{2}}(\mathbf{u}(t)\cdot\mathbf{v}(t))\|_{\mathbb{H}}^{2}\] \[=-(\mathrm{B}(\mathbf{v}(t),\mathbf{u}(t)),\mathbf{v}(t))+\langle\mathbf{f}_{t}(t ),\mathbf{v}(t)\rangle\] \[\leqslant\|\mathbf{v}(t)\|_{\mathbb{V}}\|\mathbf{u}(t)\|\mathbf{v}(t)\|_{ \mathbb{H}}+\|\mathbf{f}_{t}(t)\|_{\mathbb{V}^{\prime}}\|\mathbf{v}(t)\|_{\mathbb{V}}\] \[\leqslant\frac{\mu}{2}\|\mathbf{v}(t)\|_{\mathbb{V}}^{2}+\frac{\beta }{2}\|\mathbf{u}(t)|^{\frac{r-1}{2}}|\mathbf{v}(t)|\|_{\mathbb{H}}^{2}+\frac{\vartheta} {\mu}\|\mathbf{v}(t)\|_{\mathbb{H}}^{2}+\frac{1}{\mu}\|\mathbf{f}_{t}(t)\|_{\mathbb{V }^{\prime}}^{2}, \tag{3.25}\]
where we have performed a calculation similar to (3.6). Integrating the above inequality from \(\epsilon_{1}\) to \(t\), we deduce
\[\|\mathbf{v}(t)\|_{\mathbb{H}}^{2}+\mu\int_{\epsilon_{1}}^{t}\|\mathbf{v} (s)\|_{\mathbb{V}}^{2}\mathrm{d}s+2\alpha\int_{\epsilon_{1}}^{t}\|\mathbf{v}(s)\|_ {\mathbb{H}}^{2}\mathrm{d}s+\beta\int_{\epsilon_{1}}^{t}\|\mathbf{u}(s)|^{\frac{r -1}{2}}|\mathbf{v}(s)|\|_{\mathbb{H}}^{2}\mathrm{d}s\] \[\quad+2(r-1)\beta\int_{\epsilon_{1}}^{t}\|\mathbf{u}(s)|^{\frac{r-3}{ 2}}(\mathbf{u}(s)\cdot\mathbf{v}(s))\|_{\mathbb{H}}^{2}\mathrm{d}s\] \[\leqslant\|\mathbf{v}(\epsilon_{1})\|_{\mathbb{H}}^{2}+\frac{2 \vartheta}{\mu}\int_{\epsilon_{1}}^{t}\|\mathbf{v}(s)\|_{\mathbb{H}}^{2}\mathrm{d }s+\frac{2}{\mu}\int_{\varepsilon_{1}}^{t}\|\mathbf{f}_{t}(s)\|_{\mathbb{V}^{ \prime}}^{2}\mathrm{d}s. \tag{3.26}\]
Once again integrating with respect to \(\epsilon_{1}\) from \(\epsilon\) to \(t\) in (3.26), we arrive at
\[\|\mathbf{v}(t)\|_{\mathbb{H}}^{2} \leqslant\frac{1}{(t-\epsilon)}\int_{\epsilon}^{t}\|\mathbf{v}( \epsilon_{1})\|_{\mathbb{H}}^{2}\mathrm{d}\epsilon_{1}+\frac{2\vartheta}{\mu} \int_{\epsilon}^{t}\|\mathbf{v}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s+\frac{2}{\mu} \int_{\epsilon}^{t}\|\mathbf{f}_{t}(s)\|_{\mathbb{V}^{\prime}}^{2}\mathrm{d}s\] \[\leqslant\frac{2}{\mu}\int_{0}^{T}\|\mathbf{f}_{t}(t)\|_{\mathbb{V}^{ \prime}}^{2}\mathrm{d}t\] \[\quad+\frac{1}{\epsilon}\bigg{(}\frac{1}{(t-\epsilon)}+\frac{2 \vartheta}{\mu}\bigg{)}\Bigg{\{}\begin{array}{ll}\bigg{(}\alpha+\frac{2 \vartheta}{\mu}+\frac{2}{t}+\frac{1}{(r+1)t}\bigg{)}Kt+3\int_{0}^{T}\|\mathbf{f}(t )\|_{\mathbb{H}}^{2}\mathrm{d}t,&\text{for $d=2$},\\ \bigg{(}\alpha+\frac{8\vartheta}{\mu}+\frac{2}{t}+\frac{1}{(r+1)t}\bigg{)}Kt+ 6\int_{0}^{T}\|\mathbf{f}(t)\|_{\mathbb{H}}^{2}\mathrm{d}t,&\text{for $d=3$},\end{array} \tag{3.27}\]
for all \(0<\epsilon<\epsilon_{1}<t<T\), where we have used (3.23) also. Therefore, we deduce from (3.26) that
\[\mu\int_{\epsilon_{1}}^{t}\|\mathbf{v}(s)\|_{\mathbb{V}}^{2}\mathrm{d }s+2\alpha\int_{\epsilon_{1}}^{t}\|\mathbf{v}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s+ \beta\int_{\epsilon_{1}}^{t}\|\mathbf{u}(s)|^{\frac{r-1}{2}}|\mathbf{v}(s)|\|_{ \mathbb{H}}^{2}\mathrm{d}s\] \[\quad+2(r-1)\beta\int_{\epsilon_{1}}^{t}\|\mathbf{u}(s)|^{\frac{r-3}{ 2}}(\mathbf{u}(s)\cdot\mathbf{v}(s))\|_{\mathbb{H}}^{2}\mathrm{d}s\] \[\leqslant\frac{4}{\mu}\int_{0}^{T}\|\mathbf{f}_{t}(t)\|_{\mathbb{V}^{ \prime}}^{2}\mathrm{d}t\] \[\quad+\frac{1}{\epsilon}\bigg{(}\frac{1}{(t-\epsilon)}+\frac{4 \vartheta}{\mu}\bigg{)}\Bigg{\{}\begin{array}{ll}\bigg{(}\alpha+\frac{2 \vartheta}{\mu}+\frac{2}{t}+\frac{1}{(r+1)t}\bigg{)}Kt+3\int_{0}^{T}\|\mathbf{f}(t )\|_{\mathbb{H}}^{2}\mathrm{d}t,&\text{for $d=2$},\\ \bigg{(}\alpha+\frac{8\vartheta}{\mu}+\frac{2}{t}+\frac{1}{(r+1)t}\bigg{)}Kt+ 6\int_{0}^{T}\|\mathbf{f}(t)\|_{\mathbb{H}}^{2}\mathrm{d}t,&\text{for $d=3$},\end{array} \tag{3.28}\]
for all \(0<\epsilon<\epsilon_{1}<t<T\).
For \(d=3\) and \(r\geqslant 5\), taking the inner product with \(\frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t}\) to the first equation in (3.24), we find
\[\left\|\frac{\mathrm{d}\mathbf{v}(t)}{\mathrm{d}t}\right\|_{\mathbb{H}}^{2}+\frac{ \mu}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\mathbf{v}(t)\|_{\mathbb{V}}^{2}+\frac{ \alpha}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\mathbf{v}(t)\|_{\mathbb{H}}^{2}+\frac{ \beta}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\mathbf{u}(t)|^{\frac{r-1}{2}}|\mathbf{v}(t)\|_{ \mathbb{H}}^{2}\]
\[\leq\frac{\beta}{2}\|\boldsymbol{u}\|^{\frac{r-3}{2}}(\boldsymbol{u} \cdot\boldsymbol{v})\|_{\mathbb{H}}\|\boldsymbol{v}\|_{\mathbb{V}}^{2}+C\beta\| \boldsymbol{u}\|_{\mathbb{L}^{3(r+1)}}^{\frac{r-3}{2}+1}\|\boldsymbol{v}\|_{ \mathbb{V}}^{2}+C\beta\|\boldsymbol{v}\|_{\mathbb{H}}^{2}. \tag{3.32}\]
Substituting (3.30) and (3.32) in (3.29) and then integrating from \(\epsilon_{2}\) to \(t\), we obtain
\[\int_{\epsilon_{2}}^{t}\!\!\left\|\frac{\mathrm{d}\boldsymbol{v}( s)}{\mathrm{d}t}\right\|_{\mathbb{H}}^{2}\!\!\mathrm{d}s+\mu\|\boldsymbol{v}(t) \|_{\mathbb{V}}^{2}+\alpha\|\boldsymbol{v}(t)\|_{\mathbb{H}}^{2}+\beta\| \boldsymbol{u}(t)|^{\frac{r-1}{2}}|\boldsymbol{v}(t)\|_{\mathbb{H}}^{2}+ \beta(r-1)\|\|\boldsymbol{u}(t)|^{\frac{r-3}{2}}|\boldsymbol{u}(t)\cdot \boldsymbol{v}(t)\|_{\mathbb{H}}^{2}\] \[\leq\mu\|\boldsymbol{v}(\epsilon_{2})\|_{\mathbb{V}}^{2}+\alpha \|\boldsymbol{v}(\epsilon_{2})\|_{\mathbb{H}}^{2}+\beta\|\boldsymbol{u}( \epsilon_{2})|^{\frac{r-1}{2}}|\boldsymbol{v}(\epsilon_{2})\|_{\mathbb{V}}^{2} +\beta(r-1)\|\boldsymbol{u}(\epsilon_{2})|^{\frac{r-3}{2}}|\boldsymbol{u}( \epsilon_{2})\cdot\boldsymbol{v}(\epsilon_{2})\|_{\mathbb{H}}^{2}\] \[\quad+2\int_{\epsilon_{2}}^{t}\|\boldsymbol{f}_{t}(s)\|_{\mathbb{ H}}^{2}\!\!\mathrm{d}s+C\int_{\epsilon_{2}}^{t}\|\boldsymbol{u}(s)\|_{ \mathbb{H}}\|\boldsymbol{v}(s)\|_{\mathbb{V}}^{2}\!\!\mathrm{d}s+\beta\int_{ \epsilon_{2}}^{t}\|\boldsymbol{u}(s)\|_{\mathbb{H}}^{2}(\boldsymbol{u}(s) \cdot\boldsymbol{v}(s))\|_{\mathbb{H}}^{2}\!\!\mathrm{d}s\]
\[+C\beta\int_{\epsilon_{2}}^{t}\|\mathbf{u}(s)\|_{\mathbb{T}^{\beta(r+1)}}^{ \frac{(r-3)(r+1)}{r-1}}\|\mathbf{v}(s)\|_{\mathbb{V}}^{2}\mathrm{d}s+C\beta\int_{ \epsilon_{2}}^{t}\|\mathbf{v}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s, \tag{3.33}\]
for all \(t\in[\epsilon_{1},T]\). Integrating the above inequality over \(\epsilon_{2}\in(\epsilon_{1},t)\), we have
\[\mu\|\mathbf{v}(t)\|_{\mathbb{V}}^{2}+\alpha\|\mathbf{v}(t)\|_{\mathbb{H} }^{2}+\beta\|\mathbf{u}(t)\|_{\mathbb{H}}^{\frac{r-1}{2}}|\mathbf{v}(t)\|_{\mathbb{H} }^{2}+\beta(r-1)\|\mathbf{u}(t)\|_{\mathbb{H}}^{\frac{r-3}{2}}|\mathbf{u}(t)\cdot\mathbf{v} (t)\|_{\mathbb{H}}^{2}\] \[\leq\frac{1}{\epsilon_{1}}\bigg{[}\mu\int_{\epsilon_{1}}^{t}\| \mathbf{v}(\epsilon_{2})\|_{\mathbb{V}}^{2}\mathrm{d}\epsilon_{2}+\alpha\int_{ \epsilon_{1}}^{t}\|\mathbf{v}(\epsilon_{2})\|_{\mathbb{H}}^{2}\mathrm{d}\epsilon_ {2}+\beta\int_{\epsilon_{1}}^{t}\|\mathbf{u}(\epsilon_{2})\|_{\mathbb{V}}^{2} \mathrm{d}\epsilon_{2}\] \[\quad+\beta(r-1)\int_{\epsilon_{1}}^{t}\|\mathbf{u}(\epsilon_{2})\| ^{\frac{r-3}{2}}|\mathbf{u}(\epsilon_{2})\cdot\mathbf{v}(\epsilon_{2})\|_{\mathbb{H} }^{2}\mathrm{d}\epsilon_{2}\bigg{]}+2\int_{\epsilon_{1}}^{t}(s-\epsilon_{1})\| \mathbf{f}_{t}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s\] \[\quad+C\int_{\epsilon_{1}}^{t}(s-\epsilon_{1})\|\mathbf{u}(s)\|_{ \mathbb{V}}\|\mathrm{A}\mathbf{u}(s)\|_{\mathbb{H}}\|\mathbf{v}(s)\|_{\mathbb{V}}^{2} \mathrm{d}s+\beta\int_{\epsilon_{1}}^{t}(s-\epsilon_{1})\|\mathbf{u}(s)\|^{\frac{ r-3}{2}}_{\mathbb{H}}(\mathbf{u}(s)\cdot\mathbf{v}(s))\|_{\mathbb{H}}^{2}\|\mathbf{v}(s)\|_{ \mathbb{V}}^{2}\mathrm{d}s\] \[\quad+C\beta\int_{\epsilon_{1}}^{t}(s-\epsilon_{1})\|\mathbf{v}(s)\|_ {\mathbb{H}}^{2}\mathrm{d}s+C\beta\int_{\epsilon_{1}}^{t}(s-\epsilon_{1})\| \mathbf{u}(s)\|_{\mathbb{T}^{\frac{r-1}{r-1}}}^{\frac{(r-3)(r+1)}{r-1}}\|\mathbf{v}(s )\|_{\mathbb{V}}^{2}\mathrm{d}s\] \[\leq\frac{1}{\epsilon}\bigg{\{}\frac{4}{\mu}\int_{0}^{T}\|\mathbf{f}_ {t}(t)\|_{\mathbb{V}^{\prime}}^{2}\mathrm{d}t+\frac{1}{\epsilon}\bigg{(}\frac {1}{(t-\epsilon)}+\frac{4\vartheta}{\mu}\bigg{)}\bigg{(}\alpha+\frac{8 \vartheta}{\mu}+\frac{2}{t}+\frac{1}{(r+1)t}\bigg{)}Kt+6\int_{0}^{T}\|\mathbf{f}( t)\|_{\mathbb{H}}^{2}\mathrm{d}t\bigg{\}}\] \[\quad+2T\int_{0}^{T}\|\mathbf{f}_{t}(t)\|_{\mathbb{H}}^{2}\mathrm{d}t +CT\int_{\epsilon_{1}}^{t}\|\mathbf{u}(s)\|_{\mathbb{V}}\|\mathrm{A}\mathbf{u}(s)\|_{ \mathbb{H}}\|\mathbf{v}(s)\|_{\mathbb{V}}^{2}\mathrm{d}s\] \[\quad+\beta T\int_{\epsilon_{1}}^{t}\|\|\mathbf{u}(s)\|^{\frac{r-3}{2 }}(\mathbf{u}(s)\cdot\mathbf{v}(s))\|_{\mathbb{H}}^{2}\|\mathbf{v}(s)\|_{\mathbb{V}}^{2} \mathrm{d}s+C\beta T\int_{\epsilon_{1}}^{t}\|\mathbf{u}(s)\|_{\mathbb{T}^{\frac{r -3}{2(r+1)}}}^{\frac{(r-3)(r+1)}{r-1}}\|\mathbf{v}(s)\|_{\mathbb{V}}^{2}\mathrm{d}s, \tag{3.34}\]
for all \(0<\epsilon<\epsilon_{1}<t<T\), where we have used (3.28) also. An application of Gronwall's inequality in (3.34) yields
\[\|\mathbf{v}(t)\|_{\mathbb{V}}^{2}\] \[\leq\bigg{[}\frac{1}{\epsilon}\bigg{\{}\frac{4}{\mu}\int_{0}^{T} \|\mathbf{f}_{t}(t)\|_{\mathbb{V}^{\prime}}^{2}\mathrm{d}t+\frac{1}{\epsilon} \bigg{(}\frac{1}{(t-\epsilon)}+\frac{4\vartheta}{\mu}\bigg{)}\bigg{(}\alpha+ \frac{8\vartheta}{\mu}+\frac{2}{t}+\frac{1}{(r+1)t}\bigg{)}Kt+6\int_{0}^{T}\| \mathbf{f}(t)\|_{\mathbb{H}}^{2}\mathrm{d}t\bigg{\}}\] \[\quad+2T\int_{0}^{T}\|\mathbf{f}_{t}(t)\|_{\mathbb{H}}^{2}\mathrm{d}t \bigg{]}\exp\bigg{\{}CT^{\frac{3}{2}}\sup_{t\in[\epsilon_{1},t]}\|\mathbf{u}(t)\|_ {\mathbb{V}}\bigg{(}\!\int_{\epsilon_{1}}^{t}\!\|\mathrm{A}\mathbf{u}(s)\|_{ \mathbb{H}}^{2}\mathrm{d}s\bigg{)}^{\frac{1}{2}}\] \[\quad+\beta T\int_{\epsilon_{1}}^{t}\|\mathbf{u}(s)|^{\frac{r-3}{2}}| \mathbf{u}(s)\cdot\mathbf{v}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s+C\beta T\frac{r+1}{r-1} \bigg{(}\!\int_{\epsilon_{1}}^{t}\|\mathbf{u}(s)\|_{\mathbb{T}^{3(r+1)}}^{r+1} \mathrm{d}s\bigg{)}^{\frac{r-3}{r-1}}\bigg{\}}\,\leq C, \tag{3.35}\]
for all \(0<\epsilon<\epsilon_{1}<t<T\), and the right hand side is bounded by using (3.16), (3.17) and (3.28).
**Step 3:** Backward uniqueness: Let us now prove the backward uniqueness property. Let us first consider the case \(d=2,3\) and \(r\in(3,\infty)\). Let \(\mathbf{u}_{1}(\cdot)\) and \(\mathbf{u}_{2}(\cdot)\) be two solutions of the system (2.11) with the same final data, say \(\mathbf{\xi}\) and forcing \(\mathbf{f}\in\mathrm{W}^{1,2}(0,T;\mathbb{H})\). Then \(\mathbf{u}=\mathbf{u}_{1}-\mathbf{u}_{2}\) satisfies the following system in \(\mathbb{V}^{\prime}+\widetilde{\mathbb{L}}^{\frac{r+1}{r}}\) for a.e. \(t\in[0,T]\) and in \(\mathbb{H}\) for a.e. \(t\in[\epsilon_{1},T]\), for any \(0<\epsilon<\epsilon_{1}<T\):
\[\left\{\begin{aligned} \frac{\mathrm{d}\mathbf{u}(t)}{\mathrm{d}t}+ \mathcal{A}(\mathbf{u}(t))&=-[\mathrm{B}(\mathbf{u}_{1}(t),\mathbf{u}(t))+ \mathrm{B}(\mathbf{u}(t),\mathbf{u}_{2}(t))]=h(\mathbf{u}(t)),\\ \mathbf{u}(T)&=\mathbf{0},\end{aligned}\right. \tag{3.36}\]
where
\[\mathcal{A}(\mathbf{u}):=\mu\mathrm{A}\mathbf{u}+\alpha\mathbf{u}+\beta[\mathcal{C}(\mathbf{u}_{1} )-\mathcal{C}(\mathbf{u}_{2})]=\mu\mathrm{A}\mathbf{u}+\alpha\mathbf{u}+\beta\int_{0}^{1} \mathcal{C}^{\prime}(\theta\mathbf{u}+\mathbf{u}_{2})\mathbf{u}\mathrm{d}\theta. \tag{3.37}\]
By the monotonicity of \(\mathcal{C}(\cdot)\) (see (2.6)), we know that
\[\langle\mathcal{A}(\mathbf{u}),\mathbf{u}\rangle=\mu\|\mathbf{u}\|_{\mathbb{V}}^{2}+ \alpha\|\mathbf{u}\|_{\mathbb{H}}^{2}+\langle\mathcal{C}(\mathbf{u}_{1})-\mathcal{C}( \mathbf{u}_{2}),\mathbf{u}\rangle\geq\mu\|\mathbf{u}\|_{\mathbb{V}}^{2}. \tag{3.38}\]
If we denote \(\partial_{t}\mathbf{u}=\mathbf{v}\), then
\[\partial_{t}[\mathcal{A}(\mathbf{u})]=\mu\mathrm{A}\mathbf{v}+\alpha\mathbf{v}+\beta\int_ {0}^{1}\mathcal{C}^{\prime}(\theta\mathbf{u}+\mathbf{u}_{2})\mathbf{v}\mathrm{d}\theta+ \beta\int_{0}^{1}\mathcal{C}^{\prime\prime}(\theta\mathbf{u}+\mathbf{u}_{2})(\mathbf{u} \otimes(\theta\mathbf{v}+\partial_{t}\mathbf{u}_{2}))\mathrm{d}\theta.\]
It should be noted that
\[\langle\partial_{t}[\mathcal{A}(\mathbf{u})]\mathbf{v},\mathbf{u}\rangle=\langle\mathcal{ A}(\mathbf{u}),\mathbf{v}\rangle+\beta\bigg{\langle}\!\int_{0}^{1}\!\mathcal{C}^{ \prime\prime}(\theta\mathbf{u}+\mathbf{u}_{2})(\mathbf{u}\otimes(\theta\mathbf{v}+\partial_{t }\mathbf{u}_{2}))\mathrm{d}\theta,\mathbf{u}\bigg{\rangle}, \tag{3.39}\]
since
\[\langle\mathcal{A}(\mathbf{u}),\mathbf{v}\rangle =\langle\mu\mathrm{A}\mathbf{u},\mathbf{v}\rangle+\alpha(\mathbf{u},\mathbf{v})+ \beta\int_{\mathbb{T}^{d}}\int_{0}^{1}|\theta\mathbf{u}+\mathbf{u}_{2}|^{r-1}\mathrm{d }\theta\mathbf{u}\cdot\mathbf{v}\mathrm{d}x\] \[\quad+\beta(r-1)\int_{\mathbb{T}^{d}}\int_{0}^{1}|\theta\mathbf{u}+ \mathbf{u}_{2}|^{r-3}((\theta\mathbf{u}+\mathbf{u}_{2})\cdot\mathbf{v})((\theta\mathbf{u}+\mathbf{u}_ {2})\cdot\mathbf{u})\mathrm{d}\theta\mathrm{d}x\] \[=\bigg{\langle}\mathbf{u},\mu\mathrm{A}\mathbf{v}+\alpha\mathbf{v}+\beta\int_ {\mathbb{T}^{d}}\int_{0}^{1}|\theta\mathbf{u}+\mathbf{u}_{2}|^{r-1}\mathrm{d}\theta \mathbf{v}\mathrm{d}x\] \[\quad\quad+\beta(r-1)\int_{\mathbb{T}^{d}}\int_{0}^{1}(\theta\mathbf{ u}+\mathbf{u}_{2})|\theta\mathbf{u}+\mathbf{u}_{2}|^{r-3}((\theta\mathbf{u}+\mathbf{u}_{2}) \cdot\mathbf{v})\mathrm{d}\theta\mathrm{d}x\bigg{\rangle}\] \[=\bigg{\langle}\mathbf{u},\mu\mathrm{A}\mathbf{v}+\alpha\mathbf{v}+\beta\int_ {0}^{1}\mathcal{C}^{\prime}(\theta\mathbf{u}+\mathbf{u}_{2})\mathbf{v}\mathrm{d}\theta \bigg{\rangle}.\]
Our aim is to show that if \(\mathbf{u}(T)=\mathbf{0}\), then \(\mathbf{u}(t)=\mathbf{0}\) for all \(t\in[0,T]\). We prove this result by contradiction, first in the interval \([\epsilon_{1},T]\) for any \(\epsilon_{1}>0\); then, by using the continuity of \(\|\mathbf{u}(t)\|_{\mathbb{H}}\) in \([0,T]\) and the arbitrariness of \(\epsilon_{1}\), one can obtain the required result in \([0,T]\). Assume that there exists some \(t_{0}\in[\epsilon_{1},T)\) such that \(\mathbf{u}(t_{0})\neq\mathbf{0}\). Since the mapping \(t\mapsto\|\mathbf{u}(t)\|_{\mathbb{H}}\) is continuous, the following alternative holds:
1. for all \(t\in[t_{0},T]\), \(\|\mathbf{u}(t)\|_{\mathbb{H}}>0\) or
2. there exists a \(t_{1}\in(t_{0},T)\) such that for all \(t\in(t_{0},t_{1})\), \(\|\mathbf{u}(t)\|_{\mathbb{H}}>0\) and \(\mathbf{u}(t_{1})=\mathbf{0}\).
In the second case, denote by \(\Lambda(t)\), the ratio
\[\Lambda(t)=\frac{\langle\mathcal{A}(\mathbf{u}(t)),\mathbf{u}(t)\rangle}{\|\mathbf{u}(t)\|_ {\mathbb{H}}^{2}}\geq\mu\frac{\|\mathbf{u}(t)\|_{\mathbb{V}}^{2}}{\|\mathbf{u}(t)\|_{ \mathbb{H}}^{2}}, \tag{3.40}\]
where we have used (3.38). The ratio \(\frac{\|\mathbf{u}\|_{\mathbb{V}}^{2}}{\|\mathbf{u}\|_{\mathbb{H}}^{2}}\) is known as "Dirichlet's quotient" ([43]). Therefore, we have the following equation for \(\Lambda(t)\):
\[\frac{\mathrm{d}\Lambda}{\mathrm{d}t} =\frac{\langle\partial_{t}[\mathcal{A}(\mathbf{u})],\mathbf{u}\rangle}{\| \mathbf{u}\|_{\mathbb{H}}^{2}}+\frac{\langle\mathcal{A}(\mathbf{u})-2\Lambda\mathbf{u}, \partial_{t}\mathbf{u}\rangle}{\|\mathbf{u}\|_{\mathbb{H}}^{2}}\] \[=\frac{\beta}{\|\mathbf{u}\|_{\mathbb{H}}^{2}}\bigg{\langle}\!\int_{0 }^{1}\mathcal{C}^{\prime\prime}(\theta\mathbf{u}+\mathbf{u}_{2})(\mathbf{u}\otimes( \theta\partial_{t}\mathbf{u}+\partial_{t}\mathbf{u}_{2}))\mathrm{d}\theta,\mathbf{u} \bigg{\rangle}+2\frac{\langle\mathcal{A}(\mathbf{u})-\Lambda\mathbf{u},\partial_{t} \mathbf{u}\rangle}{\|\mathbf{u}\|_{\mathbb{H}}^{2}}, \tag{3.41}\]
where we have used (3.39) also. Since \(\langle\mathcal{A}(\mathbf{u})-\Lambda\mathbf{u},\mathbf{u}\rangle=0\) and \(\partial_{t}\mathbf{u}=-\mathcal{A}(\mathbf{u})+h(\mathbf{u})\), it follows that
\[\frac{1}{2}\frac{\mathrm{d}\Lambda}{\mathrm{d}t}+\frac{\|\mathcal{A }(\mathbf{u})-\Lambda\mathbf{u}\|_{\mathbb{H}}^{2}}{\|\mathbf{u}\|_{\mathbb{H}}^{2}}\] \[=\frac{\beta}{2\|\mathbf{u}\|_{\mathbb{H}}^{2}}\bigg{\langle}\!\int_{ 0}^{1}\mathcal{C}^{\prime\prime}(\theta\mathbf{u}+\mathbf{u}_{2})(\mathbf{u}\otimes( \theta\partial_{t}\mathbf{u}_{1}+(1-\theta)\partial_{t}\mathbf{u}_{2}))\mathrm{d} \theta,\mathbf{u}\bigg{\rangle}+\frac{\langle\mathcal{A}(\mathbf{u})-\Lambda\mathbf{u},h( \mathbf{u})\rangle}{\|\mathbf{u}\|_{\mathbb{H}}^{2}}\] \[\leqslant\frac{\beta}{2\|\mathbf{u}\|_{\mathbb{H}}^{2}}\int_{0}^{1} \|\mathcal{C}^{\prime\prime}(\theta\mathbf{u}_{1}+(1-\theta)\mathbf{u}_{2})(\mathbf{u} \otimes(\theta\partial_{t}\mathbf{u}_{1}+(1-\theta)\partial_{t}\mathbf{u}_{2}))\|_{ \mathbb{V}^{\prime}}\mathrm{d}\theta\|\mathbf{u}\|_{\mathbb{V}}\] \[\quad+\frac{1}{2}\frac{\|\mathcal{A}(\cdot)\mathbf{u}-\Lambda\mathbf{u} \|_{\mathbb{H}}^{2}}{\|\mathbf{u}\|_{\mathbb{H}}^{2}}+\frac{1}{2}\frac{\|h(\mathbf{u} )\|_{\mathbb{H}}^{2}}{\|\mathbf{u}\|_{\mathbb{H}}^{2}}. \tag{3.42}\]
For \(d=2\), \(r\in(3,\infty)\) and \(d=3\), \(r\in(3,5]\), we estimate the term \(\|\mathcal{C}^{\prime\prime}(\theta\mathbf{u}_{1}+(1-\theta)\mathbf{u}_{2})(\mathbf{u} \otimes(\theta\partial_{t}\mathbf{u}+\partial_{t}\mathbf{u}_{2}))\|_{\mathbb{V}^{ \prime}}\) from (3.42) using Holder's and Sobolev's inequalities as
\[\|\mathcal{C}^{\prime\prime}(\theta\mathbf{u}_{1}+(1-\theta)\mathbf{u}_{ 2})(\mathbf{u}\otimes(\theta\partial_{t}\mathbf{u}_{1}+(1-\theta)\partial_{t}\mathbf{u}_{ 2}))\|_{\mathbb{V}^{\prime}}\] \[\leqslant\|\mathcal{C}^{\prime\prime}(\theta\mathbf{u}_{1}+(1-\theta) \mathbf{u}_{2})(\mathbf{u}\otimes\partial_{t}\mathbf{u}_{1})\|_{\mathbb{V}^{\prime}}+\| \mathcal{C}^{\prime\prime}(\theta\mathbf{u}_{1}+(1-\theta)\mathbf{u}_{2})(\mathbf{u} \otimes\partial_{t}\mathbf{u}_{2})\|_{\mathbb{V}^{\prime}}\] \[\leqslant C\|\mathcal{C}^{\prime\prime}(\theta\mathbf{u}_{1}+(1-\theta )\mathbf{u}_{2})(\mathbf{u}\otimes\partial_{t}\mathbf{u}_{1})\|_{\mathbb{L}_{\frac{r+1}{ r}}^{r+1}}+C\|\mathcal{C}^{\prime\prime}(\theta\mathbf{u}_{1}+(1-\theta)\mathbf{u}_{2})(\mathbf{u} \otimes\partial_{t}\mathbf{u}_{2})\|_{\mathbb{L}_{\frac{r+1}{r}}^{r+1}}\] \[\leqslant C\big{(}\|\mathbf{u}_{1}\|_{\mathbb{L}_{r+1}^{r+1}}+\|\mathbf{ u}_{2}\|_{\mathbb{L}_{r+1}^{r+1}}\big{)}^{r-2}\|\mathbf{u}\|_{\mathbb{L}_{r+1}^{r+1}} \|\partial_{t}\mathbf{u}_{1}\|_{\mathbb{L}_{r+1}^{r+1}}+C\big{(}\|\mathbf{u}_{1}\|_{ \mathbb{L}_{r+1}^{r+1}}+\|\mathbf{u}_{2}\|_{\mathbb{L}_{r+1}^{r+1}}\big{)}^{r-2}\| \mathbf{u}\|_{\mathbb{L}_{r+1}^{r+1}}\big{\|}\partial_{t}\mathbf{u}_{2}\|_{\mathbb{L} _{r+1}^{r+1}}\] \[\leqslant C\Big{(}\|\mathbf{u}_{1}\|_{\mathbb{L}_{r+1}^{r-2}}^{r-2}+ \|\mathbf{u}_{2}\|_{\mathbb{L}_{r+1}^{r-2}}^{r-2}\Big{)}(\|\partial_{t}\mathbf{u}_{1} \|_{\mathbb{V}}+\|\partial_{t}\mathbf{u}_{2}\|_{\mathbb{V}})\|\mathbf{u}\|_{\mathbb{V}}, \tag{3.43}\]
for all \(0<\theta<1\). For \(d=3\) and \(r\in(5,\infty)\), we estimate \(\|\mathcal{C}^{\prime\prime}(\theta\mathbf{u}_{1}+(1-\theta)\mathbf{u}_{2})(\mathbf{u} \otimes(\theta\partial_{t}\mathbf{u}+\partial_{t}\mathbf{u}_{2}))\|_{\mathbb{V}^{\prime}}\) using Gagliardo-Nirenberg's Holder's and interpolation inequalities as
\[\|\mathcal{C}^{\prime\prime}(\theta\mathbf{u}_{1}+(1-\theta)\mathbf{u}_{2 })(\mathbf{u}\otimes(\theta\partial_{t}\mathbf{u}_{1}+(1-\theta)\partial_{t}\mathbf{u}_{2} ))\|_{\mathbb{V}^{\prime}}\] \[\leqslant C\|\mathcal{C}^{\prime\prime}(\theta\mathbf{u}_{1}+(1- \theta)\mathbf{u}_{2})(\mathbf{u}\otimes\partial_{t}\mathbf{u}_{1})\|_{\mathbb{L}_{\frac{r}{ 2}}^{6}}+C\|\mathcal{C}^{\prime\prime}(\theta\mathbf{u}_{1}+(1-\theta)\mathbf{u}_{2})( \mathbf{u}\otimes\partial_{t}\mathbf{u}_{2})\|_{\mathbb{L}_{\frac{r}{2}}^{6}}\] \[\leqslant C\big{(}\|\mathbf{u}_{1}\|_{\mathbb{L}_{2}^{(r-2)}}+\|\mathbf{ u}_{2}\|_{\mathbb{L}_{2}^{(r-2)}}\big{)}^{r-2}\|\mathbf{u}\|_{\mathbb{L}_{\theta}}\| \partial_{t}\mathbf{u}_{1}\|_{\mathbb{L}_{\theta}^{6}}+C\big{(}\|\mathbf{u}_{1}\|_{ \mathbb{L}_{2}^{(r-2)}}+\|\mathbf{u}_{2}\|_{\mathbb{L}_{2}^{(r-2)}}\big{)}^{r-2}\| \mathbf{u}\|_{\mathbb{L}_{\theta}^{6}}\|\partial_{t}\mathbf{u}_{2}\|_{\mathbb{L}_{\theta}^{6}}\] \[\leqslant C\bigg{(}\|\mathbf{u}_{1}\|_{\mathbb{L}_{r+1}^{r+1}}\|\mathbf{ u}_{1}\|_{\mathbb{L}_{2}^{(r+1)}}^{\frac{3(r-5)}{4}}+\|\mathbf{u}_{2}\|_{\mathbb{L}_{r+1}^{r+1}} \|\mathbf{u}_{2}\|_{\mathbb{L}_{2}^{(r+1)}}^{\frac{3(r-5)}{4}}\big{)}(\|\partial_{t} \mathbf{u}_{1}\|_{\mathbb{V}}+\|\partial_{t}\mathbf{u}_{2}\|_{\mathbb{V}})\|\mathbf{u}\|_{ \mathbb{V}}. \tag{3.44}\]
For \(d=2\), using Holder's, Agmon's and Young's inequalities, and (2.4), we have
\[\|h(\mathbf{u})\|_{\mathbb{H}}^{2} \leqslant\|\mathrm{B}(\mathbf{u}_{1},\mathbf{u})+\mathrm{B}(\mathbf{u},\mathbf{u}_ {2})\|_{\mathbb{H}}^{2}\] \[\leqslant C\|\mathbf{u}_{1}\|_{\mathbb{L}^{\infty}}^{2}\|\mathbf{u}\|_{ \mathbb{V}}^{2}+C\|\mathbf{u}\|_{\mathbb{H}}\|\mathbf{u}\|_{\mathbb{V}}\|\mathbf{u}_{2}\|_{ \mathbb{H}}\|\mathrm{A}\mathbf{u}_{2}\|_{\mathbb{H}}\] \[\leqslant C\|\mathbf{u}_{1}\|_{\mathbb{H}}\|\Lambda\mathbf{u}_{1}\|_{ \mathbb{H}}\|\mathbf{u}\|_{\mathbb{V}}^{2}+C\|\mathbf{u}_{2}\|_{\mathbb{H}}\|\Lambda\mathbf{u}_ {2}\|_{\mathbb{H}}\|\mathbf{u}\|_{\mathbb{V}}^{2}. \tag{3.45}\]
For \(d=3\), a similar calculation yields (see (2.5))
\[\|h(\mathbf{u})\|_{\mathbb{H}}^{2}\leqslant C(\|\mathbf{u}_{1}\|_{\mathbb{V}}\|\Lambda \mathbf{u}_{1}\|_{\mathbb{H}}+C\|\mathbf{u}_{2}\|_{\mathbb{V}}\|\Lambda\mathbf{u}_{2}\|_{ \mathbb{H}})\|\mathbf{u}\|_{\mathbb{V}}^{2}. \tag{3.46}\]
Therefore, for \(d=2,3\), using (3.40) and the estimates (3.43)-(3.46) in (3.42), we find that \(\frac{1}{2}\frac{\mathrm{d}\Lambda(t)}{\mathrm{d}t}\)
\[\leq\frac{C\beta}{\mu}\Lambda\Big{(}\|\mathbf{u}_{1}\|_{\mathbb{L}^{s+1}} ^{r-2}+\|\mathbf{u}_{2}\|_{\mathbb{L}^{r+1}}^{r-2}\Big{)}(\|\partial_{t}\mathbf{u}_{1}\| _{\mathbb{V}}+\|\partial_{t}\mathbf{u}_{2}\|_{\mathbb{V}})\] \[\quad+\frac{C}{\mu}\Lambda\bigg{\{}\begin{array}{ll}(\|\mathbf{u}_{ 1}\|_{\mathbb{H}}\|\Lambda\mathbf{u}_{1}\|_{\mathbb{H}}+C\|\mathbf{u}_{2}\|_{\mathbb{H }}\|\Lambda\mathbf{u}_{2}\|_{\mathbb{H}})&\text{for $d=2$},\\ (\|\mathbf{u}_{1}\|_{\mathbb{V}}\|\Lambda\mathbf{u}_{1}\|_{\mathbb{H}}+C\|\mathbf{u}_{2}\| _{\mathbb{V}}\|\Lambda\mathbf{u}_{2}\|_{\mathbb{H}})&\text{for $d=3$},\end{array} \tag{3.47}\]
where we have used (3.38) also. The variation of constants formula then yields
\[\Lambda(t)\leq\Lambda(t_{0})\exp\Bigg{[}\frac{C\beta T^{\frac{1}{2}}}{\mu}\sum\limits_{i=1}^{2}\sup_{t\in[t_{0},T]}\|\mathbf{u}_{i}(t)\|_{\mathbb{V}}^{r-2}\Big{(}\cdots\Big{)}+\cdots\Bigg{]}, \tag{3.48}\]
Therefore, from (3.50), we easily have
\[-\frac{\mathrm{d}}{\mathrm{d}t}\log\|\mathbf{u}(t)\|_{\mathbb{H}}\leq 2\Lambda(t)+ \frac{C}{\mu^{\frac{4+d}{4-d}}}\bigg{(}\|\mathbf{u}_{1}(t)\|_{\widetilde{\mathbb{L}} ^{4}}^{\frac{8}{4-d}}+\|\mathbf{u}_{2}(t)\|_{\widetilde{\mathbb{L}}^{4}}^{\frac{8}{4 -d}}\bigg{)}. \tag{3.52}\]
According to (3.48) and (3.16), the right hand side of (3.52) is integrable on \((t_{0},t_{1})\), and this contradicts the fact that \(\mathbf{u}(t_{1})=\mathbf{0}\). Thus we are in case (i) of the alternative, and the backward uniqueness result follows.
For the case \(d=r=3\) and \(2\beta\mu\geq 1\), one can establish the backward uniqueness in the same way as in the previous case.
For the case \(d=2\) and \(r\in[1,3]\), we slightly modify the system (3.36) and consider
\[\left\{\begin{aligned} \frac{\mathrm{d}\mathbf{u}(t)}{ \mathrm{d}t}+\widetilde{\mathcal{A}}\mathbf{u}(t)&=-\bigg{[}\mathrm{ B}(\mathbf{u}_{1}(t),\mathbf{u}(t))+\mathrm{B}(\mathbf{u}(t),\mathbf{u}_{2}(t))\\ &\quad+\beta\int_{0}^{1}\mathcal{C}^{\prime}(\theta\mathbf{u}_{1}(t) +(1-\theta)\mathbf{u}_{2}(t))\mathrm{d}\theta\mathbf{u}(t)\bigg{]}=\widetilde{h}(\mathbf{ u}(t)),\\ \mathbf{u}(T)&=\mathbf{0},\end{aligned}\right. \tag{3.53}\]
where \(\widetilde{\mathcal{A}}\mathbf{u}=\mu\mathrm{A}\mathbf{u}+\alpha\mathbf{u}\). In this case, we denote by \(\widetilde{\Lambda}(t)\) the ratio
\[\widetilde{\Lambda}(t)=\frac{\langle\widetilde{\mathcal{A}}\mathbf{u}(t),\mathbf{u}(t )\rangle}{\|\mathbf{u}(t)\|_{\mathbb{H}}^{2}}=\frac{\mu\|\mathbf{u}(t)\|_{\mathbb{V}} ^{2}+\alpha\|\mathbf{u}(t)\|_{\mathbb{H}}^{2}}{\|\mathbf{u}(t)\|_{\mathbb{H}}^{2}} \geq\mu\frac{\|\mathbf{u}(t)\|_{\mathbb{V}}^{2}}{\|\mathbf{u}(t)\|_{\mathbb{H}}^{2}}.\]
Calculations similar to (3.41) and (3.42) provide
\[\frac{1}{2}\frac{\mathrm{d}\widetilde{\Lambda}}{\mathrm{d}t}+\frac{\| \widetilde{\mathcal{A}}\mathbf{u}-\widetilde{\Lambda}\mathbf{u}\|_{\mathbb{H}}^{2}}{ \|\mathbf{u}\|_{\mathbb{H}}^{2}}=\frac{\langle\widetilde{\mathcal{A}}\mathbf{u}- \widetilde{\Lambda}\mathbf{u},\widetilde{h}(\mathbf{u})\rangle}{\|\mathbf{u}\|_{\mathbb{H }}^{2}}\leq\frac{1}{2}\frac{\|\widetilde{\mathcal{A}}\mathbf{u}-\widetilde{\Lambda }\mathbf{u}\|_{\mathbb{H}}^{2}}{\|\mathbf{u}\|_{\mathbb{H}}^{2}}+\frac{1}{2}\frac{\| \widetilde{h}(\mathbf{u})\|_{\mathbb{H}}^{2}}{\|\mathbf{u}\|_{\mathbb{H}}^{2}}. \tag{3.54}\]
A bound similar to (3.48) can be obtained by demonstrating bounds akin to (3.45) and (3.51). Using Holder's, Gagliardo-Nirenberg's, Sobolev's and Young's inequalities, we find
\[\bigg{\|}\int_{0}^{1}\mathcal{C}^{\prime}(\theta\mathbf{u}_{1}+(1- \theta)\mathbf{u}_{2})\mathrm{d}\theta\mathbf{u}\bigg{\|}_{\mathbb{H}}^{2} \leq C\Big{(}\|\mathbf{u}_{1}\|_{\widetilde{\mathbb{L}}^{2(r+1)}}^{2(r- 1)}+\|\mathbf{u}_{2}\|_{\widetilde{\mathbb{L}}^{2(r+1)}}^{2(r-1)}\Big{)}\|\mathbf{u}\|_{ \widetilde{\mathbb{L}}^{r+1}}^{2}\] \[\leq C\Big{(}\|\mathbf{u}_{1}\|_{\mathbb{V}}^{2(r-1)}+\|\mathbf{u}_{2}\|_ {\mathbb{V}}^{2(r-1)}\Big{)}\|\mathbf{u}\|_{\mathbb{V}}^{2}, \tag{3.55}\]
and
\[\frac{1}{\|\mathbf{u}\|_{\mathbb{H}}^{2}}\bigg{|}\bigg{\langle}\int_{ 0}^{1}\mathcal{C}^{\prime}(\theta\mathbf{u}_{1}+(1-\theta)\mathbf{u}_{2})\mathrm{d} \theta\mathbf{u},\mathbf{u}\bigg{\rangle}\bigg{|}\] \[\leq\frac{1}{\|\mathbf{u}\|_{\mathbb{H}}^{2}}\bigg{\|}\int_{0}^{1} \mathcal{C}^{\prime}(\theta\mathbf{u}_{1}+(1-\theta)\mathbf{u}_{2})\mathrm{d}\theta \mathbf{u}\bigg{\|}_{\mathbb{V}}\|\mathbf{u}\|_{\mathbb{V}}\] \[\leq\frac{C}{\|\mathbf{u}\|_{\mathbb{H}}^{2}}\Big{(}\|\mathbf{u}_{1}\|_{ \widetilde{\mathbb{L}}^{r+1}}^{r-1}+\|\mathbf{u}_{2}\|_{\widetilde{\mathbb{L}}^{r+ 1}}^{r-1}\Big{)}\|\mathbf{u}\|_{\widetilde{\mathbb{L}}^{r+1}}\|\mathbf{u}\|_{\mathbb{V}}\] \[\leq\frac{C}{\|\mathbf{u}\|_{\mathbb{H}}^{2}}\Big{(}\|\mathbf{u}_{1}\|_{ \widetilde{\mathbb{L}}^{r+1}}^{r-1}+\|\mathbf{u}_{2}\|_{\widetilde{\mathbb{L}}^{r+ 1}}^{r-1}\Big{)}\|\mathbf{u}\|_{\mathbb{V}}^{\frac{2r}{r+1}}\|\mathbf{u}\|_{\mathbb{H}}^{ \frac{2}{r+1}}\] \[\leq\widetilde{\Lambda}+\frac{C}{\mu^{r+1}}\Big{(}\|\mathbf{u}_{1}\|_{ \widetilde{\mathbb{L}}^{r+1}}^{(r+1)(r-1)}+\|\mathbf{u}_{2}\|_{\widetilde{\mathbb{L}}^ {r+1}}^{(r+1)(r-1)}\Big{)}, \tag{3.56}\]
and one can complete the proof as in the previous case. We point out that the same method can be used for the cases \(d=2\), \(r\in(3,\infty)\) and \(d=3\), \(r\in[3,5]\) (\(2\beta\mu\geq 1\) for \(d=r=3\)) as well (see (5.9) below), but not for \(d=3,r\in(5,\infty)\). The primary benefit of this method is that we
only need the regularity \(\mathbf{u}\in\mathrm{C}((0,T];\mathbb{V})\cap\mathrm{L}^{2}(\epsilon,T;\mathrm{D}( \mathrm{A}))\cap\mathrm{L}^{r+1}(\epsilon,T;\widehat{\mathbb{L}}^{3(r+1)})\). For \(\alpha=\beta=0\), one can obtain the backward uniqueness results for 2D NSE also (cf. [24, 43, 50], etc.).
**Corollary 3.2**.: _Under the assumption of Theorem 3.1, either \(\mathbf{u}\) vanishes identically or \(\mathbf{u}\) never vanishes._
Proof.: Combining the forward uniqueness proved in Step 1 and the backward uniqueness proved in Step 3 of the proof of Theorem 3.1, one can conclude the proof.
## 4. Applications
As a consequence of the backward uniqueness result, we first establish the approximate controllability result with respect to the initial data. Then we use the backward uniqueness in attractor theory to show that the solution map is injective. Furthermore, we establish a crucial estimate on the global attractor which helps to prove that its Lipschitz deviation is zero. The uniqueness of Lagrangian trajectories in 2D and 3D CBF flows, as well as their continuity with respect to the Eulerian initial data, is also examined in this section.
### Approximate controllability
As an immediate consequence of the backward uniqueness result, we obtain the _approximate controllability with respect to the initial data, which is viewed as a start controller_ (cf. [5, 28], etc.) for 2D and 3D CBF equations. We follow the works [5, 12], etc. to obtain our required results.
**Definition 4.1**.: _Let \(T>0\). The system (2.11) is approximately controllable with respect to the initial data in time \(T\) if, for every \(\mathbf{u}_{1}\in\mathbb{H}\), and for every \(\epsilon>0\), there exists an \(\mathbf{x}\in\mathbb{H}\) such that the solution \(\mathbf{u}^{\mathbf{x}}\) of the problem (2.11) satisfies \(\|\mathbf{u}^{\mathbf{x}}(T)-\mathbf{u}_{1}\|_{\mathbb{H}}\leqslant\epsilon.\)_
Corollary 3.2 implies the lack of null-controllability of the model (2.11) with respect to the initial data, asserting that there are no non-trivial initial data that can be steered to zero.
In order to establish the approximate controllability with respect to initial data, we need the following result from [6].
**Lemma 4.2** (Proposition IV.1, [6]).: _Let \(\mathrm{S}(t)\) be a family of continuous nonlinear operators in \(\mathbb{H}\). We assume that for a \(\mathbf{u}_{0}\in\mathbb{H}\), the map \(\mathbf{x}\mapsto\mathrm{S}(t)(\mathbf{x})\) is Frechet differentiable at the point \(\mathbf{u}_{0}\) and denote its Frechet derivative by \(D\mathrm{S}(t)(\mathbf{u}_{0})\). If the operator \((D\mathrm{S}(t)(\mathbf{u}_{0}))^{*}\) is injective, then the subspace generated by \(\mathrm{S}(t)(\mathbb{H})\) is dense in \(\mathbb{H}\)._
**Theorem 4.3** (Approximate controllability).: _The space \(\{\mathbf{u}^{\mathbf{x}}(T):\mathbf{x}\in\mathbb{H}\}\) is dense in \(\mathbb{H}\), where \(\mathbf{u}^{\mathbf{x}}(\cdot)\) is the unique solution to the system (2.11)._
Proof.: Let us define the semiflow (nonlinear semigroup) \(\mathrm{S}(t):\mathbb{H}\to\mathbb{H}\) by
\[\mathrm{S}(t)(\mathbf{x})=\mathbf{u}^{\mathbf{x}}(t),\ t\in[0,T],\]
where \(\mathbf{u}^{\mathbf{x}}(\cdot)\) is the unique solution to the system (2.11). It can be easily seen that \(\mathrm{S}(T)\) is Frechet differentiable on \(\mathbb{H}\) and its Frechet derivative \(\Gamma:\mathbb{H}\to\mathbb{H}\) is given by \(\Gamma\mathbf{y}=D\mathrm{S}(T)(\mathbf{x})\mathbf{y}=\mathbf{v}(T)\), where \(\mathbf{v}\in\mathrm{C}([0,T];\mathbb{H})\) is the unique solution of the equation
\[\begin{cases}\frac{\mathrm{d}\mathbf{v}(t)}{\mathrm{d}t}+\mu\mathrm{A}\mathbf{v}(t)+\mathrm{B}^{\prime}(\mathbf{u}(t))\mathbf{v}(t)+\alpha\mathbf{v}(t)+\beta\mathcal{C}^{\prime}(\mathbf{u}(t))\mathbf{v}(t)=\mathbf{0},\\ \mathbf{v}(0)=\mathbf{y},\end{cases} \tag{4.1}\]
where \(\langle\mathrm{B}^{\prime}(\mathbf{u})\mathbf{v},\mathbf{w}\rangle=\langle\mathrm{B}(\mathbf{u},\mathbf{v})+\mathrm{B}(\mathbf{v},\mathbf{u}),\mathbf{w}\rangle=b(\mathbf{u},\mathbf{v},\mathbf{w})+b(\mathbf{v},\mathbf{u},\mathbf{w})\), and \(\mathcal{C}^{\prime}(\cdot)\) is defined in (2.7). As the system (4.1) is linear and the base state \(\mathbf{u}(\cdot)\) has the regularity established in the proof of Theorem 3.1, the existence and uniqueness of a weak solution of the system can
be obtained by using a standard Faedo-Galerkin method (see [35]). Then, the dual operator \(\Gamma^{*}:\mathbb{H}\to\mathbb{H}\) is given by \(\Gamma^{*}\boldsymbol{p}=\boldsymbol{z}(0)\), for all \(\boldsymbol{p}\in\mathbb{H}\), where \(\boldsymbol{z}(\cdot)\) is the solution to the following backward dual equation:
\[\begin{cases}-\frac{\mathrm{d}\boldsymbol{z}(t)}{\mathrm{d}t}+\mu \Lambda\boldsymbol{z}(t)+(\mathrm{B}^{\prime}(\boldsymbol{u}(t)))^{*} \boldsymbol{z}(t)+\alpha\boldsymbol{z}(t)+\beta\mathcal{C}^{\prime}( \boldsymbol{u}(t))\boldsymbol{z}(t)=\boldsymbol{0},\\ \boldsymbol{z}(T)=\boldsymbol{p},\end{cases} \tag{4.2}\]
where \(\langle(\mathrm{B}^{\prime}(\boldsymbol{u}))^{*}\boldsymbol{v},\boldsymbol{w}\rangle=\langle\boldsymbol{v},\mathrm{B}^{\prime}(\boldsymbol{u})\boldsymbol{w}\rangle=b(\boldsymbol{v},\boldsymbol{u},\boldsymbol{w})+b(\boldsymbol{v},\boldsymbol{w},\boldsymbol{u})\), for \(\boldsymbol{u},\boldsymbol{v},\boldsymbol{w}\in\mathbb{V}\). The system (4.2) is well-posed for all \(\boldsymbol{p}\in\mathbb{H}\) (see [35]). The injectivity of the operator \(\Gamma^{*}\) on \(\mathbb{H}\) is a consequence of the following backward uniqueness result (Theorem 3.1):
\[\left\{-\frac{\mathrm{d}\boldsymbol{z}(t)}{\mathrm{d}t}+\mu \Lambda\boldsymbol{z}(t)+(\mathrm{B}^{\prime}(\boldsymbol{u}(t)))^{*} \boldsymbol{z}(t)+\alpha\boldsymbol{z}(t)+\beta\mathcal{C}^{\prime}( \boldsymbol{u}(t))\boldsymbol{z}(t)=\boldsymbol{0},\right\}\Rightarrow \boldsymbol{z}\equiv\boldsymbol{0}. \tag{4.3}\]
If \(\mathcal{N}(\Gamma^{*})\) and \(\mathcal{R}(\Gamma)\) denote the null space and range space of the operators \(\Gamma^{*}\) and \(\Gamma\), respectively, then \(\mathcal{N}(\Gamma^{*})=\mathcal{R}(\Gamma)^{\perp}\). Suppose that \(\boldsymbol{u}_{1}\in\mathbb{H}\) is orthogonal to the range of \(\Gamma\) in \(\mathbb{H}\). We solve the system (4.2) with \(\boldsymbol{z}(T)=\boldsymbol{u}_{1}\). Let \(\boldsymbol{v}(\cdot)\) be a solution of (4.1) with \(\boldsymbol{v}(0)=\boldsymbol{u}_{0}\) for any \(\boldsymbol{u}_{0}\in\mathbb{H}\). Multiplying the equation satisfied by \(\boldsymbol{z}(\cdot)\) by \(\boldsymbol{v}(\cdot)\) and integrating over \([0,T]\), we get
\[(\boldsymbol{v}(T),\boldsymbol{u}_{1})=(\boldsymbol{u}_{0},\boldsymbol{z}(0)),\ \ \text{for all}\ \ \boldsymbol{u}_{0}\in\mathbb{H}.\]
Since \(\boldsymbol{u}_{1}\) is orthogonal to \(\boldsymbol{v}(T)\) in \(\mathbb{H}\), we deduce that
\[(\boldsymbol{u}_{0},\boldsymbol{z}(0))=0,\ \ \text{for all}\ \ \boldsymbol{u}_{0}\in\mathbb{H}.\]
But then \(\boldsymbol{z}(0)=\boldsymbol{0}\) in \(\mathbb{H}\) and by the backward uniqueness result \(\boldsymbol{z}\equiv\boldsymbol{0}\) in \(\mathbb{H}\) and \(\boldsymbol{u}_{1}=\boldsymbol{0}\). The injectivity of the operator \(\Gamma^{*}\) in \(\mathbb{H}\) and Proposition IV.1, [6] (see Lemma 4.2) imply that the space \(\{\mathrm{S}(T)(\boldsymbol{x}):\boldsymbol{x}\in\mathbb{H}\}\) is dense in \(\mathbb{H}\), and the approximate controllability result follows.
### The Lipschitz deviation
In this section, we discuss another consequence of the backward uniqueness result in connection with the global attractors for 2D and 3D CBF equations (cf. [24, 43] for 2D NSE). We prove that the Lipschitz deviation of the global attractor of the 2D and 3D CBF equations is zero. Similar results for 2D NSE have been obtained in [40, 43], etc., and we follow these works to establish the required result.
Inspired by [20], the authors in [37] defined a new quantity called _Lipschitz deviation_, which measures how well a compact set \(\mathrm{X}\) in a Hilbert space \((\mathcal{H},\|\cdot\|_{\mathcal{H}})\) can be approximated by graphs of Lipschitz functions (with prescribed Lipschitz constant) defined over a finite dimensional subspace of \(\mathcal{H}\). The _Hausdorff semidistance_ between two non-empty subsets \(\mathrm{X},\mathrm{Y}\subseteq\mathcal{H}\) is defined by
\[\mathrm{dist}(\mathrm{X},\mathrm{Y}):=\sup_{x\in\mathrm{X}}\inf_{y\in\mathrm{Y} }\|x-y\|_{\mathcal{H}}.\]
**Definition 4.4** (Definition 2.1, [40]).: _Let \(\mathrm{X}\) be a compact subset of a real Hilbert space \(\mathcal{H}\). Let \(\delta_{m}(\mathrm{X},\varepsilon)\) be the smallest dimension of a linear subspace \(\mathrm{U}\subset\mathcal{H}\) such that_
\[\mathrm{dist}(\mathrm{X},\mathrm{G}_{\mathrm{U}}[\varphi])<\varepsilon,\]
_for some \(m\)-Lipschitz function \(\varphi:\mathrm{U}\to\mathrm{U}^{\perp}\), that is,_
\[\|\varphi(u)-\varphi(v)\|_{\mathcal{H}}\leqslant m\|u-v\|_{\mathcal{H}},\ \ \text{for all}\ \ u,v\in\mathrm{U},\]
_where \(\mathrm{U}^{\perp}\) is the orthogonal complement of \(\mathrm{U}\) in \(\mathcal{H}\) and \(\mathrm{G}_{\mathrm{U}}[\varphi]\) is the graph of \(\varphi\) over \(\mathrm{U}\):_
\[\mathrm{G}_{\mathrm{U}}[\varphi]=\{u+\varphi(u):u\in\mathrm{U}\}.\]
_The \(m\)-Lipschitz deviation of \(\mathrm{X}\), \(\mathrm{dev}_{m}(\mathrm{X})\), is given by_
\[\mathrm{dev}_{m}(\mathrm{X})=\limsup_{\varepsilon\to 0}\frac{\log\delta_{m}( \mathrm{X},\varepsilon)}{-\log\varepsilon}.\]
_The Lipschitz deviation of \(\mathrm{X}\), \(\mathrm{dev}(\mathrm{X})\), is given by_
\[\mathrm{dev}(\mathrm{X})=\lim_{m\to\infty}\mathrm{dev}_{m}(\mathrm{X}).\]
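As a simple illustration of Definition 4.4 (a sanity check only, not used in the sequel): if the compact set \(\mathrm{X}\) is contained in a finite dimensional subspace \(\mathrm{U}\subset\mathcal{H}\), then the graph of the zero map \(\varphi\equiv 0\) over \(\mathrm{U}\) already contains \(\mathrm{X}\), so that \(\delta_{m}(\mathrm{X},\varepsilon)\leq\dim\mathrm{U}\) for every \(\varepsilon>0\) and every \(m>0\), and therefore
\[\mathrm{dev}_{m}(\mathrm{X})=\limsup_{\varepsilon\to 0}\frac{\log\delta_{m}(\mathrm{X},\varepsilon)}{-\log\varepsilon}\leq\limsup_{\varepsilon\to 0}\frac{\log(\dim\mathrm{U})}{-\log\varepsilon}=0,\]
so that \(\mathrm{dev}(\mathrm{X})=0\) for such sets.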
From [25, Theorem 3.5] (see [35] for 2D CBF equations), we know that under the condition \(\boldsymbol{f}\in\mathbb{H}\), independent of \(t\), the system (2.11) possesses a global attractor
\[\mathscr{A}=\bigg{\{}\boldsymbol{x}\in\mathbb{H}:\mathrm{S}(t)(\boldsymbol{x}) \text{ exists for all }t\in\mathbb{R},\text{ and }\sup_{t\in\mathbb{R}}\|\mathrm{S}(t)(\boldsymbol{x})\|_{ \mathbb{H}}<\infty\bigg{\}},\]
where \(\mathrm{S}(t)(\boldsymbol{x})\) denotes the solution to the system (2.11) starting at \(\boldsymbol{x}\).
**Lemma 4.5**.: _The semigroup \(\mathrm{S}(t):\mathbb{H}\to\mathbb{H}\) is injective, for every \(t>0\)._
Proof.: Suppose that \(\boldsymbol{x},\boldsymbol{y}\in\mathbb{H}\) and \(\boldsymbol{u}(T)=\boldsymbol{v}(T)\), that is, \(\mathrm{S}(T)(\boldsymbol{x})=\mathrm{S}(T)(\boldsymbol{y})\), for some \(T>0\). Then by Theorem 3.1, we know that \(\mathrm{S}(t)(\boldsymbol{x})=\mathrm{S}(t)(\boldsymbol{y})\) for all \(t\in[0,T]\) and in particular \(\boldsymbol{x}=\boldsymbol{y}\) and hence the injectivity of \(\mathrm{S}(t):\mathbb{H}\to\mathbb{H}\) follows.
If the semigroup \(\mathrm{S}(t)\) is injective on \(\mathscr{A}\), then the dynamics, restricted to \(\mathscr{A}\), actually define a dynamical system, that is, \(\mathrm{S}(t)\big{|}_{\mathscr{A}}\) makes sense for all \(t\in\mathbb{R}\), not just for \(t\geq 0\) and \(\mathrm{S}(t)\mathscr{A}=\mathscr{A}\) for all \(t\in\mathbb{R}\) (Theorem 13, [43]).
The following results for 2D NSE can be found in [24, Theorem 3.1], [41, Theorem 5.1], [43, Proposition 43], [44, Theorem 13.3], etc.
**Theorem 4.6**.: _For \(d=2\), \(r\in[1,\infty)\) and \(d=3\), \(r\in[3,\infty)\) (\(2\beta\mu\geq 1\) for \(d=r=3\)), under the above assumptions, we have_
\[\sup_{\boldsymbol{u}_{1},\boldsymbol{u}_{2}\in\mathscr{A},\boldsymbol{u}_{1} \neq\boldsymbol{u}_{2}}\frac{\|\boldsymbol{u}_{1}-\boldsymbol{u}_{2}\|_{ \mathbb{V}}^{2}}{\|\boldsymbol{u}_{1}-\boldsymbol{u}_{2}\|_{\mathbb{H}}^{2} \log\Bigl{(}\frac{M_{0}^{2}}{\|\boldsymbol{u}_{1}-\boldsymbol{u}_{2}\|_{ \mathbb{H}}^{2}}\Bigr{)}}<\infty, \tag{4.4}\]
_where \(M_{0}\geq 4\sup_{\boldsymbol{x}\in\mathscr{A}}\|\boldsymbol{x}\|_{\mathbb{H}}\)._
Proof.: For any \(\boldsymbol{x}\in\mathbb{H}\), from [25, Theorem 3.4], we infer that \(\|\mathrm{S}(t)(\boldsymbol{x})\|_{\mathbb{V}}\leq M_{1}\) for sufficiently large \(t\). By using estimates similar to (3.27) and (3.35) on the time derivative, one can show further that \(\|\mathrm{AS}(t)(\boldsymbol{x})\|_{\mathbb{H}}\leq M_{2}\) for sufficiently large \(t\) (see (4.33) below and [42, Proposition 12.4] for 2D NSE). In particular, \(\mathscr{A}\) is bounded in \(\mathrm{D}(\mathrm{A})\). From the definition of \(M_{0}\), it is clear that \(\log\Bigl{(}\frac{M_{0}^{2}}{\|\boldsymbol{u}_{1}-\boldsymbol{u}_{2}\|_{\mathbb{H}}^{2}}\Bigr{)}\geq\log 4\geq 1\).
Let us now consider a "log-Dirichlet's quotient" (cf. [24])
\[\widetilde{\mathrm{Q}}(t)=\frac{\widetilde{\Lambda}(t)}{\log\Bigl{(}\frac{M_{0 }^{2}}{\|\boldsymbol{u}(t)\|_{\mathbb{H}}^{2}}\Bigr{)}}.\]
Then, differentiating \(\widetilde{\mathrm{Q}}(t)\) with respect to \(t\) and then using (3.47), (3.51) and (3.56), we find
\[\frac{\mathrm{d}\widetilde{\mathrm{Q}}}{\mathrm{d}t}=\frac{\log\Bigl{(}\frac{ M_{0}^{2}}{\|\boldsymbol{u}\|_{\mathbb{H}}^{2}}\Bigr{)}\frac{\mathrm{d} \widetilde{\Lambda}}{\mathrm{d}t}+\widetilde{\Lambda}\frac{\mathrm{d}}{\mathrm{ d}t}\log(\|\boldsymbol{u}\|_{\mathbb{H}}^{2})}{\Bigl{[}\log\Bigl{(}\frac{M_{0}^{2}}{ \|\boldsymbol{u}(t)\|_{\mathbb{H}}^{2}}\Bigr{)}\Bigr{]}^{2}}=\frac{\log\Bigl{(} \frac{M_{0}^{2}}{\|\boldsymbol{u}\|_{\mathbb{H}}^{2}}\Bigr{)}\frac{\mathrm{d} \widetilde{\Lambda}}{\mathrm{d}t}+\widetilde{\Lambda}\frac{2}{\|\boldsymbol{u} \|_{\mathbb{H}}^{2}}\langle\partial_{t}\boldsymbol{u},\boldsymbol{u}\rangle}{ \Bigl{[}\log\Bigl{(}\frac{M_{0}^{2}}{\|\boldsymbol{u}(t)\|_{\mathbb{H}}^{2}} \Bigr{)}\Bigr{]}^{2}}\]
\[\leqslant k_{1}(t)\widetilde{\mathrm{Q}}+\frac{-2\widetilde{\Lambda}^{2}+2\widetilde{\Lambda}\frac{(\widetilde{h}(\mathbf{u}),\mathbf{u})}{\|\mathbf{u}\|_{\mathbb{H}}^{2}}}{\left[\log\Bigl{(}\frac{M_{0}^{2}}{\|\mathbf{u}(t)\|_{\mathbb{H}}^{2}}\Bigr{)}\right]^{2}}\leqslant k_{1}(t)\widetilde{\mathrm{Q}}-\widetilde{\mathrm{Q}}^{2}+k_{2}(t)\frac{\widetilde{\mathrm{Q}}}{\log\Bigl{(}\frac{M_{0}^{2}}{\|\mathbf{u}(t)\|_{\mathbb{H}}^{2}}\Bigr{)}}\leqslant-\widetilde{\mathrm{Q}}^{2}+(k_{1}(t)+k_{2}(t))\widetilde{\mathrm{Q}}, \tag{4.5}\]
where
\[k_{1}=\left\{\begin{array}{ll}\frac{C}{\mu}(\|\mathbf{u}_{1}\|_{ \mathbb{H}}\|\mathbb{A}\mathbf{u}_{1}\|_{\mathbb{H}}+C\|\mathbf{u}_{2}\|_{\mathbb{H}} \|\Lambda\mathbf{u}_{2}\|_{\mathbb{H}})+\frac{C\beta}{\mu}\Bigl{(}\|\mathbf{u}_{1}\|_{ \mathbb{V}}^{2(r-1)}+\|\mathbf{u}_{2}\|_{\mathbb{V}}^{2(r-1)}\Bigr{)},&\text{for $d=2$},\\ \frac{C}{\mu}(\|\mathbf{u}_{1}\|_{\mathbb{V}}\|\mathbb{A}\mathbf{u}_{1}\|_{\mathbb{H}} +C\|\mathbf{u}_{2}\|_{\mathbb{V}}\|\Lambda\mathbf{u}_{2}\|_{\mathbb{H}})\\ +\frac{C\beta}{\mu}\biggl{(}\|\mathbf{u}_{1}\|_{\mathbb{L}^{r-1}_{r+1}}^{\frac{-1} {2}}\|\mathbf{u}_{1}\|_{\mathbb{L}^{3(r+1)}}^{\frac{3(r-1)}{2}}+\|\mathbf{u}_{2}\|_{ \mathbb{L}^{r+1}}^{\frac{r-1}{2}}\|\mathbf{u}_{2}\|_{\mathbb{L}^{3(r+1)}}^{\frac{3 (r-1)}{2}}\biggr{)},&\text{for $d=3$},\\ k_{2}=\left\{\begin{array}{ll}\frac{C}{\mu^{4}+d}\biggl{(}\|\mathbf{u}_{1}\|_{ \mathbb{L}^{4}}^{\frac{8}{4-d}}+\|\mathbf{u}_{2}\|_{\mathbb{L}^{4}}^{\frac{8}{4-d }}\biggr{)}+\frac{C}{\mu^{r+1}}\Bigl{(}\|\mathbf{u}_{1}\|_{\mathbb{L}^{r+1}}^{(r+1 )(r-1)}+\|\mathbf{u}_{2}\|_{\mathbb{L}^{r+1}}^{(r+1)(r-1)}\Bigr{)},&\\ \hskip 142.26378pt\text{for $d=2,r\in[1,\infty)$ \ and \ $d=3,r\in[3,5]$},\\ \frac{C}{\mu^{7}}\Bigl{(}\|\mathbf{u}_{1}\|_{\mathbb{L}^{4}}^{8}+\|\mathbf{u}_{2}\|_{ \mathbb{L}^{4}}^{8}\Bigr{)}+\frac{C}{\mu^{4}}\Bigl{(}\|\mathbf{u}_{1}\|_{\mathbb{ L}^{2(r-1)}}^{4(r-1)}+\|\mathbf{u}_{2}\|_{\mathbb{L}^{2(r-1)}}^{4(r-1)}\Bigr{)},&\\ \hskip 142.26378pt\text{for $d=3,r\in[5,\infty)$}.\end{array}\right.\]
By defining \(\upsilon=\widetilde{\mathrm{Q}}^{-1}\), from (4.5), we deduce
\[\frac{\mathrm{d}\upsilon}{\mathrm{d}t}+(k_{1}(t)+k_{2}(t))\upsilon\geqslant 1.\]
An application of the variation of constants formula yields
\[\widetilde{\mathrm{Q}}(t)\leqslant\frac{\widetilde{\mathrm{Q}}(0)\exp\Bigl{(}\int_{0}^{t}(k_{1}(s)+k_{2}(s))\mathrm{d}s\Bigr{)}}{1+\widetilde{\mathrm{Q}}(0)\int_{0}^{t}\exp\bigl{(}\int_{0}^{s}(k_{1}(r)+k_{2}(r))\mathrm{d}r\bigr{)}\mathrm{d}s}. \tag{4.6}\]
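For the reader's convenience, the elementary computation behind (4.6) is as follows (writing \(k=k_{1}+k_{2}\) and \(K(t)=\int_{0}^{t}k(s)\mathrm{d}s\)): the differential inequality for \(\upsilon\) gives
\[\frac{\mathrm{d}}{\mathrm{d}t}\Big{(}e^{K(t)}\upsilon(t)\Big{)}=e^{K(t)}\bigg{(}\frac{\mathrm{d}\upsilon(t)}{\mathrm{d}t}+k(t)\upsilon(t)\bigg{)}\geq e^{K(t)},\quad\text{so that}\quad\upsilon(t)\geq e^{-K(t)}\bigg{(}\upsilon(0)+\int_{0}^{t}e^{K(s)}\mathrm{d}s\bigg{)},\]
and inverting (recall \(\upsilon=\widetilde{\mathrm{Q}}^{-1}\)) gives exactly the bound (4.6).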
As \(t\to\infty\), the right hand side of (4.6) tends to \(k_{1}(t)+k_{2}(t)\leqslant C(M_{1},M_{2})\) (since \(\mathrm{D}(\mathrm{A})\subset\mathbb{L}^{p}\) for any \(p\in[2,\infty)\)). Therefore, we obtain that there exists \(T\) such that
\[\widetilde{\mathrm{Q}}(t)\leqslant C(M_{1},M_{2})\ \text{ for all }\ t\geqslant T,\]
where \(C(M_{1},M_{2})\) and \(T\) are independent of \(\widetilde{\mathrm{Q}}(0)\).
Let us now consider \(\mathbf{x},\mathbf{y}\in\mathscr{A}\). Since solutions in the attractor exist for all time, we know there exists \(t\geqslant T\) such that \(\mathbf{x}=\mathrm{S}(t)(\mathbf{u}_{1}(-t))\) and \(\mathbf{y}=\mathrm{S}(t)(\mathbf{u}_{2}(-t))\) with \(\mathbf{x}\neq\mathbf{y}\). Since \(\mathrm{S}(\cdot)\) is injective (Lemma 4.5), \(\mathbf{u}_{1}(-t)\neq\mathbf{u}_{2}(-t)\). Moreover, \(\widetilde{\mathrm{Q}}(-t)<\infty\) implies that \(\widetilde{\mathrm{Q}}(0)\leqslant C(M_{1},M_{2})\).
\[\sup_{\mathbf{x},\mathbf{y}\in\mathscr{A},\ \mathbf{x}\neq\mathbf{y}}\widetilde{\mathrm{Q}}(t) \leqslant C(M_{1},M_{2}),\]
so that (4.4) follows.
It should be noted that whether (4.4) holds without the factor \(\log\Bigl{(}\frac{M_{0}^{2}}{|\mathbf{u}_{1}-\mathbf{u}_{2}|_{\mathbb{H}}^{2}}\Bigr{)}\) is an open problem. The above result can be used to obtain the 1-log-Lipschitz continuity of \(\mathrm{A}:\mathscr{A}\to\mathbb{H}\) (cf. [41, Corollary 5.2] for 2D NSE).
**Corollary 4.7**.: _There exists a constant \(\widehat{K}>0\) such that_
\[\|\mathrm{A}(\mathbf{u}_{1}-\mathbf{u}_{2})\|_{\mathbb{H}}\leqslant\widehat{K}\|\mathbf{u}_{1 }-\mathbf{u}_{2}\|_{\mathbb{H}}\log\Biggl{(}\frac{\widehat{M}_{0}^{2}}{\|\mathbf{u}_{1}- \mathbf{u}_{2}\|_{\mathbb{H}}^{2}}\Biggr{)},\ \text{for all }\ \mathbf{u}_{1},\mathbf{u}_{2}\in\mathscr{A},\ \mathbf{u}_{1}\neq\mathbf{u}_{2}, \tag{4.7}\]
_for some \(\widehat{M}_{0}\geq 4\sup\limits_{\mathbf{x}\in\mathscr{A}}\|\mathrm{A}^{\frac{1}{2}}\mathbf{x}\|_{\mathbb{H}}\)._
Proof.: We use the fact that \(\mathscr{A}\) is bounded in \(\mathrm{D}(\mathrm{A})\). Let us now consider \(\widetilde{h}(\mathbf{u})\) from (3.53) and estimate \(\|\mathrm{A}^{\frac{1}{2}}\widetilde{h}(\mathbf{u})\|_{\mathbb{H}}\) using fractional Leibniz rule ([15, Theorem 1]), and Sobolev's inequality as
\[\|\mathrm{A}^{\frac{1}{2}}\widetilde{h}(\mathbf{u})\|_{\mathbb{H}}\] \[\leq\|\mathrm{A}^{\frac{1}{2}}\mathrm{B}(\mathbf{u}_{1},\mathbf{u})\|_{ \mathbb{H}}+\|\mathrm{A}^{\frac{1}{2}}\mathrm{B}(\mathbf{u},\mathbf{u}_{2})\|_{ \mathbb{H}}+\beta\bigg{\|}\mathrm{A}^{\frac{1}{2}}\bigg{(}\int_{0}^{1} \mathcal{C}^{\prime}(\theta\mathbf{u}_{1}+(1-\theta)\mathbf{u}_{2})\mathrm{d}\theta \mathbf{u}\bigg{)}\bigg{\|}_{\mathbb{H}}\] \[\leq C\|\mathrm{A}(\mathbf{u}_{1}\otimes\mathbf{u})\|_{\mathbb{H}}+\| \mathrm{A}(\mathbf{u}\otimes\mathbf{u}_{2})\|_{\mathbb{H}}+C\beta\|\mathrm{A}^{\frac{1 }{2}}(|\mathbf{u}_{1}|^{r-1}\mathbf{u})\|_{\mathbb{H}}+C\beta\|\mathrm{A}^{\frac{1}{2} }(|\mathbf{u}_{2}|^{r-1}\mathbf{u})\|_{\mathbb{H}}\] \[\leq C(\|\mathbf{u}_{1}\|_{\mathbb{H}}+\|\mathrm{A}\mathbf{u}_{2}\|_{ \mathbb{H}})\|\mathbf{u}\|_{\widetilde{\mathbb{L}}^{\infty}}+C\big{(}\|\mathbf{u}_{1} \|_{\widetilde{\mathbb{L}}^{\infty}}+\|\mathbf{u}_{2}\|_{\widetilde{\mathbb{L}}^{ \infty}}\big{)}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}\] \[\quad+C\beta\bigg{(}\|\mathbf{u}_{1}\|_{\mathbb{L}^{\infty}}^{r-1}+\| \mathbf{u}_{2}\|_{\widetilde{\mathbb{L}}^{\infty}}^{r-1}\bigg{)}\|\mathrm{A}^{ \frac{1}{2}}\mathbf{u}\|_{\mathbb{H}}+C\beta\Big{(}\|\mathbf{u}_{1}\|_{\widetilde{ \mathbb{L}}^{\infty}}^{r-2}\|\mathrm{A}^{\frac{1}{2}}\mathbf{u}_{1}\|_{\mathbb{H} }+\|\mathbf{u}_{2}\|_{\widetilde{\mathbb{L}}^{\infty}}^{r-2}\|\mathrm{A}^{\frac{1 }{2}}\mathbf{u}_{2}\|_{\mathbb{H}}\Big{)}\|\mathbf{u}\|_{\widetilde{\mathbb{L}}^{ \infty}}\] \[\leq C(M_{1},M_{2})\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}, \tag{4.8}\]
for sufficiently large \(t\). Therefore, calculations similar to the proof of Theorem 4.6 yield for all \(\mathbf{u}_{1},\mathbf{u}_{2}\in\mathscr{A}\) with \(\mathbf{u}_{1}\neq\mathbf{u}_{2}\),
\[\|\mathrm{A}(\mathbf{u}_{1}-\mathbf{u}_{2})\|_{\mathbb{H}}^{2}\leq C_{1}\| \mathrm{A}^{\frac{1}{2}}(\mathbf{u}_{1}-\mathbf{u}_{2})\|_{\mathbb{H}}^{2}\log\!\Bigg{(} \frac{\widehat{M}_{0}^{2}}{\|\mathrm{A}^{\frac{1}{2}}(\mathbf{u}_{1}-\mathbf{u}_{2})\| _{\mathbb{H}}^{2}}\Bigg{)}, \tag{4.9}\]
where \(\widehat{M}_{0}\geq 4\sup\limits_{\mathbf{x}\in\mathscr{A}}\|\mathrm{A}^{\frac{1}{2}}\mathbf{x}\|_{\mathbb{H}}\) and \(C_{1}\) is a constant. Therefore, combining (1.5) and (4.9), we obtain
\[\|\mathrm{A}(\mathbf{u}_{1}-\mathbf{u}_{2})\|_{\mathbb{H}}^{2}\leq C_{0}C_{1}\|\mathbf{u}_{ 1}-\mathbf{u}_{2}\|_{\mathbb{H}}^{2}\log\!\left(\frac{M_{0}^{2}}{\|\mathbf{u}_{1}-\mathbf{ u}_{2}\|_{\mathbb{H}}^{2}}\right)\log\!\Bigg{(}\frac{\widehat{M}_{0}^{2}}{\| \mathrm{A}^{\frac{1}{2}}(\mathbf{u}_{1}-\mathbf{u}_{2})\|_{\mathbb{H}}^{2}}\Bigg{)}.\]
Since \(\|\mathbf{u}_{1}-\mathbf{u}_{2}\|_{\mathbb{H}}\leq\frac{1}{\sqrt{\lambda_{1}}}\| \mathrm{A}^{\frac{1}{2}}(\mathbf{u}_{1}-\mathbf{u}_{2})\|_{\mathbb{H}}\), we further have
\[\|\mathrm{A}(\mathbf{u}_{1}-\mathbf{u}_{2})\|_{\mathbb{H}}^{2}\leq C_{0}C_{1}\|\mathbf{u}_{ 1}-\mathbf{u}_{2}\|_{\mathbb{H}}^{2}\log\!\left(\frac{M_{0}^{2}}{\|\mathbf{u}_{1}-\mathbf{ u}_{2}\|_{\mathbb{H}}^{2}}\right)\log\!\Bigg{(}\frac{\widehat{M}_{0}^{2}}{\lambda_{1} \|\mathbf{u}_{1}-\mathbf{u}_{2}\|_{\mathbb{H}}^{2}}\Bigg{)}. \tag{4.10}\]
One can choose \(M_{0}\) and \(\widehat{M}_{0}\) such that \(M_{0}\leq\frac{\widehat{M}_{0}}{\sqrt{\lambda_{1}}}\leq\widehat{M}_{0}\). Therefore, we finally derive (4.7) for \(\widehat{K}=\sqrt{C_{0}C_{1}}\).
Using either Theorem 4.6 or Corollary 4.7, it follows from [41, Proposition 4.2] (or Corollary 4.4) that there exists a family of approximating Lipschitz manifolds \(\mathcal{M}_{N}\) (\(N=2M_{0}e^{\frac{-\lambda_{n+1}}{2C_{0}}}\), see (4.12) below), such that the global attractor \(\mathscr{A}\) associated with the 2D and 3D CBF equations lies within an exponentially small neighbourhood of \(\mathcal{M}_{N}\) and hence has zero Lipschitz deviation.
**Theorem 4.8**.: _For \(d=2\), \(r\in[1,\infty)\) and \(d=3\), \(r\in[3,\infty)\) (\(2\beta\mu\geq 1\) for \(d=r=3\)), if \(\mathbf{f}\in\mathbb{H}\), then \(\mathrm{dev}(\mathscr{A})=0\), where \(\mathscr{A}\) is the global attractor for the 2D and 3D CBF equations._
Proof.: Let \(\mathrm{P}_{n}\) be the orthogonal projection onto the first \(n\) eigenfunctions of the Stokes operator \(\mathrm{A}\) and \(\mathrm{Q}_{n}=\mathrm{I}-\mathrm{P}_{n}\) so that
\[\mathrm{P}_{n}\mathbf{x}=\sum_{k=1}^{n}(\mathbf{x},\mathbf{e}_{k})\mathbf{e}_{k}\ \ \text{and}\ \ \mathrm{Q}_{n}\mathbf{x}=\sum_{k=n+1}^{\infty}(\mathbf{x},\mathbf{e}_{k})\mathbf{e}_{k}.\]
Consider a subset X of \(\mathscr{A}\) that is maximal for the relation
\[\|\mathrm{Q}_{n}(\mathbf{x}-\mathbf{y})\|_{\mathbb{H}}\leqslant\|\mathrm{P}_{n}(\mathbf{x}- \mathbf{y})\|_{\mathbb{H}},\ \ \text{for all}\ \ \mathbf{x},\mathbf{y}\in\mathrm{X}. \tag{4.11}\]
For every \(\mathbf{p}\in\mathrm{P}_{n}\mathrm{X}\) with \(\mathbf{p}=\mathrm{P}_{n}\mathbf{x},\ \mathbf{x}\in\mathrm{X}\), define \(\phi_{n}(\mathbf{p})=\mathrm{Q}_{n}\mathbf{x}.\) From (4.11), this is well-defined and
\[\|\phi_{n}(\mathbf{p})-\phi_{n}(\mathbf{p}^{\prime})\|_{\mathbb{H}}\leqslant\|\mathbf{p}- \mathbf{p}^{\prime}\|_{\mathbb{H}},\ \ \text{for all}\ \ \mathbf{p},\mathbf{p}^{\prime}\in\mathrm{P}_{n}\mathrm{X}.\]
Since X is a closed subset of the compact set \(\mathscr{A}\), \(\mathrm{P}_{n}\mathrm{X}\) is closed. Using [51, Theorem 12.3], we can extend \(\phi_{n}\) from the closed set \(\mathrm{P}_{n}\mathrm{X}\subset\mathrm{P}_{n}\mathbb{H}\) to a function \(\Phi:\mathrm{P}_{n}\mathbb{H}\to\mathrm{Q}_{n}\mathbb{H}\), preserving the Lipschitz constant. Our aim is to show that
\[\mathrm{dist}(\mathscr{A},\mathrm{G}_{\mathrm{P}_{n}\mathbb{H}}[\Phi]) \leqslant\varepsilon_{n}=2M_{0}e^{\frac{-\lambda_{n+1}}{2C_{0}}}, \tag{4.12}\]
where \(\{\lambda_{n}\}_{n=1}^{\infty}\) is the set of eigenvalues of A. Indeed, if \(\mathbf{x}\in\mathscr{A}\) but \(\mathbf{x}\notin\mathrm{X}\), then there is a \(\mathbf{y}\in\mathrm{X}\) such that
\[\|\mathrm{Q}_{n}(\mathbf{x}-\mathbf{y})\|_{\mathbb{H}}\geqslant\|\mathrm{P}_{n}(\mathbf{x} -\mathbf{y})\|_{\mathbb{H}}. \tag{4.13}\]
Setting \(\mathbf{w}=\mathbf{x}-\mathbf{y}\), we have \(\|\mathrm{Q}_{n}\mathbf{w}\|_{\mathbb{H}}\leqslant\|\mathbf{w}\|_{\mathbb{H}}\leqslant \|\mathrm{P}_{n}\mathbf{w}\|_{\mathbb{H}}+\|\mathrm{Q}_{n}\mathbf{w}\|_{\mathbb{H}} \leqslant 2\|\mathrm{Q}_{n}\mathbf{w}\|_{\mathbb{H}}\) and
\[\|\mathrm{A}^{\frac{1}{2}}\mathbf{w}\|_{\mathbb{H}}^{2}=\sum_{k=1}^{\infty}\lambda _{k}|(\mathbf{w},\mathbf{e}_{k})|^{2}\geqslant\sum_{k=n+1}^{\infty}\lambda_{k}|(\mathbf{w },\mathbf{e}_{k})|^{2}\geqslant\lambda_{n+1}\|\mathrm{Q}_{n}\mathbf{w}\|_{\mathbb{H}} ^{2}.\]
Therefore, using (1.5), we deduce
\[\lambda_{n+1}\|\mathrm{Q}_{n}\mathbf{w}\|_{\mathbb{H}}^{2}\leqslant\|\mathrm{A}^{ \frac{1}{2}}\mathbf{w}\|_{\mathbb{H}}^{2}\leqslant 4C\|\mathrm{Q}_{n}\mathbf{w}\|_{ \mathbb{H}}^{2}\log\biggl{(}\frac{M_{0}^{2}}{\|\mathrm{Q}_{n}\mathbf{w}\|_{ \mathbb{H}}^{2}}\biggr{)}, \tag{4.14}\]
so that \(\|\mathrm{Q}_{n}\mathbf{w}\|_{\mathbb{H}}\leqslant\frac{\varepsilon_{n}}{2}=M_{0} e^{\frac{-\lambda_{n+1}}{2C_{0}}}\), and (4.12) follows. The estimate (4.12) implies that \(\delta_{1}(\mathscr{A},\varepsilon_{n})=n\) and hence
\[\limsup_{n\to\infty}\frac{\log\delta_{1}(\mathscr{A},\varepsilon_{n})}{-\log \varepsilon_{n}}=\limsup_{n\to\infty}\frac{\log n}{\frac{\lambda_{n+1}}{2C_{0 }}-\log(2M_{0})}. \tag{4.15}\]
Since the eigenvalues \(\lambda_{n}\sim\lambda_{1}n^{\frac{2}{d}}\) as \(n\to\infty\) ([14, Page 54]), we further have
\[\limsup_{n\to\infty}\frac{\log n}{\lambda_{n}}=0.\]
Therefore, \(\mathrm{dev}(\mathscr{A})\leqslant\mathrm{dev}_{1}(\mathscr{A})=0\).
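For completeness, the last inequality \(\mathrm{dev}(\mathscr{A})\leqslant\mathrm{dev}_{1}(\mathscr{A})\) uses only the elementary monotonicity of the Lipschitz deviation in \(m\): every \(m_{1}\)-Lipschitz function is also \(m_{2}\)-Lipschitz for \(m_{2}\geq m_{1}\), so that
\[m_{1}\leq m_{2}\ \Longrightarrow\ \delta_{m_{2}}(\mathrm{X},\varepsilon)\leq\delta_{m_{1}}(\mathrm{X},\varepsilon)\ \Longrightarrow\ \mathrm{dev}_{m_{2}}(\mathrm{X})\leq\mathrm{dev}_{m_{1}}(\mathrm{X}),\]
and hence \(\mathrm{dev}(\mathrm{X})=\lim_{m\to\infty}\mathrm{dev}_{m}(\mathrm{X})\leq\mathrm{dev}_{1}(\mathrm{X})\) for any compact set \(\mathrm{X}\).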
### The uniqueness of Lagrangian trajectories in 2D and 3D CBF flows
Let us now prove the uniqueness of Lagrangian trajectories in 2D and 3D CBF flows by using the log-Lipschitz regularity. Given initial data in \(\mathbb{H}^{d-2}(\mathbb{T}^{d})\) (\(\mathbb{H}^{2}(\mathbb{T}^{3})\) for \(d=3\) and \(r\in(5,\infty)\)), the 2D and 3D CBF equations (1.1) have a unique solution. Given such a solution, we show that the Lagrangian particle trajectories are also unique. More precisely, the question is whether the solutions of the ordinary differential equation
\[\mathbf{X}^{\prime}=\mathbf{u}(\mathbf{X},t),\ \mathbf{X}(0)=\mathbf{X}_{0}, \tag{4.16}\]
are unique, when \(\mathbf{u}(t)\) is a solution of the CBF equations with initial data \(\mathbf{u}_{0}\in\mathbb{H}^{d-2}(\mathbb{T}^{d})\). For \(\mathbf{W}=\mathbf{X}-\mathbf{Y}\), a calculation similar to (3.50) yields
\[-\frac{\mathrm{d}}{\mathrm{d}t}\log|\mathbf{W}(t)|=-\frac{1}{2|\mathbf{W}(t)|^{2}}\frac{ \mathrm{d}}{\mathrm{d}t}|\mathbf{W}(t)|^{2}=-\frac{(\mathbf{W}(t),\mathbf{W}^{\prime}(t))}{ |\mathbf{W}(t)|^{2}}\leqslant\frac{|\mathbf{u}(\mathbf{X},t)-\mathbf{u}(\mathbf{Y},t)|}{|\mathbf{W}(t)|}\]
\[=\frac{1}{|\boldsymbol{W}(t)|}\biggl{|}\int_{0}^{1}\!\nabla\boldsymbol{u}(\theta\boldsymbol{X}+(1-\theta)\boldsymbol{Y},t)\mathrm{d}\theta\cdot\boldsymbol{W}(t)\biggr{|}\] \[\leq C\|\nabla\boldsymbol{u}(t)\|_{\mathbb{L}^{\infty}}\leq C\|\boldsymbol{u}(t)\|_{\mathbb{H}^{s}},\ \ \text{for}\ \ s>\frac{d}{2}+1, \tag{4.17}\]
where we have used Sobolev's inequality. Integrating the above inequality from \(t_{0}\) to \(t>t_{0}\), one obtains
\[-\log|\boldsymbol{W}(t)|\leq-\log|\boldsymbol{W}(t_{0})|+\int_{t_{0}}^{t}\| \boldsymbol{u}(r)\|_{\mathbb{H}^{s}}\mathrm{d}r,\ \ \text{for}\ \ s>\frac{d}{2}+1. \tag{4.18}\]
If \(\boldsymbol{u}\in\mathrm{L}^{1}(0,T;\mathbb{H}^{s}(\mathbb{T}^{d}))\), for \(s>\frac{d}{2}+1\), then one could put \(t_{0}=0\) in (4.18) and immediately obtain uniqueness. For the critical case \(s=\frac{d}{2}+1\), by using the log-Lipschitz regularity (see [53, Theorem 2]), we know that
\[|\boldsymbol{u}(\boldsymbol{X},t)-\boldsymbol{u}(\boldsymbol{Y},t)|\leq C\| \boldsymbol{u}(t)\|_{\mathbb{H}^{\frac{d}{2}+1}}|\boldsymbol{X}-\boldsymbol{ Y}|(-\log|\boldsymbol{X}-\boldsymbol{Y}|)^{\frac{1}{2}}.\]
Therefore, from (4.17), we immediately have
\[-\frac{\mathrm{d}}{\mathrm{d}t}[(-\log|\boldsymbol{W}(t)|)^{\frac{1}{2}}]^{2}\leq C\|\boldsymbol{u}(t)\|_{\mathbb{H}^{\frac{d}{2}+1}}(-\log|\boldsymbol{W}(t)|)^{\frac{1}{2}}. \tag{4.19}\]
Integrating the above inequality from \(t_{0}\) to \(t>t_{0}\), we find
\[-(-\log|\boldsymbol{W}(t)|)^{\frac{1}{2}}\leq-(-\log|\boldsymbol{W}(t_{0})|)^ {\frac{1}{2}}+C\int_{t_{0}}^{t}\|\boldsymbol{u}(r)\|_{\mathbb{H}^{\frac{d}{2}+ 1}}\mathrm{d}r. \tag{4.20}\]
Thus, if \(\boldsymbol{u}\in\mathrm{L}^{1}(0,T;\mathbb{H}^{\frac{d}{2}+1}(\mathbb{T}^{d}))\), then we can take \(t_{0}=0\) in (4.20) and immediately obtain uniqueness. However, such a regularity for \(\boldsymbol{u}\) is not known to be true for NSE, nor even for solutions of the heat equation.
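For completeness, here is how the limit \(t_{0}\to 0\) in (4.20) yields uniqueness when \(\boldsymbol{u}\in\mathrm{L}^{1}(0,T;\mathbb{H}^{\frac{d}{2}+1}(\mathbb{T}^{d}))\) (a standard continuity argument, sketched only for convenience). Suppose \(|\boldsymbol{W}(t^{*})|>0\) for some \(t^{*}\in(0,T]\). Since \(\boldsymbol{W}(0)=\boldsymbol{0}\) and \(\boldsymbol{W}\) is continuous, there exists \(\tau\in[0,t^{*})\) with \(|\boldsymbol{W}(\tau)|=0\) and \(|\boldsymbol{W}|>0\) on \((\tau,t^{*}]\). Applying (4.20) on \([t_{0},t^{*}]\) with \(t_{0}\in(\tau,t^{*})\) and letting \(t_{0}\downarrow\tau\), we find
\[(-\log|\boldsymbol{W}(t^{*})|)^{\frac{1}{2}}\geq(-\log|\boldsymbol{W}(t_{0})|)^{\frac{1}{2}}-C\int_{0}^{T}\|\boldsymbol{u}(r)\|_{\mathbb{H}^{\frac{d}{2}+1}}\mathrm{d}r\longrightarrow+\infty,\]
which is impossible. Hence \(\boldsymbol{W}\equiv\boldsymbol{0}\), that is, \(\boldsymbol{X}=\boldsymbol{Y}\) on \([0,T]\).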
It has been proved in [8, Theorem 3.2.1] that if
\[\boldsymbol{u}\in\mathrm{L}^{2}(0,T;\mathbb{H}^{\frac{d}{2}-1}(\Omega))\ \ \text{and}\ \ \sqrt{t}\boldsymbol{u}\in\mathrm{L}^{2}(0,T;\mathbb{H}^{\frac{d}{2}+1}(\Omega)),\]
then the ODE (4.16) has a unique solution for every \(\boldsymbol{X}_{0}\in\mathbb{T}^{d}\). One can weaken the assumption on \(\boldsymbol{u}\) to \(\boldsymbol{u}\in\mathrm{L}^{p}(0,T;\mathbb{H}^{\frac{d}{2}-1}(\Omega))\) for some \(p>1\) ([9]). Moreover, from [8, Theorem 3.2.2], we can obtain the continuity with respect to the initial data. Suppose that \(\boldsymbol{u}_{n}\to\boldsymbol{u}\) strongly in \(\mathrm{L}^{2}(0,T;\mathbb{H}^{\frac{d}{2}-1}(\Omega))\) and that \(\sqrt{t}\boldsymbol{u}_{n}\) is bounded in \(\mathrm{L}^{2}(0,T;\mathbb{H}^{\frac{d}{2}+1}(\Omega)).\) For some \(\boldsymbol{X}_{0}\in\mathbb{T}^{d}\), let \(\boldsymbol{X}_{n}(t)\) be the unique solution of
\[\boldsymbol{X}_{n}^{\prime}=\boldsymbol{u}_{n}(\boldsymbol{X}_{n},t),\ \boldsymbol{X}_{n}(0)=\boldsymbol{X}_{0}. \tag{4.21}\]
Then \(\boldsymbol{X}_{n}(t)\to\boldsymbol{X}(t)\) uniformly on \([0,T]\), where \(\boldsymbol{X}(t)\) solves (4.16). A result similar to the following theorem is obtained for 2D NSE in periodic domains in [8], and in bounded domains in [9].
**Theorem 4.9**.: _If_
\[\begin{cases}\boldsymbol{x}\in\mathring{\mathbb{L}}_{p}^{2}(\mathbb{T}^{2}),\boldsymbol{f}\in\mathrm{L}_{p}^{2}(0,T;\mathring{\mathbb{L}}^{2}(\mathbb{T} ^{2}))\ \ \text{for}\ \ d=2,r\in[1,\infty),\\ \boldsymbol{x}\in\mathring{\mathbb{H}}^{1}(\mathbb{T}^{3}),\boldsymbol{f}\in \mathrm{L}_{p}^{2}(0,T;\mathring{\mathbb{H}}^{\frac{1}{2}}(\mathbb{T}^{3}))\ \ \text{for}\ \ d=3,r\in[3,5]\ (2\beta \mu\geq 1\ \ \text{for}\ \ r=3),\\ \boldsymbol{x}\in\mathring{\mathbb{H}}_{p}^{2}(\mathbb{T}^{3}),\boldsymbol{f} \in\mathrm{W}^{1,2}(0,T;\mathring{\mathbb{L}}_{p}^{2}(\mathbb{T}^{3}))\cap \mathrm{L}^{2}(0,T;\mathring{\mathbb{H}}_{p}^{\frac{1}{2}}(\mathbb{T}^{3}))\ \ \text{for}\ \ d=3,r\in(5,\infty),\end{cases} \tag{4.22}\]
_and \(\boldsymbol{u}(t)\) is the corresponding solution of the 2D and 3D CBF equations (1.1) on \([0,T]\), respectively, then the solution \(\boldsymbol{X}(\cdot)\) of (4.16) is unique. Furthermore, for each fixed \(\boldsymbol{X}_{0}\in\mathbb{T}^{d},\) the map \(\boldsymbol{x}\mapsto\boldsymbol{X}(\cdot)\) is continuous from \(\mathring{\mathbb{H}}_{p}^{d-2}(\mathbb{T}^{d})\ (\mathring{\mathbb{H}}_{p}^{2}(\mathbb{T}^{3})\) for \(d=3\) and \(r\in(5,\infty)\)) into \(\mathrm{C}([0,T];\mathbb{R}^{d}).\)_
Proof.: For \(d=2\), \(r\in[1,\infty)\), \(\mathbf{x}\in\mathbb{H}\) and \(\mathbf{f}\in\mathrm{L}^{2}(0,T;\mathbb{H})\), there exists a unique weak solution \(\mathbf{u}\in\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^{2}(0,T;\mathbb{V})\cap \mathrm{L}^{r+1}(0,T;\widetilde{\mathbb{L}}^{r+1})\) to the system (2.11). From the estimates (3.16) and (3.17), it is clear that \(\sqrt{t}\mathbf{u}\in\mathrm{L}^{\infty}(0,T;\mathbb{V})\cap\mathrm{L}^{2}(0,T; \mathrm{D}(\mathrm{A}))\). Therefore, we obtain \(\sqrt{t}\mathbf{u}\in\mathrm{L}^{2}(0,T;\mathring{\mathbb{H}}_{p}^{2}(\mathbb{T}^ {2}))\), since \(\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}\leq C\|\mathbf{u}\|_{\mathring{\mathbb{H}}_{p}^{ 2}}\). Hence [8, Theorem 3.2.1] now guarantees the uniqueness of the Lagrangian trajectories corresponding to weak solutions of the equations in two dimensions. For the continuity with respect to the initial data, [8, Theorem 3.2.2] requires uniform estimates for \(\sqrt{t}\mathbf{u}_{n}\in\mathrm{L}^{2}(0,T;\mathring{\mathbb{H}}_{p}^{2}(\mathbb{ T}^{2}))\), when \(\mathbf{u}_{n}(0)\to\mathbf{x}\) strongly in \(\mathring{\mathbb{L}}_{p}^{2}(\mathbb{T}^{2})\). These follow immediately from (3.1) and (3.17) since \(\mathbf{u}_{n}(0)\) is uniformly bounded in \(\mathring{\mathbb{L}}_{p}^{2}(\mathbb{T}^{2})\). The strong convergence of \(\mathbf{u}_{n}\to\mathbf{u}\) in \(\mathrm{L}^{2}(0,T;\mathring{\mathbb{L}}_{p}^{2}(\mathbb{T}^{2}))\) follows along an appropriate subsequence by an application of the Aubin-Lions compactness theorem ([42, Theorem 8.1]) from the uniform bounds on \(\mathbf{u}_{n}\in\mathrm{L}^{2}(0,T;\mathring{\mathbb{H}}_{p}^{1}(\mathbb{T}^{2}))\) and on \(\frac{\mathrm{d}\mathbf{u}_{n}}{\mathrm{d}t}\in\mathrm{L}^{\frac{r+1}{r}}(0,T; \mathring{\mathbb{H}}_{p}^{-1}(\mathbb{T}^{2}))\).
For the case \(d=3\), \(r\in[3,5]\), \(\mathbf{x}\in\mathbb{H}\) and \(\mathbf{f}\in\mathrm{L}^{2}(0,T;\mathring{\mathbb{H}}_{p}^{2}(\mathbb{T}^{3}))\), there exists a unique strong solution \(\mathbf{u}\in\mathrm{C}([0,T];\mathbb{V})\cap\mathrm{L}^{2}(0,T;\mathrm{D}( \mathrm{A}))\cap\mathrm{L}^{r+1}(0,T;\tilde{\mathbb{L}}^{3(r+1)})\) to the system (2.11). We need to show that \(\sqrt{t}\mathbf{u}\in\mathrm{L}^{2}(0,T;\mathring{\mathbb{H}}_{p}^{\frac{5}{2}}( \mathbb{T}^{d}))\). Taking the inner product with \(\mathrm{A}\mathbf{u}(\cdot)\) to the first equation in (2.11) and calculations similar to (3.17) and (3.42) yield
\[\|\mathbf{u}(t)\|_{\mathbb{V}}^{2}+\mu\int_{0}^{t}\|\mathrm{A}\mathbf{u}(s)\|_{\mathbb{ H}}^{2}+\beta\int_{0}^{t}\|\mathbf{u}(s)|^{\frac{r-1}{2}}|\nabla\mathbf{u}(s)|\|_{\mathbb{H}}^{ 2}\mathrm{d}s\leq\|\mathbf{x}\|_{\mathbb{V}}^{2}+\frac{2\vartheta K}{\mu^{2}}+ \frac{2}{\mu}\int_{0}^{T}\|\mathbf{f}(t)\|_{\mathbb{H}}^{2}\mathrm{d}t, \tag{4.23}\]
for all \(t\in[0,T]\), where \(K\) is defined in (3.1). Taking the inner product with \(t\mathrm{A}^{\frac{3}{2}}\mathbf{u}\) to the first equation in (2.11), we find
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\Big{(}t\|\Lambda^{\frac {3}{4}}\mathbf{u}(t)\|_{\mathbb{H}}^{2}\Big{)}+\mu t\|\Lambda^{\frac{5}{4}}\mathbf{u}(t )\|_{\mathbb{H}}^{2}+\alpha t\|\Lambda^{\frac{3}{4}}\mathbf{u}(t)\|_{\mathbb{H}}^ {2}\] \[=\frac{1}{2}\|\Lambda^{\frac{3}{4}}\mathbf{u}(t)\|_{\mathbb{H}}^{2}+( \mathbf{f}(t),t\Lambda^{\frac{3}{2}}\mathbf{u}(t))-(\mathrm{B}(\mathbf{u}(t)),t\Lambda^{ \frac{3}{2}}\mathbf{u}(t))-\beta(\mathcal{C}(\mathbf{u}(t)),t\Lambda^{\frac{3}{2}} \mathbf{u}(t)), \tag{4.24}\]
for a.e. \(t\in[0,T]\). We estimate \(|(\mathbf{f},\mathrm{A}^{\frac{3}{2}}\mathbf{u})|\) using Hölder's and Young's inequalities as
\[|(\mathbf{f},\mathrm{A}^{\frac{3}{2}}\mathbf{u})|\leq\|\Lambda^{\frac{1}{4}}\mathbf{f}\|_{ \mathbb{H}}\|\Lambda^{\frac{5}{4}}\mathbf{u}\|_{\mathbb{H}}\leq\frac{\mu}{4}\| \Lambda^{\frac{5}{4}}\mathbf{u}\|_{\mathbb{H}}^{2}+\frac{1}{\mu}\|\Lambda^{\frac{1} {4}}\mathbf{f}\|_{\mathbb{H}}^{2}. \tag{4.25}\]
We estimate \(|(\mathrm{B}(\mathbf{u}),\Lambda^{\frac{3}{2}}\mathbf{u})|\) using the fractional Leibniz rule ([15, Theorem 1]), Hölder's, Agmon's and Young's inequalities as
\[|(\mathrm{B}(\mathbf{u}),\Lambda^{\frac{3}{2}}\mathbf{u})| \leq\|\Lambda^{\frac{1}{4}}\mathrm{B}(\mathbf{u})\|_{\mathbb{H}}\| \Lambda^{\frac{5}{4}}\mathbf{u}\|_{\mathbb{H}}\leq C\|\Lambda^{\frac{3}{4}}(\mathbf{u} \otimes\mathbf{u})\|_{\mathbb{H}}\|\Lambda^{\frac{5}{4}}\mathbf{u}\|_{\mathbb{H}}\] \[\leq C\|\mathbf{u}\|_{\mathring{\mathbb{L}}^{\infty}}\|\Lambda^{\frac{3 }{4}}\mathbf{u}\|_{\mathbb{H}}\|\Lambda^{\frac{5}{4}}\mathbf{u}\|_{\mathbb{H}}\leq C\| \mathbf{u}\|_{\mathbb{V}}^{\frac{1}{2}}\|\Lambda\mathbf{u}\|_{\mathbb{H}}^{\frac{1}{2}}\| \Lambda^{\frac{3}{4}}\mathbf{u}\|_{\mathbb{H}}\|\Lambda^{\frac{5}{4}}\mathbf{u}\|_{ \mathbb{H}}\] \[\leq\frac{\mu}{8}\|\Lambda^{\frac{5}{4}}\mathbf{u}\|_{\mathbb{H}}^{2}+ \frac{C}{\mu}\|\mathbf{u}\|_{\mathbb{V}}\|\Lambda\mathbf{u}\|_{\mathbb{H}}\|\Lambda^{ \frac{3}{4}}\mathbf{u}\|_{\mathbb{H}}^{2}. \tag{4.26}\]
Similarly, we estimate \(\beta|(\mathcal{C}(\mathbf{u}),\Lambda^{\frac{3}{2}}\mathbf{u})|\) using Theorem A.6, [22], Hölder's, Gagliardo-Nirenberg's and Young's inequalities as
\[\beta|(\mathcal{C}(\mathbf{u}),\Lambda^{\frac{3}{2}}\mathbf{u})| \leq\beta\|\Lambda^{\frac{1}{4}}\mathcal{C}(\mathbf{u})\|_{\mathbb{H}} \|\Lambda^{\frac{5}{4}}\mathbf{u}\|_{\mathbb{H}}\leq C\beta\|\mathbf{u}\|_{ \mathring{\mathbb{L}}^{3(r-1)}}^{r-1}\|\Lambda^{\frac{1}{4}}\mathbf{u}\|_{ \mathring{\mathbb{L}}^{6}}\|\Lambda^{\frac{5}{4}}\mathbf{u}\|_{\mathbb{H}}\] \[\leq C\beta\|\mathbf{u}\|_{\mathring{\mathbb{L}}^{r+1}}\|\mathbf{u}\|_{ \mathring{\mathbb{L}}^{3(r+1)}}^{(r-2)}\|\Lambda^{\frac{3}{4}}\mathbf{u}\|_{ \mathbb{H}}\|\Lambda^{\frac{5}{4}}\mathbf{u}\|_{\mathbb{H}}\] \[\leq\frac{\mu}{8}\|\Lambda^{\frac{5}{4}}\mathbf{u}\|_{\mathbb{H}}^{2}+ \frac{C\beta^{2}}{\mu}\|\mathbf{u}\|_{\mathring{\mathbb{L}}^{r+1}}^{2}\|\mathbf{u}\|_{ \mathring{\mathbb{L}}^{3(r+1)}}^{2(r-2)}\|\Lambda^{\frac{3}{4}}\mathbf{u}\|_{ \mathbb{H}}^{2}. \tag{4.27}\]
Combining (4.25)-(4.27) and substituting it in (4.24), we deduce
\[t\|\Lambda^{\frac{3}{4}}\mathbf{u}(t)\|_{\mathbb{H}}^{2}+\mu\int_{0}^{t} s\|\Lambda^{\frac{5}{4}}\mathbf{u}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s+2\alpha\int_{0}^{t} s\|\Lambda^{\frac{3}{4}}\mathbf{u}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s\] \[\leq\int_{0}^{t}\|\mathbf{u}(s)\|_{\mathbb{V}}\|\Lambda\mathbf{u}(s)\|_{ \mathbb{H}}\mathrm{d}s+\frac{2}{\mu}\int_{0}^{t}s\|\Lambda^{\frac{1}{4}}\mathbf{f} (s)\|_{\mathbb{H}}^{2}\mathrm{d}s+\frac{C}{\mu}\int_{0}^{t}s\|\mathbf{u}(s)\|_{ \mathbb{V}}\|\Lambda\mathbf{u}(s)\|_{\mathbb{H}}\|\Lambda^{\frac{3}{4}}\mathbf{u}(s)\| _{\mathbb{H}}^{2}\mathrm{d}s\] \[\quad+\frac{C\beta^{2}}{\mu}\int_{0}^{t}s\|\mathbf{u}(s)\|_{\mathring {\mathbb{L}}^{r+1}}^{2}\|\mathbf{u}(s)\|_{\mathring{\mathbb{L}}^{3(r+1)}}^{2(r-2 )}\|\Lambda^{\frac{3}{4}}\mathbf{u}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s, \tag{4.28}\]
for all \(t\in[0,T]\). An application of Gronwall's inequality in (4.28) yields
\[t\|\Lambda^{\frac{3}{4}}\mathbf{u}(t)\|_{\mathbb{H}}^{2}+\mu\int_{0}^{t}s\|\Lambda^{\frac{5}{4}}\mathbf{u}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s\] \[\quad\leq\biggl\{\int_{0}^{T}\|\mathbf{u}(t)\|_{\mathbb{V}}\|\Lambda\mathbf{u}(t)\|_{\mathbb{H}}\mathrm{d}t+\frac{2}{\mu}\int_{0}^{T}t\|\Lambda^{\frac{1}{4}}\mathbf{f}(t)\|_{\mathbb{H}}^{2}\mathrm{d}t\biggr\}\] \[\quad\times\exp\Biggl\{\frac{CT^{\frac{1}{2}}}{\mu}\sup_{t\in[0,T]}\|\mathbf{u}(t)\|_{\mathbb{V}}\biggl(\int_{0}^{T}\|\Lambda\mathbf{u}(t)\|_{\mathbb{H}}^{2}\mathrm{d}t\biggr)^{\frac{1}{2}}\Biggr\}\] \[\quad\times\exp\Biggl\{\frac{C\beta^{2}}{\mu}T^{\frac{5-r}{r+1}}\sup_{t\in[0,T]}\|\mathbf{u}(t)\|_{\mathbb{V}}^{2}\biggl(\int_{0}^{T}\|\mathbf{u}(t)\|_{\mathring{\mathbb{L}}^{3(r+1)}}^{r+1}\mathrm{d}t\biggr)^{\frac{2(r-2)}{r+1}}\Biggr\}, \tag{4.29}\]
for all \(t\in[0,T]\) and \(r\in[3,5]\). From the estimate (4.29), it is clear that \(\sqrt{t}\mathbf{u}\in\mathrm{L}^{2}(0,T;\mathrm{D}(\Lambda^{\frac{5}{4}}))\), as required. The standard energy estimate (see (3.1)) shows that \(\mathbf{u}_{n}\) is uniformly bounded in \(\mathrm{L}^{2}(0,T;\mathring{\mathbb{H}}^{1}_{p}(\mathbb{T}^{3}))\), and \(\frac{\mathrm{d}\mathbf{u}_{n}}{\mathrm{d}t}\) is uniformly bounded in \(\mathrm{L}^{\frac{r+1}{r}}(0,T;\mathring{\mathbb{H}}^{-1}_{p}(\mathbb{T}^{3})+\mathbb{L}^{\frac{r+1}{r}}(\mathbb{T}^{3}))\) (this is true even if \(\mathbf{x}\in\mathring{\mathbb{L}}^{2}_{p}(\mathbb{T}^{3})\)). Since for \(0\leq s<1\), \(\mathring{\mathbb{H}}^{1}_{p}(\mathbb{T}^{3})\subset\mathring{\mathbb{H}}^{s}_{p}(\mathbb{T}^{3})\subset\mathring{\mathbb{H}}^{-1}_{p}(\mathbb{T}^{3})+\mathbb{L}^{\frac{r+1}{r}}(\mathbb{T}^{3})\) and the first embedding is compact, it follows from the Aubin-Lions compactness theorem that \(\mathbf{u}_{n}\to\mathbf{u}\) strongly in \(\mathrm{L}^{2}(0,T;\mathring{\mathbb{H}}^{s}_{p}(\mathbb{T}^{3}))\) for any \(s<1\). In particular, \(\mathbf{u}_{n}\to\mathbf{u}\) strongly in \(\mathrm{L}^{2}(0,T;\mathring{\mathbb{H}}^{\frac{1}{2}}_{p}(\mathbb{T}^{3}))\). Thus, we have the continuity with respect to the initial data by applying [8, Theorem 3.2.2].
For \(d=3\), \(r\in(5,\infty)\), taking the inner product with \(\frac{\mathrm{d}\mathbf{u}}{\mathrm{d}t}\) to the first equation in (2.11) and calculations similar to (3.21) and (3.22) result in
\[\mu\|\mathbf{u}(t)\|_{\mathbb{V}}^{2}+\alpha\|\mathbf{u}(t)\|_{\mathbb{H}} ^{2}+\frac{2\beta}{r+1}\|\mathbf{u}(t)\|_{\mathring{\mathbb{L}}^{r+1}}^{r+1}+\int_{ 0}^{t}\biggl{\|}\frac{\mathrm{d}\mathbf{u}(s)}{\mathrm{d}t}\biggr{\|}_{\mathbb{H}} ^{2}\mathrm{d}s\] \[\quad\leq 2\mu\|\mathbf{x}\|_{\mathbb{V}}^{2}+\alpha\|\mathbf{x}\|_{ \mathbb{H}}^{2}+\frac{2\beta}{r+1}\|\mathbf{x}\|_{\mathring{\mathbb{L}}^{r+1}}^{r+ 1}+\frac{4\vartheta K}{\mu}+2\int_{0}^{T}\|\mathbf{f}(t)\|_{\mathbb{H}}^{2}\mathrm{ d}t, \tag{4.30}\]
for all \(t\in[0,T]\). Taking the inner product with \(\mathbf{v}(\cdot)\) to the first equation in (3.24) and calculations similar to (3.27) provide
\[\|\mathbf{v}(t)\|_{\mathbb{H}}^{2} \leq\biggl\{\|\mathbf{v}(0)\|_{\mathbb{H}}^{2}+\frac{2}{\mu}\int_{0}^{T}\|\mathbf{f}_{t}(t)\|_{\mathbb{V}^{\prime}}^{2}\mathrm{d}t\biggr\}e^{\frac{2\vartheta KT}{\mu}}\] \[\quad\leq\biggl\{\|\mathbf{v}(0)\|_{\mathbb{H}}^{2}+\frac{2}{\mu\lambda_{1}}\int_{0}^{T}\|\mathbf{f}_{t}(t)\|_{\mathbb{H}}^{2}\mathrm{d}t\biggr\}e^{\frac{2\vartheta KT}{\mu}}=K_{1}, \tag{4.31}\]
for all \(t\in[0,T]\), where \(\boldsymbol{v}=\frac{\mathrm{d}\boldsymbol{u}}{\mathrm{d}t}\). Note that \(\boldsymbol{f}\in\mathrm{W}^{1,2}(0,T;\mathbb{H})\) implies \(\boldsymbol{f}\in\mathrm{C}([0,T];\mathbb{H})\) as well. It follows from (2.11) that
\[\mu\|\mathrm{A}\boldsymbol{u}\|_{\mathbb{H}}^{2}+\alpha\|\boldsymbol{u}\|_{\mathbb{H}}^{2}+\beta\||\boldsymbol{u}|^{\frac{r-1}{2}}|\nabla\boldsymbol{u}|\|_{\mathbb{H}}^{2}+4\beta\bigg[\frac{(r-1)}{(r+1)^{2}}\bigg]\|\nabla|\boldsymbol{u}|^{\frac{r+1}{2}}\|_{\mathbb{H}}^{2}\] \[=-\bigg(\frac{\mathrm{d}\boldsymbol{u}}{\mathrm{d}t},\mathrm{A}\boldsymbol{u}\bigg)-(\mathrm{B}(\boldsymbol{u}),\mathrm{A}\boldsymbol{u})\leqslant\bigg\|\frac{\mathrm{d}\boldsymbol{u}}{\mathrm{d}t}\bigg\|_{\mathbb{H}}\|\mathrm{A}\boldsymbol{u}\|_{\mathbb{H}}+\|\boldsymbol{u}\|_{\mathbb{L}^{\infty}}\|\boldsymbol{u}\|_{\mathbb{V}}\|\mathrm{A}\boldsymbol{u}\|_{\mathbb{H}}\] \[\leqslant\frac{\mu}{2}\|\mathrm{A}\boldsymbol{u}\|_{\mathbb{H}}^{2}+\frac{C}{\mu}\|\boldsymbol{u}\|_{\mathbb{V}}^{6}+\frac{1}{\mu}\bigg\|\frac{\mathrm{d}\boldsymbol{u}}{\mathrm{d}t}\bigg\|_{\mathbb{H}}^{2}. \tag{4.32}\]
Therefore, we deduce from (4.23) and (4.31) that
\[\mu\|\mathrm{A}\boldsymbol{u}(t)\|_{\mathbb{H}}^{2}+\beta\|\boldsymbol{u}(t) \|_{\mathbb{L}^{3(r+1)}}^{r+1}\leqslant\frac{C}{\mu}\bigg{\{}\|\boldsymbol{x} \|_{\mathbb{V}}^{2}+\frac{2\vartheta K}{\mu^{2}}+\frac{2}{\mu}\int_{0}^{T}\| \boldsymbol{f}(t)\|_{\mathbb{H}}^{2}\mathrm{d}t\bigg{\}}^{3}+\frac{K_{1}}{\mu}, \tag{4.33}\]
for all \(t\in[0,T]\). Thus from (4.28) and (4.33), one can conclude the proof in this case also.
## 5. The Stochastic Case
Let \((\Omega,\mathscr{F},\mathbb{P})\) be a complete probability space equipped with an increasing family of sub-sigma fields \(\{\mathscr{F}_{t}\}_{0\leqslant t\leqslant T}\) of \(\mathscr{F}\) satisfying:
(i) \(\mathscr{F}_{0}\) contains all elements \(F\in\mathscr{F}\) with \(\mathbb{P}(F)=0\),
(ii) \(\mathscr{F}_{t}=\mathscr{F}_{t+}=\bigcap\limits_{s>t}\mathscr{F}_{s}\), for \(0\leqslant t\leqslant T\).
Let \(\{\mathrm{W}(t)\}_{t\in[0,T]}\) be a one-dimensional real-valued Brownian motion on \((\Omega,\mathscr{F},\{\mathscr{F}_{t}\}_{0\leqslant t\leqslant T},\mathbb{P})\). We consider the following stochastic CBF equations with a linear multiplicative noise:
\[\begin{cases}\mathrm{d}\boldsymbol{u}(t)+[\mu\mathrm{A}\boldsymbol{u}(t)+ \mathrm{B}(\boldsymbol{u}(t))+\alpha\boldsymbol{u}(t)+\beta\mathcal{C}( \boldsymbol{u}(t))]\mathrm{d}t&=\boldsymbol{f}(t)\mathrm{d}t+\sigma\boldsymbol {u}(t)\mathrm{d}\mathrm{W}(t),\\ \boldsymbol{u}(0)&=\boldsymbol{x},\end{cases} \tag{5.1}\]
where \(\sigma\in\mathbb{R}\backslash\{0\}\) and \(\boldsymbol{f}\in\mathrm{L}^{2}(0,T;\mathbb{H})\) is a deterministic external forcing. Note that \(z(t)=e^{-\sigma\mathrm{W}(t)}\in\mathrm{C}([0,T];\mathbb{R})\), \(\mathbb{P}\)-a.s. satisfies
\[\begin{cases}\mathrm{d}z(t)=-\sigma z(t)\mathrm{d}\mathrm{W}(t)+\frac{\sigma^ {2}}{2}z(t)\mathrm{d}t,\\ z(0)=1.\end{cases}\]
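Indeed, this is a direct consequence of Itô's formula applied to \(\varphi(\mathrm{W}(t))\) with \(\varphi(x)=e^{-\sigma x}\):

\[\mathrm{d}z(t)=\varphi'(\mathrm{W}(t))\,\mathrm{d}\mathrm{W}(t)+\frac{1}{2}\varphi''(\mathrm{W}(t))\,\mathrm{d}t=-\sigma z(t)\,\mathrm{d}\mathrm{W}(t)+\frac{\sigma^{2}}{2}z(t)\,\mathrm{d}t,\qquad z(0)=1.\]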
Using the transformation \(\boldsymbol{v}(t)=\boldsymbol{u}(t)z(t)\), we obtain the following random dynamical system:
\[\begin{cases}\frac{\mathrm{d}\boldsymbol{v}(t)}{\mathrm{d}t}+\mu \mathrm{A}\boldsymbol{v}(t)+\frac{1}{z(t)}\mathrm{B}(\boldsymbol{v}(t))+\bigg{(} \alpha+\frac{\sigma^{2}}{2}\bigg{)}\boldsymbol{v}(t)+\frac{\beta}{[z(t)]^{r-1} }\mathcal{C}(\boldsymbol{v}(t))&=z(t)\boldsymbol{f}(t),\\ \boldsymbol{v}(0)&=\boldsymbol{x}.\end{cases} \tag{5.2}\]
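For the reader's convenience, let us sketch the formal computation behind (5.2): it is an application of Itô's product rule to \(\boldsymbol{v}(t)=\boldsymbol{u}(t)z(t)\), combined with the identities \(z\mathrm{B}(\boldsymbol{u})=\frac{1}{z}\mathrm{B}(\boldsymbol{v})\) and \(z\mathcal{C}(\boldsymbol{u})=\frac{1}{z^{r-1}}\mathcal{C}(\boldsymbol{v})\). We have

\[\mathrm{d}\boldsymbol{v}=z\,\mathrm{d}\boldsymbol{u}+\boldsymbol{u}\,\mathrm{d}z+\mathrm{d}[\boldsymbol{u},z]=z\big[\boldsymbol{f}-\mu\mathrm{A}\boldsymbol{u}-\mathrm{B}(\boldsymbol{u})-\alpha\boldsymbol{u}-\beta\mathcal{C}(\boldsymbol{u})\big]\mathrm{d}t+\sigma\boldsymbol{v}\,\mathrm{d}\mathrm{W}+\frac{\sigma^{2}}{2}\boldsymbol{v}\,\mathrm{d}t-\sigma\boldsymbol{v}\,\mathrm{d}\mathrm{W}-\sigma^{2}\boldsymbol{v}\,\mathrm{d}t,\]

so the martingale terms cancel and the remaining drift is exactly the right hand side of (5.2); in particular, \(\boldsymbol{v}\) satisfies a pathwise (random) equation without a stochastic integral.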
As \(z(t)\in\mathrm{C}([0,T];\mathbb{R})\), \(\mathbb{P}\)-a.s., for \(\boldsymbol{x}\in\mathbb{H}\) and \(\boldsymbol{f}\in\mathrm{L}^{2}(0,T;\mathbb{H})\), the existence and uniqueness of weak solution to the system (5.2) can be proved in a similar way as in [2, 17, 31], etc. Our aim is to prove the pathwise backward uniqueness of the system (5.1).
**Theorem 5.1** (Backward uniqueness).: _Let \(\boldsymbol{x}\in\mathbb{H}\), \(\boldsymbol{f}\in\mathrm{L}^{2}(0,T;\mathbb{H})\) and \(\boldsymbol{u}_{1},\boldsymbol{u}_{2}\) satisfy the first equation in the system (5.1). For \(d=2,r\in[1,\infty)\) and \(d=3,r\in[3,5]\) (\(2\beta\mu\geqslant 1\) for \(d=r=3\)), if \(\boldsymbol{u}_{1}(T)=\boldsymbol{u}_{2}(T)\) in \(\mathbb{H}\), then \(\boldsymbol{u}_{1}(t)=\boldsymbol{u}_{2}(t)\) in \(\mathbb{H}\), \(\mathbb{P}\)-a.s., for all \(t\in[0,T]\)._
Proof.: Since \(\mathbf{v}(t)=\mathbf{u}(t)z(t)\), it is enough to show that if \(\mathbf{v}_{1}(T)=\mathbf{v}_{2}(T)\) in \(\mathbb{H}\), then \(\mathbf{v}_{1}(t)=\mathbf{v}_{2}(t)\) in \(\mathbb{H}\) for all \(t\in[0,T]\). Taking the inner product with \(\mathbf{v}(\cdot)\) to the first equation in (5.2) and then integrating from \(0\) to \(T\), we find
\[\|\mathbf{v}(t)\|_{\mathbb{H}}^{2}+2\mu\int_{0}^{t}\|\mathbf{v}(s)\|_{ \mathbb{V}}^{2}\mathrm{d}s+2\bigg{(}\alpha+\frac{\sigma^{2}}{2}\bigg{)}\int_{0 }^{t}\|\mathbf{v}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s+2\beta\int_{0}^{t}e^{\sigma(r-1 )\mathrm{W}(s)}\|\mathbf{v}(s)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\mathrm{d}s\] \[=\|\mathbf{x}\|_{\mathbb{H}}^{2}+2\int_{0}^{t}\bigl{\langle}z(s)\mathbf{ f}(s),\mathbf{v}(s)\bigr{\rangle}\mathrm{d}s\leq\|\mathbf{x}\|_{\mathbb{H}}^{2}+\mu \int_{0}^{t}\|\mathbf{v}(s)\|_{\mathbb{V}}^{2}\mathrm{d}s+\frac{1}{\mu}\sup_{s\in[ 0,t]}[z(s)]^{2}\int_{0}^{t}\|\mathbf{f}(s)\|_{\mathbb{V}^{\prime}}^{2}\mathrm{d}s,\]
for all \(t\in[0,T]\). Therefore, we get
\[\|\mathbf{v}(t)\|_{\mathbb{H}}^{2}+2\mu\int_{0}^{t}\|\mathbf{v}(s)\|_{ \mathbb{V}}^{2}\mathrm{d}s+2\bigg{(}\alpha+\frac{\sigma^{2}}{2}\bigg{)}\int_{ 0}^{t}\|\mathbf{v}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s+2\beta\int_{0}^{t}e^{\sigma(r- 1)\mathrm{W}(s)}\|\mathbf{v}(s)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\mathrm{d}s\] \[\leq\|\mathbf{x}\|_{\mathbb{H}}^{2}+\frac{1}{\mu}\sup_{t\in[0,T]}e^{- 2\sigma\mathrm{W}(t)}\int_{0}^{T}\|\mathbf{f}(t)\|_{\mathbb{V}^{\prime}}^{2} \mathrm{d}t=\widehat{K}. \tag{5.3}\]
Also note that
\[\int_{0}^{T}\|\mathbf{v}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\mathrm{d}t\leq \sup_{t\in[0,T]}e^{-\sigma(r-1)\mathrm{W}(t)}\int_{0}^{T}e^{\sigma(r-1) \mathrm{W}(t)}\|\mathbf{v}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\mathrm{d}t \leq\frac{C\widehat{K}}{2\beta}.\]
Moreover, one can show that \(\mathbf{v}\in\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^{2}(0,T;\mathbb{V})\cap \mathrm{L}^{r+1}(0,T;\widetilde{\mathbb{L}}^{r+1})\). For \(d=2,3\) and \(r\in(3,\infty)\), calculations similar to (3.16), (3.17) and (3.18) yield
\[\|\mathbf{v}(t)\|_{\mathbb{V}}^{2}+\mu\int_{\epsilon}^{t}\|\mathrm{A} \mathbf{v}(s)\|_{\mathbb{H}}^{2}\mathrm{d}s+\beta\int_{\epsilon}^{t}e^{\sigma(r-1) \mathrm{W}(s)}\|\mathbf{v}(s)\|_{\mathbb{L}^{3(r+1)}}^{r+1}\mathrm{d}s \tag{5.4}\] \[\leq\left\{\begin{array}{ll}C\bigg{(}\frac{\widehat{K}}{2\mu t}+ \frac{2}{\mu}\sup_{t\in[0,T]}e^{-2\sigma\mathrm{W}(t)}\int_{0}^{T}\|\mathbf{f}(t)\| _{\mathbb{H}}^{2}\mathrm{d}t\bigg{)},&\text{for $d=2$,}\\ C\bigg{(}\frac{\widehat{K}}{\mu t}+\frac{4\widehat{K}\vartheta}{\mu^{2}}+ \frac{4}{\mu}\sup_{t\in[0,T]}e^{-2\sigma\mathrm{W}(t)}\int_{0}^{T}\|\mathbf{f}(t) \|_{\mathbb{H}}^{2}\mathrm{d}t\bigg{)},&\text{for $d=3$,}\end{array}\right.\]
for all \(t\in[\epsilon,T]\), for any \(\epsilon>0\). Therefore, we have \(\mathbf{v}\in\mathrm{C}((0,T];\mathbb{V})\cap\mathrm{L}^{2}(\epsilon,T;\mathrm{D}( \mathrm{A}))\cap\mathrm{L}^{r+1}(\epsilon,T;\widetilde{\mathbb{L}}^{3(r+1)})\) for any \(\epsilon>0\) by showing an estimate similar to (3.23).
Let us now prove the backward uniqueness result for the system (5.1). Let \(\mathbf{v}_{1}(\cdot)\) and \(\mathbf{v}_{2}(\cdot)\) be two solutions of the system (5.2) with the same final data, say \(\mathbf{\xi}\) and external forcing \(\mathbf{f}\in\mathrm{L}^{2}(0,T;\mathbb{H})\). Then \(\mathbf{v}=\mathbf{v}_{1}-\mathbf{v}_{2}\) satisfies the following system in \(\mathbb{V}^{\prime}+\widetilde{\mathbb{L}}^{\frac{r+1}{r}}\) for a.e. \(t\in[0,T]\) and in \(\mathbb{H}\) for a.e. \(t\in[\epsilon,T]\), for any \(0<\epsilon<T\):
\[\left\{\begin{aligned} \frac{\mathrm{d}\mathbf{v}(t)}{\mathrm{d}t}+\widehat{\mathcal{A}}\mathbf{v}(t)&=-\frac{1}{z(t)}[\mathrm{B}(\mathbf{v}_{1}(t),\mathbf{v}(t))+\mathrm{B}(\mathbf{v}(t),\mathbf{v}_{2}(t))]\\ &\quad-\frac{\beta}{[z(t)]^{r-1}}\biggl[\int_{0}^{1}\mathcal{C}^{\prime}(\theta\mathbf{v}_{1}(t)+(1-\theta)\mathbf{v}_{2}(t))\mathrm{d}\theta\biggr]\mathbf{v}(t)=:\widehat{h}(\mathbf{v}(t)),\\ \mathbf{v}(T)&=\mathbf{0},\end{aligned}\right. \tag{5.5}\]
where
\[\widehat{\mathcal{A}}\mathbf{v}:=\mu\mathrm{A}\mathbf{v}+\bigg{(}\alpha+ \frac{\sigma^{2}}{2}\bigg{)}\mathbf{v}. \tag{5.6}\]
Our aim is to show that if \(\mathbf{v}(T)=\mathbf{0}\), then \(\mathbf{v}(t)=\mathbf{0}\) for all \(t\in[0,T]\). We prove this result by a contradiction, first in the interval \([\epsilon,T]\) for any \(\epsilon>0\); then, by using the continuity of \(\boldsymbol{v}(\cdot)\) in \([0,T]\) and the arbitrariness of \(\epsilon\), one can obtain the required result in \([0,T]\). Assume that there exists some \(t_{0}\in[\epsilon,T)\) such that \(\boldsymbol{v}(t_{0})\neq\boldsymbol{0}\). Since the mapping \(t\mapsto\|\boldsymbol{v}(t)\|_{\mathbb{H}}\) is continuous, the following alternative holds:
1. for all \(t\in[t_{0},T]\), \(\|\boldsymbol{v}(t)\|_{\mathbb{H}}>0\) or
2. there exists a \(t_{1}\in(t_{0},T)\) such that for all \(t\in(t_{0},t_{1})\), \(\|\boldsymbol{v}(t)\|_{\mathbb{H}}>0\) and \(\boldsymbol{v}(t_{1})=\boldsymbol{0}\).
In the second case, denoting by \(\widehat{\Lambda}(t)\), the ratio
\[\widehat{\Lambda}(t)=\frac{\langle\widehat{\mathcal{A}}\boldsymbol{v}(t), \boldsymbol{v}(t)\rangle}{\|\boldsymbol{v}(t)\|_{\mathbb{H}}^{2}}=\frac{\mu\| \boldsymbol{v}(t)\|_{\mathbb{V}}^{2}+\left(\alpha+\frac{\sigma^{2}}{2}\right) \|\boldsymbol{v}(t)\|_{\mathbb{H}}^{2}}{\|\boldsymbol{v}(t)\|_{\mathbb{H}}^ {2}}\geq\mu\frac{\|\boldsymbol{v}(t)\|_{\mathbb{V}}^{2}}{\|\boldsymbol{v}(t) \|_{\mathbb{H}}^{2}}. \tag{5.7}\]
Calculations similar to (3.41) and (3.42) provide
\[\frac{1}{2}\frac{\mathrm{d}\widehat{\Lambda}}{\mathrm{d}t}+\frac{\|\widehat{ \mathcal{A}}\boldsymbol{v}-\widehat{\Lambda}\boldsymbol{v}\|_{\mathbb{H}}^{2}} {\|\boldsymbol{v}\|_{\mathbb{H}}^{2}}=\frac{\langle\widehat{\mathcal{A}} \boldsymbol{v}-\widehat{\Lambda}\boldsymbol{v},\widehat{h}(\boldsymbol{v}) \rangle}{\|\boldsymbol{v}\|_{\mathbb{H}}^{2}}\leq\frac{1}{2}\frac{\|\widehat{ \mathcal{A}}\boldsymbol{v}-\widehat{\Lambda}\boldsymbol{v}\|_{\mathbb{H}}^{2}} {\|\boldsymbol{v}\|_{\mathbb{H}}^{2}}+\frac{1}{2}\frac{\|\widehat{h}( \boldsymbol{v})\|_{\mathbb{H}}^{2}}{\|\boldsymbol{v}\|_{\mathbb{H}}^{2}}. \tag{5.8}\]
For \(d=2\) and \(r\in[1,\infty)\), one can use the estimates given in (3.45) and (3.55) to obtain an estimate for \(\|\widehat{h}(\boldsymbol{v})\|_{\mathbb{H}}^{2}\). Thus, we consider the case \(d=3\) and \(r\in[3,5]\) (\(2\beta\mu\geq 1\) for \(r=3\)). We estimate \(\|\widehat{h}(\boldsymbol{v})\|_{\mathbb{H}}^{2}\) using (3.46) and (3.55) as
\[\|\widehat{h}(\boldsymbol{v})\|_{\mathbb{H}}^{2} \leq\frac{C}{[z(t)]^{2}}(\|\boldsymbol{v}_{1}\|_{\mathbb{V}}\| \mathrm{A}\boldsymbol{v}_{1}\|_{\mathbb{H}}+C\|\boldsymbol{v}_{2}\|_{\mathbb{ V}}\|\mathrm{A}\boldsymbol{v}_{2}\|_{\mathbb{H}})\|\boldsymbol{v}\|_{\mathbb{V}}^{2}\] \[\quad+\frac{C\beta}{[z(t)]^{2(r-1)}}\bigg{(}\|\boldsymbol{v}_{1} \|_{\mathbb{L}^{r+1}}^{\frac{r-1}{2}}\|\boldsymbol{v}_{1}\|_{\mathbb{L}^{3(r+ 1)}}^{\frac{3(r-1)}{2}}+\|\boldsymbol{v}_{2}\|_{\mathbb{L}^{r+1}}^{\frac{r-1 }{2}}\|\boldsymbol{v}_{2}\|_{\mathbb{L}^{3(r+1)}}^{\frac{3(r-1)}{2}}\bigg{)} \|\boldsymbol{v}\|_{\mathbb{V}}^{2}. \tag{5.9}\]
Therefore, from (5.7) and (5.8), we have
\[\frac{\mathrm{d}\widehat{\Lambda}}{\mathrm{d}t} \leq\frac{C}{\mu[z(t)]^{2}}(\|\boldsymbol{v}_{1}\|_{\mathbb{V}}\| \mathrm{A}\boldsymbol{v}_{1}\|_{\mathbb{H}}+C\|\boldsymbol{v}_{2}\|_{\mathbb{ V}}\|\mathrm{A}\boldsymbol{v}_{2}\|_{\mathbb{H}})\widehat{\Lambda}\] \[\quad+\frac{C\beta}{\mu[z(t)]^{2(r-1)}}\bigg{(}\|\boldsymbol{v}_ {1}\|_{\mathbb{L}^{r+1}}^{\frac{r-1}{2}}\|\boldsymbol{v}_{1}\|_{\mathbb{L}^{3( r+1)}}^{\frac{3(r-1)}{2}}+\|\boldsymbol{v}_{2}\|_{\mathbb{L}^{r+1}}^{\frac{r-1 }{2}}\|\boldsymbol{v}_{2}\|_{\mathbb{L}^{3(r+1)}}^{\frac{3(r-1)}{2}}\bigg{)} \widehat{\Lambda}. \tag{5.10}\]
An application of the variation of constants formula yields
\[\widehat{\Lambda}(t) \leq\widehat{\Lambda}(t_{0})\exp\!\left[\frac{CT^{\frac{1}{2}}}{ \mu}\sup_{t\in[0,T]}e^{2\sigma\mathrm{W}(t)}\sum_{i=1}^{2}\sup_{t\in[t_{0},T]} \|\boldsymbol{v}_{i}(t)\|_{\mathbb{V}}\!\left(\int_{t_{0}}^{T}\|\mathrm{A} \boldsymbol{v}_{i}(t)\|_{\mathbb{H}}^{2}\mathrm{d}t\right)^{\frac{1}{2}}\right]\] \[\quad\times\exp\!\left[\frac{C\beta T^{\frac{5-r}{2(r+1)}}}{ \mu}\sup_{t\in[0,T]}e^{2(r-1)\sigma\mathrm{W}(t)}\sum_{i=1}^{2}\sup_{t\in[t_{0 },T]}\|\boldsymbol{v}_{i}(t)\|_{\mathbb{V}}^{\frac{r-1}{2}}\bigg{(}\int_{t_{0 }}^{T}\|\boldsymbol{v}_{i}(t)\|_{\mathbb{L}^{3(r+1)}}^{r+1}\mathrm{d}t\bigg{)} ^{\frac{3(r-1)}{2(r+1)}}\right]\!. \tag{5.11}\]
Using the estimate (5.4), one can easily see that the right hand side of (5.11) is finite.
On the other hand, we infer
\[-\frac{1}{2\|\boldsymbol{v}(t)\|_{\mathbb{H}}^{2}}\frac{\mathrm{d}}{\mathrm{d}t }\|\boldsymbol{v}(t)\|_{\mathbb{H}}^{2}=-\frac{\langle\boldsymbol{v}(t), \partial_{t}\boldsymbol{v}(t)\rangle}{\|\boldsymbol{v}(t)\|_{\mathbb{H}}^{2}}= \widehat{\Lambda}(t)-\frac{\langle\widehat{h}(\boldsymbol{v}(t)),\boldsymbol{v} (t)\rangle}{\|\boldsymbol{v}(t)\|_{\mathbb{H}}^{2}}. \tag{5.12}\]
For \(d=2\), \(r\in[1,\infty)\), we can use calculations similar to (3.51) and (3.56) to estimate \(\frac{|\langle\widehat{h}(\boldsymbol{v}),\boldsymbol{v}\rangle|}{\|\boldsymbol{v}\|_{\mathbb{H}}^{2}}\). Thus, we consider the case \(d=3\) and \(r\in[3,5]\) (\(2\beta\mu\geq 1\) for \(r=3\)). Calculations similar to
(3.51) and (3.56) yield
\[\frac{|\langle\widehat{h}(\boldsymbol{v}),\boldsymbol{v}\rangle|}{\|\boldsymbol{v}\|_{\mathbb{H}}^{2}} \leqslant\frac{2^{1/2}\big(\|\boldsymbol{v}_{1}\|_{\widehat{\mathbb{L}}^{4}}+\|\boldsymbol{v}_{2}\|_{\widehat{\mathbb{L}}^{4}}\big)\|\boldsymbol{v}\|_{\mathbb{V}}^{7/4}}{\|\boldsymbol{v}\|_{\mathbb{H}}^{7/4}}+\frac{C}{\|\boldsymbol{v}\|_{\mathbb{H}}^{\frac{2r}{r+1}}}\Big(\|\boldsymbol{v}_{1}\|_{\mathbb{L}^{r+1}}^{r-1}+\|\boldsymbol{v}_{2}\|_{\mathbb{L}^{r+1}}^{r-1}\Big)\|\boldsymbol{v}\|_{\mathbb{V}}^{\frac{2r}{r+1}}\] \[\leqslant\widehat{\Lambda}+\frac{C}{\mu^{7}}\big(\|\boldsymbol{v}_{1}\|_{\widehat{\mathbb{L}}^{4}}^{8}+\|\boldsymbol{v}_{2}\|_{\widehat{\mathbb{L}}^{4}}^{8}\big)+\frac{C}{\mu^{r+1}}\Big(\|\boldsymbol{v}_{1}\|_{\widehat{\mathbb{L}}^{r+1}}^{(r+1)(r-1)}+\|\boldsymbol{v}_{2}\|_{\widehat{\mathbb{L}}^{r+1}}^{(r+1)(r-1)}\Big). \tag{5.13}\]
Therefore, using (5.13) in (5.12), we easily have
\[-\frac{\mathrm{d}}{\mathrm{d}t}\log\|\boldsymbol{v}(t)\|_{ \mathbb{H}}\] \[\quad\leqslant 2\widehat{\Lambda}(t)+\frac{C}{\mu^{7}}\big{(}\| \boldsymbol{v}_{1}(t)\|_{\mathbb{V}}^{8}+\|\boldsymbol{v}_{2}(t)\|_{\mathbb{V }}^{8}\big{)}+\frac{C}{\mu^{r+1}}\Big{(}\|\boldsymbol{v}_{1}(t)\|_{\mathbb{V }}^{(r+1)(r-1)}+\|\boldsymbol{v}_{2}(t)\|_{\mathbb{V}}^{(r+1)(r-1)}\Big{)}. \tag{5.14}\]
According to (5.11), (5.3) and (5.4), the right-hand side of (5.14) is integrable on \((t_{0},t_{1})\), and this contradicts the fact that \(\boldsymbol{v}(t_{1})=\boldsymbol{0}\). Thus we are in case (i) of the alternative and the backward uniqueness result follows.
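Indeed, integrating (5.14) over \((t_{0},t)\) for any \(t<t_{1}\) and denoting by \(M<\infty\) the integral of the right hand side of (5.14) over \((t_{0},t_{1})\), we obtain

\[\log\|\boldsymbol{v}(t_{0})\|_{\mathbb{H}}-\log\|\boldsymbol{v}(t)\|_{\mathbb{H}}\leq M,\quad\text{that is,}\quad\|\boldsymbol{v}(t)\|_{\mathbb{H}}\geq e^{-M}\|\boldsymbol{v}(t_{0})\|_{\mathbb{H}}>0,\]

for all \(t\in(t_{0},t_{1})\); letting \(t\uparrow t_{1}\) and using the continuity of \(t\mapsto\|\boldsymbol{v}(t)\|_{\mathbb{H}}\) gives \(\|\boldsymbol{v}(t_{1})\|_{\mathbb{H}}>0\), which is the contradiction used above.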
**Corollary 5.2**.: _Under the assumption of Theorem 5.1, either \(\boldsymbol{u}\) vanishes identically or \(\boldsymbol{u}\) never vanishes._
For \(d=2,r\in[1,\infty)\) and \(d=3,r\in[3,5]\) (\(2\beta\mu\geqslant 1\) for \(d=r=3\)), the following result is a direct consequence of the backward uniqueness result which can be proved in a similar way as in Theorem 4.3.
**Theorem 5.3** (Approximate controllability).: _The space \(\{\boldsymbol{u}^{\boldsymbol{x}}(T):\boldsymbol{x}\in\mathbb{H}\}\) is dense in \(\mathbb{H}\), \(\mathbb{P}\)-a.s., where \(\boldsymbol{u}^{\boldsymbol{x}}(\cdot)\) is the unique solution to the system (5.1)._
**Acknowledgments:** M. T. Mohan would like to thank the Department of Science and Technology - Science and Engineering Research Board (DST-SERB), India for a Mathematical Research Impact Centric Support (MATRICS), File No.: MTR/2021/000066, grant. The author would also like to thank Prof. E. Zuazua, Friedrich-Alexander-Universität Erlangen-Nürnberg, for useful discussions.
|
2308.01751
|
ManiVault: A Flexible and Extensible Visual Analytics Framework for
High-Dimensional Data
|
Exploration and analysis of high-dimensional data are important tasks in many
fields that produce large and complex data, like the financial sector, systems
biology, or cultural heritage. Tailor-made visual analytics software is
developed for each specific application, limiting their applicability in other
fields. However, as diverse as these fields are, their characteristics and
requirements for data analysis are conceptually similar. Many applications
share abstract tasks and data types and are often constructed with similar
building blocks. Developing such applications, even when based mostly on
existing building blocks, requires significant engineering efforts. We
developed ManiVault, a flexible and extensible open-source visual analytics
framework for analyzing high-dimensional data. The primary objective of
ManiVault is to facilitate rapid prototyping of visual analytics workflows for
visualization software developers and practitioners alike. ManiVault is built
using a plugin-based architecture that offers easy extensibility. While our
architecture deliberately keeps plugins self-contained, to guarantee maximum
flexibility and re-usability, we have designed and implemented a messaging API
for tight integration and linking of modules to support common visual analytics
design patterns. We provide several visualization and analytics plugins, and
ManiVault's API makes the integration of new plugins easy for developers.
ManiVault facilitates the distribution of visualization and analysis pipelines
and results for practitioners through saving and reproducing complete
application states. As such, ManiVault can be used as a communication tool
among researchers to discuss workflows and results. A copy of this paper and
all supplemental material is available at https://osf.io/9k6jw and source code
at https://github.com/ManiVaultStudio.
|
Alexander Vieth, Thomas Kroes, Julian Thijssen, Baldur van Lew, Jeroen Eggermont, Soumyadeep Basu, Elmar Eisemann, Anna Vilanova, Thomas Höllt, Boudewijn Lelieveldt
|
2023-08-03T13:22:05Z
|
http://arxiv.org/abs/2308.01751v2
|
# ManiVault: A Flexible and Extensible Visual Analytics Framework for High-Dimensional Data
###### Abstract
Exploration and analysis of high-dimensional data are important tasks in many fields that produce large and complex data, like the financial sector, systems biology, or cultural heritage. Tailor-made visual analytics software is developed for each specific application, limiting their applicability in other fields. However, as diverse as these fields are, their characteristics and requirements for data analysis are conceptually similar. Many applications share abstract tasks and data types and are often constructed with similar building blocks. Developing such applications, even when based mostly on existing building blocks, requires significant engineering efforts. We developed ManiVault, a flexible and extensible open-source visual analytics framework for analyzing high-dimensional data. The primary objective of ManiVault is to facilitate rapid prototyping of visual analytics workflows for visualization software developers and practitioners alike. ManiVault is built using a plugin-based architecture that offers easy extensibility. While our architecture deliberately keeps plugins self-contained, to guarantee maximum flexibility and re-usability, we have designed and implemented a messaging API for tight integration and linking of modules to support common visual analytics design patterns. We provide several visualization and analytics plugins, and ManiVault's API makes the integration of new plugins easy for developers. ManiVault facilitates the distribution of visualization and analysis pipelines and results for practitioners through saving and reproducing complete application states. As such, ManiVault can be used as a communication tool among researchers to discuss workflows and results.
A copy of this paper and all supplemental material is available at osf.io/9k6jw, and source code at github.com/ManiVaultStudio.
High-dimensional data, Visual analytics, Visualization framework, Progressive analytics, Prototyping system.
## 1 Introduction
High-dimensional data has become important and ubiquitous in many applications. Yet, understanding this type of data remains challenging and poses many hurdles ranging from computational efficiency to interpretability. Combinations of automated analysis and interactive visualizations, visual analytics (VA) [12, 25], have proven to assist well in gaining insight for high-dimensional data. A variety of visual encodings and processing algorithms for high-dimensional data exist. At the same time, specialized application domains require specialized workflows for handling their data and often need to adapt established methods to their use case. Even though these domains encounter different domain-specific questions, they often deal with similar abstract data set types. Additionally, abstracting different domain-specific workflows regularly yields similar goals and user tasks [30, 8] which might be tackled with recurring visual encoding components like heatmaps or analytics methods such as dimensionality reduction. It is time-consuming and wastes development resources to reinvent the wheel by re-implementing, e.g., a linked selection mechanism for multiple coordinated views every time a domain-specific VA solution is needed [29, 34, 57, 69, 5]. We developed a visual analytics framework, ManiVault, as a flexible solution for VA software developers, application designers, and practitioners to implement algorithms and visual encodings, prototype workflow-specific tool sets, and perform their data exploration and analysis, respectively.

Fig. 1: **Example screenshot of ManiVault used for the exploration of a hyperspectral imaging data set.**
Existing VA systems for exploring general multivariate data do not meet all of these goals. Commercial products like Visplore [40, 41] or Spotfire [1, 2] come with wide feature ranges but are closed-source and not easily extensible. Older open-source frameworks like XmdvTool [66] and GGobi [58] are mostly limited to visual analysis and lack analytics functions. ParaView [3] and Inviwo [23] are capable of displaying multivariate data as well but focus on field data and the representation of spatial structures. Business intelligence solutions like Tableau [56, 59] mostly focus on dashboard creation and chart recommendations. Other fast dashboard prototyping tools, like Keshif [70], provide infrastructure like linked selections of various data visualizations but lack analytics capability. With ManiVault we propose a visual analytics framework for general high-dimensional data that is easily extendable and lets both developers and practitioners re-use algorithmic and visualization building blocks for prototyping and reusing visual analytics systems.
Growing data sizes, both in the number of items and dimensions, increasingly complicate interactive analysis. Progressive visual analytics [55] intends to overcome this issue by continuously providing intermediate results of the current data analysis step. The ability to control the analysis based on continuous feedback is crucial for progressive VA systems [4]. In ManiVault we implement a data-centric and modular framework that facilitates continuous data updates and algorithm steering out of the box. The ManiVault core application manages data sets and plugins, which provide both analysis and visualization functionality. This architecture allows for fast data changes, selection updates, and overall flexible data exploration. Additionally, since each plugin is agnostic of any other, the system is easy to extend with new data types, visualizations, and analysis algorithms. ManiVault is written in C++, using the Qt framework [60] for cross-platform GUI development. OpenGL is used for high-performance rendering (e.g., our scatterplot plugin) but viewer plugins based on lower-threshold JavaScript libraries like D3 [7] and Vega-Lite [50] are also possible. ManiVault is open source and can be found at github.com/ManiVaultStudio.
To summarize, in this paper we describe
* ManiVault, a modular and extensible visual analytics framework designed for high-dimensional data,
* several functionality extensions in the form of basic data-, viewer-, and analytics plugins, and
* three use cases ranging from plugin development to a practitioner's workflow.
## 2 Related Work
Visual analysis of high- and multidimensional data is broadly discussed in literature [17, 24, 68]. Here, we review the most relevant work on Visual Analytics (VA) systems for multidimensional data and visualization design environments with respect to our framework.
### Visual Analysis and Analytics Systems
VA systems for the exploration and analysis of high-dimensional data are well established both in academia and industry [14, 19]. Table 1 gives an overview comparison between ManiVault and visual analysis tools that we deem most similar. Most VA systems employ coordinated multiple views [47] with linked selections for data exploration, and we follow this approach with ManiVault as well. Chen et al. [10] discuss common practices and guidelines for the layout of multiple views.
Pioneering visual analysis frameworks for _multidimensional data_ include XmdvTool [66], Spotfire [1], GGobi [58] and the InfoVis toolkit [16]. These frameworks mostly focused on displaying data with a variety of visual idioms and enabled exploration with brushing tools and linked selections. XmdvTool was extended with several dimensionality reduction and clustering methods [13, 71, 72]. GGobi [58] integrates with the R language which enables users to apply analysis algorithms via scripting. Spotfire grew into a commercial, closed-source product with extensive analytics capabilities, while the others are open-source, albeit unmaintained. All of these tools predate _Progressive_ VA and are not optimized for the specific needs of continuous updates and steering of analytics processes. ManiVault is designed around the principles of progressive VA from the start using a data-centric architecture. Data-producing and -transforming plugins can continuously update the data managed by the core, while data consumers get automatically notified about these changes. Tableau [59], building on the Polaris system [56], might be the most prominent and representative universal VA system. Marketing itself as a business intelligence tool, Tableau focuses on flexible visualization of various data types and more general analytics functions can be added via Python or R scripts. Similarly, Visplore [40, 41] implements a suite of statistical analysis and visualization methods for tabular data and aims at providing quick visual feedback for visual interactions and data queries. Its commercial offspring [65] offers a more direct integration of scripting languages to supplement built-in analysis functions.
The open-source ParaView [3], like many other analysis frameworks for spatial field data, e.g., volume data, [6, 11, 48, 52] is based on the VTK library [51], and provides a wide range of visualization and analysis functions in an extensible framework. ParaView follows VTK's visualization pipeline and is designed around the flow of data through various transformations to their final visual presentation. Similarly, the commercial Amira Software [54, 61] offers a range of analysis functions for multidimensional volumetric data but it is not freely extensible. Many visual analysis systems traditionally target either geometric or abstract tabular data. However, in recent years, the analysis of spatial and non-spatial data has become increasingly integrated [53]. With ManiVault we create a system for general high-dimensional data that can be extended to handle arbitrary spatial or abstract data types. Our data-centric system design enables flexible exploration workflows instead of having practitioners concerned about data flow through each step of the visualization pipeline.
### Visualization Design Environments
_Visualization design environments_ or similarly visualization prototyping systems are tools for creating visualizations that provide a graphical user interface for specifying visual encodings of data and interaction dynamics. Many such systems exist, and here we provide an overview of the tools most similar to ManiVault.
Lyra [49] offers fine-grained design options for single plots through handles, drop-zones, and other interaction mechanisms for graphical setup of re-usable Vega or Vega-Lite [50] specifications. Lyra 2 [73] extends this framework by letting users define interactions like brushing and selection linkage between multiple plots. iVisDesigner [46]
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & ManiVault & XmdvTool [66] & GGobi [58] & Visplore [65] & Tableau [59] & ParaView [3] & Inviwo [23] \\ \cline{2-9} Focus on high-dim. data & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) & — & — \\ Focus on field data & — & — & — & — & — & \(\bullet\) & \(\bullet\) \\ Extensible & \(\bullet\) & \(\bullet^{\text{a}}\) & \(\bullet\) & — & — & \(\bullet\) & \(\bullet\) \\ Visual Analytics & \(\bullet\) & \(\bullet\) & \(\bullet^{\text{b}}\) & \(\bullet\) & \(\bullet\) & \(\bullet^{\text{c}}\) & — \\ Progressive Analytics & \(\bullet\) & — & \(\bullet^{\text{b}}\) & \(\bullet\) & — & — & \(\bullet^{\text{d}}\) & — \\ VA system authoring & \(\bullet\) & — & — & — & \(\bullet^{\text{d}}\) & \(\bullet^{\text{e}}\) & — \\ Active development & \(\bullet\) & — & — & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) \\ License & LGPL-3 & Public domain & EPL & Commercial & Commercial & BSD-3 & BSD-2 \\ \hline \hline \end{tabular}
* No dynamic extension loading
* When used with its API, e.g., in combination with R
* Via Time [27]
* The systems can be extended with Visual Analytics functionality by plugins or Python integration, but the focus is on interactive field visualization
* Focus on dashboards with pre-populated data
\end{table}
Table 1: **Comparison with other visual analysis tools that are most similar to ManiVault.**
follows similar principles but places emphasis on collections of data visualizations in a dashboard format. Keshif [70] focuses on a novice user audience by automatically aggregating data and selecting visual representations based on pre-defined mappings for various data types. In contrast to the above design environments for single or multiple visualizations, ManiVault is a design environment for complete visual analytics systems including automated analysis methods. While the above systems are focused on abstract data, Inviwo [23] presents a visualization prototyping system for spatial field data. Its design allows users to specify visualizations on various abstraction levels, from visual (connecting functional boxes) to conventional programming. Compared to Inviwo's data-flow model, ManiVault is data-centric and focused on providing several visualizations and analytics tool building blocks. ManiVault's core system coordinates views on the data and enables linked selections between views out-of-the-box.
From a plugin-in developer's perspective, ManiVault resembles the prefuse [20] and ComVis [35] toolkits. They provide development environments and software components for building dynamic visualizations. Both focus on non-spatial data and target graph and tabular data set types. Scripting-based solutions like Dash [43] for creating dashboard applications or Voila [28] for converting Jupyter notebooks into standalone web pages provide a GUI front-end to the wide offer of analysis libraries in the Python, R or Julia ecosystems. ManiVault is specifically laid out for progressive and high-dimensional data analysis. Our C++ implementation supports high-performance computations and interactions necessary for visual analytics.
## 3 Design Considerations
We designed ManiVault as a VA framework with multiple user groups in mind. While these groups can overlap, their requirements for the effective and convenient use of ManiVault are varied.
### _General Setting_
High-dimensional data has become ubiquitous in many domains and the analysis of such data plays a pivotal role in acquiring insights into complex systems. Analytics software in different domains targeted at such data generally utilizes comparable sets of analytical and visual tools, such as dimensionality reduction, clustering algorithms, scatterplots, or parallel coordinates plots. These generic tools are then combined with data-, user-, and domain-specific tools and customizations to create a specific application. The primary motivation for developing ManiVault is to facilitate rapid construction of visual analytics applications for high-dimensional data without the need to re-implement common functionality. Modularity is a key aspect for creating reusable tools, both on a code as well as a user-facing abstraction level. The second main motivation for ManiVault is a need for flexible exploratory analysis, but also subsequent sharing of results, as well as the means to recreate the corresponding workflows. We learned of the target user characteristics and design requirements during multiple collaborations with practitioners in various fields [21, 33, 44, 62] spanning several years.
### _Target Users_
We identified three target user groups, each with specific requirements:
1. **Developers** use ManiVault to implement new ideas and methods. These users, e.g., visualization researchers, interact with the system via code in order to create customized modules. Developers need the framework to provide a stable API that allows for the integration of their methods with little overhead. Further, they need existing modules to focus on their specific contribution; e.g., a developer of a dimensionality reduction method might want to visualize results in an existing scatterplot module without having to implement their own.
2. **Application designers** combine and adapt existing modules to create stand-alone applications for specific use-cases. Not all options of a view (e.g., the point size in a scatterplot) might be necessary for a specific workflow, and providing all options in the GUI can be distracting. In these scenarios, ManiVault needs to support flexible GUI customization. To minimize the burden, the framework should support such customization directly in the GUI without programming.
3. **Practitioners** and domain experts use the software to analyze their high-dimensional data. Practitioners need ManiVault to allow for a flexible data exploration process, to provide responsive user interfaces, and to offer domain-specific visualization and analysis modules. Once their analysis is finished, practitioners need the ability to easily share and reproduce the results and their workflow in ManiVault. Given a well-defined workflow, they also need easy access to specified presets of visualization and analysis layouts.
The boundaries between these user groups are fluid. E.g., a skilled practitioner might want to extend a pre-bundled application with a module or develop a module themselves.
### _System Requirements_
Based on the general usage setting and needs of our target users, we define the following high-level requirements for a visual analytics platform such as ManiVault. The framework must be:
1. **Extensible (R1)**: ManiVault has to provide an interface for adding new functionalities. It must be possible to create modules for new (a) data types, (b) visualizations, (c) analytics methods, (d) data transformations, and (e) loading/writing data.
2. **Flexible (R2)**: ManiVault must allow for workflows in multiple domains and specifically enable straightforward workflow adaption during use.
3. **Linkable (R3)**: ManiVault must provide modules with an API to easily link data selections and synchronize parameters, such that no dependencies between modules are created.
4. **Configurable (R4)**: ManiVault must provide options for GUI configuration during runtime through the user interface.
5. **Distributable (R5)**: ManiVault must be able to save its current state, including layout, data sets, and settings, and reproduce a saved state.
6. **Performance (R6)**: ManiVault must be performant when handling large data, stay responsive and provide interfaces to interact with processes during calculation to support progressive VA.
## 4 ManiVault Architecture
In order to ensure easy extensibility (**R1**), ManiVault is implemented as a modular system, see Fig. 2(a). The core application is a lightweight set of managers and any user-facing functionality is dynamically loaded from self-contained libraries, i.e., plugins, respectively discussed in Secs. 4.1 and 4.2 (**R6**). This compartmentalization into a core and extensions provides easier maintainability, better scalability, and faster development. Together with a data-centric system structure (Sec. 4.3), this enables flexible workflows (**R2**) with various analytics and visualization techniques. ManiVault features an intricate notification and parameter sharing system to allow for communication between plugins, see Sec. 4.4 (**R3**). GUI management objects, called actions (Sec. 4.5), implement a part of the communication system and the configuration and serialization system, see Secs. 4.6 and 4.7 (**R4**, **R5**).
### _Core Application_
ManiVault's core is modularized into a set of managers, actions, and utilities as shown in Fig. 2(a). ManiVault follows a data-centric architecture: a data manager stores and administers access to data sets. All data sets are organized hierarchically, such that derived data sets like clusterings, embeddings, or proper subsets are marked as children of their respective source data. This enables simple access to properties of the parent data set and propagation of selections from derived to source data sets. Analysis, transformation, visualization, and loading/writing functionality as well as the definition of data types themselves are separated into plugins. A plugin manager loads plugins into the core and makes them available to the user. Each plugin can _consume data_, i.e., process existing data in the core and/or _produce data_, i.e., store a new or alter an existing data set in the core. While each plugin is self-contained, communication between plugins is made possible
using two messaging systems (Sec. 4.4). An event manager in the core administers globally defined notifications, while actions are used for run-time configurable notifications (see Figs. 2(b) and 2(c)).
The general application layout is handled by a workspace manager which takes care of the arrangement of all GUI widgets provided by view plugins. The core contains two main system view plugins: a data hierarchy viewer and a data properties viewer. The former displays the internal hierarchical data structure, while the latter shows properties of the data (number of data points, dimensions, active selections) and gives users access to the settings of analytics plugins, as discussed in more detail in Sec. 5. ManiVault provides a number of actions, GUI management objects, and administers any user-defined linking between them, see Sec. 4.5. Further, a project manager is responsible for saving and loading the current state of the application, including loaded data sets, the GUI layout, opened plugins, and linked parameters. Global settings applicable to, e.g., all plugins or the general application layout are handled by a dedicated settings manager.
Additionally, ManiVault's core supplies a set of utilities like dedicated renderers, shaders, color maps, mathematical helper classes, such as vectors and matrices, as well as common algorithms like mean shift clustering. These tools can be used to create a more coherent visualization and analysis setup across plugins. E.g., developers can rely on the availability of a standard set of color map types in every view plugin, while maintaining the ability to introduce custom ones.
### _Plugin Types_
ManiVault works with six distinct plugin types that bundle various types of functionality. The system can be easily extended with new functionality by writing a new plugin that will automatically be loaded on start-up (**R1**). In combination with the data-centric core architecture, this enables a user to perform flexible workflow changes (**R2**).
**Data plugins** enable extending the types of data the system can handle. ManiVault provides a base data plugin class that developers can extend to define a custom data format. E.g., we provide an image data type that extends our basic point data type with image dimensions and thus a mapping of points to image coordinates. The system can generally be extended with arbitrary data formats.
**View plugins** provide a view on the data and allow interaction, such as selection of data elements. Views can be fully-fledged visualizations or simpler views such as lists. View plugins are primarily _data consumers_, i.e., they take a data set as input for visualization, but can also function as _data producers_, e.g., by providing means for annotating data. We provide example plugins with diverse backends, like OpenGL and D3.
**Analytics plugins** allow for the implementation of data analytics modules such as dimensionality reduction. As such, they are primarily _data producers_ but also follow the _data consumer_ API to receive the input data on which they perform calculations.
**Transformation plugins** resemble analytics plugins in code but are semantically different. They are also primarily _data producers_, but while analytics plugins derive new properties, e.g., an embedding, that can have an arbitrary shape, transformation plugins produce data of the same shape, i.e., with identical items and attributes. An example of such a transformation is a normalization of the original data.
**Loader/Writer plugins** respectively load specific types of data into the system (_data producer_) or write it back to file (_data consumer_).
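To make the plugin roles above more concrete, the following is a minimal, self-contained sketch of a view plugin acting as a data consumer. The class names (`ViewPlugin`, `DatasetView`, `SimpleListView`) and method signatures are illustrative stand-ins written for this sketch; they do not reproduce ManiVault's actual C++ API.

```cpp
// Minimal, self-contained sketch of the plugin idea (illustrative only;
// the real framework provides the base classes, Qt widgets, and the core API).
#include <algorithm>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Stand-in for the data set view handed out by the core (the raw data itself
// is never exposed directly to non-data plugins).
struct DatasetView {
    std::string name;
    std::vector<float> values;   // flattened point data
    size_t numDimensions = 0;
};

// Stand-in for the abstract view-plugin interface: a view plugin is a data
// consumer that renders a data set and reacts to data/selection changes.
class ViewPlugin {
public:
    virtual ~ViewPlugin() = default;
    virtual void setInput(std::shared_ptr<const DatasetView> data) = 0;
    virtual void onDataChanged() = 0;   // called when the core signals an update
};

// A concrete, self-contained view plugin: unaware of any other plugin and
// driven purely by the data set it consumes.
class SimpleListView : public ViewPlugin {
public:
    void setInput(std::shared_ptr<const DatasetView> data) override {
        m_data = std::move(data);
        onDataChanged();
    }
    void onDataChanged() override {
        if (!m_data) return;
        const size_t dims = std::max<size_t>(1, m_data->numDimensions);
        std::cout << "Showing " << m_data->values.size() / dims
                  << " points of '" << m_data->name << "'\n";
    }
private:
    std::shared_ptr<const DatasetView> m_data;
};

int main() {
    auto data = std::make_shared<DatasetView>(DatasetView{"embedding", {0.1f, 0.2f, 0.3f, 0.4f}, 2});
    SimpleListView view;
    view.setInput(data);   // in ManiVault, the core would hand the data set to the plugin
}
```

An analytics or transformation plugin would follow the same consumer interface for its input while additionally registering a new (derived) data set with the core.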
### _Data Handling_
The data handling in ManiVault follows a model-view pattern. Internally, the core's data manager keeps a list of raw data models, data set views, and selection views. A data plugin has to define both a raw data model and data set view -- the selection view is simply another instance of the same data set view on the raw data. The raw data model holds the physical data values of a set and is never exposed directly to non-data plugins. Therefore, for most intents and purposes, the data set views can be regarded as the actual data sets present in the system. They define access to the raw data for all non-data plugins by providing, e.g., views on or copies of it. Each raw data object is associated with exactly one selection object to ensure straightforward selection sharing across all plugins that access a data set. Selection and set views can be separately requested and adjusted. This model-view pattern allows for a simple API and to create and use subsets with minimal overhead.
New data sets can be marked as derived from existing ones, e.g., when a new data set is created by an analytics plugin. The derived data also functions as the user-facing entry point through which the analytics settings can be accessed. This operation will create new data set and raw data objects but no new selection view. Instead, selection views are shared between parent and derived data sets. This simplifies the propagation of selections between views, e.g., a derived embedding shown in a scatterplot and the original data in a parallel coordinates plot. To enable selection sharing between arbitrary data sets, ManiVault lets users group data sets in the hierarchy view. Selections of any data sets within a group and with the same number of data points are then automatically synchronized.
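The following toy sketch illustrates the shared-selection idea behind derived data sets; the struct and function names are illustrative only and do not mirror ManiVault's actual data model.

```cpp
// Sketch: a derived data set gets its own raw data and data set view, but
// re-uses the parent's selection object, so selecting in a view on one set
// is instantly visible in views on the other (illustrative names only).
#include <iostream>
#include <memory>
#include <set>
#include <string>

struct Selection { std::set<size_t> indices; };

struct DataSet {
    std::string name;
    std::shared_ptr<Selection> selection;   // shared between parent and derived sets
};

DataSet deriveFrom(const DataSet& parent, const std::string& name) {
    return DataSet{name, parent.selection};   // new set, same selection object
}

int main() {
    DataSet points{"points", std::make_shared<Selection>()};
    DataSet embedding = deriveFrom(points, "embedding");

    embedding.selection->indices = {3, 7, 42};   // select in the embedding view...
    std::cout << points.selection->indices.size() << " points selected\n";  // ...seen by the parent
}
```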
We implemented a set of base data plugins in ManiVault, including plugins for point data, multichannel images, clusters, color, and text data. The development of ManiVault so far primarily targeted the point data type, which can store various high-dimensional integer and floating point formats. Our image data plugin shows the versatility of ManiVault's data handling and the point data type. When loading an image, two data sets are created: a point data set whose raw data object stores the actual pixel values and a child image data set whose raw data object stores metadata like image size. The image data set view provides access to the parent's raw data. This configuration ensures compatibility with analytics, transformation, and view plugins that expect point data to process multichannel images.
The implemented data handling system is lightweight. Besides the basic ManiVault core (\(<90\) MB), the data manager and hierarchy require \(<8\) MB of memory (on Windows). Each loaded data set produces less than \(1.5\) MB overhead in addition to its binary size, stemming from the plugin instance and core integration. More details can be found in Supplemental Material S1.
Fig. 2: **ManiVault's system architecture. (a) The core manages data and events, provides GUI management (actions), etc. Green borders indicate plugins, a light-grey background the core. Data flow from the core to data consumer plugins and from data producer plugins to the core is indicated with arrows. (b) View A listens to notifyDatasetDataChanged emitted by View B. View B does not listen to the notifyDatasetSelectionChanged event triggered by View A, but any plugin could. (c) A view plugin publishes a DecimalAction, moving the action into a shared parameter space, and immediately subscribes to it. Now, an analytics plugin can connect to the shared action, enabling synchronization across plugins.**
### _Plugin Communication_
Coordinated Multiple Views (CMVs) [47] are the basis for virtually any visual analytics application. While the individual views in a CMV system naturally map to modules in a modular architecture, an essential part of CMV systems is the integration of those views. This enables techniques like brushing and linking [9], where selections on the data are propagated to all views in the system, or the synchronization of parameters, like the viewport in an Overview+Detail system [42]. Enabling such linking of views, without breaking the system's modularity **(R3)** is no trivial task. A plugin should be self-contained with respect to its functionality. Yet, at the same time, plugins need to be able to communicate, such that they can inform other plugins about data changes and that their parameters can be linked and synchronized throughout the application.
We have designed and implemented two interfaces to solve the issue of inter-plugin communication. First, an event-based communication API to cover common system-wide types of events related to data set changes (Sec. 4.4.1) and second a parameter-sharing API (Sec. 4.4.2) as part of our GUI building blocks (Sec. 4.5).
#### 4.4.1 Core Events
The ManiVault core API provides an event-based system for inter-plugin communication using the publish-subscribe pattern. Plugins send predefined events to the core, which distributes them, and all subscribers (typically plugins) can digest these events as depicted in Fig. 2(b). To efficiently support linking and brushing **(R3)**, we have implemented such events for any changes of data values like addition (notifyDatasetAdded), updates (notifyDatasetDataChanged), removal (notifyDatasetRemoved), changes to data selections (notifyDatasetSelectionChanged) and several other data related changes. A plugin can choose to listen to all events of a certain type or subscribe only to certain events concerning a specific data set.
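A minimal sketch of this publish-subscribe pattern is given below; the event names mirror those in the text, but the `EventManager` class and its `subscribe`/`notify` signatures are hypothetical simplifications rather than ManiVault's real core API.

```cpp
// Illustrative sketch of the core's publish-subscribe event distribution
// (the enum values mirror the events named in the text; the bus itself is a toy).
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

enum class DataEvent { DatasetAdded, DatasetDataChanged, DatasetSelectionChanged, DatasetRemoved };

class EventManager {
public:
    using Handler = std::function<void(const std::string& datasetId)>;

    // A plugin subscribes to one event type.
    void subscribe(DataEvent type, Handler handler) {
        m_subscribers[type].push_back(std::move(handler));
    }
    // A plugin notifies the core; the core forwards the event to all subscribers.
    void notify(DataEvent type, const std::string& datasetId) {
        for (auto& handler : m_subscribers[type])
            handler(datasetId);
    }
private:
    std::map<DataEvent, std::vector<Handler>> m_subscribers;
};

int main() {
    EventManager core;

    // A view plugin refreshes whenever the selection of "clusters" changes.
    core.subscribe(DataEvent::DatasetSelectionChanged, [](const std::string& id) {
        std::cout << "view: repaint selection of '" << id << "'\n";
    });

    // Another plugin (e.g., a clustering analysis) changes the selection and notifies the core.
    core.notify(DataEvent::DatasetSelectionChanged, "clusters");
}
```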
An example of a linked selection is shown in Fig. 3. The figure shows a screenshot with three views, a scatterplot and a density plot on the left, and the properties of a clustering analysis on the right. Clicking any cluster in the clusters list (Fig. 3(a)) will update the selection set attached to the data set and notify the core of these changes with the notifyDatasetSelectionChanged event. The core will then emit the dataSelectionChanged event with the changed data as an argument and subscribed plugins will receive a notification that triggers a refresh of the view with the updated selection (red points in Fig. 3(b)).
#### 4.4.2 Shared Parameters
We designed a complementary API to share parameters between modules **(R3)** using GUI actions (Sec. 4.5). With this system, a plugin parameter is exposed to other plugins by placing it in a public shared parameter pool, i.e., the parameter is _published_ (Fig. 2(c)). From there, other plugins can _subscribe_ to published parameters (provided that the parameter types match). Any change to a published parameter will be synchronized with all subscribed parameters. We provide common GUI elements with ManiVault that developers can integrate into their plugins, such that the user can publish a parameter or subscribe to any published parameter at run-time through the GUI **(R4)**.
Figure 3 presents an example in the form of the kernel bandwidth (sigma) parameter used in kernel density estimation (KDE), which is employed in density plot visualizations (Fig. 3(c)) but also in mean-shift clustering. We have implemented plugins for both that allow real-time changes of the sigma parameter, based on Lampe and Hauser's real-time KDE [31]. Linking this parameter between the density plot and the clustering module enables visually finding a suitable density estimation while the clustering is updated on-the-fly. To link the parameters, the user simply clicks on the underlined label in the GUI (Fig. 3(d)), e.g., in the density plot view, and chooses "publish". After defining a suitable name for the parameter, the user can then click on the corresponding label in the settings widget of the mean-shift clustering plugin (Fig. 3(e)) and click subscribe to be presented with a list of suitable parameters, including the one just defined. After subscribing, the connection is indicated by the italic font of the _Sigma_ label.
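To make the publish/subscribe mechanism for parameters more tangible, the sketch below synchronizes a single numeric value between a publisher and its subscribers. The ParameterPool class and its methods are hypothetical illustrations of the idea and are not ManiVault's actions API.

```
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Illustrative shared-parameter pool: published values are synchronized with
// every subscriber whenever they change.
class ParameterPool {
public:
    void publish(const std::string& name, double initial) { _values[name] = initial; }

    void subscribe(const std::string& name, std::function<void(double)> onChange) {
        onChange(_values[name]);                        // synchronize immediately on connect
        _subscribers[name].push_back(std::move(onChange));
    }

    void set(const std::string& name, double value) {   // e.g. called when a GUI slider moves
        _values[name] = value;
        for (auto& cb : _subscribers[name]) cb(value);
    }

private:
    std::map<std::string, double> _values;
    std::map<std::string, std::vector<std::function<void(double)>>> _subscribers;
};

int main() {
    ParameterPool pool;
    pool.publish("sigma", 0.15);  // the density plot publishes its KDE bandwidth
    pool.subscribe("sigma", [](double s) { std::cout << "mean-shift clustering uses sigma = " << s << "\n"; });
    pool.set("sigma", 0.25);      // the user drags the sigma slider in the density plot
}
```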
### _Actions_
To support sharing of parameters as described above, but also to make it easy to capture the state of a plugin, configure the GUI, and unify the look and feel between plugins, we have devised and implemented a number of building blocks we call _actions_ on top of the standard Qt GUI widgets. These include simple actions for decimal and integral values as well as strings, but also more complex elements such as colors, color maps, file-pickers, etc. In addition to those standard GUI elements, we implemented a number of custom actions targeting typical VA applications. These include a general-purpose selection action that supports different modalities (brushing, rectangle, lasso, etc.) and Boolean combinations (replace, add, remove), and a dimension picker action that provides a consistent way to select one or multiple dimensions of a data set, e.g., to limit the input to a dimensionality reduction plugin. Although we believe that we provide large coverage of commonly required tasks with the built-in actions, we also provide an API for plugin developers to create custom actions.
By using our actions API, sharing of parameters as described in Sec. 4.4.2 is automatically available through the GUI. In addition, actions can also be attached to data objects to expose their functionality to other plugins. A data producer plugin can, e.g., attach an action to trigger a calculation within the plugin. Other plugins can query these attached actions and provide the corresponding GUI elements within their scope. We showcase this in our Hierarchical Stochastic Neighbor Embedding (HSNE) [37] analytics plugin. The plugin creates a hierarchical embedding structure that can be refined interactively. We attach an action for triggering the refinement to the produced embedding data set. When viewing the embedding in a scatterplot, the scatterplot view plugin exposes the refine action and other attached actions through the context menu. The user can then trigger the refinement directly from the scatterplot visualization, even though the actual calculation is carried out by the HSNE plugin.
Besides serving as GUI building blocks, we have also implemented support for serialization in the action system. Each action can be serialized into a QVariant object, including its complete current state, consisting of whether it is active, visible, writable, and the parameter itself. All actions that belong to a plugin form a hierarchy that can again be serialized into a QVariant object and from there into a JSON
Fig. 3: **Parameter sharing by connecting two actions of the same type in the GUI. Both the Mean-Shift plugin and the Scatterplot plugin use a DecimalAction to steer their computation and view, respectively.**
object in memory or file on disk. As such, a plugin that has consistently been implemented with the actions API supports saving and loading of the state out-of-the-box. Currently, we use this to create presets of a plugin's configuration and to save the complete state of the application to a project file. In the future, we intend to extend this to a complete provenance mechanism.
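As a minimal sketch of this serialization path with standard Qt types, the snippet below captures the state of a single hypothetical decimal action in a QVariantMap and writes it to JSON. The field names and helper functions are assumptions made for illustration only and do not reflect ManiVault's internal format.

```
#include <QFile>
#include <QIODevice>
#include <QJsonDocument>
#include <QJsonObject>
#include <QString>
#include <QVariant>

// Capture the state of a (hypothetical) decimal action as a QVariantMap.
QVariantMap serializeDecimalAction(double value, bool enabled, bool visible) {
    QVariantMap state;
    state["type"]    = QStringLiteral("DecimalAction");
    state["value"]   = value;
    state["enabled"] = enabled;
    state["visible"] = visible;
    return state;
}

// QVariantMap -> QJsonObject -> indented JSON text on disk.
bool saveActionState(const QVariantMap& state, const QString& path) {
    const QJsonDocument doc(QJsonObject::fromVariantMap(state));
    QFile file(path);
    if (!file.open(QIODevice::WriteOnly))
        return false;
    file.write(doc.toJson(QJsonDocument::Indented));
    return true;
}
```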
An example of a simple decimal action is the implementation of the Sigma parameter discussed above and shown in Fig. 3(d). The GUI for this parameter consists of the label, a spinbox, and a slider. Rather than manually creating the GUI elements, the desired elements can be specified when creating the action. An example of a customization that we integrated in the decimal action is to show a spinbox or slider individually or both, as in this example. The action then creates the GUI elements on-the-fly and also makes sure they are synchronized by creating them as linked views on the parameter itself. The underlined label indicates that the parameter is publishable and/or ready to subscribe, while the italic font indicates that it is already linked. Clicking the label opens a GUI interface for setting up parameter linking.
### _Projects and Workspaces_
To save the entire state of the application and fully restore it at a later point in time, ManiVault uses projects (**R5**). Projects extend the serialization of actions, described in Sec. 4.5, to the core framework, capturing settings and the layout of the CMV system. In addition, a project contains a complete snapshot of the data hierarchy. We implemented projects as self-contained, compressed archives that are a combination of human-readable JSON files and binary files. Two JSON files are used to save the entire state of the application: a workspace.json contains the CMV layout and the actions' state, and a project.json saves the data hierarchy and additional project metadata. The actual data sets are saved as raw binary blobs, with unique identifiers referenced in project.json, to minimize load and save times. As such, a project is completely self-contained and can be easily distributed to share findings or simply used to come back to an analysis at a later point in time.
We split the description of the project into project.json and workspace.json to add an additional feature, i.e., the definition of user-defined workspaces. As described above, the workspace contains the complete spatial arrangement of views (layout configuration) and their complete state. A workspace is used to set up a complete tailor-made CMV VA application, including customized GUI elements, but without preset data, as a project would. To enable easy tailoring of layouts and cross-plugin connections directly in the application, even without programming, we designed the _Studio Mode_ for ManiVault.
### _Studio Mode_
For the configuration of actions, workspaces, and complete projects, ManiVault can be put into _Studio Mode_. This mode of operation allows application designers to create complete tailor-made applications and data viewers from within the GUI of ManiVault itself.
A plugin editor, shown in Fig. 4, enables fine-grained control over the user interface. It lists an overview of all actions that are currently available for opened plugins (Fig. 4(a)). Therein, each action can be enabled or disabled as a whole, but also customized with respect to its visibility or whether it can be published, connected, or disconnected. Additionally, the editor lets a user configure general options like the name of a plugin instance, shown in its title bar, or whether the GUI of the plugin may be moved or closed (Fig. 4(b)).
The plugin editor is an essential tool for application designers, to create a completely customized user experience for a specific application. At the same time, it provides the possibility for advanced users of the system to create presets of views. Besides saving a complete project, users can adjust the interface of an individual plugin to their needs and save the resulting configuration as a template for future instances of that plugin. Using the serialization described above, these templates can be saved to disk, providing persistent access across sessions.
For a user-definable flexible layout of the application, we incorporate the Qt advanced docking system [22] into ManiVault. The system allows users and application designers to re-arrange the entire layout according to their needs and preferences.
## 5 ManiVault Implementation
The ManiVault core is implemented in C++ and the Qt [60] cross-platform application development framework. ManiVault provides a plugin API for data types, view, analytics, transformation, and writer/loader modules. For each of these types we provide template implementations to lower the entry barrier for developers. In addition, we have already implemented a number of plugins for various use cases, including some of the core functionality of ManiVault such as the basic data types, and the data hierarchy and data properties view plugins.
The **data hierarchy view** (Fig. 5(a)) functions as the central access point to any data loaded or created in ManiVault. It displays the data hierarchy in a searchable tree widget where derived data, such as a clustering, are added as children to the original data. A data set can be loaded into a viewer plugin by simply dragging it from the hierarchy onto the view (Fig. 5(c)). Alternatively, the user can also interact with each data set through a context menu providing access to all compatible data consumer plugins. For a fast setup of plugins that expect more than a single input, users can select multiple data sets in the hierarchy and open them through the same menu. The info panel shows additional information like an analytics progress bar, status messages from plugins, or data group affiliation. If a data set is associated with an analytics plugin, selecting the hierarchy entry will open the analytics settings in the properties view.
The **data properties view** (Fig. 5(b)) provides information for a data set selected in the data hierarchy. For a loaded data set this can be additional metadata created by the loader, e.g., the extents of an image data set. More importantly, the data properties view also functions as the user interface for analytics and transformation plugins. These
Fig. 4: Example of the plugin **GUI configuration editor** which allows application designers to edit the properties of the plugin actions hierarchy from within the application.
Fig. 5: **Data hierarchy** (a) and **data properties** view (b) in ManiVault. Data sets can easily be shown in views via drag and drop (c).
plugins are instantiated through the context menu of a data set, which then functions as their input; their output data sets are then created as children of the input. Selecting an output data set provides access to the parameters of the analytics or transformation plugin. Fig. 5(b) shows the data properties view of an embedding data set, created with our t-SNE plugin. From here, the user can at any time interact with the t-SNE algorithm, e.g., to pause the calculation, change parameters, or compute more iterations.
The data hierarchy and data properties views are integral parts of the system. More specific functionality is implemented in a number of further plugins. Dimensionality reduction, integral to high-dimensional data analysis, is provided by Principal Component Analysis (**PCA**), t-distributed Stochastic Neighbor Embedding (**t-SNE**) [63], and Hierarchical Stochastic Neighbor Embedding (**HSNE**) [37] plugins. The t-SNE and HSNE plugins wrap the high-performance HDI library [36] and as such scale to millions of data points using its GPU-based implementations [38]. For clustering, we provide an interactive **mean-shift clustering** plugin, based on real-time kernel density estimation [31].
For visualization, we provide a number of plugins for common plots, including a **scatterplot** (Fig. 6(a)), **parallel coordinates plot** (Fig. 6(b)), and **cluster heatmap** (Fig. 6(c)). If performance is not a major concern, developers can use web views in combination with Qt's webchannel API for communicating between the C++ back-end and the web-technology-based front-end. This allows for easily integrating the vast number of available visualizations from libraries like D3 [7] and Vega-Lite [50]. Our heatmap and parallel coordinates plot are based on this technology. While the webchannel introduces some overhead, such plugins are generally limited by the performance of the JavaScript rendering libraries. If the scalability of a visualization is of high priority, developers can implement custom high-performance views, e.g., using OpenGL. We have done so with our scatterplot and image view (Sec. 5.1) plugins. The scatterplot enables visualization and interaction with millions of points in real-time. In the default point rendering mode, the different visual channels (point size, color, opacity, etc.) are fully configurable, either using fixed values or based on any fitting data available. Additionally, we implemented a density representation to provide more visual scalability.
Finally, for data loading and writing, we currently provide support for basic formats in the form of a comma-separated value (**CSV**) loader/writer and a **binary** loader/writer.
### High-Dimensional Imaging
Besides traditional abstract high-dimensional data analytics, we target a number of applications related to high-dimensional imaging (e.g., the workflow presented in Sec. 6.2). As such, we developed a number of plugins targeting such image data.
Central to these efforts is the **image data type** plugin. The image data type extends the point data type by the extent of the image. Consequently, the image data type is compatible with all data consumer plugins that take point data as input; e.g., this allows calculating a t-SNE embedding using the pixels of a high-dimensional image as input.
We implemented a sophisticated **image view** plugin (Fig. 6(d)). Inspired by widely used image editors, we opted for a layer-based approach. Users can simply drag multiple data sets into the view, where they are added as layers. From here, users can define the transparency, as well as the position of each layer, e.g., to stack multiple properties of a single data set as semi-transparent layers or arrange complementing data sets next to each other. These interactions are possible through standard navigation tools for zooming and panning, while selection is implemented using the selection action described in Sec. 4.5. The actual visualization of the image is fully configurable: one or two attributes can be displayed by using 1D and 2D color mapping, and three attributes by directly mapping them to the three channels of the _RGB_, _HSL_, or _CIELAB_ color spaces.
Next to the image viewer, we also provide a **spectral view** plugin (Fig. 6(e)), specifically for hyperspectral images. The viewer is based on a simple D3 line plot and shows spectra of individual pixels or, in the case of groups (e.g., selections or clusters), a mean spectrum and a variation as a band around it.
To load image data into ManiVault, we currently provide two options. The first one is a versatile general **image loader** plugin. Hyperspectral image data is commonly available as a stack of grayscale images, where each image represents a specific wavelength, also interpreted as a dimension of a high-dimensional space. Our image loader detects such stacks in a folder containing common image formats (including .png, .jpg, .tiff), and also allows direct loading of other common image formats (grayscale, RGB, ARGB). Dimensions can be interactively included or excluded from the data set in the loading menu. We also support re-sampling of the data before loading and the creation of image pyramids to enable analysis at varying levels of detail, depending on the features of interest or the time available for the analysis. Specific to hyperspectral images, we also provide an **ENVI loader** plugin compatible with L3Harris' geospatial analysis software ENVI [67].
## 6 Application Examples
ManiVault has already been used for several projects across four universities and several partners. Popa et al. [44] and Li et al. [33] describe the design of complete VA systems for analysis of cultural heritage and biological data, respectively. Vieth et al. [64] and Thijssen et al. [62] developed VA approaches for dimensionality reduction and explaining projections as ManiVault plugins. Here, we walk through exemplary usage scenarios for our framework from the perspective of our three target user groups (Sec. 3.2): software developers (Sec. 6.1), practitioners (Sec. 6.2) and application designers (Sec. 6.3).
### Writing ManiVault Plugins - Developer Perspective
ManiVault provides developers of VA modules with a comprehensive API for data set access, the event notification system, and the other core managers (Sec. 4.1). Extending the functionality of ManiVault through new plugins thus comes with minimal overhead. Example code for each plugin type is available at github.com/ManiVaultStudio/ExamplePlugins.
Here, we present two examples of the necessary steps for creating basic plugins (**R1**). First, we create an analytics plugin based on the high-performance t-SNE library HDI [36]. In addition, we discuss the implementation of a parallel coordinates plot (PCP) plugin using an existing D3 implementation. Together with the existing image viewer and scatterplot, these plugins combine into a complete GUI-based application, shown in Fig. 7, that is usable by domain expert users without programming knowledge.
Figure 6: A selection of **viewer plugins** in ManiVault.
To implement the analytics plugin, we follow the steps laid out in Fig. 8. In step 1, we create the output data set by deriving a new data set from the input data, for which the plugin is opened in ManiVault. In this case, we will create a two-dimensional t-SNE embedding containing x- and y-coordinates for all of the points in the input data set. As such, the output data set will be a points data set that has the same number of points and two dimensions. Next, we add a settings action to the created data set and define GUI elements using ManiVault's action system. The actions are added to the output data and listed in the data properties view as shown in Fig. 7(a) (step 2). We create TriggerActions, which add pushbuttons to the GUI to start, pause, and resume the calculations, and a number of categorical OptionActions and numerical DecimalActions, e.g., to expose t-SNE parameters like the distance metric (OptionAction) or perplexity (DecimalAction) (**R4**). Finally, in step 3, calls and reactions to library functions need to be defined. Here, we notify the core and thereby other plugins about updated output data; in particular, as the t-SNE optimization iteratively progresses, we notify the core after every iteration, such that the viewer plugins can show the progress live. The result is a lightweight wrapper with no notable performance overhead. Comparing the performance to running the HDI library using its own Python wrapper showed no performance regression (Supplemental Material S1), even when including progressive updates in ManiVault.
To implement the PCP viewer plugin, we need to set up a view widget that shows the PCP chart in addition to settings, like with the analytics plugin. Here, the settings are displayed in the same window as the view widget (Fig. 7(b)). Since we build a JavaScript-driven plot, we derive this widget from ManiVault::WebWidget and introduce all HTML and JavaScript resources that are used for the PCP through a Qt resource file, pcp.qrc (step 1, Fig. 9). Step 2 is to simply set the existing pcp.html file in the existing viewWidget. All JavaScript resources are automatically included through the HTML file. At this point, the viewer is only able to show the content of the provided HTML page. To establish interactions to and from the C++ side, we set up a ManiVault::WebCommunicationObject, which uses a QWebChannel. Within this communication object, we define the signals used to send data from C++ to JavaScript and the slots that receive, e.g., selection updates back from JavaScript (steps 3 and 5 in Fig. 9).
```
void AnalyticsPlugin::init() {
    // 1. Derive output from input data set
    setOutputDataset(_core->createDerivedDataset("outData"));
    // 2. Add settings actions to output data set
    outDataset->addAction(_settings->getSettings());
    // 3. Connect GUI interactions (e.g. button press)
    //    and library callbacks (e.g. progress or finish)
    connect(_settings->getStart(), press, this, runTask);
    connect(_lib, finishedTask, this, updateCore);
}

// [ViewWidget.cpp]
ViewWidget::ViewWidget() : WebWidget() {
    // 1. Init resources and communication bridge
    Q_INIT_RESOURCE(pcp);
    init(_comObj);
}

// [ViewPlugin.cpp]
void ViewPlugin::init() {
    // 2. Init web widget (set HTML contents)
    viewWidget->setPage(":res/pcp.html", "qrc:/res/");
    layout->addWidget(viewWidget);
}

// [CommunicationObject.h]
class ComObj : public WebCommunicationObject {
    // 3. Init signals for communication from cpp to js
signals:
    void setData(QVariantList& data);
    // 5. Init slots for communication from js to cpp
public slots:
    void updateSelection(QVariantList& selection);
};

// [qwebchannel.tools.js]
// 4. Register signals sent by the view widget
bridge.setData.connect(function() { initPlot(arguments[0]) })
```
### Data Exploration - Practitioner Perspective
Practitioners in various disciplines work with high-dimensional data sets. Here, we consider the exemplary case of exploring remote sensing data using ManiVault. Similar to other application areas, visual exploration of geospatial data is considered important but challenging [18]. While specific considerations and final insights will differ from domain to domain, we can follow the task abstraction by Lam et al. [30] to create a partial workflow that will be representative of many fields (**R2**).
We want to explore a hyperspectral image data set, the HYDICE image of the National Mall [32], with 307 by 1280 pixels, each attached to 191 spectral bands covering the 0.4 \(\mu\)m to 2.4 \(\mu\)m region of the light spectrum reflected by the objects in view. Each band can be interpreted as an image channel. A major objective when exploring hyperspectral images is the identification of surface cover classes. It is typical to manually define class labels for a small subset of pixels that afterwards are used in semi-supervised automated classification for the rest of the data. Connecting any derived features from the spectrum back to the spatial image layout is essential during these analysis steps. More specifically, our goals are now to (I) explore the data, connected to the task of _discovering and describing observations_, and to (II) explain these observations by _identifying main causes_. These steps will yield well-justified classes that can be used in downstream analysis.
First, in ManiVault, we load the HYDICE data set using an _image loader plugin_. To inspect the loaded image, we can open it in an _image viewer plugin_, which provides single-channel and false-coloring visualizations based on any three channels. We additionally open a _spectral view plugin_ which shows the full spectrum of a single pixel or the averaged spectrum of a selection that we define in the image viewer, resulting in the setup of Fig. 10(a). Then, to easily discover a hierarchical class structure, we use the _HSNE analytics plugin_ to create a hierarchical embedding of the data employing angular distance: we open
Fig. 8: Bare bone **analytics plugin setup** for wrapping a C++ library. Notifying of output data change (step 4) can be called progressively during the calculation of, or on finishing, a task.
Fig. 7: The **Spidr analysis and parallel coordinates plot** as implemented with the plugin setups from Figs. 8 and 9.
Fig. 9: Bare bone **view plugin setup** for wrapping a JavaScript library. Some boilerplate code is left out for brevity; the complete implementation is available alongside other example plugins online.
the analysis through a context menu of the data set entry in the _data hierarchy_, select the cosine distance metric, start the embedding and display it in a _scatterplot_ as seen in Fig. 10(b). Next, we manually outline three clusters that are apparent in the top-level HSNE embedding as shown in Fig. 1 (center top). To inspect their spectra, we drag and drop the new cluster data set from the data hierarchy into the spectral viewer, Fig. 1 (right). Additionally, we might inspect the cluster sizes in the _data properties_. Clicking on a specific cluster displayed in the data properties will select corresponding data points in the embedding and highlight corresponding pixels in the image (Fig. 10(c)). Thus, we can quickly relate the cluster spectra to image positions and define the main pixel classes water, vegetation, and buildings. We want to focus on a single cluster -- the one corresponding to buildings. Therefore, we refine the cluster of interest to a lower HSNE hierarchy level through a context menu opened by clicking inside the embedding -- the HSNE plugin added an action to the data set that is displayed there as well as in the data properties window. To establish a visual connection between the spatial data layout and embedding, we drag the new embedding data set to the image viewer, which automatically infers the proper image dimensions for the data subset from its parent in the data hierarchy and converts it into an additional image layer. Further, we can link the colormaps of this image layer and the embedding through the parameter-sharing system by publishing one and connecting the other to it (**R3**). Zooming into a spatial area of interest, Fig. 1 (left), we can discriminate between several building structures like houses and streets, and even create sub-classes of roofs that immediately stand out thanks to the embedding-based recoloring.
The above procedure intertwined the accomplishment of goals (I) and (II). ManiVault made it easy to connect various views on the data, i.e., a spatial layout, high-dimensional pixel attributes, and derived features in the form of embedding positions. We quickly discriminated between classes in the data and identified differing spectral characteristics as their cause. A video that walks through the full procedure can be found as supplemental material.
### Sharing Analysis Setups - Designer Perspective
ManiVault's workspace and project features can be used to save and continue an analysis session, but also enable the dissemination of results and complete workflows. To showcase this, we re-implemented the Cytosplore Viewer application [15], dedicated to sharing the results of Bakken et al. [5], in ManiVault, shown in Fig. 11. Instead of having to write an entire stand-alone application to share an interactive environment and the data in which related insights can be explored, we can use a ManiVault project to bundle both views and data (**R5**).
The viewer application depicts RNA sequencing data on brain cells from three vertebrate species. The viewer aims to highlight differences in the expression of genes and cell types that are shared across the species as described in the original paper. The main elements of the viewer application are three scatterplots showing t-SNE embeddings of the gene data of each species, a hierarchical cluster viewer showing cell types, and a table view showing statistical properties of the expression data. To create the viewer, we configure ManiVault's GUI from within the GUI (**R4**). We start with loading all data sets and setting up a single scatterplot plugin. We link scatterplot parameters like its colormap to a global settings panel that lets users configure all three scatterplots, like in the original application. Its settings can be saved as a preset which we use for the other two scatterplot instances. Similarly, we populate the cluster hierarchy view and table viewer with data. Figure 11 shows a configuration in which a user-selected entry in the table view defines the data attributes (here a gene's expression) used to recolor the scatterplot data points (here tissue samples).
ManiVault's Studio Mode allows us to lock this setup of views and parameter connections. This is achieved by simply publishing the current view layout, loaded data, and parameter linkage through the "File" menu tab. We can now share the viewer with other parties.
## 7 Discussion and Conclusion
This paper describes the design considerations for and implementation of ManiVault, an extensible visual analytics framework for high-dimensional data. Due to its modular architecture and data-centric design, the software enables flexible exploration and analysis workflows. We presented various plugins that provide visualization and analytics functionalities to the system. To build upon these, we showed how existing libraries can be easily incorporated into the system. ManiVault's action and event systems allow users to adjust plugins and their interplay, enabling the creation of fully customized applications.
Currently, the system provides data plugins that cover a wide range of applications. New data types like multivariate graph data [26] can be introduced into the system as new data plugins without changes to the application's core. We plan to extend the current serialization mechanism, used for saving the state of the system, to handle information about interaction history and other kinds of provenance [45]. Finally, we would like to include analytics plugins that run code in interpreted languages like Python or R, to easily integrate the vast amount of data science tools available in those languages.
We believe that ManiVault has great potential in aiding with the creation and use of visual analytics applications for visualization developers, practitioners, and application designers.
Figure 11: Screenshot of a re-implementation of the **Cytosplore Viewer** for comparative cellular analysis of the motor cortex in human, marmoset, and mouse [5]. The viewer shows embeddings of cells from the three species in combination with a shared cluster hierarchy and the option to calculate differential gene expression. See Suppl. S2 for a larger figure version.
Figure 10: **A typical exploration workflow with ManiVault: A user can open and re-arrange views on the fly, derive new data sets using analytics plugins and connect parameters between plugins. Linked colormaps of the scatterplot and image viewer are shown in Fig. 1.**
## Supplemental Materials
All supplemental materials are available on OSF at [https://osf.io/9k6jw/](https://osf.io/9k6jw/), released under a CC BY SA 4.0 license. In particular, they include (1) benchmark results, S1, and a larger version of Fig. 11, S2, (2) Excel files containing the data presented in S1, (3) Python scripts to run the nptsne benchmark from S1, (4) two videos showcasing ManiVault and (5) a full version of this paper.
## Acknowledgments
Author contributions: Alexander Vieth: Writing and Plugin Development; Thomas Kroes: Lead Developer (Core and Plugins) & Architect; Julian Thijssen: Developer & Architect of initial core, Plugin Developer; Baldur van Lew: Build Infrastructure; Jeroen Eggermont and Soumyadeep Basu: Viewer & Plugin Development; Elmar Eisemann, Anna Vilanova, Thomas Höllt, and Boudewijn Lelieveldt: project conception, manuscript writing, general supervision.
This work received financial support from the NWO TTW project 3DOMCTS (NWO: 17126), the NWO Gravitation project BRAINSCAPES: A Roadmap from Neurogenetics to Neurobiology (NWO: 024.004.012), and the NIH Brain Initiative Cell Atlas Network (UM1MH130981).
|
2308.10884
|
Double black hole mergers in nuclear star clusters: eccentricities,
spins, masses, and the growth of massive seeds
|
We investigate the formation of intermediate mass black holes (IMBHs) through
hierarchical mergers of stellar origin black holes (BHs), as well as BH mergers
formed dynamically in nuclear star clusters. Using a semi-analytical approach
which incorporates probabilistic mass-function-dependent double BH (DBH)
pairing, binary-single encounters, and a mass-ratio-dependent prescription for
energy dissipation in hardening binaries, we find that IMBHs with masses of
$O(10^2)$-$O(10^4)\rm M_\odot$ can be formed solely through hierarchical
mergers in timescales of a few $100$ Myrs to a few Gyrs. Clusters with escape
velocities $\gtrsim400$ km s$^{-1}$ inevitably form high-mass IMBHs. The spin
distribution of IMBHs with masses $\gtrsim 10^3$ M$_\odot$ is strongly
clustered at $\chi\sim 0.15$; while for lower masses, it peaks at $\chi\sim
0.7$. Eccentric mergers are more frequent for equal-mass binaries containing
first-and/or second-generation BHs. Metal-rich, young, dense clusters can
produce up to $20\%$ of their DBH mergers with eccentricity $\geq0.1$ at
$10\,\rm Hz$, and $\sim2$-$9\%$ of all in-cluster mergers can form at $>10$ Hz.
Nuclear star clusters are therefore promising environments for the formation of
highly-eccentric DBH mergers, detectable with current gravitational-wave
detectors. Clusters of extreme mass ($\sim10^8$ M$_\odot$) and density
($\sim10^8$ M$_\odot$pc$^{-3}$) can have about half of all of their DBH mergers
with primary masses $\geq100$ M$_\odot$. The fraction of in-cluster mergers
increases rapidly with increasing cluster escape velocity, being nearly unity
for $v_{\rm esc}\gtrsim 200$ km s$^{-1}$. Cosmological merger rate of DBHs from
nuclear clusters varies $\lessapprox0.01-1$ Gpc$^{-3}$yr$^{-1}$.
|
Debatri Chattopadhyay, Jakob Stegmann, Fabio Antonini, Jordan Barber, Isobel M. Romero-Shaw
|
2023-08-21T17:33:39Z
|
http://arxiv.org/abs/2308.10884v2
|
Double black hole mergers in nuclear star clusters: eccentricities, spins, masses, and the growth of massive seeds
###### Abstract
We investigate the formation of intermediate mass black holes (IMBHs) through hierarchical mergers of stellar origin black holes (BHs), as well as BH mergers formed dynamically in nuclear star clusters. Using a semi-analytical approach which incorporates probabilistic mass-function-dependent double BH (DBH) pairing, binary-single encounters, and a mass-ratio-dependent prescription for energy dissipation in hardening binaries, we find that IMBHs with masses of \(\mathcal{O}(10^{2})-\mathcal{O}(10^{4})\,\mathrm{M}_{\odot}\) can be formed solely through hierarchical mergers in timescales of a few 100 Myrs to a few Gyrs. Clusters with escape velocities \(\gtrsim 400\,\mathrm{km}\,\mathrm{s}^{-1}\) inevitably form high-mass IMBHs. The spin distribution of IMBHs with masses \(\gtrsim 10^{3}\,\mathrm{M}_{\odot}\) is strongly clustered at \(\chi\sim 0.15\); while for lower masses, it peaks at \(\chi\sim 0.7\). Eccentric mergers are more frequent for equal-mass binaries containing first- and/or second-generation BHs. Metal-rich, young, dense clusters can produce up to 20% of their DBH mergers with eccentricity \(\geq 0.1\) at \(10\,\mathrm{Hz}\), and \(\sim 2-9\%\) of all in-cluster mergers can form at \(>10\,\mathrm{Hz}\). Nuclear star clusters are therefore promising environments for the formation of highly-eccentric DBH mergers, detectable with current gravitational-wave detectors. Clusters of extreme mass (\(\sim 10^{8}\,\mathrm{M}_{\odot}\)) and density (\(\sim 10^{8}\,\mathrm{M}_{\odot}\mathrm{pc}^{-3}\)) can have about half of all of their DBH mergers with primary masses \(\geq 100\,\mathrm{M}_{\odot}\). The fraction of in-cluster mergers increases rapidly with increasing cluster escape velocity, being nearly unity for \(v_{\mathrm{esc}}\gtrsim 200\,\mathrm{km}\,\mathrm{s}^{-1}\). Cosmological merger rate of DBHs from nuclear clusters varies \(\lesssim 0.01-1\,\mathrm{Gpc}^{-3}\mathrm{yr}^{-1}\), where the large error bars come from uncertainties in the cluster initial conditions, number density distribution, and redshift evolution of nucleated galaxies.
keywords: keyword1 - keyword2 - keyword3
## 1 Introduction
Since the first direct observation of gravitational waves through the merger of two black holes (BHs) (Abbott et al., 2016), the LIGO-Virgo-KAGRA (LVK) gravitational-wave detector network has recorded about 85 double BH (DBH) coalescence candidates (Abbott et al., 2021, 2021). This ever-growing data-set of DBH systems reveals that the observed BH mass spectrum ranges from \(>5\,\mathrm{M}_{\odot}\) to over \(\approx 140\,\mathrm{M}_{\odot}\) (Abbott et al., 2021), with a smooth transition from the stellar-mass BH range (tens of \(\mathrm{M}_{\odot}\)) to the intermediate-mass BH (IMBH) range (\(10^{2}\)-\(10^{5}\,\mathrm{M}_{\odot}\)). Indeed, the gravitational-wave event GW190521, produced by the merger of a \(\sim 85\,\mathrm{M}_{\odot}\) BH with a \(\sim 66\,\mathrm{M}_{\odot}\) BH (Abbott et al., 2020), has been cited as the first observation of the formation of an IMBH.
The LVK observations of DBH mergers are conjectured to originate via two main formation channels: isolated binary mergers in regions of relatively low stellar density, such as galactic fields (Belczynski et al., 2007; Belczynski et al., 2016; Stevenson et al., 2017; Giacobbo & Mapelli, 2018), and dynamically-driven DBH mergers in dense stellar systems such as star clusters (Banerjee et al., 2010; Rodriguez et al., 2016; Askar et al., 2017; Di Carlo et al., 2019; Chattopadhyay et al., 2022). The observed BH mass, spin, and eccentricity distributions are expected to be affected by the host environment. For instance, due to tidal effects, isolated binaries are thought to have component spins that are aligned or nearly-aligned with the orbital angular momentum of the binary (Stevenson et al., 2017). Isolated binaries may also be more likely to have low spin amplitudes (e.g., Bavera et al., 2020). On the other hand, dynamical environments are predicted to produce a population of DBHs with an isotropic spin-tilt distribution due to frequent spin-tilt-randomising interactions with other bodies (Rodriguez et al., 2016). Additionally, merger products that are retained in the dynamical environment and themselves undergo mergers should have higher dimensionless spin amplitudes \(\sim 0.7\) due to the conservation of angular momentum (Doctor et al., 2021). Dynamical encounters are also expected to lead to higher eccentricities in about \(1-10\%\) of the mergers (Wen, 2003; Antonini et al., 2014; Samsing et al., 2018; Gondan et al., 2018; Rodriguez et al., 2018), unlike efficiently-circularized mergers in isolated evolution (Peters, 1964). Finally, isolated stars formed at sub-solar metallicities are not expected to evolve into BHs \(\gtrsim 40\,\mathrm{M}_{\odot}\) due
to (pulsational) pair instability supernovae ((P)PISN), which result in massive stars \(\gtrsim 120\) M\({}_{\odot}\) leaving no remnants (or lower-mass remnants in the case of PPISN, which leads to a peak in the BH mass distribution at \(\lesssim 40\) M\({}_{\odot}\), as studied by Belczynski et al. 2016b; Woosley 2017; Spera & Mapelli 2017). All of the LVK observing runs have yielded detections of BHs above \(40\) M\({}_{\odot}\), within the region often referred to as the (P)PISN mass gap. Recently, evidence has been found for spin-induced precession in merging DBHs (Abbott et al. 2021d,e; Hannam et al. 2022), as well as debated evidence of eccentric coalescences in the LVK data (e.g. Gayathri et al. 2020; Romero-Shaw et al. 2022b), pointing towards the importance of investigating the dynamical origin of DBH mergers.
Over the past few years, in parallel to the gravitational-wave observations, there have also been radio observations by the Event Horizon Telescope (EHT) of the supermassive (\(\gtrsim\)10\({}^{6}\) M\({}_{\odot}\)) BH (SMBH) at the centre of M87 (Akiyama et al. 2019), as well as the SMBH Sagittarius A* (Akiyama et al. 2022) at the centre of our own Milky Way. These detections reaffirm the long-standing question of the "missing link": IMBHs, which can connect the stellar-mass BHs observed in X-ray binaries (Tetarenko et al. 2016; Generozov et al. 2018; Chakraborty et al. 2020; Charles et al. 2022) and gravitational-wave events to the SMBHs at the centres of galaxies. Gravitational-wave detections of BHs within the (P)PISN mass gap suggest that the LVK-observed population contains BHs produced via a formation channel that circumvents the mass restrictions of isolated evolution. As such, an investigation into the location and formation of IMBHs can naturally start from the study of hierarchical mergers of stellar-origin BHs in star clusters (Rizzuto et al. 2021).
For a DBH merger product to subsequently keep merging with other BHs or other BH merger products, the remnant must be subjected to small recoil kicks (less than the cluster escape velocity) in order to be retained in the cluster (Mapelli 2016; Rodriguez et al. 2019). Within the star cluster population, young open clusters of mass \(<10^{4}\) M\({}_{\odot}\) and ordinary globular clusters (up to 10\({}^{6}\) M\({}_{\odot}\)) have low escape velocities of \(\mathcal{O}(1-2)\) km s\({}^{-1}\). Very massive globular clusters and nuclear star clusters, with a mass range of \(>10^{6}-10^{9}\) M\({}_{\odot}\) and a density range of \(10^{5}-10^{8}\) M\({}_{\odot}\) pc\({}^{-3}\), however, have a higher likelihood of retaining subsequent generations of DBH merger remnants, leading to significant mass growth (Antonini & Rasio 2016; Antonini et al. 2019; Fragione & Silk 2020).
The modelling of such massive clusters with direct \(N\)-body or even Monte-Carlo simulations is extremely computationally expensive. Although there have been some recent efforts to develop more efficient codes and to utilise GPUs for this purpose (e.g. Wang et al. 2020; Kamlah et al. 2022), most simulations require significant computational time and supercomputing facilities to produce a statistically significant dataset. No simulations are yet sufficiently efficient to handle \(\gtrsim 10^{7}\) bodies with detailed stellar evolution. A semi-analytical approach to the problem can solve most of these issues, providing a flexible, user-friendly alternative that is much faster and can assist in understanding the internal dynamics of large-\(N\) systems. Building on the works of Henon (1972) and Breen & Heggie (2013), which show the macroscopic cluster properties to be insensitive to the details of their microscopic structure, we use the updated semi-analytical code cBHbd (Antonini & Gieles 2020b; Antonini et al. 2023) to model massive clusters and study their DBH mergers.
In this work, we particularly focus on nuclear star clusters. These are found at the centre of most sufficiently well-resolved low- and intermediate-mass galaxies (Boker et al. 2004; Cote et al. 2006), including the Milky Way (Schodel et al. 2007). They are the densest and most massive star clusters observed in the local universe, and are often found to host a SMBH at their centre (e.g., Georgiev et al. 2016; Neumayer et al. 2020). That nuclear star clusters might be the precursors of massive BHs in galactic nuclei, and that there might be a link between the two types of central objects, has been suggested before (e.g., Neumayer & Walcher 2012; Stone et al. 2017; Atallah et al. 2023). Here, we consider whether a massive seed might be produced at the centre of a cluster through hierarchical mergers of BHs. The key questions that we address are:
1. Can we create IMBHs and SMBHs through hierarchical mergers in nuclear and massive globular clusters?
2. How do the host cluster properties affect its hierarchical mergers and hence the IMBH masses?
3. What are the mass, spin, and eccentricity signatures of the mergers in such massive clusters, and are they detectable by present and future gravitational wave detectors?
We discuss our methods and models in Sec. 2. Our results are described in Sec. 3. Sec. 4 describes our rate calculation, and, finally, Sec. 5 sums up.
Throughout this work \(G\) and \(c\) refer to the gravitational constant and the speed of light, respectively.
## 2 Methods
In our study we use the semi-analytical fast code cBHbd developed by Antonini & Gieles (2020b), with updated prescriptions for BH binary sampling and three-body encounters (Antonini et al. 2023). We have also adapted the mass sampling of binaries and triples, as is discussed further in this section.
Within cBHbd, we utilise Henon's principle (Henon 1972) of steady state or balanced evolution, after the initial evolution of the star cluster. During this state of equilibrium, the energy per unit relaxation time created at the cluster core (by BH binaries, for a BH-rich cluster) is a constant fraction of the net energy of the cluster. This links the host cluster's properties to its core binary (in our case, BH) population (Breen & Heggie 2013).
We sample the initial BH mass distribution by evolving the zero-age-main-sequence (ZAMS) stars following a Kroupa (2001) initial mass function (with the maximum ZAMS mass being 150 M\({}_{\odot}\)), using the single stellar evolution (SSE) prescriptions given by Hurley et al. (2000)1 with metallicity-dependent wind mass loss updates of Vink et al. (2001) and (P)PISN mass gap prescriptions from Spera & Mapelli (2017). The post-stellar evolution BH mass distribution is accounted for by computing the ejection of BHs due to natal kicks (Hobbs et al. 2005).
Footnote 1: It is to be noted that we do not consider the effect of primordial binaries in the BH mass function, and the initial BH mass function is solely produced through updated SSE. While binary stellar evolution (i.e. BSE, Hurley et al. 2002) may produce a slightly different initial BH mass spectrum, we do not consider its effect since most primordial binaries are expected to be disrupted by the core-collapse timescale. The effect of primordial binaries on these massive clusters remains a future course of study.
In cBHbd, after cluster core collapse (which occurs on the order of the initial half-mass relaxation time of the cluster, scaled by NBODY models; Antonini et al. 2019; Antonini & Gieles 2020b), balanced evolution is assumed. It is also assumed that there is only one DBH present at any given time in the cluster, producing the required energy at the cluster-core. This assumption is in agreement with theoretical expectations for the massive clusters we consider (Heggie & Hut 1993). The DBH is evolved through single BH encounters, which
might result in the merger of the binary and/or its ejection and/or ejection of the single BH.
BHs are paired following the power-law probability distribution as described in Antonini et al. (2023), with \(\mathrm{p}(m_{1})\propto m_{1}^{\beta_{1}}\) and \(\mathrm{p}(q)\propto q^{\beta_{2}}\), where \(q=m_{2}/m_{1}\) (with \(m_{1}>m_{2}\), such that \(q\leq 1\)) and \(\beta_{1}=8+2\alpha\), \(\beta_{2}=3.5+\alpha\). Here, \(\alpha\) is the power-law index of the BH initial mass function. In reality, \(\alpha\) should be a function of time as the BH mass function evolves through ejections and mergers. For simplicity and computational convenience, however, we fix it to its initial value2. Each binary is then encountered by a third body of mass \(m_{3}\), drawn again from a power-law probability distribution \(\mathrm{p}(m_{3})\propto m_{3}^{\beta_{3}}\), with \(\beta_{3}=0.5+\alpha\). The exponent factor \(\alpha\) is obtained through fits to the initial BH mass distribution after natal kicks (described in detail in Antonini et al. 2023). Since the initial BH mass spectrum is a strong function of metallicity, we extrapolate a polynomial fit for \(\alpha\) in metallicity (Z) as
Footnote 2: We have made a few tests where \(\alpha\) was updated at each timestep and the results were similar to those with a fixed \(\alpha\)
\[\alpha=c_{8}Z^{8}+c_{7}Z^{7}+c_{6}Z^{6}+c_{5}Z^{5}+c_{4}Z^{4}+c_{3}Z^{3}+c_{2}Z^{2}+c_{1}Z+c_{0},\]
where \(c_{8}=8.5317\times 10^{16}\), \(c_{7}=7.1772\times 10^{15}\), \(c_{6}=2.3818\times 10^{14}\), \(c_{5}=3.9582\times 10^{12}\), \(c_{4}=3.4364\times 10^{10}\), \(c_{3}=1.4564\times 10^{8}\), \(c_{2}=2.2885\times 10^{5}\), \(c_{1}=5.4322\times 10^{1}\), and \(c_{0}=0.1954\).
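As an illustration of this pairing procedure, the sketch below draws a primary mass and a mass ratio by inverse-transform sampling of the power laws quoted above. The mass range, the value of \(\alpha\), and the function names are example assumptions made for this sketch and are not taken from the cBHbd implementation.

```
#include <cmath>
#include <iostream>
#include <random>

// Draw x from p(x) proportional to x^beta on [xmin, xmax] (beta != -1),
// using the inverse of the cumulative distribution function.
double samplePowerLaw(double beta, double xmin, double xmax, std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    const double a = std::pow(xmin, beta + 1.0);
    const double b = std::pow(xmax, beta + 1.0);
    return std::pow(a + uni(rng) * (b - a), 1.0 / (beta + 1.0));
}

int main() {
    std::mt19937 rng(42);
    const double alpha = 0.5;                // example BH mass-function index
    const double beta1 = 8.0 + 2.0 * alpha;  // exponent for p(m1)
    const double beta2 = 3.5 + alpha;        // exponent for p(q)
    const double mMin = 3.0, mMax = 40.0;    // example BH mass range in Msun

    const double m1 = samplePowerLaw(beta1, mMin, mMax, rng);
    const double q  = samplePowerLaw(beta2, mMin / m1, 1.0, rng); // keeps m2 >= mMin
    std::cout << "m1 = " << m1 << " Msun, m2 = " << q * m1 << " Msun\n";
}
```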
The semi-major axis of the binary, \(a\sim G\mu/\sigma^{2}\), is assumed to be initially in the hard-soft limit, where \(\mu=m_{1}m_{2}/(m_{1}+m_{2})\) is the reduced mass, and the eccentricity \(e\) for each of the 20 resonant binary-single interactions is sampled from the thermal distribution (Samsing 2018).3 If \(\sqrt{1-e^{2}}<(2G(m_{1}+m_{2})/ac^{2})^{15/4}\), there is a gravitational-wave capture merger.
Footnote 3: The average number of intermediate states for binary-single encounters is determined to be \(\approx 20\) by Samsing (2018), although individually, it is dependent on the target binary initial separation and initial energy state of the single, which is ignored in our case.
For a hard DBH, the amount of energy lost (\(\Delta\mathrm{E}\)) from the binary due to an encounter with a single BH is usually assumed to be 20% of the initial binding energy (E) of the binary (Heggie & Hut 2003; Binney & Tremaine 2008). This is true for equal-mass systems, i.e. \(m_{1}\)=\(m_{2}\)=\(m_{3}\), averaged over all values of the impact parameter. While this assumption is valid for most cases, inaccuracies may arise when the perturber \(m_{3}\) is several orders of magnitude smaller than the binary components. As such, both simulations (Hills & Fullerton 1980) and analytical calculations (Quinlan 1996) have shown that \(\Delta\mathrm{E}/\mathrm{E}\propto m_{3}/m_{1}\). Thus, for some of our models, we propose the functional form
\[\Delta\mathrm{E}/\mathrm{E}=0.4\frac{q_{3}}{(1-q_{3})}, \tag{1}\]
where \(q_{3}=m_{3}/(m_{1}+m_{2}+m_{3})\). Eq. (1) is normalized such that \(\Delta\mathrm{E}/\mathrm{E}\) reaches 0.2 when \(q_{3}=1/3\), so as to match the limiting condition for equal masses (Heggie & Hut 2003). When \(m_{1}\)=\(m_{2}\) and \(m_{1}>>m_{3}\), \(\Delta\mathrm{E}/\mathrm{E}\approx 0.2m_{3}/m_{1}\), which is smaller than predicted by Hills & Fullerton (1980), since that work only considered encounters with _zero_ impact parameter. After each binary-single interaction, the semi-major axis \(a\) of the binary becomes \(a\epsilon\) (where \(0.83\leq\epsilon\leq 1\), the boundaries corresponding to the extreme cases of 20% and 0% binding-energy loss), as the single gains \((1/\epsilon-1)\) of the binding energy of the binary (Antonini & Gieles 2020b).
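A minimal numerical sketch of this hardening step (illustrative only, not the cBHbd source) evaluates Eq. (1) and the corresponding shrink factor of the semi-major axis:

```
#include <iostream>

// Eq. (1): Delta E / E = 0.4 * q3 / (1 - q3), with q3 = m3 / (m1 + m2 + m3).
double deltaEOverE(double m1, double m2, double m3) {
    const double q3 = m3 / (m1 + m2 + m3);
    return 0.4 * q3 / (1.0 - q3);
}

int main() {
    const double m1 = 30.0, m2 = 30.0, m3 = 10.0;  // example masses in Msun
    const double dE = deltaEOverE(m1, m2, m3);
    // The binding energy scales as 1/a, so after the encounter a -> a * eps
    // with eps = 1 / (1 + Delta E / E); Delta E / E = 0.2 recovers eps ~ 0.83.
    const double eps = 1.0 / (1.0 + dE);
    std::cout << "Delta E/E = " << dE << ", semi-major axis shrink factor = " << eps << "\n";
}
```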
Through a binary-single resonant encounter, the DBH can merge (following Peters 1964) and/or get ejected, or the merger remnant can be retained in the cluster, further merging with other BHs. The low-mass single perturber BH may also get ejected (Eqs. 3 and 4 of Antonini et al. 2023). The calculation then progresses to the next BH binary. The computational efficiency of the code is achieved by evolving the bulk properties of the cluster (mass, half-mass density, relaxation time, etc.) independently, taking the mass loss through stellar evolution and BH ejections into account (described in detail in Antonini & Gieles 2020b and Antonini & Gieles 2020a).
cBHbd does not account for binary-binary or higher-order chaotic encounters and accounts for only one DBH binary at a given time in the cluster. The presence of many-body encounters and several concurrent DBHs has been demonstrated in detailed N-body codes like NBODY6 (Banerjee 2018; Chattopadhyay et al. 2022) and MOCCA (Kamlah et al. 2022; Hong et al. 2020). However, these simulations refer to clusters with low masses, typically \(\lesssim 10^{5}M_{\odot}\). For the very massive clusters we consider here, higher-order interactions are expected to be strongly suppressed (Pina and Gieles in-prep.).
### Models
The set of 34 models (each run 100 times with different random seeds to account for statistical fluctuations) used for this project is tabulated in Table 1 and is described as follows.
#### 2.1.1 Fiducial
The base model Fiducial has an initial mass and half-mass density of \(2\times 10^{7}\,\mathrm{M}_{\odot}\) and \(10^{7}\,\mathrm{M}_{\odot}\,\mathrm{pc}^{-3}\), respectively. The only form of mass loss in this model (apart from BH ejections due to merger or binary-single recoils) is assumed to be due to stellar evolution (Hurley et al. 2000; Antonini & Gieles 2020b), i.e. mass loss due to stellar winds and supernovae (Lucy & Solomon 1970), and natal kicks (Lipunov et al. 1997). These parameters of the Fiducial model are chosen such that, after a Hubble Time, the cluster mass and density roughly align with those of the Milky Way nuclear star cluster (Schodel et al. 2009, 2020). The Fiducial model metallicity is Z\(=1.5\times 10^{-4}\), the "rapid" supernova prescription is used for core-mass to BH-mass mapping (Fryer et al. 2012), and the BH natal kick is drawn from a Maxwellian distribution with \(\sigma_{\mathrm{Maxw}}=265\,\mathrm{km}\,\mathrm{s}^{-1}\) (Hobbs et al. 2005), scaled by the fallback mass fraction (Janka 2013). The initial BH spin is assumed to be zero. The BH binaries and binary-singles are paired according to the metallicity-dependent \(\alpha\) prescription described in Sec. 2. The amount of energy lost per binary-single encounter is assumed to be a function of the masses of the third-body perturber and the binary total mass, such that \(\Delta\mathrm{E}/\mathrm{E}=f(\mathrm{q}_{3})\), as given by equation (1).
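For concreteness, the following sketch draws a fallback-scaled natal kick in the spirit of the prescription above; the fallback fraction, the linear \((1-f_{\rm fb})\) scaling, and the function names are assumptions made for this illustration rather than the exact cBHbd implementation.

```
#include <cmath>
#include <iostream>
#include <random>

// Kick speed from a Maxwellian with 1D dispersion sigma (km/s), reduced by
// the fallback fraction f_fb (an assumed linear scaling for this sketch).
double sampleNatalKick(double sigma, double fallbackFraction, std::mt19937& rng) {
    std::normal_distribution<double> gauss(0.0, sigma);
    const double vx = gauss(rng), vy = gauss(rng), vz = gauss(rng);
    const double speed = std::sqrt(vx * vx + vy * vy + vz * vz);  // Maxwellian magnitude
    return (1.0 - fallbackFraction) * speed;
}

int main() {
    std::mt19937 rng(1);
    const double kick = sampleNatalKick(265.0, 0.9, rng);  // sigma = 265 km/s; f_fb = 0.9 is an example
    std::cout << "natal kick = " << kick << " km/s\n";
    // A newly formed BH is retained only if this kick is below the cluster escape velocity.
}
```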
#### 2.1.2 Other model variations
All other models have one or two specifications changed from the Fiducial model. Model variations are shown in Table 1. Different cluster initial masses and densities are explored with model serial numbers 2-11. The naming of each model reflects these changes. For example, model M8D8 has an initial mass and density of \(10^{8}\,\mathrm{M}_{\odot}\) and \(10^{8}\,\mathrm{M}_{\odot}\,\mathrm{pc}^{-3}\), respectively, while M6D5 has an initial mass of \(10^{6}\,\mathrm{M}_{\odot}\) and an initial density of \(10^{5}\,\mathrm{M}_{\odot}\,\mathrm{pc}^{-3}\). Models Z_10 and Z_100 have metallicities 10\(\times\) and 100\(\times\) that of Fiducial. Model SN_D uses the "delayed" supernova prescription instead of the Fiducial model's "rapid" prescription (Fryer et al. 2012).4 In the "Sd" models with serial numbers 15-18, we included in the cluster model a BH that is not produced through stellar evolution and with a mass which is
traditionally considered to be above the mass limit imposed by pulsational pair instabilities. We do note that the lower edge of the (P)PISN mass gap is rather uncertain and can be pushed to even higher masses if stellar rotation is included; nevertheless, BHs of \(\gtrsim 100\,\mathrm{M}_{\odot}\) are usually assumed not to be produced directly through stellar evolution from the Kroupa (2001) mass function (Marchant & Moriya, 2020). It is still possible that massive stars evolving in binaries or triples undergo mergers (before compact-object formation) and then promptly collapse to very massive BHs (e.g. Stegmann et al., 2022; Arca Sedda et al., 2023). These seeds can also be considered as primordial BHs; however, we note that there is tremendous uncertainty in the existence of primordial BHs, their expected mass range, and the actual process by which they would seed star clusters or early galaxies (Dolgov & Postnov, 2017; Yuan et al., 2023). Model MI_ev has added cluster mass loss due to cluster evaporation, in addition to the standard mass loss via stellar evolution and BH ejections (see Fig. 1 and Section II.B in Antonini & Gieles 2020). Mass loss due to stellar evolution is neglected in models MI_0 and MI_0MJD5.
We explore the assumption that all BHs are born with zero natal kicks in the "Vk_0" models (serial numbers 22\(-\)25). We deviate from the BH binary and triple mass selection assumption in model "Ord_BH", where the two most massive BHs are selected to be in a binary, with the third most massive one acting as the third-body perturber. The assumption of non-spinning initial BHs is varied in the "Sp" models (serial numbers 26\(-\)29), where the initial BH spin takes different values. In one model we sample the initial BH spins from the distribution inferred from the GW data, i.e. Fig. 15 of Abbott et al. (2021b). The model group "DE" (serial numbers 31\(-\)34) changes the assumption of the mass-ratio-dependent functional form of \(\Delta\mathrm{E}/\mathrm{E}\), and replaces it with a constant value of \(\Delta\mathrm{E}/\mathrm{E}=0.2\) (Heggie & Hut, 2003; Binney & Tremaine, 2008). The maximum integration time of the models is taken to be 13.5 Gyr, approximately a Hubble Time (e.g., Valcin et al., 2021).
## 3 Results
Keeping the three primary questions of Sec. 1 in mind, we discuss the results obtained from our models, some of which are summarised in Table 2. We divide this Section into three parts, depending on the location and mass of the DBH mergers that we discuss: (i) In-situ (in-cluster) mergers, (ii) Mass and spin evolution of IMBHs, and (iii) Ex-situ (ejected) mergers.
Table 1. Initial parameters of all cluster models, listing for each model: serial number (Sl.), model name, initial mass \(M_{\rm cl,i}\) (M\({}_{\odot}\)), initial half-mass density \(\rho_{\rm h,i}\) (M\({}_{\odot}\) pc\({}^{-3}\)), metallicity Z, supernova prescription, BH seed, mass-loss treatment, natal kick \(\sigma_{\rm Maxw}\) (km s\({}^{-1}\)), initial BH spin \(\chi_{1,2}\), BH pairing scheme, and energy loss per encounter \(\Delta\)E/E.
### 3.1 In-situ mergers
In this Section, we discuss DBH mergers that occur inside the cluster. High cluster escape velocity plays a key role in retaining these DBHs, protecting them from ejection due to natal or recoil kicks. In fact, we shall see that almost all the DBH mergers formed in our models are in-cluster mergers. We split the discussion of the properties of the in-situ mergers in our cluster models by their masses, spins and eccentricities.
#### 3.1.1 Mass
The evolution of the DBHs that merge in our cluster models Fiducial, DE and Ord_BH is represented in Fig. 1. For models Fiducial and DE, in the first few hundred million years there are a couple of BH primaries of \(\sim 100-300\) M\({}_{\odot}\) that compete until one emerges as the most massive IMBH and continues to grow in mass, following a near-logarithmic growth curve at later times. This near-logarithmic growth is due to the secular expansion of the cluster, which causes the energy generation rate of the binary to decrease, and hence a significant drop in the DBH merger rate.
Since there is only one binary in the cluster at any one time and the most massive BH tends to pair with the second-most massive BH, the formation of additional higher-generation BHs is suppressed as long as the merger remnant is retained in the cluster. Hence, in the model Ord_BH, there can only be one \(>100\) M\({}_{\odot}\) BH at a time. A new \(>100\) M\({}_{\odot}\) BH, which eventually becomes the most massive IMBH of the model, is only produced after a \(\sim 400\) M\({}_{\odot}\) BH is ejected at \(\sim 350\) Myr.
It is possible to identify the primaries in the first, second, third, and fourth DBH merger generations from the strata seen in Fig. 1. At the fourth generation, the most massive IMBH starts to completely dominate the in-situ mergers. Recent studies have found multiple structures in the inferred BH mass spectrum (Tiwari and Fairhurst, 2021). There appear to be multiple peaks in the primary BH mass at \(\approx\)10, 20, 35 and 64 M\({}_{\odot}\) (Tiwari, 2023). Their cause remains unknown, owing to the difficulty of disentangling systematics, selection effects, the unknown branching fractions between different environments of BH mergers, and uncertainties in massive binary evolution as well as in the cosmological distribution and evolution of the initial parameters of star clusters; nevertheless, different BH generations of in-cluster mergers in massive clusters could produce such features. We compare Fig. 5 of Tiwari and Fairhurst (2021) to our Fig. 2, illustrating that a combination of clusters with different escape velocities and metallicities (as well as isolated BH merger pathways, especially for the lower-mass end of the spectrum) can potentially help to explain some of the features in the mass distribution.
The merging DBH mass spectrum from massive clusters in which IMBHs form is remarkably different from that of young, open clusters. In open clusters, lower escape velocities ensure the ejection of the massive remnants formed from a DBH merger, while lower-mass first-generation BHs can still participate in dynamical mergers inside the cluster. This means that massive clusters such as nuclear and (the most massive) globular clusters are a much more probable formation environment for hierarchical mergers, and therefore for IMBH growth, than open clusters.
As Fig. 2 demonstrates, the low-mass cluster M6D7 has more low-mass BH primaries participating in DBH mergers than the Fiducial model. For low escape velocity clusters (especially in metal-rich environments), less massive DBH mergers take precedence; thus, small and high-metallicity clusters are more probable environments for low-mass (\(\lesssim 15\) M\({}_{\odot}\)) DBH mergers (Abbott et al., 2021). Due to their high mass and density, clusters with a high escape velocity such as Fiducial easily form a massive IMBH that dominates the BH merger demographics, suppressing lower-mass mergers.
We emphasise that both BH primaries and secondaries can be merger remnants of previous generations that were retained in the cluster. In Fig. 3, we show the primary and secondary masses (\(m_{1,2}\)) of all mergers in the Fiducial, DE and M6D5 models. The presence of high-generation mergers in both \(m_{1}\) and \(m_{2}\) is clear from the build-up of BH masses above the maximum BH mass set within SSE (\(\sim 40\) M\({}_{\odot}\); Hurley et al., 2000); this build-up is more prevalent in the Fiducial and DE models than in the M6D5 model. The dominance of the IMBH in the in-cluster mergers is also illustrated by the plateau in secondary masses as the primary masses continue to rise.
Figure 1: The BH masses of the in-cluster DBH mergers in models Fiducial (in magenta and light pink), DE (in navy and light blue) and Ord_BH (in dark green and light green) across cluster evolution time (\(t_{\rm mer}\)). The dots of darker colours indicate the primary masses (\(m_{1}\)), while the secondary BHs (\(m_{2}\)) are depicted in the corresponding lighter shades of crosses.
Figure 2: The primary BH mass spectrum (\(<120\) M\({}_{\odot}\)) of the DBH mergers, showing structures for different merger generations through the probability density function (PDF). The cluster escape velocity (M6D7 having a lower escape velocity than Fiducial) and the initial BH mass distribution (via cluster metallicity; Z_100 has a metallicity \(\mathcal{O}(2)\) greater than Fiducial) determine the location of the apparent peaks in the distribution. The grey dashed lines are plotted in accordance with Fig. 5 of Tiwari and Fairhurst (2021).
#### 3.1.2 Spin
The initial spin distribution affects the post-merger remnant spins (\(\chi_{\rm rem}\)) to a small extent. While \(\chi_{\rm rem}\) depends on the pre-merger component spins \(\chi_{1,2}\), it is also a function of the mass ratio \(q=m_{2}/m_{1}\) of the merger. If the symmetric mass ratio is \(\eta=q/(1+q)^{2}\), the two component BH spin vectors are \(\vec{\chi}_{1,2}\), and the unit vector along the orbital angular momentum is \(\hat{j}\), then the remnant spin (details in Lousto et al., 2010 and Lousto & Zlochower, 2009) is given by
\[\chi_{\rm rem}=\min\!\left(1,\;\left|\tilde{\vec{\chi}}+\frac{q}{(1+q)^{2}}\,l\,\hat{j}\right|\right), \tag{2}\]

where

\[l=2\sqrt{3}+t_{2}\eta+t_{3}\eta^{2}+s_{4}\frac{(1+q)^{4}}{(1+q^{2})^{2}}\,\big|\tilde{\vec{\chi}}\big|^{2}+\frac{(s_{5}\eta+t_{0}+2)(1+q)^{2}}{(1+q^{2})}\,\chi_{\rm p}, \tag{3}\]

and

\[\tilde{\vec{\chi}}=\frac{q^{2}\vec{\chi}_{2}+\vec{\chi}_{1}}{(1+q)^{2}}. \tag{4}\]

The component of \(\tilde{\vec{\chi}}\) parallel to the orbital angular momentum is \(\chi_{\rm p}=\tilde{\vec{\chi}}\cdot\hat{j}\), and the numerical constants are \(t_{0}=-2.8904\), \(t_{2}=-3.51712\), \(t_{3}=2.5763\), \(s_{4}=-0.1229\) and \(s_{5}=0.4537\).
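A minimal numerical sketch of Equations (2)-(4) is given below. It assumes the spins are supplied as dimensionless vectors, takes \(\hat{j}\) along the \(z\)-axis, and caps the remnant spin magnitude at unity as in Equation (2); the function name is illustrative.

```python
import numpy as np

T0, T2, T3, S4, S5 = -2.8904, -3.51712, 2.5763, -0.1229, 0.4537

def remnant_spin(q, chi1, chi2, jhat=(0.0, 0.0, 1.0)):
    """Remnant spin magnitude from Eqs. (2)-(4); q = m2/m1 <= 1,
    chi1/chi2 are the dimensionless component spin vectors."""
    chi1, chi2, jhat = (np.asarray(x, float) for x in (chi1, chi2, jhat))
    eta = q / (1.0 + q) ** 2                            # symmetric mass ratio
    chi_t = (q ** 2 * chi2 + chi1) / (1.0 + q) ** 2     # Eq. (4)
    chi_p = float(np.dot(chi_t, jhat))                  # component along j-hat
    l = (2.0 * np.sqrt(3.0) + T2 * eta + T3 * eta ** 2
         + S4 * (1.0 + q) ** 4 / (1.0 + q ** 2) ** 2 * np.dot(chi_t, chi_t)
         + (S5 * eta + T0 + 2.0) * (1.0 + q) ** 2 / (1.0 + q ** 2) * chi_p)  # Eq. (3)
    return min(1.0, float(np.linalg.norm(chi_t + eta * l * jhat)))           # Eq. (2)

# An equal-mass, non-spinning merger leaves a remnant with spin ~0.69.
print(remnant_spin(1.0, [0, 0, 0], [0, 0, 0]))
```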
Because of the conservation of angular momentum during a merger, all models produce merger remnants with spins clustered around \(\chi_{\rm rem}\sim 0.7\). For example, we can compare the Fiducial and Sp_LVK models, where the former has initially non-spinning BHs and the latter has an initial BH spin distribution consistent with that observed in gravitational-wave events. After just the first generation of stellar-mass mergers (\(\lesssim 100\) M\({}_{\odot}\)), the remnant spin is already clustered around 0.7. Consequently, higher-generational mergers end up with similar \(\chi_{1,2}\) in both models (Fig. 4). The Fiducial model only shows a few lower \(\chi_{\rm rem}\) measurements compared to the Sp_LVK model, even though the initial distribution of \(\chi_{1,2}\) is very different between the two models.
Fig. 5 shows \(\chi_{\rm rem}\) as a function of \(m_{1}\) and \(q\). As \(m_{1}\) increases and \(q\) decreases, the spin of the IMBH goes down. This feature is present in all our models; e.g., Sp_LVK, with its spinning first-generation BHs, shows hardly any difference in the final IMBH spin compared to Fiducial. The reason the IMBH spin decreases is that the IMBH grows by merging with smaller BHs coming from random directions. After many mergers, the orbital angular momentum direction \(\hat{j}\) of Equation 2 averages out, leading to a net decrease in \(\chi_{\rm rem}\).
Hence, \(\chi_{\rm rem}\) decreases (to \(\simeq 0.15\)) with increasing primary mass, as illustrated in the upper panel of Fig. 5. The effect of \(q\) on \(\chi_{\rm rem}\) is further illustrated in the lower panel of Fig. 5, showing that more symmetric masses indeed produce more rapidly-spinning remnants.
Figure 4: Component spins \(\chi_{1,2}\) with respect to component masses \(m_{1,2}\) of the DBH mergers in models Fiducial (magenta and light pink) and Sp_LVK (dark green and light green). The dark-coloured dots and light-coloured crosses identify the primary and secondary of the binaries, respectively. The left-hand region with the white background signifies the initial distribution (hence first-generation BH) and the right-hand shaded region encompasses the mass range that can only be accessed via hierarchical mergers.
Figure 5: The distribution of remnant spin \(\chi_{\rm rem}\) with respect to the primary mass \(m_{1}\) (upper panel) and mass ratio \(q\) (lower panel). The continuous line in the middle of the lower plot for model M7D5 traces the hierarchical mergers of the same IMBH. For models Fiducial and Sp_LVK, the IMBHs formed are one to two orders of magnitude more massive (than in M7D5), so their mergers with stellar-mass BHs have very small values of \(q\).
Figure 3: The correlation of primary (\(m_{1}\)) and secondary masses (\(m_{2}\)) of in-cluster DBH mergers. The dashed lines show the maximum BH mass obtained from our stellar evolution model SSE (Hurley et al., 2000).
In model M7D5, where the IMBHs are of low masses \(\lesssim 200\,\mathrm{M}_{\odot}\) and never reach \(>500\,\mathrm{M}_{\odot}\), the \(\chi_{\mathrm{rem}}\) distribution extends to higher values, as the mass ratio remains confined to relatively high values, within the range \(0.1-1\). In contrast, the massive \(\sim 10^{4}\,\mathrm{M}_{\odot}\) IMBH in the Fiducial model produces ratios as low as \(0.001\), resulting in lower \(\chi_{\mathrm{rem}}=0.13\). The result is a double-peaked distribution of \(\chi_{\mathrm{rem}}\). IMBH remnants with masses \(\lesssim 10^{3}M_{\odot}\) have spins clustered near \(\chi_{\mathrm{rem}}=0.7\), while the largest IMBHs formed in our models have \(\chi_{\mathrm{rem}}\simeq 0.15\) (see Table 2).
#### 3.1.3 Eccentricity
While most DBHs formed in clusters are circularized by the time they merge, a small fraction of them will still have a significant eccentricity (\(e>0.1\)) when they reach the frequency band of current detectors (e.g., Antonini et al., 2014; Samsing, 2018; Rodriguez et al., 2018, 2018). It has been argued that eccentric mergers are the most robust signature of DBH formation via the dynamical channel, and that a sub-population of eccentric binaries could help to resolve the branching fraction between isolated and dynamical DBH formation channels (e.g., Lower et al., 2018; Romero-Shaw et al., 2021; Zevin et al., 2021; Romero-Shaw et al., 2022; Dall'Amico et al., 2023).
Samsing (2018) (see Fig. 2 therein) shows that while binary-single hardening can potentially harden a binary to ejection before it merges or to merge within the cluster, the inclusion of 2.5PN terms in the orbital evolution can also lead to gravitational-wave capture during the binary-single resonant encounter. Eq. 2 of Antonini et al. (2023), which states the condition for a gravitational-wave capture, can be rewritten as
\[\sqrt{1-e^{2}}<h\left(\frac{2Gm_{1}(1+q)}{c^{2}a}\right)^{5/14}, \tag{5}\]
where \(e\), \(m_{1}\), \(q\) and \(a\) are the BH binary eccentricity, primary mass, mass ratio and separation, respectively, while \(h\) is a normalisation constant of the order of unity. By this condition, higher \(e\), \(m_{1}\), \(q\) and smaller \(a\) favour gravitational-wave capture.
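A quick way to evaluate the capture condition of Eq. (5) is sketched below, taking \(h=1\) (order unity, as stated above) and SI constants; the chosen masses and separation are purely illustrative.

```python
import numpy as np

G, C, MSUN, AU = 6.674e-11, 2.998e8, 1.989e30, 1.496e11  # SI units

def gw_capture(e, m1_msun, q, a_au, h=1.0):
    """Eq. (5): True if the encounter satisfies the gravitational-wave capture condition."""
    rhs = h * (2.0 * G * m1_msun * MSUN * (1.0 + q) / (C ** 2 * a_au * AU)) ** (5.0 / 14.0)
    return np.sqrt(1.0 - e ** 2) < rhs

# A 40+40 Msun binary at 1 au is captured only if it is extremely eccentric.
print(gw_capture(e=0.99999, m1_msun=40, q=1.0, a_au=1.0),  # True
      gw_capture(e=0.9,     m1_msun=40, q=1.0, a_au=1.0))  # False
```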
The peak frequency \(f\) of a DBH of total mass \(M\), orbital separation \(a\) and eccentricity \(e\) is calculated by Wen (2003) as
\[f=\frac{1}{\pi}\sqrt{\frac{GM}{a^{3}}}\frac{(1+e)^{1.1954}}{(1-e^{2})^{1.5}}. \tag{6}\]
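Equation (6) is straightforward to evaluate; the snippet below computes the peak gravitational-wave frequency for an illustrative binary and shows how strongly it rises with eccentricity at fixed separation (the values are chosen for illustration only).

```python
import numpy as np

G, MSUN, AU = 6.674e-11, 1.989e30, 1.496e11  # SI units

def peak_gw_frequency(m_tot_msun, a_au, e):
    """Peak gravitational-wave frequency in Hz from Eq. (6) (Wen 2003)."""
    m_tot, a = m_tot_msun * MSUN, a_au * AU
    return (1.0 / np.pi) * np.sqrt(G * m_tot / a ** 3) \
        * (1.0 + e) ** 1.1954 / (1.0 - e ** 2) ** 1.5

# An 80 Msun binary at 0.1 au: ~2e-5 Hz if circular, but above 10 Hz at e = 0.9999.
print(peak_gw_frequency(80, 0.1, 0.0), peak_gw_frequency(80, 0.1, 0.9999))
```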
We define a BH binary merger to be eccentric if, at a gravitational-wave frequency of \(10\,\mathrm{Hz}\) (corresponding roughly to the low-frequency limit of the LVK band), \(e\geq 0.1\). All eccentric mergers are expected to be in-situ mergers, as ex-situ systems that are ejected from the influence of dynamical activity have larger time delays and circularize by the time they merge (e.g., Chattopadhyay et al., 2022). We find that about \(17\%\) of all mergers in the Fiducial model are eccentric, and about one-third of these mergers become bound within the LVK band. This sub-group of eccentric binaries that form at frequencies \(\geq 10\,\mathrm{Hz}\) in the source frame will be called "high-frequency mergers" from here onward. All of the high-frequency mergers, and about \(90-95\%\) of the eccentric mergers, are gravitational-wave captures.
Younger clusters also provide lower-mass BHs and, when no IMBH is present to dominate the cluster dynamics, eccentric mergers become commonplace between nearly-equal-mass BHs. The top panel of Fig. 6 shows the mass ratio \(q\) distribution of the eccentric versus all mergers (marked "all" in the figure), illustrating that eccentric mergers indeed occur preferentially in more equal-mass systems. Since lower values of both \(m_{1}\) and \(m_{2}\) are preferred in captures, low-mass primaries are more frequent in eccentric mergers than IMBHs. The lower panel of Fig. 6 demonstrates the trend for eccentric mergers to have less massive \(m_{1}\). In the inset plot, we compare the distribution for masses \(\leq 100\,\mathrm{M}_{\odot}\); there is barely any difference between the distributions for eccentric DBH mergers and the total population of DBH mergers, demonstrating that the difference arises due to circularised mergers involving the IMBH.
Figure 6: Cumulative distribution functions (CDFs) of eccentric mergers (dashed-line histograms) and of all in-situ DBH mergers (solid-line histograms) across mass ratio \(q\) (top panel) and primary mass \(m_{1}\) (bottom panel). Three different models are compared. The labels are identical for the two plots. The lower panel also shows the zoomed-in CDF of \(m_{1}\) for primary masses \(\leq 100\,\mathrm{M}_{\odot}\).
Figure 7: Eccentricity distribution of eccentric mergers with \(e\geq 0.1\) at a gravitational-wave frequency of \(10\,\mathrm{Hz}\) (calculated using Equation 6) for four different models. Binaries that become bound with gravitational-wave frequency \(\geq 10\,\mathrm{Hz}\) are represented in the shaded region. Typically, eccentric binaries form at lower (\(<10\,\mathrm{Hz}\)) frequencies with higher eccentricities and evolve to lower eccentricity by \(10\,\mathrm{Hz}\), unless they form in the LVK band.
The \(e\) distribution at a gravitational-wave frequency of 10 Hz for eccentric binaries, together with the high-frequency mergers, is shown in Fig. 7, illustrating that the eccentricity distribution itself is nearly indistinguishable from model to model.
Eccentric DBHs are also expected to have shorter time delays between formation and merger, as a consequence of (i) gravitational-wave captures occurring early in the cluster's life when the velocity dispersion \(\sigma\) is large, causing the hard-soft binary separation limit (and hence the semi-major axes of the merging DBHs) \(a\) to be small (\(a\propto 1/\sigma^{2}\)); and (ii) high \(e\), significantly reducing the merger time of DBHs (Peters, 1964). In practice, robustly measuring the eccentricity of systems that form outside the LVK band but still retain \(e\geq 0.1\) at detection is challenging; this is largely due to a lack of waveform models containing the effects of both orbital eccentricity and spin-induced orbital precession, which can lead to confusion between these two parameters (Romero-Shaw et al., 2020), especially when only a few orbital cycles are visible in-band (Romero-Shaw et al., 2023). Very high-\(e\) systems that form inside the LVK band will produce gravitational-wave bursts, which are more likely to be visible in unmodelled burst searches than with pipelines that search based on templates of circularised compact binary coalescences (Gondan et al., 2018; Loutrel, 2020; Romero-Shaw et al., 2022).
Since gravitational-wave captures are more likely to occur between near-symmetric-mass DBHs frequenting young massive clusters, eccentric DBHs are naturally biased towards first- and second-generation mergers. Thus we would expect to see a correlation between spin and eccentricity. This correlation is illustrated in Fig. 8, where we show, for each bin of \(\chi_{\rm rem}\), the fraction of mergers that have an eccentricity \(e>0.1\) at 10 Hz (top panel). In the figure, \(\mathcal{F}_{\rm ecc}^{\chi}=N_{\rm ecc}/N_{\rm tot}\), with \(N_{\rm tot}\) the total number of mergers in a given bin and \(N_{\rm ecc}\) the corresponding number of eccentric mergers. This distribution shows that eccentric binaries are relatively more common in mergers that result in a remnant with a high spin \(0.6\lesssim\chi_{\rm rem}\lesssim 0.9\). In this range of remnant spins, we expect that about 20% of the binaries are still eccentric at the 10 Hz frequency.
The value of \(\mathcal{F}_{\rm ecc}^{\chi}\) varies from model to model, with the densest clusters (M7D8, M8D8) and more massive clusters (M8D8, M8D7) producing more eccentric mergers, since dense and massive clusters have a higher \(v_{\rm esc}\) and hence \(\sigma\). Model Ml_0, which has no mass loss apart from BH ejections, remains denser and more massive than Fiducial and hence has a higher \(\mathcal{F}_{\rm ecc}^{\chi}\). The initial spin distribution appears to have no strong impact on \(\mathcal{F}_{\rm ecc}^{\chi}\); the distributions from models Sp_11 and Sp_LVK follow the distribution from the Fiducial model closely in Fig. 8. Indeed, the general shape of the \(\mathcal{F}_{\rm ecc}^{\chi}\) curve does not change, implying that, irrespective of the model, eccentric mergers are mainly first- or sometimes second-generation mergers.
The overall eccentric merger fraction \(\mathcal{F}_{\rm ecc}\) is tabulated in the penultimate column of Table 2, demonstrating that denser clusters have more eccentric mergers. Metallicity also appears to play a role in determining \(\mathcal{F}_{\rm ecc}\). The metal-rich model Z_100, in which nearly 20% of the DBH mergers are eccentric, has a narrower initial BH mass spectrum due to the increased metallicity; this increases the efficiency of eccentric DBH formation, since the condition of lower- and nearly-equal-mass BH binaries is satisfied relatively easily and is aided by the delayed formation of IMBHs (this is similar to the upper limit on eccentric mergers that Antonini et al., 2016, find, albeit for the Lidov-Kozai mechanism in globular clusters). This implies that eccentric mergers are more frequent in the metal-rich dynamical environments of the local universe, opening exciting avenues for future runs of current ground-based detectors.
High-frequency mergers, i.e. those binaries that form with gravitational-wave frequencies \(>10\) Hz, obey the same Eq. 5. While having very similar \(e>0.99\) at formation, they have even smaller masses. For the Fiducial model, the median \(m_{1}\) for all eccentric mergers is \(\sim 51\) M\({}_{\odot}\), whereas for high-frequency mergers the median \(m_{1}\sim 41\) M\({}_{\odot}\) is about 0.8 times that value. The preference for lower masses in these high-frequency, high-eccentricity mergers is also apparent when we compare the mass-binned high-frequency merger fraction (\(\mathcal{F}_{\rm freq}^{M}\)) across models in Fig. 9, where we demonstrate the decline of high-frequency mergers as the remnant mass bin value increases. The fraction of high-frequency mergers \(\mathcal{F}_{\rm freq}\) scales from model to model as \(\mathcal{F}_{\rm ecc}\) does, ranging from \(\approx 2-8\)% of all mergers, with Fiducial having \(\mathcal{F}_{\rm freq}\approx 6\)% (final column of Table 2).
We note, finally, that the fraction of eccentric and high-frequency mergers is a decreasing function of time. As the cluster expands and the IMBH grows in mass, the time \(t_{3}\) between individual binary-single encounters increases (see Equation 20 in Antonini & Gieles, 2020). The increased value of \(t_{3}\) means a higher probability of in-cluster inspirals and a lowered fraction of eccentric GW captures.
### 3.2 Mass and spin evolution of IMBHs
In this Section, we concentrate on the most massive IMBH formed in our cluster models. Primarily focusing on the mass of the IMBH, we discuss the contributing factors, such as global cluster properties, assumptions associated with stellar evolution, and BH binary/triple pairing.
Figure 8: Fraction of eccentric binaries (top panel) and high-frequency mergers (bottom panel) as a function of remnant spin \(\chi_{\rm rem}\). For each bin in \(\chi_{\rm rem}\), \(\mathcal{F}_{\rm ecc}^{\chi}\) (or \(\mathcal{F}_{\rm freq}^{\chi}\)) gives the fraction of DBH mergers in that bin with an eccentricity \(e>0.1\) at a frequency of 10 Hz (or whose binary birth frequency is \(>10\) Hz). The distribution peaks around \(\chi_{\rm rem}\approx 0.6-0.9\), accounting for most eccentric mergers. An aggregate of 100 realizations per model is taken to improve over the otherwise small-number statistics influencing the distribution of \(\mathcal{F}_{\rm ecc}^{\chi}\) (and \(\mathcal{F}_{\rm freq}^{\chi}\)). The very extreme ends of the distribution still suffer from small-number statistics in some cases.
We also correlate the IMBH spin to its mass, as well as to the host cluster properties.
#### 3.2.1 Cluster initial mass and density
The maximum IMBH mass reachable through hierarchical mergers alone is strongly affected by the initial mass and density of the host cluster. Models Sl. nos. 2\(-\)11 of Table 2 (columns denoted by "\(M_{\rm IMBH}^{50}\)", "\(M_{\rm IMBH}^{10}\)" and "\(M_{\rm IMBH}^{90}\)", showing the 50\({}^{\rm th}\), 10\({}^{\rm th}\) and 90\({}^{\rm th}\) percentiles of the IMBH mass, respectively, over 100 realizations of each model) clearly demonstrate this relationship. While this can be predicted from Eq. (4) of Antonini et al. (2019), we observe that the maximum BH mass reachable in a cluster of initial mass \(M_{cl,i}\) (in M\({}_{\odot}\)) and half-mass density \(\rho_{h,i}\) (in M\({}_{\odot}\) pc\({}^{-3}\)) is lower by up to an order of magnitude in our models. Although Antonini et al. (2019) estimate the upper limit on the maximum IMBH mass while ignoring recoil kicks, the incorporation of binary-single encounters that can potentially eject BHs (the single and/or the binary) lowers our maximum obtained IMBH mass.
Fig. 10 shows the variation of the maximum IMBH mass (\(M_{\rm IMBH}\)) in our models as a function of the host cluster initial mass, half-mass density and escape velocity. We see from the upper panel of Fig. 10 that an increase in initial cluster mass leads to the formation of more massive IMBHs. However, we also note that (i) there appears to be a transition in \(M_{\rm IMBH}\) with respect to cluster initial mass (\(M_{cl,i}\sim 10^{7}\) M\({}_{\odot}\) for \(\rho_{h,i}=10^{7}\) M\({}_{\odot}\) pc\({}^{-3}\) and \(M_{cl,i}\sim 3\times 10^{6}\) M\({}_{\odot}\) for \(\rho_{h,i}=10^{8}\) M\({}_{\odot}\) pc\({}^{-3}\)) where the median \(M_{\rm IMBH}\) makes an order-of-magnitude jump, and (ii) around the same transitory phase, the width of the \(M_{\rm IMBH}\) spectrum, i.e., the difference between the 90\({}^{\rm th}\) and 10\({}^{\rm th}\) percentiles calculated from 100 realisations of each model, is rather broad. Comparing the upper to the middle panel of Fig. 10, we observe that this transition occurs at lower \(M_{cl,i}\) for higher \(\rho_{h,i}\), i.e. for denser clusters. This behaviour is explained by the combination of three velocities: the cluster escape velocity (\(v_{\rm esc}\)), the BH natal kick (\(v_{\rm kick}\)) and the post-merger gravitational-wave recoil kick (\(v_{\rm rec}\)) magnitudes.
The \(v_{\rm kick}\) distribution of BHs from our single stellar evolution prescription (Hurley et al., 2000) is depicted in Fig. 11. For clusters with initial \(v_{\rm esc}\gtrsim 400\) km s\({}^{-1}\), nearly all BHs are retained post-stellar evolution (losing only about 0.6% of the BH mass generated through stellar evolution), compared to host clusters with \(v_{\rm esc}\lesssim 100\) km s\({}^{-1}\)
Figure 10: Final mass of the IMBH (\(M_{\rm IMBH}\)) after a Hubble time of cluster evolution with respect to cluster initial mass (upper panel), initial density (middle panel) and escape velocity (lower panel). Solid lines represent the median values (\(M_{\rm IMBH}^{50}\) of Table 2) and the shaded region shows the boundary region between the 90\({}^{\rm th}\) – 10\({}^{\rm th}\) percentiles (\(M_{\rm IMBH}^{10}\) and \(M_{\rm IMBH}^{90}\) of Table 2) for 100 realizations of each model. The magenta star symbol in the upper and lower plots represents the Fiducial model.
Figure 9: The fraction of high-frequency (upper panel) and eccentric (lower panel) mergers, \(\mathcal{F}_{\rm freq}^{M}\) (and \(\mathcal{F}_{\rm ecc}^{M}\)), binned by remnant mass \(M_{\rm rem}\), emphasizing the precedence of less massive BHs in high-frequency mergers. For eccentric mergers, \(\mathcal{F}_{\rm ecc}^{M}\) slightly increases up to 200 M\({}_{\odot}\) and then decreases. As with the overall eccentric mergers, even with 100 realizations of each model, the tail end of the distribution is impacted by low-number statistics.
which retain \(\approx 70\%\) of their BHs (losing about 18% of the BH mass through natal kicks).5
Footnote 5: For comparison, post-stellar evolution (with stellar evolution parameters as described in Sec. 2) \(\approx 10-20\%\) of the initial mass of the cluster is expected to be held in BHs.
For comparison with Antonini & Rasio (2016), we see that \(\lesssim 40\%\) of metal-poor cluster BHs receive natal kicks greater than \(50\,\mathrm{km\,s^{-1}}\), and it is only in metal-rich (Z_100) environments that more than \(60\%\) of BHs receive a natal kick of \(\gtrsim 50\,\mathrm{km\,s^{-1}}\). Since Z_100 has a rather high metallicity even for the local universe, the nuclear cluster DBH merger rate is unlikely to dominate over the merger rate from other dynamical environments, such as globular clusters, or from isolated evolution.
The post-merger recoil kick \(v_{\mathrm{rec}}\) calculated with different three-dimensional spin magnitudes (Lousto et al., 2010) for all possible spin orientations is shown in Fig. 12. The median curve for all possible spin magnitudes for \(q\gtrsim 0.5\) is about \(400\,\mathrm{km\,s^{-1}}\), and its \(10^{\mathrm{th}}\) percentile is at \(\approx 200\,\mathrm{km\,s^{-1}}\). This indicates that, after an in-cluster DBH merger, the remnant is nearly always retained in clusters with \(v_{\mathrm{esc}}\gtrsim 400\,\mathrm{km\,s^{-1}}\), and is likely to be ejected from clusters with \(v_{\mathrm{esc}}\lesssim 100\,\mathrm{km\,s^{-1}}\). For clusters in the \(200\,\mathrm{km\,s^{-1}}\lesssim v_{\mathrm{esc}}\lesssim 400\,\mathrm{km\,s^{-1}}\) range, the retention fraction varies. It must also be noted that the \(v_{\mathrm{esc}}\) for the models shown in the lower panel of Fig. 10 refers to the initial value, while cluster evolution (e.g., expansion, mass loss) tends to reduce \(v_{\mathrm{esc}}\) both in reality and in our simulations. Hence, it can be concluded that clusters with initial \(v_{\mathrm{esc}}\gtrsim 400\,\mathrm{km\,s^{-1}}\), which have very high BH retention fractions, are more likely to form IMBHs with masses \(\sim 10^{3}-10^{4}\,\mathrm{M_{\odot}}\), whereas clusters with \(v_{\mathrm{esc}}\lesssim 200\,\mathrm{km\,s^{-1}}\) lose most of their BHs through stellar evolution birth kicks as well as GW recoil kicks post-merger, prohibiting growth of IMBHs beyond a few \(100\,\mathrm{M_{\odot}}\). Our final IMBH mass estimates are lower than those of Miller & Hamilton (2002), whose analytical limit on the IMBH mass in a \(\approx 10^{6}\,\mathrm{M_{\odot}}\) cluster is about \(\mathcal{O}(3)\,\mathrm{M_{\odot}}\), an order of magnitude higher than the "M6" models in Table 2. Underestimating binary-single ejections and the overall more simplistic model used in Miller & Hamilton (2002) are possible causes of the discrepancy. It is also interesting to note that for \(\chi_{1,2}=0,0\), i.e. first-generation mergers, remnants will always be retained for \(v_{\mathrm{esc}}\gtrsim 100\,\mathrm{km\,s^{-1}}\).
The Poissonian scatter \(\sigma_{\mathrm{P}}\) in \(M_{\mathrm{IMBH}}^{50}\) can vary from \(0.04\) to \(0.65\). Clusters with \(v_{\mathrm{esc}}\lesssim 200\,\mathrm{km\,s^{-1}}\) or \(v_{\mathrm{esc}}\gtrsim 400\,\mathrm{km\,s^{-1}}\) correspond to the lower values of \(\sigma_{\mathrm{P}}\), while clusters with \(v_{\mathrm{esc}}\) in the mid-transitional region have higher \(\sigma_{\mathrm{P}}\). The Fiducial model has \(\sigma_{\mathrm{P}}=0.15\).6 That \(v_{\mathrm{esc}}\) plays the key role in determining whether an IMBH will form is shown by Antonini et al. (2019); Fragione & Silk (2020); Mapelli et al. (2021) and by our Fig. 10.
Footnote 6: \(\sigma_{\mathrm{P}}=\sigma/M_{\mathrm{IMBH}}^{50}\), where \(\sigma=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(M_{\mathrm{IMBH}}[i]-M_{\mathrm{IMBH}}^{50}\right)^{2}}\).
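For reference, the scatter defined in footnote 6 can be computed as in the short sketch below; the input masses are mock values for illustration only.

```python
import numpy as np

def sigma_p(m_imbh_realizations):
    """Relative scatter sigma_P of the final IMBH mass across realizations (footnote 6),
    measured about the median M_IMBH^50."""
    m = np.asarray(m_imbh_realizations, float)
    m50 = np.median(m)
    return np.sqrt(np.mean((m - m50) ** 2)) / m50

# Mock IMBH masses (in Msun) from a handful of cluster realizations.
print(sigma_p([9.0e3, 1.1e4, 1.0e4, 8.0e3, 1.2e4]))
```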
The rapidity with which DBH mergers occur and a massive BH remnant grows through hierarchical mergers depends on the host cluster's initial mass and density. This can be seen by inspection of the ninth and tenth columns of Table 2; these record the median time required to create a BH of \(100\,\mathrm{M_{\odot}}\) (\(t_{100}\)) and a BH of \(1000\,\mathrm{M_{\odot}}\) (\(t_{1000}\)) respectively, and show that more massive (more dense) clusters require more time (less time) to create an IMBH of \(1000\,\mathrm{M_{\odot}}\), with all other parameters remaining constant. The first DBH in our models is expected to form after cluster core collapse, when the dense core, formed through mass segregation, harbours interacting BHs. Equations 9 and 10 of Antonini & Gieles (2020b) show the dependence of the core-collapse time (\(t_{\mathrm{cc}}\)) on the cluster initial mass and density, namely \(t_{\mathrm{cc}}\propto M_{\mathrm{cl,i}}/\rho_{h,i}^{1/2}\). While \(M_{cl,i}\) is the most dominant factor in determining how rapidly the hierarchical mergers commence in a cluster, \(\rho_{h,i}\) does play a role too. The upper panel of Fig. 13 shows the growth history of what becomes the most massive IMBH, reflecting this strong dependence on the initial cluster mass. M8D7 has its initial BH mergers occurring around \(1.4\) Gyr, and M6D7, a cluster two orders of magnitude lower in its initial mass, around \(0.01\) Gyr. It is interesting to note that although M8D7 begins its hierarchical mergers later than the lower-mass clusters, it overtakes the others to form the most massive IMBH of all models compared in this plot. A cluster that is too massive, computed without relativistic treatments for its evolution, can have a \(t_{\mathrm{cc}}\) greater than a Hubble time and is therefore unsuitable for hierarchical mergers. Several models of Fragione & Silk (2020), under the assumptions of the mass spectrum factor \(\psi=6\) (Eqs. (9) and (10) of Antonini & Gieles 2020b, with \(\mathrm{N_{th}}=3.21\), \(\ln\Lambda=10\), \(\langle m_{all}\rangle=0.6\)), yield a \(t_{\mathrm{cc}}\) greater than a Hubble time. Of course, \(\psi\) can be lower than its traditionally-assumed value of \(1\), further increasing \(t_{\mathrm{cc}}\). The lower panel of Fig. 13 shows the hierarchical merger history of the cluster IMBH with respect to cluster density, showing that denser clusters have a more rapid IMBH growth.
Figure 11: BH natal kick distribution for models Fiducial, Z_10 and Z_100, with metallicities \(1.5\times 10^{-4}\), \(1.5\times 10^{-3}\) and \(1.5\times 10^{-2}\) respectively. The dotted line emphasises where CDF \(=0.5\).
Figure 12: Post-merger gravitational wave recoil \(v_{\mathrm{rec}}\) vs binary mass ratio \(q\) for different binary spin magnitudes \(\chi_{1,2}\) integrated over all possible spin-orbit angles (Lousto et al., 2010). The solid lines show the median \(v_{\mathrm{rec}}\) magnitude for all possible spin angles, and the dashed lines depict the corresponding \(90^{\mathrm{th}}\)-\(10^{\mathrm{th}}\) percentile boundaries. The black lines characterize the same for all possible values and orientations of \(\chi_{1,2}\).
Indeed, in a cluster of too-low initial density, an IMBH never grows over a few hundred solar masses.
It is of no surprise that the spin of the IMBH at the end of the cluster simulation (13.5 Gyr; a Hubble time) shows an inverse correlation with the host cluster's initial mass, as depicted in Fig. 14. Low-mass host clusters only have a couple of generations of hierarchical mergers, with the remnant spin reaching around 0.7, as discussed in Sec. 3.1.2. It is only in massive clusters with over 100 mergers (see Table 2) that the repeated low-mass-ratio mergers cause the spin of the IMBH to become lower. This also means that the higher-mass IMBHs have lower spins than their less-massive counterparts (column "\(M_{\rm IMBH}^{50}\)" and the corresponding IMBH spin in Table 2). Although our suite of models never reaches the SMBH threshold mass of \(10^{5}\,\rm{M_{\odot}}\), with our maximum IMBH measuring \(\mathcal{O}(4)\,\rm{M_{\odot}}\), it is intriguing that X-ray astronomy has shown SMBH spins to have a slight anti-correlation with their masses (Reynolds, 2013; Reynolds, 2021), although the spins of the SMBHs are mostly very high, \(\gtrsim 0.5\) (Piotrovich et al., 2022) (however, there may be a bias towards observing those with high spin; Bonson & Gallo, 2016). There are occasional studies suggesting that SMBHs may show a retrograde spin effect (e.g. Wang et al., 2019), whereby the BH spin can be lowered through anti-alignment with the accretion disc, but conclusive observational evidence of retrograde spin in SMBHs is lacking (Garofalo, 2013; Reynolds, 2013).
We acknowledge that the (likely most significant) impact of gas and accretion on the spins of massive BHs is not taken into account in our study. However, if we assume that the observed SMBHs at the centres of galaxies formed through (a) in-situ hierarchical mergers in the galaxy's nuclear cluster (note that the \(10^{3}\,\rm{M_{\odot}}\) IMBH seed is formed within the first one or two Gyr of cluster evolution) and (b) gas accretion (we ignore the possibility of IMBH mass growth through the infall of massive globular clusters and/or inter-galactic mergers), it would appear as if the SMBHs still followed the intrinsic mass-spin distribution obtained through hierarchical mergers.
It is also interesting to note that Sagittarius \(\rm{A^{*}}\) of the Milky Way (\(\mathcal{O}(6)\,\rm{M_{\odot}}\)) is estimated to have a spin \(\lesssim 0.1\) (Fragione & Loeb, 2020), and our Fiducial model, whose cluster mass and density after 13.5 Gyr of evolution are similar to those of the Milky Way nuclear cluster at the current time, has an IMBH with \(\chi^{50}_{\rm IMBH}=0.13\). M87, with its very high mass (\(\mathcal{O}(9)\,\rm{M_{\odot}}\)), also has a lower spin measurement of \(\approx 0.2-0.3\) (Nokhrina et al., 2019). We reiterate that we are not drawing any conclusions with regard to an SMBH mass-spin correlation here, as our study is constrained to much lower masses. Instead, we draw attention to this phenomenon and highlight the opportunity for future studies focusing on the evolution of SMBHs from IMBHs through hierarchical mergers and accretion.
#### 3.2.2 Metallicity
The metallicity (Z) impacts the mass of the cluster IMBH by determining the width of the initial BH mass spectrum. The highest-metallicity model Z_100, with Z\(=0.0158\) (approximately solar metallicity, Z\(_{\odot}\approx 0.0142\)), has a narrower initial BH mass spectrum with less massive BHs, leaving less massive building blocks for growing massive systems through hierarchical mergers; this, combined with the slower pace of hierarchical mergers, causes the Z_100 model to have a median IMBH mass of \(\sim 6.5\times 10^{3}\) M\({}_{\odot}\), only 0.6 times that of the Fiducial model.
#### 3.2.3 Delayed supernovae prescription
The "delayed" prescription of Fryer et al. (2012) is used in the SN_D model. Ordinarily, the "rapid" prescription of Fryer et al. (2012) enforces a mass-gap between 2-5 M\({}_{\odot}\) between neutron stars and BHs, while the "delayed" model allows for BHs of masses between \(\approx\)2.5-5 M\({}_{\odot}\) as well. The effect of metallicity through stellar winds is the most important parameter in determining BH masses, and though the slightly more numerous lower mass BHs in SN_D receive higher natal kicks (due to the dependence on fallback mass), the mass function of SN_D and Fiducial are not significantly different. However, due to the correlation between BH natal kick and fallback mass, about 99.7% of BHs in SN_D have \(v_{\rm kick}<400\) km s\({}^{-1}\), compared to 96.8% in Fiducial. Due to having a little more number of massive BHs retained initially, the IMBH mass in the SN_D model is increased very slightly, only by a few 100 M\({}_{\odot}\).
#### 3.2.4 BH seed
The effect of adding an initial seed BH with a mass beyond those of the stellar evolution prescription is explored through models Sd_50, Sd_100, Sd_150 and Sd_200, where we include at cluster initialisation a BH of 50 M\({}_{\odot}\), 100 M\({}_{\odot}\), 150 M\({}_{\odot}\) and 200 M\({}_{\odot}\), respectively. We term these additional BHs 'seeds' since they are not directly produced through stellar evolution. Massive stars with Helium cores above \(\approx 50\) M\({}_{\odot}\) are expected to produce a remnant of \(\approx 40\) M\({}_{\odot}\), and those with Helium cores larger than 60 M\({}_{\odot}\) are expected to fully disrupt due to thermonuclear eruptions and leave behind no remnant (Woosley, 2017; Belczynski et al., 2016; Spera & Mapelli, 2017; Farmer et al., 2019). This apparent gap in the BH mass spectrum (created through stellar evolution) is often termed the (pulsational) pair instability or (P)PISN mass gap. It should be remembered with caution that the exact location of the (P)PISN gap in the BH mass spectrum is uncertain (e.g., Farmer et al., 2019; Belczynski, 2020; Sakstein et al., 2020; Woosley & Heger, 2021; Vink et al., 2021; Spera et al., 2022), and hence a 50 M\({}_{\odot}\) seed BH may even be a stellar evolution remnant that evolved under special circumstances of, say, high stellar rotation (Marchant & Moriya, 2020), or the unlikely event of suppressed stellar winds at high metallicity (Belczynski et al., 2020). Moreover, very massive stars (initial mass \(>\)200-1000 M\({}_{\odot}\)) can directly collapse to form IMBHs (Belkus et al., 2007; Yungelson et al., 2008; Sabhahit et al., 2023). Clusters with high primordial binary fractions may also have runaway stellar collisions and very efficient mass accretion from companions, creating stars as massive as \(\approx 200-600\) M\({}_{\odot}\) which easily form IMBHs through direct collapse (Di Carlo et al., 2021; Gonzalez et al., 2021).
The hierarchical evolution of the initial seeds of models Sd_50, Sd_100, Sd_150, and Sd_200 is compared in the upper panel of Fig. 15. While for the last three models, with seed masses \(\geq 100\) M\({}_{\odot}\), the BH seeds themselves develop to become the final IMBH of the cluster, this may not be the case for Sd_50. The 50 M\({}_{\odot}\) seed grows through merger to enter its second-generation merger as a 91 M\({}_{\odot}\) BH, which merges with a 42 M\({}_{\odot}\) BH, receives a recoil kick of \(\approx 950\) km s\({}^{-1}\) (almost double the cluster escape velocity) and is hence ejected. Meanwhile, an originally 41.8 M\({}_{\odot}\) stellar-origin BH grows to become the \(\approx 9\times 10^{3}\) M\({}_{\odot}\) IMBH in this model. However, in other realisations, the 50 M\({}_{\odot}\) seed survives, and so while \(M_{\rm IMBH}^{50}\) and \(M_{\rm IMBH}^{10}\) of the Sd_50 and Fiducial models are similar, \(M_{\rm IMBH}^{90}\) of the Sd_50 model is a few 100 M\({}_{\odot}\) more massive.
There are also instances of the more massive seeds getting ejected after a few generations. Ordinarily, in a non-seeded model such as the Fiducial model, the mass ratio \(q\) will be around 1 for the first generation and then gradually become lower through hierarchical mergers. For seeded models, specifically Sd_150 and Sd_200, the starting point of \(q\) is significantly lower, around 0.3 and 0.2 respectively. Fig. 12 shows that \(v_{\rm rec}\) actually increases between \(q=0.2\) and 0.5, which may cause more second- or third-generation mergers of seeded models to be ejected compared to the Fiducial model, where \(q\) may be sufficiently higher in second- and third-generation mergers. Fig. 12 also illustrates that the median and 90\({}^{\rm th}\) percentile values for non-zero BH component spin magnitudes peak around \(q=0.4-0.6\) and remain nearly constant. For the seeded models, at the time of the first few hierarchical mergers, \(v_{\rm rec}\approx 400\) km s\({}^{-1}\). If the seed has undergone a merger or two, the remnant obtains a spin magnitude \(\chi_{1}\approx 0.5-0.7\) (see Fig. 15, lower panel), while its non-merger-remnant companion has spin magnitude \(\chi_{2}\approx 0.0\). Looking at the corresponding curves of Fig. 12, we see that the 10\({}^{\rm th}\) percentile peak of \(v_{\rm rec}\) is at \(q\approx 0.4\), and the \(v_{\rm rec}\) median reaches an approximately constant value from \(q\gtrsim 0.4\). Indeed, this also supports the observation that in
Figure 15: Hierarchical growth of the BH seed in models Sd_50 (green), Sd_100 (orange), Sd_150 (magenta) and Sd_200 (purple). Out of 100 realizations of each model, only one is chosen per model for illustrative purposes. The upper panel shows the mass growth of the IMBH, where light green depicts the 50 M\({}_{\odot}\) seed (star mark) and its two consecutive mergers, and dark green shows the slightly lower-mass stellar-remnant BH of the Sd_50 model, which becomes the most massive IMBH. The lower panel displays the evolution of the remnant spin \(\chi_{\rm rem}\) of models Sd_100, Sd_150 and Sd_200.
models Sd_150 and Sd_200 we obtain \(M_{\rm IMBH}^{10}\) smaller than in Sd_50. Indeed, in cluster realizations of the model with the most massive seed, Sd_200, we obtain an \(M_{\rm IMBH}\) slightly less massive than in the Fiducial and Sd_50 models.
Even a 200 M\({}_{\odot}\) seed is not completely protected from ejection post-merger. To ensure that the IMBH growth occurs solely through the seed BH, the cluster (with initial \(v_{\rm esc}\gtrsim 400-500\) km s\({}^{-1}\)) must have a seed BH at least about 10\(\times\) more massive than the upper end of its BH initial mass function (i.e. the seed should be \(\geq 400\) M\({}_{\odot}\) in our case), such that \(q\lesssim 0.1\) from the first merger onwards. This choice restricts \(v_{\rm rec}\) to its lowest-magnitude region in the parameter space.7 This case of retention vs non-retention of the seed also illustrates the importance of having multiple realizations of each model, as statistical variations can change the fate of the BH seed.
Footnote 7: Gravitational wave merger recoil kicks are also illustrated in Le Tiec et al. 2010 (Fig.1,2) and for eccentric cases in Sopuerta et al. 2007 (Fig.1,2,3), with respect to the symmetric mass ratio \(\eta=q/(1+q)^{2}\). For \(\eta=0.2\) (corresponding to \(q=0.4\)), the recoil kick shows a clear peak.
#### 3.2.5 Host cluster mass evolution
Our Fiducial model has mass loss only through stellar evolution and BH recoil ejection (Eq. (16) of Antonini & Gieles 2020b). The cluster mass for the Fiducial model after a Hubble time is about \(10^{7}\) M\({}_{\odot}\), half of its initial mass.
We use model Ml_ev as a variation which allows mass loss due to evaporative expansion (Antonini & Gieles 2020a), but the difference in final cluster mass is negligibly small (\(\approx 9.8\times 10^{6}\) M\({}_{\odot}\)). Ml_ev has very similar median IMBH mass but the 10\({}^{\rm th}\) percentile is a few 100 M\({}_{\odot}\) less than that of the Fiducial model.
In the Ml_0 model, stellar mass loss is stopped, with BH ejections due to binary-single encounters and gravitational-wave recoils being the only sources of cluster mass reduction. However, since both Ml_0 and Fiducial have rather high \(v_{\rm esc}\) to begin with, the IMBH masses are very similar for the two models. A lower mass-density modification, represented by the Ml_0M7D5 model, does result in more massive IMBHs compared to M7D5. The consequence of no stellar mass loss is also reflected in a higher cluster density at a Hubble time, and in a slightly shorter seed formation time.
#### 3.2.6 BH natal kick
In view of Fig. 11, it can be seen that host clusters with escape velocities \(\gtrsim 400\) km s\({}^{-1}\) will not be significantly affected by reducing the natal kick magnitude of the BHs; such clusters tend to retain over 90% of the BHs immediately after stellar evolution, making them participate in the cluster dynamics. Consequently, the Vk_0 model, with the same mass and density settings as Fiducial and initial \(v_{\rm esc}\approx 450\) km s\({}^{-1}\), shows very similar IMBH masses to Fiducial (Table 2). Due to the broadening of the initial BH mass spectrum by keeping the less massive BHs, reducing the BH natal kick only slightly alters \(M_{\rm IMBH}^{10}\) and \(M_{\rm IMBH}^{90}\) to lower values.
The effect of no BH natal kick is insignificant even on the cluster model with lower escape velocities, Vk_0M7D5, which has the same mass-density settings as model M7D5 with \(v_{\rm esc}\approx 150\) km s\({}^{-1}\). In the intermediate cluster model, Vk_0M7D7 with \(v_{\rm esc}\approx 350\) km s\({}^{-1}\), \(M_{\rm IMBH}^{50}\) is lowered to about 20% of that of M7D7. This is due to the retention of more lower-mass BHs and the cluster \(v_{\rm esc}\) being in the transitional region, as explained in Sec. 3.2.1 and shown in the lower panel of Fig. 10. We also run model Vk_0Z_100 with solar metallicity and zero BH birth kick; its change in the IMBH mass is only marginal compared to Z_100.
We hence conclude that the BH natal kick prescription is not an important factor in deciding the hierarchical IMBH growth for clusters, as long as cluster \(v_{\rm esc}\) is either sufficiently high (\(\gtrsim 400\) km s\({}^{-1}\)) or sufficiently low (\(\lesssim 150\) km s\({}^{-1}\)) because the lower-mass BHs that are retained get eventually ejected by either merger recoils or binary single encounters. It is only in the intermediate region of \(v_{\rm esc}\) (Fig. 10, lower panel) that altering \(v_{\rm kick}\) may significantly affect \(M_{\rm IMBH}^{50}\).
#### 3.2.7 Initial BH spin
Models Sp_01, Sp_33, Sp_11 and Sp_LVK explore the effect of initial BH spin on the cluster IMBH mass. While the Fiducial model has initial BH spins set to 0, Sp_01, Sp_33 and Sp_11 set the initial spin combinations (for primary:secondary) to 0 : 1, 0.3 : 0.3 and 1 : 1 respectively. Sp_LVK has its initial BH spins drawn from the spin distribution inferred from current observations of DBH coalescences through gravitational waves.
We initiate our stellar evolution with only single stars and BH-progenitor Helium stars, the latter of which (with their effective angular momentum transfer from core to envelope) are expected to become non-spinning BHs. In binaries, however, the companion of a compact object (neutron star or BH) can get tidally spun up (Qin et al. 2018; Bavera et al. 2020; Chattopadhyay et al. 2021, 2022b; Broekgaarden et al. 2022; Ma & Fuller 2023). Sp_01 hence allows the lower-mass secondary component to have higher spin. Although there are conflicting results on the efficacy of tidally spinning up BHs through dynamics (Le Tiec & Casals 2021; Chia 2021), models Sp_33 and Sp_11 can be thought of as intermediate and extremal cases of the effect of initial BH spin on hierarchical mergers. Given that we start our cluster models with only single stars and expect close dynamical encounters to begin after the core forms, the Fiducial model is the most realistic. The variation of \(v_{\rm rec}\) with respect to the primary spin \(\chi_{1}\) for different secondary spins \(\chi_{2}\), with a fixed mass ratio \(q=0.8\) (roughly representing the initial in-cluster mergers), is shown in Fig. 16. We concentrate particularly on the region with \(v_{\rm rec}<450\) km s\({}^{-1}\), which approximately corresponds to the \(v_{\rm esc}\) for the Fiducial and "Sp" models at the time of the first DBH mergers.
Figure 16: Gravitational-wave recoils \(v_{\rm rec}\) vs primary spin \(\chi_{1}\) for different secondary spin magnitudes \(\chi_{2}=1\) (purple); 0.7 (pink); 0.3 (red); 0 (orange), at a fixed mass ratio \(q=0.8\), which roughly represents the first-generation mergers. The solid lines show the median for random orientations of \(\chi_{1,2}\), while dotted lines show the 10\({}^{\rm th}\) percentiles. The black dashed line marks \(v_{\rm esc}=450\) km s\({}^{-1}\), approximately the value of the escape velocity during first-generation mergers for the Fiducial, Sp_01, Sp_33, Sp_11 and Sp_LVK models.
At the onset, high-spin BHs easily obtain large kicks, prohibiting the growth of the IMBH and resulting in a suppression of the value of \(M_{\rm IMBH}^{10}\) in Table 2 (models Sp_11 and Sp_LVK have \(M_{\rm IMBH}^{10}\) about \(0.04\times\) and \(0.58\times\) that of Fiducial). For the model realizations where the initial mergers happen to remain in the cluster, after a couple of mergers with \(\chi_{1}\approx 0.7\), the recoil kicks \(v_{\rm rec}\) become very similar for all models, thereby resulting in very similar values of \(M_{\rm IMBH}^{50}\). We conclude that the choice of initial spins has a secondary effect on the hierarchical growth of an IMBH in a star cluster.
#### 3.2.8 BH ordered pairing
In the Ord\(\_\)BH model, we change the (initial BH mass spectrum dependent) power-law probability distribution in pairing the BHs in binaries and triples (as described in Sec. 2 and Antonini et al. 2023) to complete ordered pairing. In other words, the most massive BH in the Ord\(\_\)BH model is paired with the second-most massive one, followed by the third-most massive as the single perturber.
For a binary of mass \(m_{1}+m_{2}\) and a single of mass \(m_{3}\), the recoil kick of the binary from this binary-single encounter is
\[v_{\rm bin} \sim\sqrt{(1/\epsilon-1)G\,\frac{m_{1}m_{2}}{m_{1}+m_{2}+m_{3}} \,\frac{q_{3}}{a}} \tag{7}\] \[=\sqrt{(1/\epsilon-1)G\,\frac{m_{1}m_{2}}{(m_{1}+m_{2})(1+\frac{ m_{1}+m_{2}}{m_{3}})}\,\frac{1}{a}},\]
where \(q_{3}=m_{3}/(m_{1}+m_{2})\), \(a\) is the semi-major axis, \(G\) is the gravitational constant, and \((1/\epsilon-1)\) is a function of \(q_{3}\) which is always \(\leq 0.2\) for all models but the set labelled "DE", where it is constant at \(0.2\). Equation 7 reveals that an increase in \(m_{3}\) increases \(v_{\rm bin}\). With all other variables remaining constant, \(v_{\rm bin}\) reaches a local maximum with increasing \(m_{1}\) and then decreases (although the variation is obviously smaller than that with respect to \(m_{3}\)). The expression for \(v_{\rm bin}\) is symmetric in \(m_{1}\) and \(m_{2}\).
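The scaling of Eq. (7) can be explored with the sketch below, which uses \(G\) in units of pc (km s\(^{-1}\))\(^{2}\) M\(_{\odot}^{-1}\) and, for simplicity, holds \((1/\epsilon-1)\) at its maximum value of 0.2 in both examples; the masses and separation are illustrative, not taken from our models.

```python
import numpy as np

G_PC = 4.301e-3          # gravitational constant in pc (km/s)^2 / Msun
AU_IN_PC = 1.0 / 206265.0

def binary_recoil(m1, m2, m3, a_pc, eps_term=0.2):
    """Eq. (7): recoil speed (km/s) of the binary after a binary-single encounter.
    Masses in Msun, semi-major axis in pc; eps_term stands for (1/eps - 1)."""
    q3 = m3 / (m1 + m2)
    return np.sqrt(eps_term * G_PC * m1 * m2 / (m1 + m2 + m3) * q3 / a_pc)

# A 1000+50 Msun binary perturbed by a 40 Msun BH at a = 10 au recoils more gently
# than a binary of three comparable ~40 Msun BHs at the same separation.
print(binary_recoil(1000, 50, 40, 10 * AU_IN_PC),
      binary_recoil(40, 40, 40, 10 * AU_IN_PC))
```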
In the Fiducial model, the power-law probability distribution ensures that the binary-single encounter is composed of massive BHs from the mass spectrum, but, unlike in the Ord_BH model, there is no guarantee that the three most massive BHs will be the objects engaging in the encounter. Ordered pairing substantially decreases \(M_{\rm IMBH}^{10}\) in the Ord_BH model, to only about \(7\%\) of that of the Fiducial model: in some of the cluster realisations, the third BH can become massive enough to increase \(v_{\rm bin}\) and eject the binary. \(M_{\rm IMBH}^{50}\) and \(M_{\rm IMBH}^{90}\) are also reduced significantly in the Ord_BH model compared to the Fiducial model.
#### 3.2.9 Tertiary mass dependent energy loss
The set of models labelled with the "DE" moniker contains those with a constant tertiary-induced binary energy-loss fraction \(\Delta{\rm E}/{\rm E}=0.2\). In all other models this value is the maximum, valid only for equal-mass interactions. One outcome of this is that the IMBH mass is reduced by a few thousand M\({}_{\odot}\) in the DE models in comparison to that produced in the Fiducial model. Since there is more energy absorption per interaction in the DE models than in the Fiducial model, the IMBH formation timescales (\(t_{100}\) and \(t_{1000}\)) are slightly shorter. It takes longer for a binary in the Fiducial model to reach the gravitational-wave-driven regime from the dynamically-driven regime, since \(t_{3}\) in Equation 20 of Antonini & Gieles 2020b is lowered. The energy loss per binary-single interaction, fixed at \(20\%\) in the DE models, is lower in the Fiducial model in the case of \(m_{3}\sim m_{1}\).8 Over a fixed period of time (a Hubble time, in our case), the Fiducial model cluster hosts fewer mergers than the DE models, by a factor of \(\approx 0.8\). If we compare the ratio of mergers with \(m_{1}>100\) M\({}_{\odot}\) to the total number of mergers (\(\mathcal{F}_{100}\), listed in Table 2), the DE models still have about \(20\%\) more IMBH-regime (primary mass \(\geq 100\) M\({}_{\odot}\)) mergers than the Fiducial model.
Footnote 8: A binary of masses \(m_{1,2}\), semi-major axis \(a\) and generating energy \(\dot{E}\) at the cluster core is expected to enter the gravitational-wave-driven regime when the eccentricity \(e\geq 1.3\big{[}\frac{G^{4}(m_{1}m_{2})^{2}(m_{1}+m_{2})}{c^{5}\dot{E}}\big{]}^{1/7}a^{-5/7}\). For DE models, the semi-major axis is larger than that in the Fiducial model due to the perturbing single absorbing more energy (the maximum of \(20\%\), valid in other models only for equal-mass systems) from the binary, causing the binary separation to shrink more per interaction. However, \(\dot{E}\) is still set by Hénon's principle, so it only depends on the cluster global properties of mass and density.
However, it is clear from Eq. (7) that the DE models will also have a larger \(v_{\rm bin}\), as well as a larger tertiary kick \(v_{3}\) (Antonini et al. 2023). Hence, the DE models eject more binaries (and tertiaries) through binary-single encounters, reducing the mass growth of IMBHs in the long run. This lowering of the IMBH mass is strongest in the intermediate-\(v_{\rm esc}\) cluster DE_M7D7, where \(M_{\rm IMBH}^{50}\) is only \(12.5\%\) that of M7D7. Even the lower-density cluster DE_M7D5 and the metal-rich DE_Z100 have less massive IMBHs compared to those that arise in M7D5 and Z_100. It may therefore be argued that the functional form of \(\Delta{\rm E}/{\rm E}\), which takes into account the mass of the tertiary \(m_{3}\), is indeed an important parameter to be taken into account in fast codes for rapid cluster evolution models.
#### 3.2.10 IMBH relative rates
The fraction of mergers that occur in the IMBH regime, i.e. having a primary mass of \(\geq 100\) M\({}_{\odot}\), can be expressed as the ratio of the total number of mergers with \(m_{1}\geq 100\) M\({}_{\odot}\) to the total number of mergers per cluster; this is shown in the column marked \(\mathcal{F}_{100}\) of Table 2. Denser and more massive clusters have a much higher \(\mathcal{F}_{100}\), and within our models it varies widely, \(9\times 10^{-5}\leq\mathcal{F}_{100}\leq 0.53\). In the Fiducial model, \(\sim 16\%\) of all mergers have a \(\geq 100\) M\({}_{\odot}\) primary, while in the M8D8 model \(\approx 53\%\) of all mergers are in the IMBH regime.
### 3.3 Ex-situ mergers
Binaries can get ejected and merge ex-situ, i.e., outside the cluster. A binary is expelled through a binary-single interaction when \(v_{\rm bin}>v_{\rm esc}\), where \(v_{\rm bin}\) is obtained from Equation 7. Under the ejection condition, we can expect ex-situ mergers to involve less massive binaries, triples with a more massive third-body interloper, smaller binary semi-major axes, and lower \(\epsilon\). In the DE set of models, \((1/\epsilon-1)=0.2\) always (making \(\epsilon=0.83\)), while in all other models, including the Fiducial model, \((1/\epsilon-1)\) is mass-dependent and is lowered as the mass ratio between the interloper and the binary becomes more asymmetric with the formation of an IMBH.
The fraction of ejected DBH mergers out of all mergers (\(\mathcal{F}_{\rm ej}\)) for varying initial cluster escape velocity is shown in Fig. 17. The cluster \(v_{\rm esc}\) plays the key role in determining \(\mathcal{F}_{\rm ej}\), with nearly all of the mergers being ex-situ for sufficiently low \(v_{\rm esc}<10\) km s\({}^{-1}\). Such clusters, if evolved with direct NBODY models, may still show in-situ mergers due to a high primordial binary fraction and higher-multiplicity interactions (Banerjee 2018; Banerjee 2021; Chattopadhyay et al. 2022a).
The initial mass and density of the cluster, independently of \(v_{\rm esc}\), also govern \(\mathcal{F}_{\rm ej}\). For the same initial \(v_{\rm esc}\), a smaller-mass (and hence higher-density) cluster results in fewer ex-situ mergers, as shown by the M5 and M6 models in Fig. 17.
The initial spin distribution of the BHs also determines \(\mathcal{F}_{\rm ej}\), with high initial spin models having more ejected mergers (see the inset plot of Fig. 17). This is because high spins translate into high recoil kicks, as explained in Sec. 3.2.7. Hence \(\mathcal{F}_{\rm ej}\) increases, even though the number of ex-situ mergers does not vary much between models with different initial spin distributions.
For the DE set of models, \((1/\epsilon-1)=0.2\), unlike in all other models where it is \(\leq 0.2\), causing more binaries to be ejected out of the cluster in the DE models. The effect, however, is small; \(\mathcal{F}_{\rm ej}\) for models M7D5 and \(\rm DE_{M7D5}\) (both with an initial cluster mass of \(10^{7}\) M\({}_{\odot}\) and density of \(10^{5}\) M\({}_{\odot}\) pc\({}^{-3}\)) is 0.04 and 0.07, respectively. For M6D5, with initial mass and density of \(10^{6}\) M\({}_{\odot}\) and \(10^{5}\) M\({}_{\odot}\) pc\({}^{-3}\), the ejected mergers make up a fraction of 0.46 of all mergers, compared to \(\mathcal{F}_{\rm ej}=0.52\) in the constant-\(\Delta{\rm E}/{\rm E}\) DE model of the same initial mass and density. The difference disappears for clusters of high \(v_{\rm esc}\) because there are fewer ejections.
We compare the ejected mergers to the in-cluster ones in Fig. 18, using models M6D5 and M6D6 (which have ejected merger fractions of about 0.5 and 0.3, respectively, and each of which can form an IMBH just above \(100\) M\({}_{\odot}\)). The left panel and top middle panel show that lower-mass binaries and equal mass ratios are preferred among ejected mergers. This results in a smaller chirp mass \(M_{\rm c}\) for these ex-situ mergers (lower middle panel of Fig. 18). Though most ex-situ mergers are first-generation, a fraction of them can also belong to a higher generation, typically a second-generation merger remnant BH merging with a primordial first-generation BH, as reflected in the second peak at \(\sim 0.8\) in the distribution of primary spin \(\chi_{1}\) (upper right panel of Fig. 18). The eccentricity at the time of ejection/formation (\(e_{\rm i}\)) for ex-situ/in-situ mergers is shown in the lower right panel of Fig. 18. The ex-situ binaries roughly follow a thermal distribution, albeit slightly shifted to lower values, since highly eccentric binaries tend to merge very rapidly.
## 4 Observational Implications: Detectability and Merger Rates
We face a couple of challenges in calculating the merger rates from nuclear clusters: the lack of comprehensive data on the number density evolution of nuclear clusters with redshift (and on cluster birth redshifts), and the uncertainty in their birth parameters (mass, half-mass density and metallicity). We therefore make a few simplifying assumptions (and then vary some of these as different models), as explained step-by-step below:
1. Create \(N\) clusters with different values of initial cluster mass (\(M_{\rm NSC,i}\)) and half-mass radius (\(R_{\rm NSC,i}\)). To assign a unique \(M_{\rm NSC,i}\) and \(R_{\rm NSC,i}\) to each individual cluster, we construct the following model groupings: (a) ModelA group: Host galaxy masses (\(M_{\rm gal}\)) are drawn from a flat-in-the-log distribution between \(10^{8}-10^{12}\) M\({}_{\odot}\). Each host galaxy, depending on its mass, has a normalized relative weight \(w_{\rm N}\) (such that the area under the curve of the number density of galaxies per unit volume per unit dex is scaled to unity). \(w_{\rm N}\) is computed from the Schechter function (Schechter, 1976) with the best-fit parameters of Mortlock et al. (2015) and Song et al. (2016). ModelA\({}_{1}\) and ModelA\({}_{2}\) are selected from Mortlock et al. (2015) for redshifts \(0.3<z<0.5\) and \(2.5<z<3\), while ModelA\({}_{3}\) is from Song et al. (2016) for \(z=5\).9 These selections are made to represent three regions of the redshift parameter space. Each host galaxy is then associated with a nuclear cluster, whose \(M_{\rm NSC,i}\) is obtained from the fitting formulae for late-type galaxies through \(M_{\rm gal}\), and then \(R_{\rm NSC,i}\) through \(M_{\rm NSC,i}\), from the third and first rows of Table 1 of Georgiev et al. (2016) (with the most likely values of the fit). The nuclear clusters are all assumed to have the same metallicity of Z\(=1.5\times 10^{-3}\), and are born uniformly between redshifts 0 and 8. Footnote 9: Schechter function: \(\phi({\rm M})=\phi^{*}\ln(10)\,[10^{({\rm M}-{\rm M}^{*})}]^{1+\alpha}\exp[-10^{({\rm M}-{\rm M}^{*})}]\). The turn-over mass in units of dex is M\({}^{*}=10.9\); \(11.04\); \(10.97\), the logarithmic normalization is log\(\phi^{*}=-2.54\); \(-4.03\); \(-4.28\), and the slope is \(\alpha=-1.59\); \(-1.69\); \(-1.70\) for ModelA\({}_{1,2}\) (Mortlock et al., 2015) and ModelA\({}_{3}\) (Song et al., 2016), respectively.
(b) ModelB group: Since each fit for late-type galaxies in Georgiev et al. (2016) (Table 1) has error margins, we devise two further sub-models: ModelB\({}_{1}\), with the error margins of the \(M_{\rm NSC,i}\) and \(R_{\rm NSC,i}\) fits chosen such that the least massive and most sparse possible cluster is built, and ModelB\({}_{2}\), with the opposite choice. Everything else in ModelB\({}_{1,2}\) is identical to ModelA\({}_{1}\).
(c) ModelC group: ModelC\({}_{1}\) and ModelC\({}_{2}\) are identical to ModelB\({}_{1}\) and ModelB\({}_{2}\) respectively, apart from the Georgiev et al. (2016) best-fit, which for the C group of models is obtained from the sub-sample of nucleated early-type galaxies (see their Table 1).
(d) ModelD: The James Webb Space Telescope is now confidently detecting galaxies at redshifts as high as 12 (Castellano et al., 2022). We therefore make another variation of ModelA\({}_{3}\), extending the volume out to redshift 12 (and drawing the nuclear cluster birth redshifts uniformly between 0 and 12).
(e) ModelE group: The models in the ModelE\({}_{1,2}\) group are the same as ModelA\({}_{1,3}\) respectively, but with metallicities Z\(=1.5\times 10^{-4}\).
Figure 17: Fraction of ejected binary mergers out of all mergers (\(\mathcal{F}_{\rm ej}\)) with respect to escape velocity for different cluster models. ‘M5’ (coral), ‘M6’ (blue) and ‘M7’ (pink) denote clusters with initial masses \(10^{5}\) M\({}_{\odot}\), \(10^{6}\) M\({}_{\odot}\) and \(10^{7}\) M\({}_{\odot}\) respectively, but with different initial half-mass radii. The fiducial model is identified with a star. The zoomed-in inset plot shows M6D7 (initial mass and density \(10^{6}\) M\({}_{\odot}\) and \(10^{7}\) M\({}_{\odot}\) pc\({}^{-3}\) respectively) with non-spinning initial BHs, together with clusters of the same initial mass and density but initial BH spins of \(0.1:0.1\) (Sp.\(01_{\rm M6D7}\)), \(0.3:0.3\) (Sp.\(3_{\rm M6D7}\)), \(1:1\) (Sp.\(1_{\rm M6D7}\)) and the spin distribution inferred by the LVK from gravitational-wave observations (Sp.\(\rm LVK_{\rm M6D7}\)).
* ModelF group: The masses and half-mass radii of the nuclear clusters of the corresponding galaxies are obtained from observations of their present-day properties (Georgiev et al., 2016). In all of our previous models we stick to a steady-state assumption, such that these present-day properties serve as the initial cluster properties of some other galaxies and the mass-radius relations of the nuclear clusters are identical at each evolutionary snapshot; this assumption is likely incorrect. We therefore create two further sets of nuclear clusters with all properties identical to ModelA\({}_{1}\), but with initial radii of 1 pc for one set (ModelF\({}_{1}\)) and initial radii reduced from the median Georgiev et al. (2016) fit by a factor of 100 for the other set (ModelF\({}_{2}\)).
* ModelG group: In ModelG\({}_{1}\) the cluster initial properties are identical to the 228 clusters selected from Georgiev et al. (2016), also utilized in the study by Antonini and Rasio (2016). The clusters are equi-weighted, meaning each is given a weight of \(1/228\). In this scenario, we only select mergers with a merger-time cut-off of 1 Gyr (ModelG\({}_{1}\)) or 1 Gyr post-core collapse (ModelG\({}_{2}\)). No cosmological evolution is accounted for in this case. The metallicity remains the same as in ModelA\({}_{1}\).
* ModelH group: We explore the effects of a non-uniform cluster birth redshift distribution following Madau and Dickinson (2014) through ModelH\({}_{1}\). ModelH\({}_{2}\) uses a fixed birth redshift of 2, approximately the peak of the Madau and Dickinson (2014) star formation history. All other parameters remain exactly identical to ModelA\({}_{1}\).
Once the batch of clusters is created for different models, we evolve them using cBHBd as explained below.
* Each cluster is assigned a birth redshift, depending on model type -- either uniform (between 0-8 or 0-12) or Madau and Dickinson (2014) distribution or fixed at a redshift of 2, irrespective of their initial cluster properties.
* Volumetric shells are created such that together they enclose the comoving volume between redshifts 0 and 8. The width of each shell is taken to be 0.2 in redshift (although our calculation becomes independent of this step size, as long as \(N\) is large enough).
* A flat \(\Lambda\)CDM cosmology with a Hubble constant of \(70\,\mathrm{km\,s^{-1}Mpc^{-1}}\) and \(\Omega_{0}=0.3\) is assumed. For each merger in each cluster (at the given birth redshift), the true merger redshift and look-back time are computed. Only the mergers that occur within the lower redshift limit of the grid are kept (i.e., mergers that happen
Figure 19: 20 cluster models with different initial masses and densities (circles) evolved using cBHBd for a Hubble time to their final masses and densities (stars), joined by dotted green arrows. All other settings of these models are the same as the Fiducial model, which is denoted by a circle around the star (its final mass-density values are similar to those of the Milky Way nuclear cluster). The colour bar for the circular points shows the 90\({}^{\mathrm{th}}\) percentile of the IMBH mass that forms, while that of the stars shows the 50\({}^{\mathrm{th}}\) percentile. This is a visual representation of the variance in the upper IMBH mass formed solely through hierarchical mergers.
Figure 18: Comparison of DBH parameters in ejected (ej) vs in-cluster (in) mergers. The left panel shows that both the primary (upper plot) and the secondary (lower plot) in ejected mergers are less massive, making the chirp mass (M\({}_{\mathrm{c}}\)) smaller for ejected DBHs (lower middle panel). An equal mass ratio \(q\) is slightly more preferred in ejected systems (upper middle panel), as are first-generation mergers (upper right panel). Away from further dynamical binary-single encounters, the initial (at formation) eccentricity \(e_{i}\) distribution of ejected mergers is marginally more circular than the thermal distribution, while in-cluster DBH mergers typically have much steeper \(e_{i}\).
in the future of the cluster are rejected). Here, we also calculate the signal-to-noise ratio (SNR) of the gravitational-wave emission of the coalescing binaries for a range of different detectors. The SNR is a standard quantification of the detectability of a gravitational-wave signal for a given instrument, and can be calculated for a compact binary as (Cutler and Flanagan, 1994)
\[{\rm SNR}^{2}=\frac{5}{6}\frac{1}{\pi^{4/3}}\frac{c^{2}}{r^{2}}\left(\frac{GM_{ \rm c}}{c^{3}}\right)^{5/3}|Q(\theta,\phi;\iota)|^{2}\int_{f_{\rm min}}^{f_{\rm max }}{\rm d}f\,\frac{f^{-7/3}}{S_{n}(f)}, \tag{8}\]
where \(r\) is the luminosity distance to the binary and \(M_{\rm c}=(m_{1}m_{2})^{3/5}/(m_{1}+m_{2})^{1/5}\) its chirp mass. The function \(Q(\theta,\phi;\iota)\) describes the antenna response of the detector to the cross and plus polarization of the gravitational wave; it depends on the polar angles \(\theta\) and \(\phi\) of the binary position on the sky and the inclination \(\iota\) of its orbital axis with respect to the line-of-sight. In this work, we marginalise \(|Q(\theta,\phi;\iota)|^{2}\) over all angles, which yields \(\langle|Q(\theta,\phi;\iota)|^{2}\rangle=4/25\) for an interferometric detector design (Maggiore, 2008). The function \(S_{n}(f)\) is the noise power spectral density of a given instrument. Here, we use the noise power spectral densities for the currently operating Advanced LIGO (aLIGO) detectors and the planned Cosmic Explorer (CE) and Einstein Telescope (ET) detectors. 10
Footnote 10: For aLIGO we adopt \(S_{n}(f)\) from released data of the collaboration: [https://dcc.ligo.org/LIGO-T1800044/public](https://dcc.ligo.org/LIGO-T1800044/public), last accessed 16 May 2023. For CE we adopt \(S_{n}(f)\) from released data of the collaboration: [https://cosmicexplorer.org/sensitivity.html](https://cosmicexplorer.org/sensitivity.html), last accessed 16 May 2023. For ET we adopt \(S_{n}(f)\) from released data of the collaboration: [https://www.et-gw.eu/index.php/etsensitivities](https://www.et-gw.eu/index.php/etsensitivities), last accessed 16 May 2023 (see main text).
The frequency minimum \(f_{\rm min}\) and maximum \(f_{\rm max}\) of the integration in Eq. (8) depend on the detector. Ground-based detectors like aLIGO, CE, and ET are sensitive to relatively high gravitational-wave frequencies, \(\sim{\cal O}(10^{1}-10^{3})\) Hz, which are emitted by binaries during their final orbits before merger. Hence, for ground-based detectors we set the \(f_{\rm max}=c^{3}/[6\sqrt{6}\pi G(m_{1}+m_{2})]\), corresponding to the frequency of the binary's Innermost Stable Circular Orbit (ISCO). For practical purposes, we can set the lower limit of the integration to \(f_{\rm min}=0\), because for the noise power spectral densities of ground-based detectors only frequencies \(f\gtrsim 10\) Hz significantly contribute to the integral in Eq. (8).
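To make the detectability calculation concrete, the sketch below shows one possible numerical evaluation of Eq. (8) for a tabulated noise curve, using the angle-averaged antenna response \(\langle|Q(\theta,\phi;\iota)|^{2}\rangle=4/25\) and the ISCO cut-off; the frequency and PSD arrays are assumed to come from the detector files cited in the footnote and are not reproduced here.

```python
import numpy as np
from scipy.constants import G, c, parsec

M_SUN = 1.989e30  # kg

def snr(m1_msun, m2_msun, dist_mpc, freqs, psd):
    """Angle-averaged SNR of an inspiral, following Eq. (8).

    freqs [Hz] and psd [1/Hz] tabulate the detector noise power spectral
    density S_n(f); masses are in solar masses, luminosity distance in Mpc."""
    m1, m2 = m1_msun * M_SUN, m2_msun * M_SUN
    mc = (m1 * m2)**0.6 / (m1 + m2)**0.2              # chirp mass [kg]
    r = dist_mpc * 1.0e6 * parsec                      # [m]
    f_isco = c**3 / (6.0 * np.sqrt(6.0) * np.pi * G * (m1 + m2))
    mask = freqs <= f_isco                             # integrate up to the ISCO frequency
    integral = np.trapz(freqs[mask]**(-7.0 / 3.0) / psd[mask], freqs[mask])
    snr2 = (5.0 / 6.0) * np.pi**(-4.0 / 3.0) * (c**2 / r**2) \
        * (G * mc / c**3)**(5.0 / 3.0) * (4.0 / 25.0) * integral
    return np.sqrt(snr2)
```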
* If there are \(\Delta\kappa\) selected mergers within the time interval \(\Delta t_{\rm k}\) in a particular (\(N\)th) cluster with associated normalized weight \(w_{\rm N}\), the weighted contribution of that cluster to the merger rate becomes
\[\left(\frac{\Delta\kappa}{\Delta t_{\rm k}}\right)w_{\rm N}, \tag{9}\]
in units of \({\rm yr}^{-1}\) (since \(w_{\rm N}\) is dimensionless). Summing the weighted contributions over the \(N\) clusters for each model set, we get
\[\sum_{N}\left(\frac{\Delta\kappa}{\Delta t_{\rm k}}\right)w_{\rm N}, \tag{10}\]
since \(w_{\rm N}\) is normalized such that \(\sum w_{\rm N}=1\).
* If the number density of galaxies is \(\rho_{\rm s}\,{\rm Gpc}^{-3}\) for the volume grid \(v_{\rm s}\), we need to sum over the \(s\) volume grids that together give the total volume \(V\), such that the total merger rate \({\cal R}\) becomes
\[{\cal R}=\frac{f_{\rm nc}}{V}\sum_{s}\rho_{\rm s}v_{s}\sum_{N}\left\{\left(\frac{\Delta\kappa}{\Delta t_{\rm k}}\right)w_{\rm N}\right\}, \tag{11}\]
where \(f_{\rm nc}\) is the fraction of galaxies in the mass range of \(10^{8}-10^{12}\) M\({}_{\odot}\) that have a nuclear cluster. Observationally, this fraction varies with galaxy mass, from \(\lesssim 20\%\) for galaxies with masses around \(10^{6}\) M\({}_{\odot}\) to as high as 90% for galaxy masses of \(10^{9}-10^{10}\) M\({}_{\odot}\). However, we simplify the matter by taking \(f_{\rm nc}=0.8\) for all galaxies, which is a rough estimate for late-type galaxies in our mass range (Neumayer et al., 2020, Fig 3). If a more generalized condition is desired, such that \(f_{\rm nc}\) becomes a function of the host galaxy mass (\(f_{{\rm nc},N}\)), this term can be added inside the summation over \(N\). If \(\rho_{\rm s}\) is independent of galaxy properties and redshift (and hence constant), the expression can be simplified to
\[{\cal R}=f_{\rm nc}\rho_{\rm s}\sum_{N}\left\{\left(\frac{\Delta\kappa}{\Delta t_{\rm k}}\right)w_{\rm N}\right\}. \tag{12}\]
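A minimal sketch of how Eq. (12) could be evaluated is given below; the per-cluster merger counts, time intervals and weights are illustrative placeholders rather than actual model output, and \(\rho_{\rm s}\) is converted from Mpc\({}^{-3}\) to Gpc\({}^{-3}\) so that the rate comes out in Gpc\({}^{-3}\) yr\({}^{-1}\).

```python
# constant-density simplification of Eq. (12)
f_nc = 0.8                 # fraction of nucleated galaxies
rho_s = 0.01 * 1.0e9       # 0.01 Mpc^-3 expressed in Gpc^-3

# illustrative per-cluster data: (selected mergers, time interval [yr], weight w_N)
clusters = [
    (12, 1.0e10, 0.4),
    (3, 1.0e10, 0.6),
]

rate = f_nc * rho_s * sum(dk / dt * w for dk, dt, w in clusters)
print(f"R = {rate:.3g} Gpc^-3 yr^-1")
```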
The choice of \(\rho_{\rm s}\) is a tricky one. Fletcher (1946) estimated a high value of \(\rho_{\rm s}\approx 12\) Mpc\({}^{-3}\). More recent works have lowered this number significantly (Poggianti et al., 2013; Leja et al., 2013; Ownsworth et al., 2016), but with different studies resulting in different estimates for \(\rho_{\rm s}\), we have taken the upper limit of \(\rho_{\rm s}\approx 0.01\) Mpc\({}^{-3}\) (Conselice et al., 2005, through the Hubble Space Telescope), as used by Antonini et al. (2019). Conselice et al. (2016) predict \(\approx 2\times 10^{12}\) galaxies within redshift 8 (making \(\rho_{\rm s}=0.001\) Mpc\({}^{-3}\)), while Lauer et al. (2021), with New Horizons, show the sky to be 10\(\times\) less bright. JWST data may further alter \(\rho_{\rm s}\) in the near future.
Detectable rates \({\cal R}_{\rm LVK,CE,ET}\) for the LVK, CE and ET detectors are also calculated in a similar way, but counting only the mergers with SNR\(>8\) in each case.
Finally, we note that our nuclear cluster models do not have a central SMBH, while observations show that at least in some galaxies the two coexist (e.g., Seth et al., 2008; Neumayer and Walcher, 2012). Thus, our merger rates should most likely be interpreted as upper limits.
The calculated rates for the different models are tabulated in Table 3. We find intrinsic rates between \({\cal R}=0.01-0.80\) Gpc\({}^{-3}\)yr\({}^{-1}\), while \({\cal R}_{\rm LVK}\lesssim 0.7{\cal R}\), \({\cal R}_{\rm CE}\approx 0.8-0.97{\cal R}\) and \({\cal R}_{\rm ET}\gtrsim 0.97{\cal R}\). Lower-redshift, late-type host galaxies appear to have higher merger rates. Comparing ModelA\({}_{1}\) to ModelD\({}_{1}\), we observe that extending the redshift limit from 8 to 12 does not change \({\cal R}\) significantly, since the peak of our detectable mergers occurs at \(z\approx\)1-2, a trend similar to Fragione et al. (2022). The lack of contribution from clusters born at higher redshifts is also apparent in models ModelH\({}_{1,2}\), where \({\cal R}\) remains identical between the two, and negligibly lower than in ModelA\({}_{1}\). As the initial cluster densities are made higher in ModelF\({}_{1,2}\), the rates increase by 5 to 8 times. The spread in cluster mass-radius is only partially encapsulated in ModelB\({}_{1,2}\) and ModelC\({}_{1,2}\), so that ModelG\({}_{1,2}\), with their simplistic equi-weight assumptions, produce slightly higher \({\cal R}\). The upper limit of our merger rate is about an order of magnitude lower than the upper limit obtained by Fragione et al. (2022) (their Fig. 1 random initial seed model roughly matches ours), possibly due to their assumption of a pre-existing seed BH.
Qualitatively, while the LVK and ET appear able to observe (at SNR\(>8\)) primary masses up to a few hundred M\({}_{\odot}\) and \(q\sim{\cal O}(10^{-1})\), CE pushes this threshold to about 800 M\({}_{\odot}\) and \(q\sim{\cal O}(10^{-2})\).
Evolving the 228 Georgiev et al. (2016) models under two assumptions of initial conditions--(a) mass and radius are the same as in current observations, and (b) mass is the same as in current observations and radius is 1 pc--we find that after a Hubble Time, in model (a) \(\approx 6\) of the clusters host IMBHs \(>1000\) M\({}_{\odot}\) and \(\approx 168\) of them host an IMBH of mass 100 to 400 M\({}_{\odot}\); in model (b), \(\approx 20\) of the clusters host IMBHs \(>1000\) M\({}_{\odot}\) and \(\approx 176\) of them host an IMBH of mass 100 to 400 M\({}_{\odot}\).
## 5 Conclusions
In this study we have modelled \(3,400\) massive clusters within the initial mass range of \(10^{6}-10^{8}\,\mathrm{M}_{\odot}\) and initial density range of \(10^{5}-10^{8}\,\mathrm{M}_{\odot}\,\mathrm{pc}^{-3}\), spanning the parameter space of nuclear clusters and the most massive globular clusters, with the updated fast code cBHBd, which incorporates initial mass function (hence metallicity) dependent probabilistic DBH pairing and binary-single encounters (including the mass-dependent energy loss \(\Delta\mathrm{E}/\mathrm{E}\)). In reference to the main questions we asked in Sec. 1, we find that:
1. IMBHs ranging from \(\mathcal{O}(10^{2})-\mathcal{O}(10^{4})\,\mathrm{M}_{\odot}\) can be created solely through in-cluster hierarchical mergers (Fig. 19). This mass range is roughly one to two orders of magnitude lower than that of the least massive SMBHs. To reach the mass range of SMBHs from the hierarchically-created IMBHs, there must therefore be subsequent mass accretion through other processes.
2. The initial cluster escape velocity is the most important parameter in determining IMBH formation. The cluster's mass and density then determine the final mass of the IMBH. For \(v_{\mathrm{esc}}\gtrsim 400\,\mathrm{km}\,\mathrm{s}^{-1}\), an IMBH with mass up to \(\mathcal{O}(10^{4})\,\mathrm{M}_{\odot}\) can form for sufficiently high cluster masses and densities (see the lower panel of Fig. 10). This cut-off escape velocity results from a combination of increased BH retention post-natal kick (Fig. 11) and gravitational-wave recoils, which average to around \(400\,\mathrm{km}\,\mathrm{s}^{-1}\) over all spin magnitudes and orientations (see Fig. 12). Other secondary factors that play a role in determining the mass of the IMBH are the cluster metallicity (which alters the width of the initial mass function), the DBH pairing prescription (the \(\alpha\) parameter, see Sec. 2) and the mass-dependent functional form of \(\Delta\mathrm{E}/\mathrm{E}\) (see Sec. 2). On the other hand, the initial BH spin distribution, unless extremely high (i.e. \(\chi_{1,2}=1:1\)), does not significantly affect the final IMBH mass nor its spin.
3. An initial BH seed of at least \(10\times\) the mass of the most massive (stellar-evolution originated) in-cluster initial BH is required to maximise the probability of the seed BH's retention after a few hierarchical mergers (Sec. 3.2.4); this corresponds to \(\sim 200\,\mathrm{M}_{\odot}\) for metal-rich clusters and \(\sim 400\,\mathrm{M}_{\odot}\) for metal-poor clusters.
4. The spin of the final IMBHs shows a double-peaked distribution and a clear mass dependence (upper right panel of Fig. 18; and Fig. 14). For masses \(\gtrsim 10^{3}\,\mathrm{M}_{\odot}\) the IMBH spin is \(\chi_{\mathrm{IMBH}}\sim 0.15\), while for lower masses the spin distribution peaks at \(\sim 0.7\).
5. \(\approx 6-20\%\) of all in-cluster mergers in our set of models are expected to have an eccentricity \(e\geq 0.1\) at \(10\,\mathrm{Hz}\). We find that eccentric mergers are particularly favoured in equal-mass binaries. This means that metal-rich and younger (age \(\sim 0.5-1.5\,\mathrm{Gyr}\)) clusters, which do not form IMBHs of \(>100\,\mathrm{M}_{\odot}\), are ideal formation grounds for eccentric mergers that may be detected by the current generation of gravitational-wave detectors.
About \(2-9\%\) of all in-cluster mergers are also expected to be formed at a frequency greater than \(10\,\mathrm{Hz}\) (this is a subset of the eccentric merger fraction). Such extreme cases may appear in the LVK burst searches (see Fig. 7).
6. The number of mergers involving an IMBH (i.e. \(\geq 100\,\mathrm{M}_{\odot}\)) as a fraction of the total number of mergers is expressed by \(\mathcal{F}_{100}\). Very dense clusters (\(\rho_{i}=10^{8}\,\mathrm{M}_{\odot}\,\mathrm{pc}^{-3}\)) that rapidly form a BH of \(100\,\mathrm{M}_{\odot}\) can have \(0.09\lesssim\mathcal{F}_{100}\lesssim 0.53\). For the Fiducial model, \(\mathcal{F}_{100}\approx 0.16\) (Table 2).
7. The fraction of ejected mergers, \(\mathcal{F}_{\mathrm{ej}}\), is a function of the cluster mass and escape velocity, with \(\approx 20-100\%\) of the DBH mergers being ejected mergers in clusters with \(v_{\mathrm{esc}}<100\,\mathrm{km}\,\mathrm{s}^{-1}\). However, \(\mathcal{F}_{\mathrm{ej}}\) becomes negligible when \(v_{\mathrm{esc}}>200\,\mathrm{km}\,\mathrm{s}^{-1}\) (see Sec. 3.3 and Fig. 17). Ex-situ mergers are typically less massive, more symmetric in mass, and have a longer delay time than their in-situ counterparts.
8. The rates of DBH mergers from nuclear clusters span \(\mathcal{R}\approx 0.01-1\)\(\mathrm{Gpc}^{-3}\mathrm{yr}^{-1}\), which should be regarded as upper limits since we do not include an SMBH in our calculations. The orders-of-magnitude uncertainty on the rates arises predominantly from the uncertain number density distribution of nucleated galaxies (we assume \(\rho_{\mathrm{s}}=0.01\,\mathrm{Mpc}^{-3}\) and \(f_{\mathrm{nc}}=0.8\)). Uncertainties in the initial nuclear cluster mass-density distribution, the nuclear cluster mass scaling relation with respect to host galaxy mass, and the metallicity distribution of nuclear clusters have a lower impact on \(\mathcal{R}\).
9. At SNR\(>8\), we expect CE and ET to detect about \(80\%\) and at least \(90\%\), respectively, of the intrinsic DBH mergers from nuclear clusters. Out of the 228 nuclear clusters with well-measured masses and radii (Georgiev et al., 2016), we predict that up to \(80\%\) of the clusters host hierarchically-formed IMBHs with masses \(\lesssim 400\,\mathrm{M}_{\odot}\), and that up to \(20\) host IMBHs with masses \(>1000\,\mathrm{M}_{\odot}\). We also highlight that, while the current generation of gravitational-wave detectors can only observe IMBHs of up to a few hundred \(\mathrm{M}_{\odot}\), future detectors such as CE will have improved lower-frequency sensitivity, enabling the detection of more massive (\(\sim 500-800\,\mathrm{M}_{\odot}\)) IMBHs.
Future improvements to the study presented here will include binary-binary interactions, BH mergers with other objects (e.g., neutron stars, white dwarfs, non-compact objects), both globular and nuclear clusters, mass gain through infalling globular clusters, and the wet component of gas accretion for nuclear clusters (Guillard et al., 2016; Bourne & Power, 2016).
## Acknowledgements
We thank Christopher Berry, Mark Gieles, Daniel Marin Pina, Simon Stevenson and Tom Wagg for useful discussions and comments. DC and JB are supported by the STFC grant ST/V005618/1, and FA is supported by an STFC Rutherford fellowship (ST/P00492X/2). IMR-S acknowledges support received from the Herchel Smith Postdoctoral Fellowship Fund. This work made use of the OzSTAR high performance computer at Swinburne University of Technology. OzSTAR is funded by Swinburne University of Technology and the National Collaborative Research Infrastructure Strategy (NCRIS).
## Data Availability
The data utilized for this work will be freely available upon reasonable request to the corresponding author.
|
2307.09588
|
Automating Wood Species Detection and Classification in Microscopic
Images of Fibrous Materials with Deep Learning
|
We have developed a methodology for the systematic generation of a large
image dataset of macerated wood references, which we used to generate image
data for nine hardwood genera. This is the basis for a substantial approach to
automate, for the first time, the identification of hardwood species in
microscopic images of fibrous materials by deep learning. Our methodology
includes a flexible pipeline for easy annotation of vessel elements. We compare
the performance of different neural network architectures and hyperparameters.
Our proposed method performs similarly well to human experts. In the future,
this will improve controls on global wood fiber product flows to protect
forests.
|
Lars Nieradzik, Jördis Sieburg-Rockel, Stephanie Helmling, Janis Keuper, Thomas Weibel, Andrea Olbrich, Henrike Stephani
|
2023-07-18T19:51:28Z
|
http://arxiv.org/abs/2307.09588v2
|
Automating Wood Species Detection and Classification in Microscopic Images of Fibrous Materials with Deep Learning
###### Abstract
We have developed a methodology for the systematic generation of a large image dataset of macerated wood references, which we used to generate image data for nine hardwood genera. This is the basis for a substantial approach to automate, for the first time, the identification of hardwood species in microscopic images of fibrous materials by deep learning. Our methodology includes a flexible pipeline for easy annotation of vessel elements. We compare the performance of different neural network architectures and hyperparameters. Our proposed method performs similarly well to human experts. In the future, this will improve controls on global wood fiber product flows to protect forests.
deep learning wood identification maceration vessel elements EU Timber Regulation
## 1 Introduction
In order to reduce illegal logging, the European Union (EU) has, since 2013, required documentation on the origin and species of wood contained in every wood product that is placed on the EU market (EU Timber Regulation, EUTR, No. 995/2010). At the end of 2024, a new regulation is planned to be introduced to avoid global deforestation. The new
EU regulation will target a wider range of products to ensure that these products are "deforestation-free" [European Commission, 2021].
To control the compliance with these laws, the demand for wood species identification is already high and expected to increase.
Various methods have been developed to identify wood species, such as genetic analysis, near-infrared (NIR) spectroscopy, stable isotopes and wood anatomy [Schmitz et al., 2020]. When it comes to identifying wood in non-solid samples, such as paper or pulp, microscopic analysis of the wood anatomy is the method of choice. 163 structural features have been defined by the International Association of Wood Anatomists [Wheeler et al., 1989] and have been used for microscopic descriptions of about 8700 timbers collected in various databases [Richter and Dallwitz, 2000-onwards, Wheeler, 2004-onwards, Koch and Koch, 2022]. Only a few of these structural features can still be utilized for the analysis of fibrous materials. Nevertheless, there are very good descriptions of the references available for the woods mainly used in paper production [Ilvessalo-Pfäffli, 1995, Helmling et al., 2018]. While macroscopic wood analysis already requires extensive expert knowledge of wood anatomists [Ruffinatto and Crivellaro, 2019], microscopic analysis requires more effort, as solid wood samples have to be prepared into thin cuts or the cells of a paper sample have to be individualized. The main challenge in the analysis of paper is that the structural features relating to the three-dimensional arrangement in the tissue cannot be detected from the individual cells of the fibrous materials. Therefore, the anatomical identification of woods is performed at most at the genus level. In addition, paper usually consists of a mixture of different genera. Therefore, two slides per sample must be systematically examined by wood anatomists in order to detect all wood genera present, even those in lower concentrations.
Due to the considerable amount of time required (and due to the limited number of competent scientists in this field), the experts are currently not able to fulfill the need for this analysis. For this reason, the economic and ecological impact of improving and facilitating the analysis of these microscopic images is huge.
For more than 20 years, very good computer-aided wood species identification systems have been available, such as Commercial Timbers [Richter and Dallwitz, 2000-onwards], Inside Wood [Wheeler, 2004-onwards] or CITESwoodID [Richter et al., 2014-onwards]; these are large databases that are also available online. There have been endeavors in recent years to automate macroscopic solid wood identification by using image-based machine learning techniques. In particular, field-ready wood identification systems, such as MyWood-Premium [UTAR and FRIM, 2018], XyloTron [Ravindran et al., 2020], or XyloPhone [Wiedenhoeft, 2020] could contribute greatly to the fight against the illegal wood trade. This artificial intelligence (AI) is producing promising results and is currently under fast development [Silva et al., 2022]. In microscopic wood identification, especially for the analysis of fibrous material, on the other hand, the process is still mostly manual, and no automatic approach is yet known to the authors.
In this paper, we present how a large number of reference specimens for hardwood fibers can be imaged across the entire slide in five focal planes to build the database. To facilitate the expert process of wood species identification, these data are used to train a deep-learning based system to analyze unknown samples automatically. With this automation, more samples can be analyzed and the protection of forests will be improved.
As any data-driven method relies on a solid, well-annotated database, we present a methodology that enables easy annotation and reannotation to increase annotation throughput and minimize this tedious work for biological experts. Furthermore, as we want to make use of the plethora of deep learning algorithms now and in the future, we use a pipeline with interchangeable processing steps, each with measurable performance.
We will show that our proposed method is able to solve the given task, and we illustrate the influence that different networks and parametrizations have on the individual processing steps. Hence, the key contributions of this paper are:
* An initial, comprehensive microscopic image reference data-set for hardwood fiber material is available and can be easily extended using this method.
* We are the first to automate microscopic wood detection of hardwood fibers using neural networks. Our automated method performs similarly well to human experts.
* A flexible pipeline is presented on how the dataset was generated. This data is then used for training the networks. This pipeline can be used for additional hardwood samples/species in the future, and be extended to be used on mixed samples.
* We perform in-depth comparison of state-of-the-art neural network architectures and hyperparameters guided by biological domain expertise.
* With our tests, we also show how large microscopic data of this and similar types must be preprocessed to train neural networks. In particular, we show the effect of color channels, different focal planes, and image sizes on accuracy.
## 2 Materials and Methods
We first discuss the methods applied for sample preparation, the microscope that is used and the respective influences on image generation and variation.
We will then describe the process pipeline for automatic image analysis with state-of-the-art deep neural networks and its individual methods and parameters.
### Sample, Optics and Image Generation
Sample preparation is very complex and requires biological and technical expertise. For reasons of feasibility, we focus on hardwood samples. They can be identified by vessel elements and their anatomical structure, as described in Ilvessalo-Pfäffli (1995) and Helmling et al. (2018). Within those samples, we mainly selected commonly processed timbers that are cultivated in plantations for pulp, paper and fiber board production. To test the limitations of the method, morphologically very different as well as very similar species were selected. Vouchered specimens of the wood collection of the _Thünen Institute_ and other documented sources served as reference material for training and testing. Analogously to pulp production, the cell compound of wooden tissue is dissolved into individual cells by maceration according to the method of Franklin (1945). Maceration and staining are described in Helmling et al. (2016) and Helmling et al. (2018). Alexander Herzberg solution and nigrosin (1 wt%) were used for staining. As Alexander Herzberg staining is not durable, the slides must be examined without delay.
In a later real-life application scenario, the cell density on the slides will vary depending on the preparator. To consider this variance in the training data, the preparations were made by different people.
Figure 1 shows four different overview images of macerated samples of different species. Density and color as well as the shape of the vessel elements show a high variance. Influencing factors are: choice of species, staining method, preparing technician, preparation agents, density and microscope. To record the data and automatically digitize a large number of samples, we use the microscope slide scanner Axioscan 7 (Zeiss, Germany). With the objective N-Achroplan 5x/0.15, five focal levels per slide were recorded over an area of approximately 8 cm\({}^{2}\) with a voxel size of 0.69 x 0.69 x 16.33 \(\upmu\)m\({}^{3}\) (software ZEN slidescan 3.5, Zeiss, Germany). Depending on the material, the vessel elements and cell types can have different levels of destruction. They may still be completely intact, but may also be torn into smaller fragments.
Figure 1: Overview images of differently stained macerated samples of A _Acacia_, B _Populus_, C _Hevea_, D _Salix_. Scale bar=5mm.
### Overview of Algorithm Pipeline
Convolutional neural networks (CNNs) have been widely applied to all kinds of image processing problems for the last 15 years. They have proven to generalize well and generally outperform traditional computer vision methods [14]. However, as in most real-world applications, we are not presented with a fixed data set and a well-defined classification or segmentation task, but rather have to model and build the dataset ourselves. One of the key problems is that annotation of microscopic overview images of these samples is tedious. Additionally, it can only be performed by biological wood identification experts (wood anatomists) who are familiar with each species' characteristic cell structure. We annotated the vessel elements using the ZEN blue 3.4 software from Zeiss, Germany.
All modern object detection networks such as Faster R-CNN [13], DINO [1], DETR [15] or YOLOv7 [20] perform detection and classification in one step. We, however, use a two-step algorithm approach, as illustrated in Figure 2. We first detect the vessel elements as bounding boxes and then perform classification on these bounding boxes as a second step. There are several reasons for doing this. The first reason is that it imitates the process that is done by visual-manual analysis. This improves interpretability and comparability with the previous human method. Secondly, we can assume that we do not require the same image resolution for the detection of vessel elements as for their classification. Reducing the resolution, which can be up to \(50000\times 50000\) pixels, is the easiest means of reducing computational cost and increasing flexibility. Last but not least, it makes the generation of a database easier, as we explain in the next section.
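A compact sketch of this two-step procedure is given below; `detector` and `classifier` stand in for the trained YOLOv7 and CNN models, the slicing-based downscaling is a simplification, and none of the names are taken from the actual code base.

```python
def analyse_slide(full_image, detector, classifier, det_scale=0.1):
    """Two-step analysis: detect at reduced resolution, classify full-resolution crops.

    full_image: full-resolution overview image as a numpy array
    detector:   callable returning [(x1, y1, x2, y2), ...] on the downscaled image
    classifier: callable assigning a genus to a full-resolution crop-out
    """
    step = int(round(1.0 / det_scale))
    small = full_image[::step, ::step]              # crude ~10 % downscaling
    genus_counts = {}
    for (x1, y1, x2, y2) in detector(small):
        # map the box back to full resolution and crop the vessel element
        crop = full_image[int(y1) * step:int(y2) * step,
                          int(x1) * step:int(x2) * step]
        genus = classifier(crop)                    # class with the highest confidence
        genus_counts[genus] = genus_counts.get(genus, 0) + 1
    return genus_counts                             # predicted vessel-element distribution
```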
#### 2.2.1 Database Generation
In order to develop a robust and generalizable deep learning procedure systematically, we want to build a large database quickly. A two-step procedure is very advantageous for that purpose. As illustrated in fig. 3, the databases for the detector and the classifier are built and trained iteratively to minimize the effort for the annotators as much as possible. This is why we start building up a database of overview images using only pure species samples, rather than directly using mixed species' materials and annotating them.
Figure 3: Database building and learning pipeline. Wood expert annotation effort in yellow, databases are in orange and deep networks in blue
Figure 2: Two-step procedure where first vessel elements are detected and then classified
The advantage is two-fold: first of all, only location annotation must be performed, and classification annotation can be avoided. Secondly, pure samples are easier to come by than systematically mixed samples would be. Detection is then performed on the mono-fraction overview images, while classification is done on full-resolution, mostly centered cut-outs. The detection is trained with several genera to ensure the recognition of a greater habitus diversity of the vessel elements. These genera are _Acacia_ (Acacia), _Betula_ (Birch), _Eucalyptus_ (Eucalypt), _Fagus_ (Beech), _Hevea_ (Rubberwood), _Liquidambar_ (Sweet gum), _Populus_ (Poplar), _Salix_ (Willow) and _Schima_ (Chinese guger tree). _Schima_ and _Populus_ are shown as examples in fig. 4. However, one important aspect that has to be considered with special care is the so-called data leakage (Hannun et al., 2021).
#### 2.2.2 Data leakage and dataset splitting
When training a model, it is always necessary to split the data in a way that avoids training "wrong" features. Learning wrong features instead of relevant ones is called data leakage. One typical example of data leakage is the brightness of an image. If images of some classes tend to be darker than images of other classes, this can lead to the network only paying attention to the brightness of the image and not learning relevant features.
The maceration process described above produces samples that should have a variance that is characteristic of the respective species we want to detect and classify. However, as maceration is a manual process, it can also introduce variance in the data that is characteristic of the individual human preparation instead of the species. Therefore, it is important not to learn these preparation differences. This is done by making sure that each genus is represented by at least three independent macerates. The respective images are then split to generate independent datasets for training, validation and testing.
The test dataset is used only for a final check, while the training and validation datasets are used for training and optimizing hyperparameters. We keep the same ratio of classes in both the training and validation splits (stratified).
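A minimal sketch of such a macerate-aware split with scikit-learn is shown below; the column names and toy entries are purely illustrative, and note that `GroupShuffleSplit` alone keeps macerates disjoint but does not enforce the class stratification mentioned above (scikit-learn's `StratifiedGroupKFold` can combine both constraints).

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# toy table: every image belongs to one genus and one macerate (preparation)
df = pd.DataFrame({
    "image":       [f"img_{i}.tif" for i in range(12)],
    "genus":       ["Fagus"] * 6 + ["Hevea"] * 6,
    "macerate_id": ["F1", "F1", "F2", "F2", "F3", "F3",
                    "H1", "H1", "H2", "H2", "H3", "H3"],
})

splitter = GroupShuffleSplit(n_splits=1, test_size=0.33, random_state=0)
train_idx, test_idx = next(splitter.split(df, groups=df["macerate_id"]))
train, test = df.iloc[train_idx], df.iloc[test_idx]

# no macerate appears on both sides of the split, so preparation-specific
# artefacts cannot leak from training into evaluation
assert set(train["macerate_id"]).isdisjoint(test["macerate_id"])
```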
### Detection
The detection step consists of locating the vessel elements in the images. The final vessel element classification is based on fine structures within the vessel elements and therefore will only be possible with a high enough resolution. At the same time, the detection of the vessel elements can be performed on downscaled images in a more efficient way. Hence, even if modern object detectors such as YOLOv7 (Wang et al., 2022) or DINO (Zhang et al., 2022) are capable of a direct classification step, we will not use the classification results.
As the goal of the paper is to gain insight into the applicability of convolutional neural networks to this application domain, we will compare different parameter settings.
For the model, we restrict ourselves to YOLOv7 (Wang et al., 2022). While there are other types of detectors, they do not necessarily work for different datasets and are limited by the image resolution. Since our images have sizes of up to 50,000 pixels per side, we need detectors that scale well to higher resolutions. YOLO-type detectors (one-stage detectors) are
Figure 4: Image cutouts of vessel elements from overviews of A _Schima_ and B _Populus_. Scale bars: overview=1mm, cutout=100μm.
widely used in real-world applications (Kaggle, 2021a,b,c, 2023) and outperform other approaches such as keypoint detection (Law and Deng, 2018; Duan et al., 2019; Zhou et al., 2019). They also tend to have a faster convergence than DETR (Zhang et al., 2022b) and work for non-photography images.
Apart from detectors, segmentation networks like U-Net (Ronneberger et al., 2015) are also common in the microscopy domain. However, a disadvantage is that a pixel-wise annotation is much more time-consuming. Furthermore, segmentation does not scale well to higher resolutions because the input is as big as the output. For detectors, the outputs are only coordinates and not pixels. Consider a 1000x1000 segmentation mask as an example. Storing this mask in GPU memory requires keeping \(1000^{2}\) floating-point numbers, in addition to the lower-dimensional images generated by the feature pyramid. Consequently, the memory required for this exceeds the memory needed for storing multiple vectors with bounding box coordinates.
#### 2.3.1 Preprocessing
While YOLOv7 scales well to higher resolutions, there is also a limit regarding the available GPU memory.
However, with an image of size \(50000\times 50000\), vessel elements can still be identified at 10% of the original resolution in most cases. Only to determine to which genus a vessel element belongs does one need the full resolution to see all details. For example, a _Hevea_ vessel element has a size of around 1400 pixels (when considering the height/width of the bounding box).
#### 2.3.2 Measuring the quality of object detection
The standard metric in object detection is mean Average Precision (mAP) (Everingham et al., 2010).
\[\text{AP}=\int_{0}^{1}p(r)dr\,,\]
where \(r\) is recall and \(p(r)\) is the corresponding precision. Recall is the number of correctly found objects in relation to the total number of objects present in an image. Precision is the number of correctly found objects in relation to all found objects in an image. We only have one class that we want to detect, namely vessel elements of any type. However, when there are multiple classes, the mean over all single classes' AP is taken and called mean Average Precision (mAP). Some additional remarks:
* We have to define the term "correctly found". For each bounding box, we compute the overlap between the prediction and the ground truth (intersection over union, or short IOU). When the IOU is greater than some threshold, a found box is defined as a true positive.
Figure 5: Mosaic data augmentation, where the blue boxes denote vessel elements
* For AP, we use an IOU threshold of \(0.5\).
* Besides mAP, we also consider precision and recall at the fixed IOU threshold \(0.5\). A high recall is more important than high precision because false positives can be removed in the classification step.
* In practice, the integral of AP is approximated by a set of eleven equally spaced recall levels [13].
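The following is a small sketch of the eleven-point interpolation mentioned above, assuming the precision/recall pairs of the detector have already been computed; the toy values in the usage line are illustrative.

```python
import numpy as np

def eleven_point_ap(recall, precision):
    """Approximate AP by averaging the interpolated precision at
    recall levels 0.0, 0.1, ..., 1.0 (Everingham et al., 2010)."""
    recall, precision = np.asarray(recall), np.asarray(precision)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 11):
        mask = recall >= r
        ap += precision[mask].max() if mask.any() else 0.0
    return ap / 11.0

# toy precision/recall curve
print(eleven_point_ap([0.1, 0.4, 0.7, 0.9], [1.0, 0.9, 0.7, 0.5]))
```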
#### 2.3.3 Data augmentation
Data augmentations are important for training YOLOv7. One example is the mosaic augmentation. Newer versions of YOLO use this augmentation because it allows creating a large number of new training images. Three images are randomly sampled from the dataset and combined with a fourth image. Fig. 5 illustrates how the mosaic data augmentation works. This augmentation can be used in conjunction with image or color shifts to increase the number of possibilities for creating four images even more.
Since too many augmentations can also affect the training negatively, we restricted ourselves to a few common ones. Apart from mosaic augmentation, we used color jittering in HSV space (hue, saturation, value), image shifts, scaling and left-right flips.
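To make the mosaic idea concrete, the following numpy sketch (not the YOLOv7 implementation) tiles four images around a random centre and shifts and clips their bounding boxes; a real implementation would additionally rescale and randomly crop each tile.

```python
import numpy as np

def mosaic(images, boxes, out_size=1280, rng=None):
    """Combine four images and their [x1, y1, x2, y2] pixel boxes into one mosaic."""
    rng = rng if rng is not None else np.random.default_rng()
    cx = int(rng.uniform(0.25, 0.75) * out_size)     # random point where the tiles meet
    cy = int(rng.uniform(0.25, 0.75) * out_size)
    canvas = np.zeros((out_size, out_size, 3), dtype=np.uint8)
    tiles = [(0, 0, cx, cy), (cx, 0, out_size - cx, cy),
             (0, cy, cx, out_size - cy), (cx, cy, out_size - cx, out_size - cy)]
    out_boxes = []
    for img, bxs, (ox, oy, tw, th) in zip(images, boxes, tiles):
        patch = img[:th, :tw]                        # simplified: crop from the top-left corner
        canvas[oy:oy + patch.shape[0], ox:ox + patch.shape[1]] = patch
        shifted = np.asarray(bxs, dtype=float).reshape(-1, 4).copy()
        shifted[:, [0, 2]] = (shifted[:, [0, 2]] + ox).clip(ox, ox + tw)
        shifted[:, [1, 3]] = (shifted[:, [1, 3]] + oy).clip(oy, oy + th)
        keep = (shifted[:, 2] > shifted[:, 0]) & (shifted[:, 3] > shifted[:, 1])
        out_boxes.append(shifted[keep])              # drop boxes that fell outside the tile
    return canvas, np.concatenate(out_boxes)
```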
### Classification
The task of classification is to take the full-resolution vessel element crop-outs from the detection step and classify them according to their wood genus. Figure 6 shows an example of a vessel element for five different focal planes. Here, the structural differences of the individual planes, such as the vessel-ray pits or the intervessel pits, are visible. The output for each vessel element candidate is the confidence for it to belong to a specific class. For the prediction, we choose the class with the highest likelihood.
Different **architectures** are evaluated to see how the architecture influences the result. Architectures tested are ConvNeXt [15], EfficientNet [15], ResNet [14] and DenseNet [16]. The dataset ImageNet is usually used for evaluating the accuracy of classification architectures. While there are many more architectures, it was shown by [17] that an increase of accuracy on datasets such as ImageNet does not necessarily translate to an improvement on real-world datasets. The chosen architectures are state-of-the-art approaches that were developed in recent years.
#### 2.4.1 Preprocessing
One important contribution of this paper is to provide quantitative information on how to adapt/preprocess microscopic images for hardwood identification in a way that deep neural networks can successfully be applied. We therefore identified a number of image parameters that will be analyzed with respect to their influence on performance and accuracy.
Above all, **resolution** is a key feature. However, similar to object detection, hardware limitations are an important issue. Higher resolution means that a smaller batch size must be used. However, a small batch size may inhibit the convergence of a neural network. While biologically the highest image size would be best, we need to test if this is also true for neural networks.
Another parameter is how to **handle the variance in the size** of the detected vessel elements, i.e. the variance in image size. The neural networks expect the same image size for each vessel element. For datasets with natural images (dogs,
Figure 6: Comparison of focal planes for one vessel element of the genus _Eucalyptus_. Depending on the focal plane, different areas (pits) of the vessel elements are in focus and therefore better visible. Scale bar=100μm.
cats, etc.), the solution is to resize the images, with the most common size being 224x224. The reason for this is that the classes can be distinguished even if the objects in question are distorted. However, in the case of wood identification, this could destroy important features for genus determination. An alternative approach is therefore to pad the image with zeros. The disadvantage of padding is that many pixels of the image do not provide any information. For a big vessel element, most of the image is the vessel element itself. But for a small vessel element, the image is mostly black.
As the vessel elements are three-dimensional objects, another preprocessing parameter is the **number of focal planes used** for classification. The microscope used to generate the images provides multiple focal planes that allow us to see specific areas of a region in more detail. To simulate the same behavior with a neural network, we try to use the focal planes as channels. Alternatively, we can input all the focal planes as individual images and combine the results.
Finally, we also have to consider the **importance of color** of the image. The samples were prepared with the Alexander Herzberg and nigrosin solution. These solutions affect the color of the vessel elements. It is possible that the neural networks are too biased towards the image color, while in reality it is not an important feature. Therefore, we also try grayscale images as input.
#### 2.4.2 Measuring the quality of vessel element classification
The usual metric for evaluating classification is accuracy. However, this metric is biased towards the classes with the most samples. Therefore, it would be a problem here:
_Hevea_ usually has only a few vessel elements per image, whereas _Populus_ and _Fagus_ can have hundreds of vessel elements per image. Maximizing accuracy would mean ignoring _Hevea_, since it does not significantly affect the objective function.
One solution to deal with class imbalance is to use macro F1 (averaged F1) because it highlights the performance of rare labels (Lipton et al., 2014; Opitz and Burst, 2019). It is defined as
\[\text{Macro F1}=\frac{1}{n}\sum_{i=1}^{n}\frac{2\cdot\text{precision}_{i}\cdot \text{recall}_{i}}{\text{precision}_{i}+\text{recall}_{i}}\,,\]
where \(n\) is the number of genera. Similar to accuracy, it summarizes how the classifier performs overall. In the equation, precision and recall have the same weight. F2 is often used to give more weight to recall.
In addition to that, we will also use the so-called confusion matrix. In a confusion matrix, the results are reported in more detail. Especially in a classification task, this matrix shows not only the true positives (on the diagonal), but also which classes are typically confused with which other classes.
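For illustration, a short sketch of the macro F1 computation directly from a confusion matrix is given below; the toy matrix mimics one abundant and one rare genus and is not taken from the results.

```python
import numpy as np

def macro_f1(conf):
    """Macro-averaged F1 from a confusion matrix where conf[i, j] counts
    samples of true class i predicted as class j."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    precision = tp / np.maximum(conf.sum(axis=0), 1e-12)   # per predicted class
    recall = tp / np.maximum(conf.sum(axis=1), 1e-12)      # per true class
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return f1.mean()

# toy example: the rare class drags the macro F1 down even though
# plain accuracy would be (90 + 5) / 110 = 86 %
print(macro_f1([[90, 10],
                [5, 5]]))
```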
#### 2.4.3 Data augmentation
As with object detection, data augmentation is also important for classification. One can generate more data and influence the variance of the data. One typical example in case of vessel element classification is to rotate the available instances, as they can appear in arbitrary orientation.
It is important, however, to apply so-called class-preserving augmentations: some augmentations might actually change the instance in a way that leads to it no longer being representative for the respective class. For simplicity, we apply the following data augmentations as baseline: vertical flips, brightness, contrast, saturation and hue.
None of these augmentations destroy important biological features such as vessel element size. While other augmentations such as Gaussian noise or horizontal flips would also be class-preserving, this does not mean that more regularization would always lead to better results.
An example is ImageNet: architectures trained on this dataset apply only horizontal flips and no vertical flips. Therefore, we need to test which additional augmentations really improve the results.
It is not enough for an augmentation to be only class-preserving. If the neural network is able to learn a particular relationship based on the data alone, then the augmentation is unnecessary and may actually degrade the performance of the classifier. In addition, there may be a distributional shift since the real data may not contain vessel elements with e.g. Gaussian noise.
## 3 Results
### The dataset, annotation speedup
We chose the dataset in such a way that it is sufficiently generalizable on the one hand, and on the other hand defines a tractable sub-problem.
We decided to use samples from the following hardwood genera: _Salix, Populus, Hevea, Fagus, Eucalyptus, Betula, Acacia, Liquidambar_ and _Schima_. These genera are cultivated or processed worldwide for pulp production and are commonly identified in fiber products, or, in the case of _Salix_, are a common look-alike. With these genera, we produced an image dataset of fiber references for the first time.
Figure 7 shows the number of images and vessel elements of the dataset. The images were acquired with the Axioscan 7 microscope, and predictions were generated on them. The predictions were corrected, a new model was trained, and new predictions were made using the updated model. After repeating this procedure a couple of times, we obtained a relatively large dataset.
This iterative procedure facilitates annotation as described above. For each new species, some initial images were annotated without any prior prediction to make sure the detection algorithm is adapted to the specific characteristics of each species. Further annotation was done by using predicted vessel elements and only checking them. The annotation of the predicted images takes significantly less time than full manual annotation.
One image including the five focal planes has a size of approx. 3.8 GiB. For this dataset, we have therefore around 1.2 TB of images. In order to train neural networks on this large amount of data, the images are preprocessed in different ways for both detection and classification, as was described before.
Figure 7: Number of annotated images and vessels, ordered by genus
### Detection results
The smallest YOLOv7 architecture W6 has 70.4 million parameters, while the biggest one, E6E, has 151.7 million. We did not find that bigger models improved the detection results. Instead, they made the training more unstable. They also make it harder to train and run the models on GPUs with small memory.
A more important parameter is the image size, as seen in fig. 8. Increasing the image size from 2560 to 6400 pixels leads to a 7% higher mAP. This is a consequence of the model finding more vessel elements (higher recall).
After 5184 pixels, increasing the image size resulted in only minor improvements but at a considerable computational cost. Both training and prediction speeds slow down with increasing image size.
Other hyperparameters such as the learning rate, the number of epochs and gradient accumulation only resulted in minor decreases or increases of mAP (\(<0.5\%\)).
After having determined the optimal hyperparameters, the trained model is tested on the test dataset. The results remain stable between the validation and test dataset. On the test dataset, we achieve an mAP of 71.85% with 77.63% precision and 72.98% recall.
Some genera produce more errors than others. This can be seen in table 1. Notably, _Hevea_ tends to have low precision and high recall. With _Liquidambar_ and _Salix_, it is the other way around.
Low precision can be improved by increasing the confidence threshold. Similarly, low recall can be improved by decreasing the confidence threshold.
We mainly find three types of errors, as seen in fig. 9. Some genera such as _Fagus_ or _Liquidambar_ have many vessel elements. The detector is not always able to find the vessel elements when they are too close to each other (a). Additionally, some images have low brightness (b). This leads to a low recall for certain images. Finally, there are also false positives because fibers are similar to vessel elements (c).
| Genus | Precision | Recall | F2 |
| --- | --- | --- | --- |
| Liquidambar | 0.8885 | 0.6145 | 0.6549 |
| Salix | 0.9109 | 0.6317 | 0.6730 |
| Fagus | 0.9357 | 0.6799 | 0.7192 |
| Populus | 0.9578 | 0.6855 | 0.7268 |
| Eucalyptus | 0.8125 | 0.7629 | 0.7723 |
| Hevea | 0.5060 | 0.9037 | 0.7809 |
| Schima | 0.8736 | 0.8537 | 0.8576 |
| Betula | 0.8961 | 0.8581 | 0.8654 |
| Acacia | 0.8753 | 0.8950 | 0.8910 |

Table 1: Detection results for individual genera, ordered by F2
Figure 8: Effect of image size on mAP on the validation dataset
Real-world wood identification does not rely on the classification of single vessel elements, as identifiability is not always given. Therefore, to mimic real-world performance, we only need to ensure that a sufficiently high number of vessel elements is correctly detected; the predicted vessel element distribution is then approximately correct. In other words, it is more important to know which genera are present in the image. It does not matter if some vessel elements are missed or even classified incorrectly; only the overall result has to be correct.
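The following sketch illustrates this aggregation step: per-vessel class probabilities are averaged into an image-level genus distribution. The function and variable names are hypothetical and only serve to make the idea concrete.

```python
import numpy as np

def image_level_distribution(vessel_probs: np.ndarray) -> np.ndarray:
    """Aggregate per-vessel class probabilities (shape: n_vessels x n_classes)
    into a single genus distribution for the whole image. Individual missed or
    misclassified vessel elements average out as long as enough of them are
    detected and classified correctly."""
    if vessel_probs.size == 0:
        raise ValueError("no vessel elements detected")
    return vessel_probs.mean(axis=0)

# Example with four detected vessel elements and three genera; one vessel
# element is misclassified, yet the dominant genus still stands out.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.8, 0.1, 0.1],
                  [0.1, 0.8, 0.1]])
print(image_level_distribution(probs))  # -> [0.55, 0.35, 0.10]
```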
With a larger dataset, we also expect fewer errors to occur. While we have almost 30,000 images for classification, we have 100 times fewer for detection.
### Classification results
For all tests, we used the Adam optimizer (Kingma and Ba, 2014) with a batch size of 32 and a learning rate of either \(10^{-3}\) or \(10^{-5}\).
First, we determined the required image **resolution**. As fig. 10 shows, increasing the size from \(224\times 224\) to \(800\times 800\) pixels improves the macro F1 score by about 12%.
If the target image size is \(800\times 800\) pixels, we resize all images exceeding that size so that both sides are \(\leq\) 800 pixels. For example, with \(1241\times 766\) pixels, we would obtain a new image of size \(1241\cdot r=800\) by \(766\cdot r\approx 494\) for \(r=\min\left(\frac{800}{1241},\frac{800}{766}\right)\). The remaining \(800-494\) pixels are padded from both sides with zeros. When both sides of a vessel element are already \(\leq 800\), no resizing is required, and we only pad the vessel element with zeros.
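A minimal sketch of this aspect-preserving resize-and-pad step is shown below (using OpenCV and NumPy); it illustrates the described procedure and is not the exact code of our pipeline.

```python
import cv2
import numpy as np

def resize_and_pad(img: np.ndarray, target: int = 800) -> np.ndarray:
    """Resize so that both sides are <= target while keeping the aspect ratio,
    then zero-pad symmetrically to target x target."""
    h, w = img.shape[:2]
    r = min(target / h, target / w)
    if r < 1.0:  # only shrink images that exceed the target size
        img = cv2.resize(img, (int(round(w * r)), int(round(h * r))),
                         interpolation=cv2.INTER_AREA)
    h, w = img.shape[:2]
    pad_top, pad_left = (target - h) // 2, (target - w) // 2
    padded = np.zeros((target, target) + img.shape[2:], dtype=img.dtype)
    padded[pad_top:pad_top + h, pad_left:pad_left + w] = img
    return padded

# Example: a 1241 x 766 image becomes 800 x ~494 and is padded to 800 x 800.
dummy = np.random.randint(0, 255, (766, 1241, 3), dtype=np.uint8)
print(resize_and_pad(dummy).shape)  # (800, 800, 3)
```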
Figure 10: Effect of image size on F1 on the validation dataset
Figure 9: Typical errors. A (_Fagus_): A large number of vessel elements leads to true positive (green), but also to false negative (blue). B (_Liquidambar_): Low brightness or contrast causes false negative (blue), too. C (_Eucalyptus_): Cohesive fibers have some similarity with vessel elements and lead to false positive (red). Scale bars=200μm
From the plot, we can see that sizes larger than \(800\times 800\) did not lead to better scores. A possible explanation is that the average vessel element size of many genera is below 1000 pixels. This means that many vessel elements with a lower resolution would be padded with black pixels that do not provide any information.
We tested different strategies for handling this **difference in image size**. One possibility is to replace the prior padding strategy by resizing. Following the previous example, the image would be resized from \(800\times 494\) to \(800\times 800\) instead of being padded. However, this leads to a distortion of the features. We find that this strategy actually decreases the macro F1 score by 1.2%.
After image size, we tested how **coloring** with the Alexander Herzberg and nigrosin solutions affects the classification. We found that converting the images to grayscale improved the score by 1%. From a biological point of view, this result is reasonable because staining is not an important feature. From a computational point of view, it means that we can save disk space by storing only one channel.
Another advantage is that for the **focal plane** tests, we can work with \(5\) instead of \(5\cdot 3\) channels, where \(5\) is the number of focal planes. In table 2, we tested multiple configurations. As can be seen, there is only a marginal difference between using \(1\) or \(3\) channels. Using the first, second and third plane is almost as good as using only the third one.
Therefore, the better approach is to consider only a single channel and make multiple predictions. The table shows two strategies for combining the probabilities: either averaging the five probability vectors, or taking the element-wise maximum of the values. Both approaches perform similarly.
It can be argued that the focal planes represent a kind of test-time augmentation (TTA). Some regions are sharpened, but the overall image distribution remains the same. TTA is often used in microscopy and other fields to improve results (Moshkov et al., 2020).
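A minimal NumPy sketch of the two combination strategies is given below; the array contents are purely illustrative.

```python
import numpy as np

def combine_focal_planes(plane_probs: np.ndarray, mode: str = "average") -> np.ndarray:
    """Combine per-plane class probabilities (shape: n_planes x n_classes)
    into a single prediction, analogous to test-time augmentation."""
    if mode == "average":
        return plane_probs.mean(axis=0)
    if mode == "maximum":
        return plane_probs.max(axis=0)
    raise ValueError(f"unknown mode: {mode}")

# Five focal planes, three genera (random illustrative probabilities):
probs = np.random.dirichlet(np.ones(3), size=5)
pred_avg = combine_focal_planes(probs, "average").argmax()
pred_max = combine_focal_planes(probs, "maximum").argmax()
```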
After the general preprocessing tests, we tried various architectures to see how they affect the macro F1 score.
As can be seen from table 3, ConvNeXt-tiny leads to the best results. This architecture also performed best on other real-world datasets, as was shown by Fang et al. (2023). We found that ConvNeXt-tiny has problems converging with a learning rate of \(10^{-3}\); therefore, we used \(10^{-5}\) for this architecture.
Finally, we tested adding more class-preserving data augmentations like horizontal flips or Gaussian noise. Applying two types of flips reduces macro F1 by about 1%. This means that if the model is already capable of learning the rotation based on the data alone, then adding more data augmentation may degrade performance. Gaussian noise also did not show any increase of macro F1.
After determining the best hyperparameters, we ran the model on the test dataset. We achieve a macro F1 score of 64.61% on the test dataset, which is slightly lower than on the validation dataset. The reason is that the vessel element distribution is not exactly the same as that of the validation dataset; some classes are underrepresented.
Finally, we look at the confusion matrix to see the performance across different classes. Figure 11 shows that the following genera are often confused: _Liquidambar-Schima-Fagus_ and _Populus-Salix_. These genera are characterized by great similarities, and not every vessel element shows all structural features. Thus, even for wood anatomists, it
| Focal plane | Macro F1 |
| --- | --- |
| 1st, 2nd, 3rd | 0.6669 |
| 1st, 3rd, 4th | 0.6615 |
| 3rd | 0.6602 |
| Average | **0.7017** |
| Maximum | 0.7014 |

Table 2: Focal plane tests
| Architecture | Macro F1 |
| --- | --- |
| ConvNeXt-tiny | **0.7017** |
| DenseNet-121 | 0.6441 |
| ResNet-34 | 0.5958 |
| EfficientNet-B0 | 0.6472 |
| EfficientNet-B1 | 0.6698 |
| EfficientNet-B2 | 0.6632 |

Table 3: Architecture experiments
is difficult to make a clear classification for each individual cell. Therefore, our model behaves similarly to a human expert.
When only considering the diagonal, we see that most genera are above 80%. The only problematic genera are _Liquidambar_, _Salix_ and _Schima_. But since even human experts find it difficult to classify every vessel element of these classes, this confirms that our model is working as intended. If these genera cannot be distinguished reliably even by a human, they can also be combined into one class. This reduces the number of classes and increases the F1 score. In terms of accuracy, it is therefore possible to obtain a classifier with an accuracy of over 90%.
## 4 Discussion
### Pipeline
In general, the pipeline is well suited to such a problem. We have been able to generate a large data basis for further analysis. We can use a plethora of different detection and classification methods - e.g. various neural networks - and directly compare their results.
It might be argued that by mimicking the visual manual process, we limit the capacity of the solution to detect species using cell structures other than vessel elements. However, the database as we built it could also be combined with mixed overview images in the future, as the vessel element detector works on different vessel element types. Furthermore, we now have a first version of a mixed-species image detector/classifier that could again be used at a later stage for assisted annotation of mixed samples.
### Detection
For detection, we compared how image size and larger architectures affect detection results. It is better to use higher resolutions but smaller architectures.
Although large models like YOLOv7-E6E are available, they do not improve the mean average precision (mAP) of identifying vessel elements as this task does not require complicated features. The depth of a neural network, which is associated with the receptive field, can impact its ability to "see" more of an image (Araujo et al., 2019; Luo et al., 2017). A higher receptive field can be achieved by stacking more convolutions, which leads to a higher effective kernel size and allows for the processing of more pixels at once. This can be useful when detecting small details.
However, in this case, using a large model was not necessary as the vessel element identification task does not require big complicated features. Therefore, a simpler model is sufficient for accurately identifying vessel elements, and using a larger model would not provide any significant benefits.
Figure 11: Confusion matrix
The data itself has a big effect on what the neural network is able to "see". If the vessel elements are difficult to see with the human eye (e.g. due to low brightness), the detection performance of the detector also decreases significantly. Therefore, it is important to make sure that the manual preparation process as well as the images produced by the microscope are of good quality. In contrast to recall ("how many objects are found?"), precision ("is the found object really an object?") is usually good, regardless of how dark or light the vessel elements are.
Furthermore, even if an area is misidentified as a vessel element, it is possible that the classifier can still determine the genus. For now, we have only modeled our pipeline based on current biological domain knowledge. Other unknown features could also indicate the genus of a vessel element.
Therefore, it is more important to have a high recall. The recall rates of certain genera are quite low at 60%. However, one must always keep in mind that hundreds of vessel elements have already been found correctly for these genera. A low recall is only problematic when an image contains few vessel elements and the detector misses them.
While annotating the data, we found that having multiple vessel elements on top of each other was a problem. The bounding boxes would need to be rotatable as well; otherwise, a single axis-aligned bounding box covers both vessel elements. The other option is a segmentation mask or vertex-based detection. However, this problem only affects genera like _Liquidambar_, for which we already have a large number of vessel elements.
In general, the object detector already provides sufficiently good results. We expect recall and precision to improve with more labeled data.
### Classification
For classification, we found that data leakage was initially a major problem. Instead of focusing on vessel elements, the neural networks paid attention to brightness. By carefully splitting the data based on maceration ID and maintaining the same data distribution between the training and validation datasets, we were able to solve this problem. In addition, as the size of the dataset increased, the generalization performance of the classifier also improved. Data augmentations such as brightness or contrast changes also prevent the network from focusing only on the background.
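A minimal sketch of such a leakage-free, group-based split using scikit-learn's GroupShuffleSplit is shown below; the variable names and sizes are illustrative.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Illustrative metadata: one row per vessel element crop.
n_samples = 1000
labels = np.random.randint(0, 9, n_samples)           # genus index
maceration_ids = np.random.randint(0, 50, n_samples)  # one group per maceration

# All crops from the same maceration end up on the same side of the split,
# so the classifier cannot exploit shared background or brightness cues.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, val_idx = next(splitter.split(np.zeros(n_samples), labels,
                                         groups=maceration_ids))
assert set(maceration_ids[train_idx]).isdisjoint(maceration_ids[val_idx])
```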
Since, from a biological perspective, the information leading to a particular classification is clear, one could also consider modeling the methods to focus on specific areas of the vessel elements (such as the vessel-ray pits). However, as already previously mentioned, it is possible to find additional features that could indicate the genus.
The worst-performing genera are currently also those that are confused by human experts. Therefore, the classifier produces the expected results. However, based on our preliminary tests, we did not surpass human performance on vessel element classification.
Our extensive hyperparameter tuning was guided by biological domain expertise. We found that focal planes and high image resolution were important for the classification. Converting the images from RGB to grayscale also improved macro F1.
Apart from the preprocessing, the chosen architecture also makes a difference. ConvNeXt produced the best results, while smaller architectures such as EfficientNet-B0 performed slightly worse. This is unlike detection, where the smaller models tended to work better.
Since we use a high resolution of 800 pixels, the architecture also has to be deeper to be able to see all the important features. We therefore need a larger receptive field so that the network can learn the features that are also important for humans. Experts also require high-resolution images to have a sufficient level of detail to find the relevant features for distinguishing genera. For object detection, shallower networks worked better because a high level of detail is not required to determine if an area contains a vessel element.
## 5 Conclusions
We have shown that wood detection in microscopy images can be automated with neural networks. We have performed an extensive evaluation of hyperparameters to ensure the representativity and robustness of our results. Our method achieved results similar to those of human experts. Extensive tests have shown that the genera that are usually confused by humans are also problematic for neural networks. The object detection sometimes misses vessel elements, but the performance is already good enough to produce a large candidate list for the classifier.
In future work, we want to see whether the performance can be extended to more genera.
Additionally, we do not yet know which regions the network really focuses on. A more in-depth analysis of the neural networks' areas of focus on these microscopic images would be of interest.
It might be possible to discover new features important for the classification by looking at class activation maps or saliency maps [Sundararajan et al., 2017].
Finally, we will produce and evaluate mixed samples containing multiple genera. We also intend to perform a blind test, AI versus anatomists.
## 6 Competing interests
No competing interest is declared.
## 7 Author contributions statement
L.N., H.S. conceived the experiment(s), L.N. conducted the experiment(s), A.O., J.S.-R. and S.H. conceived data generation of references. All authors analyzed the results. All authors wrote and reviewed the manuscript.
## 8 Acknowledgments
The authors would like to thank all colleagues who participated in the preparation of the numerous samples, helped with annotation and made the project happen: P.Gospodnetic, L.Gradert, J.Heddier, D.Helm, S.Kaschuro, G.Koch, C.Piehl, M.Rauhut, L.Wenrich, A. Wettich, S.Wrage (all Fraunhofer Institute for Industrial Mathematics ITWM or Thunen Institute of Wood Research). This work is supported by funds from the Fachagentur Nachwachsende Rohstoffe e.V. (FNR - FKZ 2220HV063A and 2220HV063B)
|
2303.13027
|
Weighted Pressure and Mode Matching for Sound Field Reproduction:
Theoretical and Experimental Comparisons
|
Two sound field reproduction methods, weighted pressure matching and weighted
mode matching, are theoretically and experimentally compared. The weighted
pressure and mode matching are a generalization of conventional pressure and
mode matching, respectively. Both methods are derived by introducing a
weighting matrix in the pressure and mode matching. The weighting matrix in the
weighted pressure matching is defined on the basis of the kernel interpolation
of the sound field from pressure at a discrete set of control points. In the
weighted mode matching, the weighting matrix is defined by a regional
integration of spherical wavefunctions. It is theoretically shown that the
weighted pressure matching is a special case of the weighted mode matching by
infinite-dimensional harmonic analysis for estimating expansion coefficients
from pressure observations. The difference between the two methods are
discussed through experiments.
|
Shoichi Koyama, Keisuke Kimura, Natsuki Ueno
|
2023-03-23T04:26:06Z
|
http://arxiv.org/abs/2303.13027v1
|
# Weighted Pressure and Mode Matching for Sound Field Reproduction: Theoretical and Experimental Comparisons
###### Abstract
Two sound field reproduction methods, weighted pressure matching and weighted mode matching, are theoretically and experimentally compared. The weighted pressure and mode matching are a generalization of conventional pressure and mode matching, respectively. Both methods are derived by introducing a weighting matrix in the pressure and mode matching. The weighting matrix in the weighted pressure matching is defined on the basis of the kernel interpolation of the sound field from pressure at a discrete set of control points. In the weighted mode matching, the weighting matrix is defined by a regional integration of spherical wavefunctions. It is theoretically shown that the weighted pressure matching is a special case of the weighted mode matching by infinite-dimensional harmonic analysis for estimating expansion coefficients from pressure observations. The differences between the two methods are discussed through experiments.
## 0 Introduction
The aim of sound field reproduction is to synthesize spatial sound using multiple loudspeakers (or secondary sources), which has various applications such as virtual/augmented reality audio, generation of multiple sound zones for personal audio, and noise cancellation in a spatial region. In some applications, the desired sound field to be reproduced is estimated using multiple microphones, which is called sound field capturing or estimation.
There are two major categories of sound field reproduction methods. One category includes analytical methods based on the boundary integral representations derived from the Helmholtz equation, such as _wave field synthesis_ and _higher-order ambisonics_[1; 2; 3; 4; 5; 6; 7]. The other category includes numerical methods based on the minimization of a certain cost function defined for synthesized and desired sound fields inside a target region, such as _pressure matching_ and _mode matching_[8; 9; 10; 11; 12; 3]. Many analytical methods require the array geometry of loudspeakers to have a simple shape, such as a sphere, plane, circle, or line, and driving signals are obtained from a discrete approximation of an integral equation. In numerical methods, the loudspeaker placement can be arbitrary, and driving signals are generally derived as a closed-form least-squares solution. Pressure matching is based on synthesizing the desired pressure at a discrete set of control points placed over the target region. In mode matching, driving signals are derived so that the expansion coefficients of the spherical wavefunctions of the synthesized and desired sound fields are equivalent. Since the region in which the loudspeakers can be placed is limited in practical situations, a flexible loudspeaker array geometry in numerical methods will be preferable.
In this study, we theoretically and experimentally compare two numerical methods for sound field reproduction: _weighted pressure matching_[14] and _weighted mode matching_[12]. These two methods are derived by introducing a weighting matrix in the pressure and mode matching, respectively; therefore, they can be regarded as a generalization of the pressure and mode matching. The weighting matrix for the weighted pressure matching is derived on the basis of the kernel interpolation of the sound field [15; 16] from pressure at control points. In the weighted mode matching, the weighting matrix is defined as a regional integration of spherical wavefunctions. The relationship between pressure and mode matching has not been sufficiently elucidated from a theoretical perspective. We show that the weighted pressure matching is a special case of the weighted mode matching by combining with an infinite-dimensional harmonic analysis for sound field capturing [16; 17], starting with a common optimization problem. Experimental evaluation comparing pressure/mode matching and weighted pressure/mode matching is carried out. The codes for reproducing the results are publicly available at [https://sh0lk.github.io/MeshRIR/](https://sh0lk.github.io/MeshRIR/).
The rest of this paper is organized as follows. In Section 1, notations and basic theories on the sound field representation used throughout the paper are presented. The infinite-dimensional harmonic analysis for sound field capturing is also introduced. In Section 2, the sound field reproduction problem is described. The weighted pressure and mode matching is formulated and theoretically compared in Section 3. Experimental comparisons are shown in Section 4. In Section 5, differences between the two methods are discussed. Finally, Section 6 concludes this paper.
## 1 Notations and preliminaries
First, we provide several basic notations. Then, a sound field representation by spherical wavefunction expansion is introduced. We also briefly introduce a sound field capturing method based on infinite-dimensional harmonic analysis, which plays an important role in sound field reproduction methods.
### Notations
Italic letters denote scalars, lowercase boldface italic letters denote vectors, and uppercase boldface italic letters denote matrices. The sets of real and complex numbers are denoted by \(\mathbb{R}\) and \(\mathbb{C}\), respectively. Subscripts of scalars, vectors, and matrices indicate their indexes. To illustrate, the \((i,j)\)th entry of the matrix \(\mathbf{X}\) is represented as \(x_{i,j}\). The imaginary unit and Napier's constant are denoted by j and e, respectively. The complex conjugate, transpose, conjugate transpose, and inverse are denoted by \((\cdot)^{*}\), \((\cdot)^{\mathsf{T}}\), \((\cdot)^{\mathsf{H}}\), and \((\cdot)^{-1}\), respectively. The absolute value of a scalar \(x\) and the Euclidean norm of a vector \(\mathbf{x}\) are denoted by \(|x|\) and \(\|\mathbf{x}\|\), respectively. The absolute value for each element of matrix \(\mathbf{X}\) is also denoted by \(|\mathbf{X}|\).
The angular frequency, sound velocity, and wavenumber are denoted by \(\mathbf{\omega}\), \(c\), and \(k=\mathbf{\omega}/c\), respectively. The harmonic time dependence \(\mathrm{e}^{-\mathrm{j}\mathbf{\omega}t}\) with the time \(t\) is assumed according to conventions.
### Expansion representation of sound field
A solution of the homogeneous Helmholtz equation \(\mathbf{u}(\mathbf{r},\mathbf{\omega})\) of angular frequency \(\mathbf{\omega}\) at position \(\mathbf{r}\in\mathbb{R}^{3}\) can be expanded around \(\mathbf{r}_{0}\) by using spherical wavefunctions [18; 19] as
\[u(\mathbf{r},\mathbf{\omega}) =\sum_{\nu=0}^{\infty}\sum_{\mu=-\nu}^{\nu}\hat{u}_{\nu,\mu}(\mathbf{r}_{\mathrm{o}},\mathbf{\omega})\varphi_{\nu,\mu}(\mathbf{r}-\mathbf{r}_{\mathrm{o}},\mathbf{\omega})\] \[=\mathbf{\varphi}(\mathbf{r}-\mathbf{r}_{\mathrm{o}},\mathbf{\omega})^{\mathsf{T}}\hat{\mathbf{u}}(\mathbf{r}_{\mathrm{o}},\mathbf{\omega}), \tag{1}\]
where \(\hat{\mathbf{u}}(\mathbf{r}_{\mathrm{o}},\mathbf{\omega})\in\mathbb{C}^{\infty}\) and \(\mathbf{\varphi}(\mathbf{r}-\mathbf{r}_{\mathrm{o}},\mathbf{\omega})\in\mathbb{C}^{\infty}\) are the infinite-dimensional vectors of expansion coefficients and spherical wavefunctions, respectively. The spherical wavefunction of order \(\nu\) and degree \(\mu\), \(\varphi_{\nu,\mu}(\mathbf{r},\mathbf{\omega})\), is defined as
\[\varphi_{\nu,\mu}(\mathbf{r},\mathbf{\omega})=\sqrt{4\pi}j_{\nu}(k\|\mathbf{r}\|)Y_{\nu, \mu}\left(\frac{\mathbf{r}}{\|\mathbf{r}\|}\right), \tag{2}\]
where \(j_{\nu}(\cdot)\) is the \(\nu\)th-order spherical Bessel function and \(Y_{\nu,\mu}(\cdot)\) is the spherical harmonic function of order \(\nu\) and degree \(\mu\)[19]. The function \(\varphi_{\nu,\mu}\) is scaled by the factor \(\sqrt{4\pi}\) so that \(\hat{u}_{0,0}(\mathbf{r},\mathbf{\omega})\) corresponds to the pressure \(u(\mathbf{r},\mathbf{\omega})\). Note that this scaling factor is not included in the standard definition of the spherical wavefunction. Hereafter, \(\mathbf{\omega}\) is omitted for notational simplicity.
The translation operator \(\mathbf{T}(\mathbf{r}_{\mathrm{o}}-\mathbf{r}_{\mathrm{o}}^{\prime})\in\mathbb{C}^{\infty \times\infty}\) relates the expansion coefficients about two different expansion centers \(\mathbf{r}_{\mathrm{o}}\) and \(\mathbf{r}_{\mathrm{o}}^{\prime}\), i.e., \(\hat{\mathbf{u}}(\mathbf{r}_{\mathrm{o}})\) and \(\hat{\mathbf{u}}(\mathbf{r}_{\mathrm{o}}^{\prime})\), respectively, as [19]
\[\hat{\mathbf{u}}(\mathbf{r}_{\mathrm{o}}^{\prime})=\mathbf{T}(\mathbf{r}_{\mathrm{o}}^{\prime}- \mathbf{r}_{\mathrm{o}})\hat{\mathbf{u}}(\mathbf{r}_{\mathrm{o}}), \tag{3}\]
where the element corresponding to the order \(\nu\) and the degree \(\mu\) of \(\mathbf{T}(\mathbf{r})\hat{\mathbf{u}}\), denoted as \([\mathbf{T}(\mathbf{r})\hat{\mathbf{u}}]_{\nu,\mu}\), is defined as
\[[\mathbf{T}(\mathbf{r})\hat{\mathbf{u}}]_{\nu,\mu}=\sum_{\nu^{\prime}=0}^{ \infty}\sum_{\mu^{\prime}=-\nu^{\prime}}^{\nu^{\prime}}\left[4\pi(-1)^{\mu^{ \prime}}\mathrm{j}^{\nu-\nu^{\prime}}\right.\] \[\left.\cdot\sum_{l=0}^{\nu+\nu^{\prime}}\mathrm{j}^{l}j_{l}(k\| \mathbf{r}\|)Y_{l,\mu-\mu^{\prime}}\left(\frac{\mathbf{r}}{\|\mathbf{r}\|}\right)\mathcal{G }(\nu^{\prime},\mu^{\prime};\nu,-\mu,l)\right]\hat{u}_{\nu^{\prime},\mu^{ \prime}}. \tag{4}\]
Here, \(\mathcal{G}(\cdot)\) is the Gaunt coefficient. The translation operation is derived from the addition theorem of the spherical wavefunction [19; 20]. The translation operator \(\mathbf{T}(\mathbf{r}-\mathbf{r}^{\prime})\) has the following important properties:
\[\mathbf{T}(-\mathbf{r})=\mathbf{T}(\mathbf{r})^{-1}=\mathbf{T}(\mathbf{r})^{\mathsf{H}} \tag{5}\] \[\mathbf{T}(\mathbf{r}+\mathbf{r}^{\prime})=\mathbf{T}(\mathbf{r})\mathbf{T}(\mathbf{r}^{\prime})\] (6) \[\mathbf{\varphi}(\mathbf{r}-\mathbf{r}^{\prime})^{\mathsf{T}}\mathbf{T}(\mathbf{r}^{ \prime}-\mathbf{r}^{\prime\prime})=\mathbf{\varphi}(\mathbf{r}-\mathbf{r}^{\prime\prime}). \tag{7}\]
### Sound field capturing based on infinite-dimensional harmonic analysis
Here, we briefly introduce a method of estimating expansion coefficients of spherical wavefunctions of a sound field from microphone measurements [17], i.e., sound field capturing/estimation method. Let \(D\subseteq\mathbb{R}^{3}\) be a source-free target capturing region, and \(M\) microphones are arbitrarily placed in \(D\). The sound field capturing problem is to estimate the expansion coefficients at the position \(\mathbf{r}\in D\), \(\hat{\mathbf{u}}(\mathbf{r})\), using the observed signal of the microphones \(s_{m}\) at the positions \(\mathbf{r}_{\mathrm{m},m}\in D\) (\(m\in\{1,\ldots,M\}\)).
The microphone directivity patterns are assumed to be given as their expansion coefficients \(c_{m,\nu,\mu}\) of spherical harmonic functions. By denoting the infinite-dimensional vector of the expansion coefficients \(c_{m,\nu,\mu}\) by \(\mathbf{c}_{m}\in\mathbb{C}^{\infty}\), we describe the observed signal \(s_{m}\) as the inner product of \(\mathbf{c}_{m}\) and \(\hat{\mathbf{u}}(\mathbf{r}_{\mathrm{m},m})\) as
\[s_{m} =\sum_{\nu=0}^{\infty}\sum_{\mu=-\nu}^{\nu}c_{m,\nu,\mu}^{*}\hat{u} _{\nu,\mu}(\mathbf{r}_{\mathrm{m},m})\] \[=\mathbf{c}_{m}^{\mathsf{H}}\hat{\mathbf{u}}(\mathbf{r}_{\mathrm{m},m})\] \[=\mathbf{c}_{m}^{\mathsf{H}}\mathbf{T}(\mathbf{r}_{\mathrm{m},m}-\mathbf{r})\hat{ \mathbf{u}}(\mathbf{r}), \tag{8}\]
where the translation operator is used in the last line to relate \(s_{m}\) with \(\hat{\mathbf{u}}(\mathbf{r})\). See Appendix for the derivation of the first line. Equation (8) can be rewritten as
\[\mathbf{s}=\mathbf{\Xi}(\mathbf{r})^{\mathrm{H}}\hat{\mathbf{u}}(\mathbf{r}), \tag{9}\]
where \(\mathbf{s}=[s_{1},\ldots,s_{M}]^{\mathrm{T}}\in\mathbb{C}^{M}\) and \(\mathbf{\Xi}(\mathbf{r})\in\mathbb{C}^{\infty\times M}\) is described as
\[\mathbf{\Xi}(\mathbf{r})=\left[(\mathbf{c}_{1}^{\mathrm{H}}\mathbf{T}(\mathbf{r}_{\mathrm{m},1}-\mathbf{r}))^{\mathrm{H}},\,\ldots,\,(\mathbf{c}_{M}^{\mathrm{H}}\mathbf{T}(\mathbf{r}_{\mathrm{m},M}-\mathbf{r}))^{\mathrm{H}}\right]\] \[=\left[\mathbf{T}(\mathbf{r}-\mathbf{r}_{\mathrm{m},1})\mathbf{c}_{1},\,\ldots,\,\mathbf{T}(\mathbf{r}-\mathbf{r}_{\mathrm{m},M})\mathbf{c}_{M}\right]. \tag{10}\]
Here, the property of the translation operator (5) is used. The expansion coefficient \(\hat{\mathbf{u}}(\mathbf{r})\) is estimated as
\[\hat{\mathbf{u}}(\mathbf{r})=\mathbf{\Xi}(\mathbf{r})\left(\mathbf{\Psi}+\xi\mathbf{I}\right)^{-1}\bm {s}, \tag{11}\]
where \(\xi\) is a constant parameter and \(\mathbf{\Psi}:=\mathbf{\Xi}(\mathbf{r})^{\mathrm{H}}\mathbf{\Xi}(\mathbf{r})\in\mathbb{C}^{M \times M}\). From the property in Eq. (6), the \((m,m^{\prime})\)th element of \(\mathbf{\Psi}\) becomes
\[(\mathbf{\Psi})_{m,m^{\prime}} =\mathbf{c}_{m}^{\mathrm{H}}\mathbf{T}(\mathbf{r}_{\mathrm{m},m}-\mathbf{r})\mathbf{ T}(\mathbf{r}-\mathbf{r}_{\mathrm{m},m^{\prime}})\mathbf{c}_{m^{\prime}}\] \[=\mathbf{c}_{m}^{\mathrm{H}}\mathbf{T}(\mathbf{r}_{\mathrm{m},m}-\mathbf{r}_{ \mathrm{m},m^{\prime}})\mathbf{c}_{m^{\prime}}. \tag{12}\]
Therefore, \(\mathbf{\Psi}\) does not depend on the position \(\mathbf{r}\) and depends only on the microphones' positions and directivities. Since the microphone directivity \(c_{m,\nu,\mu}\) is typically modeled by low-order coefficients, Eq. (12) can be simply computed in practice.
Next, we consider estimating the pressure distribution \(u(\mathbf{r})=\hat{u}_{0,0}(\mathbf{r})\) using pressure microphones. The expansion coefficient of the directivity, \(c_{m,\nu,\mu}\), is written as
\[c_{m,\nu,\mu}=\begin{cases}1,&\nu=0,\mu=0\\ 0,&\text{otherwise}\end{cases}. \tag{13}\]
Then, estimation Eq. (11) can be simplified as
\[u(\mathbf{r})=\mathbf{\kappa}(\mathbf{r})^{\mathrm{T}}\left(\mathbf{K}+\xi\mathbf{I}\right)^{-1} \mathbf{s}, \tag{14}\]
where
\[\mathbf{K}=\begin{bmatrix}j_{0}(k\|\mathbf{r}_{1}-\mathbf{r}_{1}\|)&\cdots&j_{0}(k\|\mathbf{r }_{1}-\mathbf{r}_{M}\|)\\ \vdots&\ddots&\vdots\\ j_{0}(k\|\mathbf{r}_{M}-\mathbf{r}_{1}\|)&\cdots&j_{0}(k\|\mathbf{r}_{M}-\mathbf{r}_{M}\|)\\ \end{bmatrix} \tag{15}\]
\[\mathbf{\kappa}(\mathbf{r})=\begin{bmatrix}j_{0}(k\|\mathbf{r}-\mathbf{r}_{1}\|)&\cdots&j_{ 0}(k\|\mathbf{r}-\mathbf{r}_{M}\|)\\ \end{bmatrix}^{\mathrm{T}}. \tag{16}\]
This equation can be regarded as kernel ridge regression with the kernel function of the 0th-order spherical Bessel function, which enables us to interpolate pressure distribution in a three-dimensional (3D) space with the constraint that \(u(\mathbf{r})\) satisfies the Helmholtz equation [15]. In a two-dimensional (2D) sound field, the kernel function is replaced with the 0th-order Bessel function.
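As a minimal illustration of Eqs. (14)-(16), the following NumPy/SciPy sketch interpolates the pressure at an arbitrary point from pressures observed at a set of control points; the geometry, frequency, and regularization constant are arbitrary example values.

```python
import numpy as np
from scipy.special import spherical_jn

def interpolate_pressure(r, r_mics, s, k, xi=1e-3):
    """Kernel ridge regression with the 0th-order spherical Bessel kernel,
    cf. Eqs. (14)-(16): u(r) = kappa(r)^T (K + xi I)^{-1} s."""
    dists = np.linalg.norm(r_mics[:, None, :] - r_mics[None, :, :], axis=-1)
    K = spherical_jn(0, k * dists)                                    # Eq. (15)
    kappa = spherical_jn(0, k * np.linalg.norm(r - r_mics, axis=-1))  # Eq. (16)
    return kappa @ np.linalg.solve(K + xi * np.eye(len(s)), s)        # Eq. (14)

# Example: 16 pressure microphones in a 1 m cube observing a plane wave at 500 Hz.
rng = np.random.default_rng(0)
k = 2 * np.pi * 500 / 343.0
r_mics = rng.uniform(-0.5, 0.5, size=(16, 3))
direction = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
s = np.exp(1j * k * r_mics @ direction)           # observed pressures
r_eval = np.array([0.1, -0.2, 0.05])
u_hat = interpolate_pressure(r_eval, r_mics, s, k)
u_true = np.exp(1j * k * r_eval @ direction)      # ground truth for comparison
```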
In the sound field capturing, it is frequently impractical to capture the sound field in a large region using a single large microphone array, such as a spherical array. The estimation method described above enables us to use arbitrarily placed microphones, for example, distributed microphones [21]. Such a sound field capturing system will be useful in practical situations because of its flexibility and scalability.
## 2 Sound field reproduction problem
Suppose that \(L\) secondary sources (loudspeakers) are placed around a target reproduction region \(\Omega\subset\mathbb{R}^{3}\) as shown in Fig. 1. The desired sound field at \(\mathbf{r}\in\Omega\) is denoted by \(u_{\mathrm{des}}(\mathbf{r})\) in the frequency domain. The sound field \(u_{\mathrm{syn}}(\mathbf{r})\) synthesized using the secondary sources is represented as
\[u_{\mathrm{syn}}(\mathbf{r})=\sum_{l=1}^{L}d_{l}g_{l}(\mathbf{r}), \tag{17}\]
where \(d_{l}\) is the driving signal of the \(l\)th secondary source, and \(g_{l}(\mathbf{r})\) is the transfer function from the \(l\)th secondary source to the position \(\mathbf{r}\) (\(l\in\{1,\ldots,L\}\)). The transfer functions \(g_{l}(\mathbf{r})\) are assumed to be known by measuring or modeling them in advance. The goal of sound field reproduction is to obtain \(d_{l}\) of the \(L\) secondary sources so that \(u_{\mathrm{syn}}(\mathbf{r})\) coincides with \(u_{\mathrm{des}}(\mathbf{r})\) inside \(\Omega\).
We define the cost function to determine the driving signal \(d_{l}\) for \(l\in\{1,\ldots,L\}\) as
\[J =\int_{\Omega}\left|\sum_{l=1}^{L}d_{l}g_{l}(\mathbf{r})-u_{\mathrm{ des}}(\mathbf{r})\right|^{2}\mathrm{d}\mathbf{r}\] \[=\int_{\Omega}\left|\mathbf{g}(\mathbf{r})^{\mathrm{T}}\mathbf{d}-u_{\mathrm{ des}}(\mathbf{r})\right|^{2}\mathrm{d}\mathbf{r}, \tag{18}\]
where \(\mathbf{g}(\mathbf{r})=[g_{1}(\mathbf{r}),\ldots,g_{L}(\mathbf{r})]^{\mathrm{T}}\in\mathbb{C}^{L}\) and \(\mathbf{d}=[d_{1},\ldots,d_{L}]^{\mathrm{T}}\in\mathbb{C}^{L}\) are the vectors of the transfer functions and driving signals, respectively. The optimal driving signal \(\mathbf{d}\) can be obtained by solving the minimization problem of \(J\). The cost function \(J\) is formulated as the mean square error of the reproduction over the region \(\Omega\). To incorporate the expected regional accuracy, a weighting function \(\rho(\mathbf{r})\) (\(\mathbf{r}\in\Omega\)) is sometimes used as [12]
\[J_{\rho}=\int_{\Omega}\rho(\mathbf{r})\left|\mathbf{g}(\mathbf{r})^{\mathrm{T}}\mathbf{d}-u_{ \mathrm{des}}(\mathbf{r})\right|^{2}\mathrm{d}\mathbf{r}. \tag{19}\]
The function \(\rho(\mathbf{r})\) is designed on the basis of the regional importance of the reproduction accuracy. However, in this study, we focus on the case of a uniform distribution, i.e., \(\rho(\mathbf{r})=1\), for simplicity.
Figure 1: The desired sound field is synthesized inside the target region \(\Omega\) using multiple secondary sources.
## 3 Weighted pressure and mode matching
Several methods of approximately solving the minimization problem of Eq. (18) have been proposed. We introduce two sound field reproduction methods, weighted pressure matching and weighted mode matching.
### Weighted pressure matching
A simple strategy to solve the minimization problem of Eq. (18) is to discretize the target region \(\Omega\) into multiple control points, which is referred to as the pressure-matching method. Assume that \(N\) control points are placed over \(\Omega\) and their positions are denoted by \(\mathbf{r}_{\mathrm{c},n}\) (\(n\in\{1,\ldots,N\}\)). The cost function \(J\) is approximated as the error between the synthesized and desired pressures at the control points. The optimization problem of pressure matching is described as
\[\underset{\mathbf{d}\in\mathbb{C}^{L}}{\mathrm{minimize}}\,\|\mathbf{Gd}-\mathbf{u}^{ \mathrm{des}}\|^{2}+\eta\|\mathbf{d}\|^{2}, \tag{20}\]
where \(\mathbf{u}^{\mathrm{des}}=[u_{\mathrm{des}}(\mathbf{r}_{\mathrm{c},1}),\ldots,u_{\mathrm{des}}(\mathbf{r}_{\mathrm{c},N})]^{\mathsf{T}}\in\mathbb{C}^{N}\) is the vector of the desired sound pressures and \(\mathbf{G}=[\mathbf{g}(\mathbf{r}_{\mathrm{c},1}),\ldots,\mathbf{g}(\mathbf{r}_{\mathrm{c},N})]^{\mathsf{T}}\in\mathbb{C}^{N\times L}\) is the transfer function matrix between \(L\) secondary sources and \(N\) control points. The second term is the regularization term to prevent an excessively large amplitude of \(\mathbf{d}\), and \(\eta\) is a constant parameter. The solution of Eq. (20) is obtained as
\[\mathbf{d}_{\mathrm{PM}}=\left(\mathbf{G}^{\mathsf{H}}\mathbf{G}+\eta\mathbf{I}\right)^{-1} \mathbf{G}^{\mathsf{H}}\mathbf{u}^{\mathrm{des}}. \tag{21}\]
Owing to the discrete approximation, the cost function of pressure matching is formulated so that the synthesized pressure corresponds to the desired pressure only at the control points. Therefore, the region between the control points is not taken into consideration. When the distribution of the control points is sufficiently dense, the pressure values at the control points are sufficient to represent the sound field in the target region. However, since the pressures at the control points are measured by microphones in practice, a small number of control points is preferable. Therefore, we consider approximating the cost function \(J\) by interpolating the sound field from the pressures at the control points. On the basis of the kernel interpolation introduced in Section 1.3, \(g_{l}(\mathbf{r})\) and \(u_{\mathrm{des}}(\mathbf{r})\) are interpolated from those at the control points as
\[\hat{g}_{l}(\mathbf{r}) =\mathbf{\kappa}_{\mathrm{c}}(\mathbf{r})^{\mathsf{T}}\left(\mathbf{K}_{ \mathrm{c}}+\xi\mathbf{I}\right)^{-1}\mathbf{g}_{\mathrm{c},l} \tag{22}\] \[\hat{u}_{\mathrm{des}}(\mathbf{r}) =\mathbf{\kappa}_{\mathrm{c}}(\mathbf{r})^{\mathsf{T}}\left(\mathbf{K}_{ \mathrm{c}}+\xi\mathbf{I}\right)^{-1}\mathbf{u}^{\mathrm{des}}, \tag{23}\]
where \(\mathbf{g}_{\mathrm{c},l}\) (\(\in\mathbb{C}^{N}\)) is the \(l\)th column vector of \(\mathbf{G}\), and \(\mathbf{K}_{\mathrm{c}}\in\mathbb{C}^{N\times N}\) and \(\mathbf{\kappa}_{\mathrm{c}}\in\mathbb{C}^{N}\) are respectively the matrix and vector consisting of the kernel function defined with the positions \(\{\mathbf{r}_{\mathrm{c},n}\}_{n=1}^{N}\). Then, the cost function \(J\) can be approximated as
\[J \approx\int_{\Omega}\left|\sum_{l=1}^{L}d_{l}\hat{g}_{l}(\mathbf{r})- \hat{u}_{\mathrm{des}}(\mathbf{r})\right|^{2}\mathrm{d}\mathbf{r}\] \[=\int_{\Omega}\left|\mathbf{\kappa}_{\mathrm{c}}(\mathbf{r})^{\mathsf{T} }\left(\mathbf{K}_{\mathrm{c}}+\xi\mathbf{I}\right)^{-1}\left(\mathbf{Gd}-\mathbf{u}^{ \mathrm{des}}\right)\right|^{2}\mathrm{d}\mathbf{r}\] \[=\left(\mathbf{Gd}-\mathbf{u}^{\mathrm{des}}\right)^{\mathsf{H}}\mathbf{W}_{ \mathrm{PM}}\left(\mathbf{Gd}-\mathbf{u}^{\mathrm{des}}\right), \tag{24}\]
where \(\mathbf{W}_{\mathrm{PM}}\) is defined as
\[\mathbf{W}_{\mathrm{PM}}:=\mathbf{P}^{\mathsf{H}}\int_{\Omega}\mathbf{\kappa}_{\mathrm{c}}(\mathbf{r})^{*}\mathbf{\kappa}_{\mathrm{c}}(\mathbf{r})^{\mathsf{T}}\mathrm{d}\mathbf{r}\,\mathbf{P} \tag{25}\]
with
\[\mathbf{P}:=\left(\mathbf{K}_{\mathrm{c}}+\xi\mathbf{I}\right)^{-1}. \tag{26}\]
The resulting cost function can be regarded as the weighted mean square error between the synthesized and desired pressures at the control points. Note that the weighting matrix \(\mathbf{W}_{\mathrm{PM}}\) can be computed only with the positions of the control points and the target region \(\Omega\).
The optimization problem of the weighted pressure matching is formulated using the approximated cost function (24) as
\[\underset{\mathbf{d}\in\mathbb{C}^{L}}{\mathrm{minimize}}\left(\mathbf{Gd}-\mathbf{u}^{ \mathrm{des}}\right)^{\mathsf{H}}\mathbf{W}_{\mathrm{PM}}\left(\mathbf{Gd}-\mathbf{u}^{ \mathrm{des}}\right)+\lambda\|\mathbf{d}\|^{2}, \tag{27}\]
where \(\lambda\) is the regularization parameter. This weighted least squares problem also has the closed-form solution as
\[\mathbf{d}_{\mathrm{WPM}}=\left(\mathbf{G}^{\mathsf{H}}\mathbf{W}_{\mathrm{PM}}\mathbf{G}+\lambda\mathbf{I}\right)^{-1}\mathbf{G}^{\mathsf{H}}\mathbf{W}_{\mathrm{PM}}\mathbf{u}^{\mathrm{des}}. \tag{28}\]
The weighted pressure matching enables the enhancement of the reproduction accuracy of pressure matching only by introducing the weighting matrix \(\mathbf{W}_{\mathrm{PM}}\). This idea has already been applied in the context of spatial active noise control [22, 23]. This interpolation-based sound field reproduction method is particularly effective when the region in which the control points can be placed is limited.
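To make the construction of \(\mathbf{W}_{\mathrm{PM}}\) and the driving signal in Eq. (28) concrete, a minimal NumPy sketch is given below. The regional integral in Eq. (25) is approximated by a sum over a dense grid of evaluation points (up to a constant grid-cell factor that can be absorbed into the regularization), and all parameter values are placeholders.

```python
import numpy as np
from scipy.special import spherical_jn

def weighted_pressure_matching(G, u_des, r_ctrl, r_eval, k, xi=1e-3, lam=1e-3):
    """Driving signals d_WPM of Eq. (28).

    G:      (N, L) transfer matrix from L loudspeakers to N control points
    u_des:  (N,)   desired pressures at the control points
    r_ctrl: (N, 3) control point positions; r_eval: (n_eval, 3) grid over Omega
    """
    N = r_ctrl.shape[0]
    dists = np.linalg.norm(r_ctrl[:, None, :] - r_ctrl[None, :, :], axis=-1)
    P = np.linalg.inv(spherical_jn(0, k * dists) + xi * np.eye(N))     # Eq. (26)
    kappa = spherical_jn(0, k * np.linalg.norm(
        r_eval[:, None, :] - r_ctrl[None, :, :], axis=-1))             # rows: kappa_c(r)^T
    W = P.conj().T @ (kappa.conj().T @ kappa) @ P                      # Eq. (25), discretized
    A = G.conj().T @ W @ G + lam * np.eye(G.shape[1])
    return np.linalg.solve(A, G.conj().T @ W @ u_des)                  # Eq. (28)
```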
### Weighted mode matching
Weighted mode matching is a method of solving the minimization problem of Eq. (18) on the basis of the spherical wavefunction expansion of the sound field. The desired sound field \(u_{\mathrm{des}}(\mathbf{r})\) and transfer function of the \(l\)th secondary source \(g_{l}(\mathbf{r})\) are expanded around the expansion center \(\mathbf{r}_{\mathrm{o}}\) as
\[u_{\mathrm{des}}(\mathbf{r}) =\sum_{\nu=0}^{\infty}\sum_{\mu=-\nu}^{\nu}\hat{u}_{\mathrm{des},\nu,\mu}(\mathbf{r}_{\mathrm{o}})\varphi_{\nu,\mu}(\mathbf{r}-\mathbf{r}_{\mathrm{o}}) \tag{29}\] \[g_{l}(\mathbf{r}) =\sum_{\nu=0}^{\infty}\sum_{\mu=-\nu}^{\nu}\hat{g}_{l,\nu,\mu}(\mathbf{r}_{\mathrm{o}})\varphi_{\nu,\mu}(\mathbf{r}-\mathbf{r}_{\mathrm{o}}). \tag{30}\]
By truncating the maximum order of the expansion in Eq. (30) up to \(N_{\mathrm{tr}}\), we can approximate \(u_{\mathrm{des}}\) and \(\mathbf{g}(\mathbf{r})^{\mathsf{T}}\) as
\[u_{\mathrm{des}}(\mathbf{r}) \approx\mathbf{\phi}(\mathbf{r})^{\mathsf{T}}\hat{\mathbf{u}}^{\mathrm{des}} \tag{31}\] \[\mathbf{g}(\mathbf{r})^{\mathsf{T}} \approx\mathbf{\phi}(\mathbf{r})^{\mathsf{T}}\hat{\mathbf{G}}, \tag{32}\]
where \(\bar{\mathbf{\varphi}}(\mathbf{r})\in\mathbb{C}^{(N_{\mathrm{tr}}+1)^{2}}\), \(\hat{\mathbf{u}}^{\mathrm{des}}\in\mathbb{C}^{(N_{\mathrm{tr}}+1)^{2}}\), and \(\hat{\mathbf{G}}\in\mathbb{C}^{(N_{\mathrm{tr}}+1)^{2}\times L}\) are the vectors and matrix consisting of \(\varphi_{\nu,\mu}(\mathbf{r}-\mathbf{r}_{\mathrm{o}})\), \(\hat{u}_{\mathrm{des},\nu,\mu}(\mathbf{r}_{\mathrm{o}})\), and \(\hat{g}_{l,\nu,\mu}(\mathbf{r}_{\mathrm{o}})\), respectively. Thus, the cost function \(J\) is approximated as
\[J \approx\int_{\Omega}\left|\hat{\mathbf{\varphi}}(\mathbf{r})^{\mathrm{T}} \left(\hat{\mathbf{G}}\mathbf{d}-\hat{\mathbf{u}}^{\mathrm{des}}\right)\right|^{2}\mathrm{ d}\mathbf{r}\] \[=\left(\hat{\mathbf{G}}\mathbf{d}-\hat{\mathbf{u}}^{\mathrm{des}}\right)^{ \mathrm{H}}\mathbf{W}_{\mathrm{MM}}\left(\hat{\mathbf{G}}\mathbf{d}-\hat{\mathbf{u}}^{\mathrm{ des}}\right), \tag{33}\]
where \(\mathbf{W}_{\mathrm{MM}}\in\mathbb{C}^{\left(N_{\mathrm{tr}}+1\right)^{2}\times \left(N_{\mathrm{tr}}+1\right)^{2}}\) is defined as
\[\mathbf{W}_{\mathrm{MM}}:=\int_{\Omega}\mathbf{\varphi}(\mathbf{r})^{*}\mathbf{\varphi}(\mathbf{r})^{\mathrm{T}}\mathrm{d}\mathbf{r}. \tag{34}\]
As in the weighted pressure matching, the resulting cost function can be regarded as the weighted mean square error between synthesized and desired expansion coefficients around \(\mathbf{r}_{\mathrm{o}}\). The weighting matrix \(\mathbf{W}_{\mathrm{MM}}\) can be computed only by using the spherical wavefunctions and target region \(\Omega\). In a 2D sound field, the spherical wavefunctions in the integrand are replaced with the cylindrical wavefunctions [17]. When \(\hat{\mathbf{u}}^{\mathrm{des}}\) and \(\hat{\mathbf{G}}\) are obtained from measurements, for example, to reproduce a captured sound field and/or to compensate for reverberation in the transfer functions of secondary sources, sound field capturing methods such as the infinite-dimensional harmonic analysis introduced in Section 1.3 can be applied.
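In practice, the regional integral in Eq. (34) can be approximated numerically; the sketch below sums the outer products of the (truncated) spherical wavefunction vectors over a grid of points covering \(\Omega\). The indexing \(i=\nu^{2}+\nu+\mu\) follows the convention used later in the experiments, and all other choices are illustrative.

```python
import numpy as np
from scipy.special import spherical_jn, sph_harm

def wavefunction_vector(r, k, N_tr):
    """Truncated vector of spherical wavefunctions phi_{nu,mu}(r), Eq. (2),
    stacked with index i = nu^2 + nu + mu."""
    rad = np.linalg.norm(r)
    az = np.arctan2(r[1], r[0])                      # azimuth
    pol = np.arccos(r[2] / rad) if rad > 0 else 0.0  # colatitude
    vals = []
    for nu in range(N_tr + 1):
        jn = spherical_jn(nu, k * rad)
        for mu in range(-nu, nu + 1):
            vals.append(np.sqrt(4 * np.pi) * jn * sph_harm(mu, nu, az, pol))
    return np.array(vals)

def weighting_matrix_mm(points, k, N_tr, cell_volume):
    """Riemann-sum approximation of W_MM in Eq. (34) over grid points in Omega."""
    dim = (N_tr + 1) ** 2
    W = np.zeros((dim, dim), dtype=complex)
    for r in points:
        phi = wavefunction_vector(r, k, N_tr)
        W += np.outer(phi.conj(), phi) * cell_volume
    return W
```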
The optimization problem of the weighted mode matching is formulated using the approximated cost function \(J\) in Eq. (33) as
\[\underset{\mathbf{d}\in\mathbb{C}^{\mathrm{L}}}{\mathrm{minimize}} \left(\hat{\mathbf{G}}\mathbf{d}-\hat{\mathbf{u}}^{\mathrm{des}}\right)^{\mathrm{H}}\mathbf{ W}_{\mathrm{MM}}\left(\hat{\mathbf{G}}\mathbf{d}-\hat{\mathbf{u}}^{\mathrm{des}}\right)+ \gamma\|\mathbf{d}\|^{2}, \tag{35}\]
where \(\gamma\) is the regularization parameter. Again, this weighted least squares problem can be solved as
\[\mathbf{d}_{\mathrm{WMM}}=\left(\hat{\mathbf{G}}^{\mathrm{H}}\mathbf{W}_{\mathrm{MM}}\hat{\mathbf{G}}+\gamma\mathbf{I}\right)^{-1}\hat{\mathbf{G}}^{\mathrm{H}}\mathbf{W}_{\mathrm{MM}}\hat{\mathbf{u}}^{\mathrm{des}}. \tag{36}\]
The weights for each expansion coefficient are determined by the weighting matrix \(\mathbf{W}_{\mathrm{MM}}\). When \(\mathbf{W}_{\mathrm{MM}}\) is the identity matrix, Eq. (36) corresponds to the driving signal of standard mode matching.
\[\mathbf{d}_{\mathrm{MM}}=\left(\hat{\mathbf{G}}^{\mathrm{H}}\hat{\mathbf{G}}+\gamma\mathbf{I}\right)^{-1}\hat{\mathbf{G}}^{\mathrm{H}}\hat{\mathbf{u}}^{\mathrm{des}}. \tag{37}\]
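Given the truncated expansion coefficients, Eqs. (36) and (37) are regularized weighted least-squares solutions; a minimal NumPy sketch is shown below, with \(\hat{\mathbf{G}}\), \(\hat{\mathbf{u}}^{\mathrm{des}}\), and \(\mathbf{W}_{\mathrm{MM}}\) assumed to be precomputed (e.g., with the weighting-matrix sketch above).

```python
import numpy as np

def mode_matching(G_hat, u_hat_des, gamma=1e-3, W=None):
    """Driving signals of (weighted) mode matching.

    G_hat:     ((N_tr+1)^2, L) expansion coefficients of the transfer functions
    u_hat_des: ((N_tr+1)^2,)   expansion coefficients of the desired field
    W:         weighting matrix W_MM; the identity recovers standard MM, Eq. (37).
    """
    if W is None:
        W = np.eye(G_hat.shape[0])
    A = G_hat.conj().T @ W @ G_hat + gamma * np.eye(G_hat.shape[1])
    return np.linalg.solve(A, G_hat.conj().T @ W @ u_hat_des)   # Eq. (36)
```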
In the mode matching, the appropriate setting of the truncation order \(N_{\mathrm{tr}}\) for the spherical wavefunction expansion is necessary. When the target region \(\Omega\) is a spherical region of radius \(R\), \(N_{\mathrm{tr}}=\left\lceil kR\right\rceil\) is empirically known to be a proper truncation criterion; however, when \(\Omega\) is not spherical, the appropriate setting of \(N_{\mathrm{tr}}\) is not simple. In particular, the target region of the sound field reproduction is sometimes set to be around a horizontal plane because listeners can be considered not to move largely in the vertical directions.
### Relationship between weighted pressure and mode matching
As discussed in Sections 3.1 and 3.2, the weighted pressure and mode matching can be regarded as a generalization of pressure and mode matching. Furthermore, the weighted pressure matching can be regarded as a special case of the weighted mode matching. Suppose that the expansion coefficients \(\hat{\mathbf{u}}^{\mathrm{des}}\) and \(\hat{\mathbf{G}}\) are estimated from the pressure observations at the control points \(\left\{\mathbf{r}_{\mathrm{c},n}\right\}_{n=1}^{N}\). On the basis of infinite-dimensional harmonic analysis in Section 1.3, \(\hat{\mathbf{u}}^{\mathrm{des}}\) and \(\hat{\mathbf{G}}\) are estimated as
\[\hat{\mathbf{u}}^{\mathrm{des}} =\mathbf{\Xi}_{\mathrm{c}}(\mathbf{r}_{\mathrm{o}})(\mathbf{\Psi}_{\mathrm{c}} +\xi\mathbf{I})^{-1}\mathbf{u}^{\mathrm{des}}\] \[\hat{\mathbf{G}} =\mathbf{\Xi}_{\mathrm{c}}(\mathbf{r}_{\mathrm{o}})(\mathbf{\Psi}_{\mathrm{c}} +\xi\mathbf{I})^{-1}\mathbf{G}, \tag{38}\]
where \(\mathbf{\Xi}_{\mathrm{c}}\) and \(\mathbf{\Psi}_{\mathrm{c}}\) are the matrices defined in Eqs. (10) and (12) with the control positions \(\left\{\mathbf{r}_{\mathrm{c},n}\right\}_{n=1}^{N}\), respectively. Therefore, the cost function \(J\) of the weighted mode matching becomes
\[J \approx\left(\hat{\mathbf{G}}\mathbf{d}-\hat{\mathbf{u}}^{\mathrm{des}}\right)^ {\mathrm{H}}\mathbf{W}_{\mathrm{MM}}\left(\hat{\mathbf{G}}\mathbf{d}-\hat{\mathbf{u}}^{\mathrm{ des}}\right)\] \[=\left(\mathbf{G}\mathbf{d}-\mathbf{u}^{\mathrm{des}}\right)^{\mathrm{H}}\mathbf{ Q}^{\mathrm{H}}\mathbf{\Xi}_{\mathrm{c}}(\mathbf{r}_{\mathrm{o}})^{\mathrm{H}}\mathbf{W}_{ \mathrm{MM}}\mathbf{\Xi}_{\mathrm{c}}(\mathbf{r}_{\mathrm{o}})\mathbf{Q}\left(\mathbf{G}\mathbf{d} -\mathbf{u}^{\mathrm{des}}\right), \tag{39}\]
where
\[\mathbf{Q}:=(\mathbf{\Psi}_{\mathrm{c}}+\xi\mathbf{I})^{-1}. \tag{40}\]
Since the observations at the control points are assumed to be pressure, i.e., omnidirectional microphone measurements, \(\mathbf{\Psi}_{\mathrm{c}}\) is equivalent to \(\mathbf{\mathcal{K}}_{\mathrm{c}}\), thus \(\mathbf{Q}=\mathbf{P}\). Moreover, \(\mathbf{\Xi}_{\mathrm{c}}(\mathbf{r}_{\mathrm{o}})^{\mathrm{H}}\mathbf{W}_{\mathrm{MM}}\mathbf{ \Xi}_{\mathrm{c}}(\mathbf{r}_{\mathrm{o}})\) is calculated as
\[\mathbf{\Xi}_{\mathrm{c}}(\mathbf{r}_{\mathrm{o}})^{\mathrm{H}}\mathbf{W}_{ \mathrm{MM}}\mathbf{\Xi}_{\mathrm{c}}(\mathbf{r}_{\mathrm{o}})\] \[= \int_{\Omega}\left(\mathbf{\varphi}(\mathbf{r}-\mathbf{r}_{\mathrm{o}})^{ \mathrm{T}}\mathbf{\Xi}_{\mathrm{c}}(\mathbf{r}_{\mathrm{o}})\right)^{\mathrm{H}} \left(\mathbf{\varphi}(\mathbf{r}-\mathbf{r}_{\mathrm{o}})^{\mathrm{T}}\mathbf{\Xi}_{\mathrm{c}} (\mathbf{r}_{\mathrm{o}})\right)\mathrm{d}\mathbf{r}\] \[= \int_{\Omega}\mathbf{\kappa}_{\mathrm{c}}(\mathbf{r})^{*}\mathbf{\kappa}_{ \mathrm{c}}(\mathbf{r})^{\mathrm{T}}\mathrm{d}\mathbf{r}, \tag{41}\]
because
\[\mathbf{\varphi}(\mathbf{r}-\mathbf{r}_{\mathrm{o}})^{\mathrm{T}}\mathbf{\Xi}_{\mathrm{c}}(\mathbf{r}_{\mathrm{o}})=\left[j_{0}(k\|\mathbf{r}-\mathbf{r}_{\mathrm{c},1}\|),\,\ldots,\,j_{0}(k\|\mathbf{r}-\mathbf{r}_{\mathrm{c},N}\|)\right]. \tag{42}\]
Here, property (7) is used. Note that \(\left\{\mathbf{c}_{n}\right\}_{n=1}^{N}\) is obtained as Eq. (13). In summary, when the expansion coefficients \(\hat{\mathbf{u}}^{\mathrm{des}}\) and \(\hat{\mathbf{G}}\) in the weighted mode matching are obtained by infinite-dimensional harmonic analysis from the pressure observations at the control points \(\mathbf{u}^{\mathrm{des}}\) and \(\mathbf{G}\), the weighted mode matching corresponds to the weighted pressure matching.
## 4 Experiments
We conducted experiments to compare pressure matching, weighted pressure matching, mode matching, and weighted mode matching, which are hereafter denoted as PM, WPM, MM, and WMM, respectively. First, we show numerical simulation results. Then, experimental results obtained using real data are demonstrated.
### Numerical simulation
The reproduction performances of the four methods are evaluated by numerical simulation in a 3D free field. Figure 2 shows the experimental setup. A target region, loudspeakers, and control points were set on the \(x\)-\(y\)-plane at \(z=0\). Forty-eight loudspeakers were regularly placed
along the border of a square with dimensions \(2.0~{}\mathrm{m}\times 2.0~{}\mathrm{m}\). The target region \(\Omega\) was set as a 2D square region of \(1.0~{}\mathrm{m}\times 1.0~{}\mathrm{m}\) at \(z=0\). The centers of these squares were at the origin. Thirty-six control points were regularly placed over the target region. In Fig. 2, the loudspeakers and control points are indicated by red dots and blue crosses, respectively. Each loudspeaker was assumed to be a point source. The desired sound field was a single plane wave, whose propagation direction was \((\theta,\phi)=(\pi/2,\pi/4)~{}\mathrm{rad}\).
In PM and WPM, \(\mathbf{u}^{\text{des}}\) and \(\mathbf{G}\) in Eqs. (21) and (28) were given as pressure values at the control points. The expansion coefficients \(\hat{\mathbf{G}}\) in Eqs. (37) and (36) were estimated up to the maximum order \(N_{\text{tr}}\) from \(\mathbf{G}\) by infinite-dimensional harmonic analysis (11) in the mode and weighted mode matching. The desired expansion coefficients \(\hat{\mathbf{u}}^{\text{des}}\) were analytically given up to \(N_{\text{tr}}\). In MM, the truncation order was determined as \(N_{\text{tr}}=\lceil kR\rceil\), where \(R\) was set to \(0.5\sqrt{2}~{}\mathrm{m}\) to cover the target region. Furthermore, to enhance the reproduction accuracy on the \(x\)-\(y\)-plane at \(z=0\), the coefficients of \(\nu=|\mu|\) were only used [24]. The truncation order \(N_{\text{tr}}\) for WMM was set to 30, which is sufficiently larger than the maximum required order of MM. The regularization parameters in Eqs. (21), (28), (37), and (36) were determined at each frequency as \(\mathbf{\sigma}_{\text{max}}^{2}(\mathbf{A})\times 10^{-3}\), where \(\sigma_{\text{max}}^{2}(\mathbf{A})\) is the maximum eigenvalue of the matrix to be inverted \(\mathbf{A}\). Therefore, \(\mathbf{A}\) is \(\mathbf{G}^{\text{H}}\mathbf{G}\), \(\mathbf{G}^{\text{H}}\mathbf{W}_{\text{PM}}\mathbf{G}\), \(\hat{\mathbf{G}}^{\text{H}}\hat{\mathbf{G}}\), and \(\hat{\mathbf{G}}^{\text{H}}\mathbf{W}_{\text{MM}}\hat{\mathbf{G}}\) in PM, WPM, MM, and WMM, respectively. The parameter \(\xi\) in Eqs. (26) and (11) was set as \(\sigma_{\text{max}}(\mathbf{K})\times 10^{-3}\) at each frequency.
For evaluation measure in the frequency domain, we define the signal-to-distortion ratio (SDR) as
\[\text{SDR}(\mathbf{\omega})=\frac{\int_{\Omega}|u_{\text{des}}(\mathbf{r},\mathbf{\omega} )|^{2}\mathrm{d}\mathbf{r}}{\int_{\Omega}|u_{\text{syn}}(\mathbf{r},\mathbf{\omega})-u_{ \text{des}}(\mathbf{r},\mathbf{\omega})|^{2}\mathrm{d}\mathbf{r}}, \tag{43}\]
where the integration was computed at the evaluation points. The evaluation points were obtained by regularly discretizing the target region every 0.02 m.
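As a short illustration, the SDR of Eq. (43) can be evaluated directly on the grid of evaluation points; the optional conversion to decibels matches how the SDR values are reported below.

```python
import numpy as np

def sdr(u_syn, u_des, in_db=True):
    """Eq. (43) evaluated on the evaluation-point grid; the constant grid
    spacing cancels in the ratio. Optionally expressed in decibels."""
    ratio = np.sum(np.abs(u_des) ** 2) / np.sum(np.abs(u_syn - u_des) ** 2)
    return 10.0 * np.log10(ratio) if in_db else ratio
```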
The SDR with respect to the frequency is plotted from 100 Hz to 1500 Hz in Fig. 3. The SDRs of MM were smaller than those of the other three methods below 1000 Hz. This can be considered to be due to the empirical truncation and weighting for the expansion coefficients in MM. Note that the reproduction accuracy further deteriorated when all the expansion coefficients up to the truncation order were used without the extraction of \(\nu=|\mu|\). The other three methods, PM, WPM, and WMM, achieved high reproduction accuracy. However, the SDRs of PM sharply decreased above 1000 Hz. The SDRs of WPM and WMM were slightly higher than those of PM below 1000 Hz, and they were maintained high up to 1100 Hz. Furthermore, the plots of WPM and WMM almost overlapped below 1200 Hz because of the equivalence between the two methods except the setting of the desired sound field, i.e., the desired pressures at the control points or desired expansion coefficients.
As an example, the synthesized pressure distribution of each method at 1100 Hz is shown in Fig. 4. Figure 5 is the square error distribution of each method at 1100 Hz. In WPM and WMM, the error was particularly small around a line in the target region. This is due to the 2D placement of the loudspeakers in 3D space. The amplitude of the synthesized sound field in PM was high outside the target region. In MM, the region of small reproduction error was limited around the center of the target region. The SDRs at this frequency were 12.9, 18.0, 14.4, and 18.2 dB for PM, WPM, MM, and WMM, respectively.
Next, we consider the case in which the expansion coefficients of the transfer functions \(\hat{\mathbf{G}}\) are also analytically given in MM and WMM to investigate the difference between WPM and WMM. The other settings were the same as the previous ones. Figure 6 shows the SDR with respect to the frequency. Note that the results of PM and WPM are the same as those in Fig. 3. The SDRs of MM and WMM gradually decreased, but the sharp decrease in SDR that appeared in Fig. 3 did not occur up to 1500 Hz. Therefore, the sharp decrease of the SDR in Fig. 3 can be considered to be due to the limited estimation accuracy of the expansion coefficients from the pressure measurements at the control points. The SDRs of PM and WPM at 1000 Hz are plotted with respect to the number of control points in Fig. 7. In each case, the control points were regularly placed in the target region. To attain 18.4 dB of SDR, 196 control points were necessary for PM, although 64 control points were sufficient for WPM owing to the interpolation by the
Fig. 3: SDR with respect to frequency.
Fig. 2: Experimental setup for numerical simulation. The target region was set as a 2D square region. Red dots and blue crosses indicate loudspeakers and control points, respectively.
weighting matrix \(\mathbf{W}_{\rm PM}\). The absolute value of the weighting matrix \(|\mathbf{W}_{\rm PM}|\) for \(M=64\) is shown in Fig. 8.
MM and WMM do not depend on the control points in this setting. Figure 9 shows the SDR with respect to the maximum order \(N_{\rm tr}\) in the spherical wavefunction expansion. The black line indicates the order of \(\lceil kR\rceil\) used as the truncation criterion for MM in the previous experiment (Fig. 3). From \(N_{\rm tr}=2\) to 14, the SDR of MM increased up to around 14.8 dB, and it was maintained up to \(N_{\rm tr}=23\). However, above \(N_{\rm tr}=24\), the SDR of MM sharply decreased. The SDR of WMM attained 18.4 dB above \(N_{\rm tr}=15\), although it was lower than that of MM between \(N_{\rm tr}=4\) and 12. Although an excessively large truncation order degrades the reproduction accuracy in MM, the weighting matrix \(\mathbf{W}_{\rm MM}\) in WMM appropriately weights the expansion coefficients to enhance the reproduction accuracy in the target region. The absolute value of the weighting matrix \(|\mathbf{W}_{\rm MM}|\) at 1000 Hz is shown in Fig. 10(a) up to \(N_{\rm tr}=7\). The index of \(\mathbf{W}_{\rm MM}\), denoted by \(i\), corresponds to the order \(\nu\) and degree \(\mu\) as \(i=\nu^{2}+\nu+\mu\). The blue lines indicate the range of the indexes of the same \(\nu\). The diagonal elements of \(|\mathbf{W}_{\rm MM}|\) are shown in Fig. 10(b) by sorting them with respect to \(\nu\) and \(\mu\). The weights on the expansion coefficients of \(\nu=|\mu|\) were relatively larger than those of the other coefficients. Therefore, the empirical weighting scheme of MM, i.e., the extraction of the components of \(\nu=|\mu|\), is somewhat reasonable. However, the weighting
Fig. 4: Reproduced pressure distribution at 1100 Hz. SDRs of PM, WPM, MM, and WMM were 12.9, 18.0, 14.4, and 18.2 dB, respectively.
Fig. 5: Square error distribution at 1100 Hz.
Fig. 8: Absolute value of weighting matrices of WPM \(|\mathbf{W}_{\rm PM}|\) (\(M=64\)) at 1000 Hz.
Fig. 6: SDR with respect to frequency when true expansion coefficients were used in MM and WMM.
Fig. 7: SDR with respect to number of control points at 1000 Hz.
matrix obtained by Eq. (34) enables achieving much higher reproduction accuracy.
### Experiments using real data
We conducted experiments using impulse responses measured in a practical environment, included in the recently published impulse response dataset MeshRIR [25]. The positions of the loudspeakers and evaluation points are shown in Fig. 11. Along the borders of two squares with dimensions of \(2.0\) m \(\times\) 2.0 m at heights of \(z=-0.2\) m and \(0.2\) m, \(32\) loudspeakers were regularly placed; therefore, \(16\) loudspeakers were placed along each square. We used ordinary closed loudspeakers (YAMAHA, VXS1MLB). The measurement region was a square with dimensions of \(1.0\) m \(\times\) 1.0 m at \(z=0.0\) m. The measurement region was discretized at intervals of \(0.05\) m, and \(21\times 21\) (\(=441\)) evaluation points were obtained; therefore, the spatial Nyquist frequency is around \(3400\) Hz. We measured the impulse response at each evaluation point using an omnidirectional microphone (Primo, EM272J) attached to a Cartesian robot (see Fig. 12). The excitation signal for the impulse response measurements was a linear swept-sine signal [26]. The reverberation time \(T_{60}\) was \(190\) ms. The details of the measurement conditions are described in Ref. [25]. The sampling frequency of the impulse responses was \(48\) kHz, but they were downsampled to \(8\) kHz.
We compared the four methods in terms of their reproduction performance in a practical environment. The target region was the same as the region of the evaluation points. Thirty-six microphone positions were regularly chosen from the evaluation points, which were used as control points in PM and WPM, and to estimate the expansion coefficients of the transfer functions \(\tilde{\mathbf{G}}\) in MM and WMM. The expansion coefficients were estimated up to the \(12\)th order. In MM, the truncation order was set to \(N_{\text{tr}}=\min(12,\lceil kR\rceil)\) with \(R=0.5\sqrt{2}\) m, and only the expansion coefficients of \(\nu=|\mu|\) were used. Again, the regularization parameter in Eqs. (21), (28), (37), and (36) was set as \(\sigma_{\max}^{2}(\mathbf{A})\times 10^{-3}\), where \(\mathbf{A}\) is the matrix to be inverted at each frequency. The parameter \(\xi\)
Fig. 11: Positions of loudspeakers and evaluation points in experiments using real data.
Fig. 12: Impulse response measurement system.
Fig. 9: SDR with respect to maximum order of spherical wavefunctions \(N_{\text{tr}}\) at \(1000\) Hz. Black line indicates the order of \(\lceil kR\rceil\).
was set as \(\sigma_{\max}(\mathbf{K})\times 10^{-3}\). We set the desired sound field to a single plane wave propagating in the direction \((\theta,\phi)=(\pi/2,-\pi/4)\). The source signal was a pulse whose frequency band was low-pass-filtered up to 900 Hz. The filter for obtaining the driving signals was designed in the time domain, and its length was 8192 samples. As the evaluation measure in the time domain, we define \(\overline{\text{SDR}}\) as
\[\overline{\text{SDR}}=10\log_{10}\frac{\iint|u_{\text{des}}(\mathbf{r},t)|^{2}\,\mathrm{d}\mathbf{r}\,\mathrm{d}t}{\iint|u_{\text{syn}}(\mathbf{r},t)-u_{\text{des}}(\mathbf{r},t)|^{2}\,\mathrm{d}\mathbf{r}\,\mathrm{d}t}. \tag{44}\]
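As a usage illustration of Eq. (44), once the desired and synthesized fields are sampled on the evaluation grid the integrals reduce to sums. The sketch below uses hypothetical array names and random data, and assumes the 10 log10 convention used for the dB values reported below; it is not code from the paper.

```python
import numpy as np

def sdr_time_domain(u_des, u_syn):
    """Time-domain SDR of Eq. (44) in dB; arrays have shape (points, samples)."""
    num = np.sum(np.abs(u_des) ** 2)
    den = np.sum(np.abs(u_syn - u_des) ** 2)
    return 10.0 * np.log10(num / den)

# Hypothetical example: 441 evaluation points and 8192 time samples,
# mimicking the grid and filter length used in the experiments.
rng = np.random.default_rng(0)
u_des = rng.standard_normal((441, 8192))
u_syn = u_des + 0.5 * rng.standard_normal((441, 8192))  # imperfect reproduction
print(f"SDR = {sdr_time_domain(u_des, u_syn):.2f} dB")
```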
Figure 13 shows the reproduced pressure distributions at \(t=0.51\) s. Time-averaged square error distributions are shown in Fig. 14. In PM, a small time-averaged square error was observed at the positions of the control points, but the region between them contained large errors. The time-averaged square error of MM was small around the center of the target region, but it was high in the off-center region. In WPM and WMM, a small square error was obtained over the target region. The \(\overline{\text{SDR}}\)s of PM, WPM, MM, and WMM were 1.73, 3.57, 2.43, and 3.48 dB, respectively.
## 5 Discussion
The weighting matrices in the weighted pressure and mode matching, \(\mathbf{W}_{\text{PM}}\) and \(\mathbf{W}_{\text{MM}}\), were derived to enhance the reproduction accuracy of pressure and mode matching. Although only simple formulations were shown in order to discuss the relationship between the two methods, the reproduction accuracy can be further enhanced by introducing directional weighting for sound field capturing and/or regional weighting for sound field reproduction [12, 14, 16]. Here, we discuss the difference between the two methods in detail.
Although the cost functions of the weighted pressure and mode matching are similar, the roles of the weighting matrices are different. The weighting matrix \(\mathbf{W}_{\text{PM}}\) in the weighted pressure matching is derived from the interpolation of the pressure field between the control points based on kernel ridge regression, to alleviate the effect of spatial aliasing artifacts owing to the spatial sampling in the target region. In contrast, the weighted mode matching is formulated based on the spherical wavefunction expansion with the given expansion coefficients of the transfer functions and desired field. Therefore, the weighted mode matching, as well as mode matching, does not suffer from spatial aliasing owing to the sound field capturing as long as accurate expansion coefficients are given. The weighting matrix \(\mathbf{W}_{\text{MM}}\) is derived from the approximation of the original cost function \(J\) in Eq. (18) instead of simply matching the expansion coefficients up to an empirical truncation order.
However, in practical situations, the expansion coefficients of the transfer functions \(\mathbf{\hat{G}}\) must be estimated from the microphone measurements because it is difficult to accurately model the practical loudspeakers and reverberation without measurements. The expansion coefficients of the desired field \(\mathbf{\hat{u}}^{\text{des}}\) must also be estimated from a discrete set of measurements when their analytical representations are difficult to obtain. The infinite-dimensional harmonic analysis is one of the methods to estimate the expansion coefficients from the measurements. As shown in Section 3.3, when the expansion coefficients \(\mathbf{\hat{G}}\) and \(\mathbf{\hat{u}}^{\text{des}}\) in the weighted mode matching are estimated from the pressure observations at the control points by the infinite-dimensional harmonic analysis, the weighted mode matching corresponds to the weighted pressure matching. In the experiments, the reproduction accuracy of these two methods was almost identical. Since the computation of \(\mathbf{W}_{\text{PM}}\) is generally simpler than that of \(\mathbf{W}_{\text{MM}}\) and the estimation operator of the infinite-dimensional harmonic analysis in Eq. (11), the weighted pressure matching is simpler to implement than the weighted mode matching.
Figure 14: Time-averaged square error distribution.
Figure 13: Reproduced pressure distribution at \(t=0.51\) s. \(\overline{\text{SDR}}\)s of PM, WPM, MM, and WMM were 1.73, 3.57, 2.43, 3.48 dB, respectively.
However, the weighted pressure matching is applicable only when pressure measurements at the control points are available, because the kernel function is derived for interpolating the pressures. When the microphones have directivity, the infinite-dimensional harmonic analysis can still be applied.
Another difference is the number of parameters needed to represent the sound field. It has been shown in Ref. [12] that the number of expansion coefficients required for the weighted mode matching can be smaller than the number of control points required for pressure matching when the target region is a sphere (see Fig. 4 in Ref. [12]). When the target region is not a sphere, for example, a horizontal plane as in the experiments, the representation by the spherical wavefunction expansion is sometimes redundant, and that is the reason why mode matching does not perform well in the experiments. In the experiment in Section 4.1, the maximum order \(N_{\text{tr}}=15\) required to attain 18.4 dB of SDR in the weighted mode matching corresponds to 256 expansion coefficients, which is much larger than the number of control points, 64, required to attain the same SDR in the weighted pressure matching. The number of control points can be further reduced by sensor placement methods [13]. However, the weighting matrix \(\boldsymbol{W}_{\text{MM}}\) of the weighted mode matching is significantly sparse, as shown in Fig. 10. By extracting the columns and rows of the index set \(\{k\mid\sum_{i}|\boldsymbol{W}_{\text{MM},i,k}|+\sum_{j}|\boldsymbol{W}_{\text{MM},k,j}|>\delta\}\) with \(\delta=\max(|\boldsymbol{W}_{\text{MM}}|)\times 10^{-3}\), the number of expansion coefficients was reduced to 120 with the same SDR. Therefore, it is possible to extract the required expansion coefficients based on the weighting matrix \(\boldsymbol{W}_{\text{MM}}\) to reduce the number of parameters needed to represent the sound field. In addition, the expansion coefficients of the spherical wavefunctions are compatible with the existing ambisonics format. Their independence from the microphone positions as an intermediate representation is useful for storing and transmitting data.
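A minimal sketch of this extraction step is given below, assuming a hypothetical weighting matrix indexed as \(i=\nu^{2}+\nu+\mu\); the threshold follows the rule \(\delta=\max(|\boldsymbol{W}_{\text{MM}}|)\times 10^{-3}\) stated above, and the toy matrix is only for illustration.

```python
import numpy as np

def extract_significant_coefficients(W, rel_threshold=1e-3):
    """Keep indices whose combined row and column absolute sums exceed
    delta = max|W| * rel_threshold, and return the reduced matrix."""
    absW = np.abs(W)
    delta = absW.max() * rel_threshold
    score = absW.sum(axis=0) + absW.sum(axis=1)  # column sum + row sum per index k
    keep = np.flatnonzero(score > delta)
    return keep, W[np.ix_(keep, keep)]

# Hypothetical sparse 256 x 256 matrix, i.e. (N_tr + 1)^2 coefficients for N_tr = 15,
# ordered as i = nu^2 + nu + mu.
rng = np.random.default_rng(1)
W_toy = rng.standard_normal((256, 256)) * (rng.random((256, 256)) < 0.02)
keep, W_reduced = extract_significant_coefficients(W_toy)
print(f"kept {keep.size} of {W_toy.shape[0]} expansion coefficients")
```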
Although we focused on the relationship between the weighted pressure and mode matching, a common issue for sound field reproduction methods, including both analytical and numerical methods, is spatial aliasing owing to the discrete arrangement of the secondary sources. Although this issue is beyond the scope of this paper, we briefly discuss the spatial aliasing problem in sound field reproduction here. Based on the single layer potential [27], any source-free sound field in the interior target region can be synthesized by continuously distributed point sources on a surface surrounding the target region. Since the continuous distribution is replaced with a discrete set of secondary sources in practice, the reproduction accuracy can deteriorate at high frequencies. Specifically, degradation in sound localization and coloration of reproduced sounds can occur. In some applications such as local-field reproduction and noise cancellation, the reproduced frequency range is targeted at low frequencies; therefore, the required number of secondary sources for accurate reproduction is relatively small. Sound field reproduction for the audible frequency range requires a large number of secondary sources. Several attempts have been made to combine sound field reproduction with other spatial audio reproduction techniques at high frequencies [28], prioritizing a flat amplitude response under the assumption that an inaccurate phase distribution is acceptable to the human auditory system at high frequencies. Nevertheless, there are several techniques to further reduce the number of secondary sources. The first technique is to reduce the number of parameters to be controlled, which makes the problem to be solved in the (weighted) pressure and mode matching overdetermined even with a small number of secondary sources. For example, by limiting the range of the target region and introducing the regional importance of reproduction, the number of control points or expansion coefficients to be controlled can be reduced. As in the experiments, the target region is frequently limited to the horizontal plane because the listeners' ears can be assumed to be approximately on the same plane in practical situations. The second technique is the optimization of the secondary source placement [13; 29; 30]. By selecting an optimal set of secondary source positions from candidate positions according to a certain criterion, the minimum required number of secondary sources and their optimal placement can be obtained. We consider that spatial aliasing owing to the secondary sources is still an open issue in this field.
## 6 Conclusion
Theoretical and experimental comparisons of two sound field reproduction methods, weighted pressure and mode matching, were carried out, which can be regarded as a generalization of conventional pressure and mode matching, respectively. In the weighted pressure matching, the weighting matrix is obtained on the basis of the kernel interpolation of the sound field from the pressure at the control points. The weighted mode matching is derived on the basis of the spherical wavefunction expansion of the sound field, and the weighting matrix is defined as the regional integration of the spherical wavefunctions. When the expansion coefficients of the desired sound field and transfer functions are estimated from the pressure observations at the control points by infinite-dimensional harmonic analysis, the weighted mode matching corresponds to the weighted pressure matching. In this sense, the weighted mode matching is more general than the weighted pressure matching because the desired sound field can be given as the analytical formulation of expansion coefficients and directional microphones can also be used to estimate the expansion coefficients. The advantage of the weighted pressure matching is its simplicity for implementation. The difference in the number of parameters required to represent the sound field is discussed through the experiments. The redundancy of the spherical wavefunction expansion when the target region is not a sphere can be alleviated to some extent by extracting the expansion coefficients based on the weighting matrix of the weighted mode matching.
## 7 Acknowledgment
This work was supported by JST FOREST Program, Grant Number JPMJFR216M, and JSPS KAKENHI, Grant Number 22H03608.
|
2306.16781
|
Prospects of measuring Gamma-ray Burst Polarisation with the Daksha
mission
|
The proposed Daksha mission comprises of a pair of highly sensitive space
telescopes for detecting and characterising high-energy transients such as
electromagnetic counterparts of gravitational wave events and gamma-ray bursts
(GRBs). Along with spectral and timing analysis, Daksha can also undertake
polarisation studies of these transients, providing data crucial for
understanding the source geometry and physical processes governing high-energy
emission. Each Daksha satellite will have 340 pixelated Cadmium Zinc Telluride
(CZT) detectors arranged in a quasi-hemispherical configuration without any
field-of-view collimation (open detectors). These CZT detectors are good
polarimeters in the energy range 100 -- 400 keV, and their ability to measure
polarisation has been successfully demonstrated by the Cadmium Zinc Telluride
Imager (CZTI) onboard AstroSat. Here we demonstrate the hard X-ray polarisation
measurement capabilities of Daksha and estimate the polarisation measurement
sensitivity (in terms of the Minimum Detectable Polarisation: MDP) using
extensive simulations. We find that Daksha will have MDP of $30\%$ for a
fluence threshold of $10^{-4}$ erg cm$^{-2}$ (in 10 -- 1000 keV). We estimate that
with this sensitivity, if GRBs are highly polarised, Daksha can measure the
polarisation of about five GRBs per year.
|
Suman Bala, Sujay Mate, Advait Mehla, Parth Sastry, N. P. S. Mithun, Sourav Palit, Mehul Vijay Chanda, Divita Saraogi, C. S. Vaishnava, Gaurav Waratkar, Varun Bhalerao, Dipankar Bhattacharya, Shriharsh Tendulkar, Santosh Vadawale
|
2023-06-29T08:38:25Z
|
http://arxiv.org/abs/2306.16781v2
|
# Prospects of measuring Gamma-ray Burst Polarisation with the _Daksha_ mission
###### Abstract
The proposed _Daksha_ mission comprises a pair of highly sensitive space telescopes for detecting and characterising high-energy transients such as electromagnetic counterparts of gravitational wave events and gamma-ray bursts (GRBs). Along with spectral and timing analysis, _Daksha_ can also undertake polarisation studies of these transients, providing data crucial for understanding the source geometry and physical processes governing high-energy emission. Each _Daksha_ satellite will have 340 pixelated Cadmium Zinc Telluride (CZT) detectors arranged in a quasi-hemispherical configuration without any field-of-view collimation (open detectors). These CZT detectors are good polarimeters in the energy range 100 - 400 keV, and their ability to measure polarisation has been successfully demonstrated by the Cadmium Zinc Telluride Imager (CZTI) onboard _AstroSat_. Here we demonstrate the hard X-ray polarisation measurement capabilities of _Daksha_ and estimate the polarisation measurement sensitivity (in terms of the Minimum Detectable Polarisation: MDP) using extensive simulations. We find that _Daksha_ will have an MDP of \(30\%\) for a fluence threshold of \(10^{-4}\ \mathrm{erg\ cm^{-2}}\) (in 10 - 1000 keV). We estimate that with this sensitivity, if GRBs are highly polarised, _Daksha_ can measure the polarisation of about five GRBs per year.
gamma-ray burst: general, instrumentation: polarimeters, methods: numerical, techniques: polarimetric.
*Suman Bala, [email protected]_
*Sujay Mate, [email protected]_
* These authors contributed equally to this work.
## 1 Introduction
Gamma-Ray Bursts (GRBs) [1] are the most energetic explosions in the Universe. The initial brief and intense gamma-ray flash originates close to the burst site and is known as the prompt emission. When the burst ejecta collide with the external ambient medium, they produce the afterglow emission, which radiates at all wavelengths (radio to gamma-rays) [2, 3, 4]. GRBs are classified into two categories depending on the duration of the prompt emission phase, and the two classes originate from different progenitors. Short GRBs are produced by the merger of two compact objects, such as binary neutron stars (BNS) or a neutron star-black hole (NS-BH) pair [5, 6, 7], whereas long GRBs originate from the core collapse of massive stars [8, 9, 10]. In both cases, the central engine is believed to be either a black hole [8, 9, 10] or a hyper-massive magnetar [5, 6], which is thought to launch a relativistic jet giving rise to the GRB.
We have a broad understanding of GRBs, but many detailed questions regarding the nature of the central engine and the emission from the relativistic jet still remain unanswered [11, 12]. There
are competing models regarding the exact energy dissipation process, radiation mechanism, and radiation transfer processes in the prompt emission phase [13]. It is well established that these combined processes produce a non-thermal spectrum, often fitted with various phenomenological models like a power law, power law with an exponential cutoff, Band function, etc. [14, 15]. However, most of the spectral analyses show similar fit statistics when fitted with different physical and empirical models [16, 17]. On the other hand, different physical models predict different degrees of polarisation. Hence a statistical analysis of polarisation properties of the GRB prompt emission phase can be a very useful tool to constrain the models and answer questions regarding the emission mechanism and geometry of GRB jets [13, 18, 19, 20, 21].
High-energy polarisation measurement is a very photon-hungry task. Due to the short-lived nature of the prompt emission phase, it has been extremely difficult to measure the polarisation of GRBs. Polarisation measurements exist only for \(\sim 40\) GRBs out of \(>5000\) GRBs detected so far. The majority of these detections have come only in the last few years, with _POLAR_ [12, 22] and _AstroSat_/CZTI [23, 24] leading the numbers and other missions like _BATSE_ [25], _GAP_ [26] and _INTEGRAL_/SPI [27, 28] contributing a few measurements. However, different missions have measured different levels of polarisation in the observed GRBs. For example, _GAP_ and _INTEGRAL_ have measured high levels of polarisation (polarisation fraction or degree \(>60\%\)) in their samples, while _POLAR_ and _AstroSat_ measurements find a low level of polarisation in their samples. In a few cases, _POLAR_ [12, 29, GRB170114A] and _AstroSat_/CZTI [30, GRB160821A] have also detected a temporal evolution of the polarisation fraction, underscoring the highly dynamic nature of the prompt emission. A detailed summary of GRB polarisation measurements can be found in [31].
The limited number of measurements and the low level of observed polarisation in many GRBs necessitate more sensitive and dedicated GRB polarimeters. In the upcoming years, dedicated GRB polarimeters such as _POLAR-2_ [32], _COSI_ [33] and _LEAP_ [34] have been proposed to obtain more sensitive polarisation measurements. One such addition to this era would be the proposed high-energy transient mission _Daksha_ [35, 36]. When launched, it will be one of the most sensitive high-energy telescopes in the world that can also perform hard X-ray polarisation measurements. Here we present the expected hard X-ray polarisation sensitivity of _Daksha_.
This article is organised as follows: In Section 2, we give an overview of the _Daksha_ mission. In Section 3, we briefly explain the polarisation measurement principle _Daksha_ detectors will utilise. In Section 4, we give details of the mass model used to carry out simulations necessary for the sensitivity analysis described in the article. In Section 5, we describe the method that _Daksha_ will use to measure hard X-ray polarisation. In Section 6, we present the sensitivity results, and in Section 7 we summarise and conclude our analysis.
## 2 The _Daksha_ mission
_Daksha_ is a proposed Indian high-energy transient mission dedicated to studying electromagnetic counterparts of gravitational waves (EMGW) and GRBs [35]. Apart from its primary science goals, _Daksha_ will also detect flares from magnetars, possible counterparts of Fast Radio Bursts (FRBs), outbursts from bright X-ray binaries and Active Galactic Nuclei (AGNs), hard X-ray emissions from the Sun and also Terrestrial Gamma-ray Flashes (TGFs). The details about the science goals of _Daksha_ can be found in [36].
_Daksha_ consists of two identical satellites on the opposite side of the Earth in a Low Earth Orbit (LEO). Each satellite will cover a broad energy range (1 keV - 1 MeV) and monitor nearly the
entire sky. It will be able to detect transients onboard and send alerts over the General Coordinates Network (GCN)1 within a few minutes of detection. Three types of detectors are used to span the entire energy range: Silicon Drift Diodes (SDDs) form Low Energy (LE) packages to cover the \(1-30\) keV range; Cadmium Zinc Telluride (CZT) detectors form Medium Energy (ME) packages to cover the \(20-200\) keV range; and lastly Sodium Iodide (NaI) scintillators coupled with Silicon Photo-multipliers (SiPM) cover the \(100-1000\) keV range. The current design of _Daksha_ is shown in Figure 1, and more information about instrument details can be found in [35].
Footnote 1: [https://gcn.nasa.gov/](https://gcn.nasa.gov/)
The hemispherical arrangement of 13 ME and LE packages each gives nearly uniform coverage of half the sky. Photons from a GRB located in this half will be incident on multiple faces, all at varying angles. In the medium energy range, the sensitivity is extended to the remaining half of the sky by 4 sunward pointing ME packages.
## 3 Hard X-ray polarimetry with CZT detectors
The small pixel size (\(\sim 2.5\) mm) and relatively large thickness (\(\sim 5\) mm) of _Daksha_ CZT detectors make them good hard X-ray polarimeters that work on the principle of Compton scattering. The polarimetric capabilities of these detectors have been discussed and demonstrated before on AstroSat/CZTI [37, 38, 39]. Here we give a brief overview of the principle for completeness.
Figure 1: The design of a single _Daksha_ satellite. The payload carries 13 Low-energy (LE) and Medium-energy (ME) detector packages installed on a dome-shaped frame with 13 surfaces. The four High-energy (HE) detector packages and the processing electronics are mounted inside the dome. In addition to the 13 ME packages on the dome, four ME packages are mounted under the satellite bus. The satellite reference frame is indicated on the bottom right. The \(\theta=0^{\circ}\) and \(\phi=0^{\circ}\) direction corresponds to the +ve Z-axis, while \(\theta=90^{\circ}\) and \(\phi=90^{\circ}\) corresponds to the +ve Y-axis direction. Figure from [35].
When a photon undergoes Compton scattering, it is scattered preferentially in the direction perpendicular to the direction of the polarisation vector (i.e. direction perpendicular to the electric field vector). Above the incident energy of \(\sim\)100 keV, Compton scattering interactions in the CZT detector can create two-pixel events where one pixel acts as a scatterer while the other acts as an absorber. These "Compton event" pair positions can be mapped onto a square grid of 3\(\times\)3 pixels, with the central pixel being the scattering pixel and the surrounding pixels being the absorber pixels. For a square geometry, we get an azimuthal histogram of eight angular bins, each separated by 45\({}^{\circ}\) (Figure 2). The modulation observed in this histogram gives us the degree of polarisation (Polarisation Fraction, PF hereafter) and the angle of polarisation (Polarisation Angle2, PA hereafter) [37, for more details].
Footnote 2: Polarisation Angle is defined with respect to the local North increasing positively towards the East.
Apart from Compton scattering, other effects such as chance coincidence and fluorescence plus escape-peak event pairs can also generate two-pixel events. Hence, selection criteria are needed to separate possible Compton events from all two-pixel events. We employ the same criteria as described in Ref. [37] to select the Compton events. The criteria, applied to both background and GRB events, are as follows (a code sketch implementing them is given after this list):
* To be considered a two pixel event, the two events must have the same time stamp at the microsecond timing resolution of _Daksha_.
* The events should occur in two adjacent pixels.
Figure 2: An example of a simulated azimuthal histogram created using the “Compton Event” pairs in the CZT detector. The observed modulation is an effect of polarisation as well as of the differences in the pixel solid angles.
* The pixel with lower energy deposit is assumed to be the scattering pixel (\(\rm E_{scat}\)) while the other pixel is assumed to be the absorber (\(\rm E_{abs}\)) and the energy ratio \(\rm E_{abs}/E_{scat}\) should be between 1 and 6.
* The total energy deposited in both pixels combined (i.e. \(\rm E_{scat}+E_{abs}\)) should be between 100 and 400 keV.
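The following is a minimal sketch of these criteria, assuming a hypothetical per-pixel event record with a timestamp, pixel coordinates, and deposited energy; the actual _Daksha_ pipeline may differ in data layout and in how adjacency is handled at module boundaries.

```python
def is_compton_pair(e1, e2, ratio_range=(1.0, 6.0), total_range=(100.0, 400.0)):
    """Apply the Compton-pair selection criteria to two single-pixel events.

    Each event is a dict with keys 'time' (microsecond timestamp), 'row' and
    'col' (pixel coordinates on the same detector module), and 'energy' (keV).
    """
    # Same timestamp at the microsecond timing resolution.
    if e1["time"] != e2["time"]:
        return False
    # Adjacent pixels (the eight neighbours of the scattering pixel in the 3x3 grid).
    if max(abs(e1["row"] - e2["row"]), abs(e1["col"] - e2["col"])) != 1:
        return False
    # Lower-energy pixel is the scatterer, the other the absorber.
    e_scat, e_abs = sorted((e1["energy"], e2["energy"]))
    if e_scat <= 0.0:
        return False
    ratio_ok = ratio_range[0] <= e_abs / e_scat <= ratio_range[1]
    total_ok = total_range[0] <= e_scat + e_abs <= total_range[1]
    return ratio_ok and total_ok

# Hypothetical example pair: 60 keV scatter + 180 keV absorption in adjacent pixels.
ev_a = {"time": 123456, "row": 7, "col": 7, "energy": 60.0}
ev_b = {"time": 123456, "row": 7, "col": 8, "energy": 180.0}
print(is_compton_pair(ev_a, ev_b))  # True
```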
A standard method to determine the PA and PF is by fitting a cosine function to the observed azimuthal histogram [23, 24, 37, 40]. However, there is an inherent asymmetry for sources that are off-axis relative to the detector boresight (as will be the case for most GRBs observed by a single ME package), and the distribution of counts in the azimuthal bins is not strictly sinusoidal [41]. Fitting a cosine to such a distribution leads to systematic errors in determining the PA and PF. In order to avoid the systematics that can arise from the modulation fitting, we choose the template matching approach used in Refs. [39] and [12] for the analysis carried out in this article (see Section 5 for more details). The templates are generated such that they span the entire PA - PF parameter space and are fitted to the observed data to measure the source PA and PF. To generate such templates, we undertake extensive simulations using a detailed chemical and geometrical model (called "mass model") of the instrument, in this case, the ME packages and the _Daksha_ payload. The following section describes the mass model used for our analysis. The details of the template matching method and sensitivity analysis are described in Section 5 and Section 6 respectively.
## 4 _Daksha_ mass model
We use the popular Monte-Carlo particle-matter interaction simulation toolkit GEANT4 [42] to perform _Daksha_ mass model simulations. This section briefly details the prototype mass model and its implementation in GEANT4. Some refinements will be made in this mass model when the final payload design is frozen, but the current model includes all the critical components that will affect the polarization measurements. Before giving more details, we first define some of the GEANT4 terminologies that are needed to describe the mass model. In the discussion below, the term event refers to a simulation of a single photon starting with its creation and its propagation through the geometry until it (and all the other secondary particles generated by it) is stopped or leaves the simulation volume. A run refers to a collection of all events that have the same input properties, i.e., simulation of multiple photons with identical input spectra, direction distribution, etc.
### Geometry
Figure 3 shows the GEANT4 rendering of the prototype _Daksha_ mass model and some of its sub-components. The current version consists of 13 ME packages, 13 LE packages (without the SDD detector volume), four HE packages, and the support dome, all in the current configuration. The satellite bus has not been finalised and hence is not included in this simulation. To ensure that this does not affect our results, we only consider GRBs in the top hemisphere.
The most critical component for our analysis, the single CZT detector module, has been modelled to a mass accuracy of better than 5%, and the model includes most of the internal components (e.g., the Application Specific Integrated Circuit (ASIC) board, heat sink, and front-end Printed Circuit Board; PCB). For the PCBs, individual components are not modelled; instead, each PCB is modelled as a sheet
of equivalent mass and a typical PCB composition [43]. For the HE package, only the NaI active volume and the Aluminium shield around the volume have been modelled. Other components like the SiPMs do not have enough material to significantly affect our simulations, but would add computational burden, hence they have been excluded.
### Physics and Tracking
The simulation currently only tracks electromagnetic interactions between 250 eV and 100 GeV using the built-in G4LivermorePolarisedPhysics physics list. Atomic de-excitation is activated to produce secondary particles via fluorescence and Auger electron escape.
We have used the sensitive detector method of tracking to track the interactions in the active volume. The CZT volume has been divided into 256 pixels, and each pixel acts as an individual sensitive detector3. All the electrons produced by the primary (or secondary) photon are tracked along with their energy deposits until they are stopped. If these electrons deposit non-zero energy in a pixel volume, the energy deposit is accumulated and the total energy deposit in that pixel is returned at the end of every event.
Footnote 3: Note that although individual pixel is a sensitive detector, there is no gap between pixels. They are modelled as a parameterised volume filling the entire logical volume equivalent to a single CZT crystal of dimension 39.06 \(\times\) 39.06 \(\times\) 5 mm\({}^{3}\).
### Input and output
The G4GeneralParticleSource class is used to generate input photons. The input characteristics of photons (e.g. spectral, positional, and angular distribution) are defined using the standard GEANT4 input macro files. Currently, a planar or isotropic source with either mono energy or custom input spectra are used depending upon the simulation configuration (see Sections 5.1, 6.2.1 and 6.2.2).
Figure 3: GEANT4 rendering of _Daksha_ mass model and its components. Left: The complete prototype model with 13 ME packages, 13 LE packages without the active volume, and the support dome. The 4 HE packages are not visible here as they are inside the dome. Middle: A single ME package with 20 CZT detector modules (light green squares), the front end PCB (dark green), and the support box with a top lid (grey). Right: A single CZT detector module rendering (top) and a photograph of a real detector module (bottom) for comparison to highlight the accuracy of modelling.
The output of a single run is a FITS file similar to the "time-tagged event files (TTE)" produced by many X/gamma-ray instruments operating in the event mode. The file stores the event ID of the detected event (instead of the time in case of the real event data), the deposited energy, and the pixel in which the event is detected4. In case of multiple events, all the pixels with non-zero energy deposits are recorded individually with the same eventID5. Along with the event info, all the metadata related to the run is stored in the header of the FITS extension. This includes input source geometry, the spectral and angular distribution of photons, and the input seeds to allow full reproducibility of any simulation.
Footnote 4: This is stored as three values: the pixel ID relative to single CZT detector module, the module ID relative to a single ME package and the ME package ID relative to the satellite frame.
Footnote 5: For the simulated data, to select Compton events, we use the events with same eventID in place of same timestamp.
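As a usage sketch, the simulated event list can be read with Astropy and grouped by event ID to find two-pixel candidates, to which the selection criteria of Section 3 are then applied. The file name and column names below are assumptions based on the description above, not a documented interface.

```python
from collections import defaultdict
from astropy.io import fits

# Hypothetical file and column names following the TTE-like description above.
with fits.open("daksha_sim_run.fits") as hdul:
    events = hdul[1].data
    run_metadata = hdul[1].header  # input spectrum, source geometry, seeds, ...

# Group detected pixels by the simulation event ID (used in place of a timestamp).
hits_by_event = defaultdict(list)
for row in events:
    hits_by_event[row["EVENTID"]].append(
        {"pixel": row["PIXID"], "module": row["DETID"],
         "face": row["QID"], "energy": row["ENERGY"]}
    )

# Two-pixel candidates are events with exactly two non-zero energy deposits.
two_pixel_candidates = [h for h in hits_by_event.values() if len(h) == 2]
print(f"{len(two_pixel_candidates)} two-pixel candidates out of {len(hits_by_event)} events")
```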
## 5 Polarimetry using template matching
As mentioned in Section 3, we use the template matching method to measure the PA and PF from the observed azimuthal histogram. The method compares (using \(\chi^{2}\) statistics) the observed azimuthal histogram with a library of pre-computed template histograms corresponding to different PA and PF values. The following two subsections describe the steps involved in creating the library of templates and the basic principle of the template matching method.
### Template azimuthal histogram generation
The template matching method requires a template bank that spans the entire PA/PF parameter space. The template bank can be created by simulating 100% polarised histograms at different PAs to cover the PA space and combining them with unpolarised histograms to cover the PF space. We generate the template histogram library by running a set of GEANT4 simulations for a given direction in the sky.
For a single direction, 57 energies between 100 keV and 1 MeV are chosen with increasing step size at higher energies, considering the resolution of the detectors. For each energy, 18 simulations with 100% polarised input photons are carried out by varying the polarisation angle in steps of 10\({}^{\circ}\) from 0\({}^{\circ}\) to 170\({}^{\circ}\). By carrying out a limited number of simulations with finer PA spacing, we found that the number of photons in each of the eight azimuthal bins varies smoothly with PA, and the variation can be modelled as the sum of a sinusoid and its second harmonic. Thus, we can carry out simulations with 10\({}^{\circ}\) spacing in PA and interpolate to calculate the histograms for intermediate angles. Additional simulations with the same energies are carried out with unpolarised input photons from the same incidence direction. In total, for a given source direction, we run \(57\times(18+1)=1083\) simulations.
For each simulation, we shine 5\(\times\)10\({}^{6}\) photons on the entire mass model. The number is determined by carrying out a convergence estimation such that for the final template the relative error in each azimuthal bin is less than 1%. From these simulations, azimuthal histograms at each energy and polarisation angle are extracted using the criteria explained in Section 3.
For a source (GRB in our case) with incident spectra \(S(E)\), where \(E\) is the photon energy, the azimuthal histograms at each PA are co-added in the energy space by weighting them with \(S(E)\). This gives us the 100% polarised azimuthal histograms for the incident source spectra from a given direction. The same spectral scaling is applied for the unpolarised simulations to get the unpolarized histogram for the incident source spectrum.
To create an azimuthal histogram template bank in PA/PF space for the given source, the 100% polarised azimuthal histograms are first interpolated to a 1\({}^{\circ}\) grid in PA space, and then they are combined with the unpolarised histograms to generate template azimuthal histograms in PF space (in PF range 0 to 1 in steps of 0.01) using the following relation:
\[H_{i,t}(p,\psi)=p\cdot H_{i,t}(1,\psi)+(1-p)\cdot H_{i,t}(0);\quad 0\leq i<8, \tag{1}\]
where \(H_{i,t}(p,\psi)\) is the template azimuthal histogram for the PA=\(\psi\) and PF=\(p\) grid point (\(i\) represents the \(i^{\rm th}\) azimuthal bin), \(H_{i,t}(1,\psi)\) is the 100% polarised azimuthal histogram with PA=\(\psi\), and \(H_{i,t}(0)\) is the unpolarised histogram. A schematic representation of the template creation process is shown in Figure 4.
Figure 4: A schematic diagram showing the creation of the template histogram (\(H_{p,\psi}\)) for PA=\(\psi\) and PF=\(p\). \(H_{100,\psi}(E)\) represents the 100% polarised histogram at energy \(E\) and PA=\(\psi\), and \(H_{0}(E)\) represents the unpolarised histogram at energy \(E\). \(S(E)\) represents the GRB spectrum. The process is repeated for different PAs and PFs to generate the template bank over the entire PA/PF parameter space.
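The spectral weighting and the mixing of Eq. (1) amount to simple array operations once the simulated histograms are available. The sketch below assumes hypothetical arrays `H100` (per energy and simulated PA) and `H0` (per energy), and uses plain linear interpolation over PA for brevity, whereas the text describes a fit with a sinusoid plus its second harmonic.

```python
import numpy as np

def build_template_bank(H100, H0, spectrum, pf_grid, pa_grid_deg):
    """Build a PA/PF template bank of azimuthal histograms.

    H100 : array (n_E, 18, 8), 100% polarised histograms per energy for
           simulated PAs 0, 10, ..., 170 deg.
    H0   : array (n_E, 8), unpolarised histograms per energy.
    spectrum : array (n_E,), source spectrum S(E) sampled at the same energies.
    pf_grid : PF grid, e.g. np.arange(0, 1.01, 0.01).
    pa_grid_deg : PA grid, e.g. np.arange(0, 180, 1).
    Returns an array of shape (n_PF, n_PA, 8).
    """
    w = spectrum / spectrum.sum()                  # spectral weights S(E)
    H100_w = np.tensordot(w, H100, axes=(0, 0))    # (18, 8): 100% polarised, spectrally weighted
    H0_w = w @ H0                                  # (8,): unpolarised, spectrally weighted
    pa_sim = np.arange(0, 180, 10)
    # Interpolate each azimuthal bin over PA (periodic with period 180 deg).
    H100_fine = np.stack(
        [np.interp(pa_grid_deg,
                   np.append(pa_sim, 180.0),
                   np.append(H100_w[:, i], H100_w[0, i])) for i in range(8)],
        axis=1)                                    # (n_PA, 8)
    # Eq. (1): H(p, psi) = p * H(1, psi) + (1 - p) * H(0)
    return (pf_grid[:, None, None] * H100_fine[None, :, :]
            + (1.0 - pf_grid)[:, None, None] * H0_w[None, None, :])
```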
### Template matching by \(\chi^{2}\) minimisation
We use \(\chi^{2}\) statistics to quantitatively compare the observed azimuthal histogram with the library of template azimuthal histograms. We define a \(\chi^{2}(p,\psi)\) as:
\[\chi^{2}(p,\psi)=\sum_{i=0}^{i=7}\frac{\left[\bar{H}_{i,o}(p,\psi)-\bar{H}_{i,t} (p,\psi)\right]^{2}}{\bar{\sigma}_{i,o}^{2}(p,\psi)+\bar{\sigma}_{i,t}^{2}(p, \psi)} \tag{2}\]
where \(\bar{H}_{i,o}(p,\psi)\) and \(\bar{H}_{i,t}(p,\psi)\) are the normalised observed and template histograms for a PF value of \(p\) and a PA value of \(\psi\), respectively, and \(\bar{\sigma}_{i,o}^{2}(p,\psi)\) and \(\bar{\sigma}_{i,t}^{2}(p,\psi)\) are the errors on them. The normalisation is done by dividing by the total number of counts in all eight bins, i.e. by \(N=\sum H_{i}\). We simulate our template bank such that \(\bar{\sigma}_{i,t}^{2}(p,\psi)<0.01\), and hence in the actual estimation of the \(\chi^{2}(p,\psi)\) we neglect the \(\bar{\sigma}_{i,t}^{2}(p,\psi)\) term. The \(\chi^{2}(p,\psi)\) is estimated individually for each ME package on all 13 faces of _Daksha_. As the observed counts on each face are independent, we add the \(\chi^{2}(p,\psi)\) values from different faces to estimate the total \(\chi^{2}_{tot}(p,\psi)\):
\[\chi^{2}_{tot}(p,\psi)=\sum_{j}\chi^{2}_{j}(p,\psi) \tag{3}\]
where the sum is evaluated over the face ID \(j\). The best-fit values for \(p\) and \(\psi\) for a given GRB are obtained by minimising the \(\chi^{2}_{tot}(p,\psi)\) over the predetermined grid of PA/PF values. The errors on the best-fit parameters are obtained using the confidence contours for a two-parameter \(\chi^{2}\) distribution with \(\Delta\chi^{2}_{tot}(p,\psi)\) = 2.3, 6.18, and 11.83 corresponding to the confidence levels of 1\(\sigma\), 2\(\sigma\) and 3\(\sigma\) respectively with \(\Delta\chi^{2}_{tot}(p,\psi)\) defined as:
\[\Delta\chi^{2}_{tot}(p,\psi)=\chi^{2}_{tot}(p,\psi)-\min(\chi^{2}_{tot}(p,\psi)) \tag{4}\]
For the minimisation process, we only use the top five faces (arranged in decreasing order of effective area for a given direction) to compute the \(\chi^{2}_{tot}(p,\psi)\), as our tests show that adding more faces does not have a significant impact on the results. Hence, for all the analyses described below, only the relevant five faces are used in the template-matching process.
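A compact sketch of this fitting step is given below; it evaluates Eq. (2) for every template, sums over the selected faces as in Eq. (3), and reads off the best fit and the \(\Delta\chi^{2}\) confidence regions of Eq. (4). The data layout and the Poisson error model for the normalised bins are assumptions for illustration.

```python
import numpy as np

def fit_polarisation(obs_per_face, banks_per_face):
    """Template matching over a PA/PF grid (Eqs. 2-4).

    obs_per_face  : list of arrays (8,) with observed azimuthal counts per face.
    banks_per_face: list of arrays (n_PF, n_PA, 8) with normalised templates
                    on a common PA/PF grid for the same faces.
    """
    chi2_tot = 0.0
    for obs, bank in zip(obs_per_face, banks_per_face):
        N = obs.sum()
        h_obs = obs / N
        # Poisson variance of the normalised bins; floor avoids division by zero.
        var_obs = np.maximum(obs, 1.0) / N**2
        chi2_tot = chi2_tot + ((h_obs - bank) ** 2 / var_obs).sum(axis=-1)
    i_pf, i_pa = np.unravel_index(np.argmin(chi2_tot), chi2_tot.shape)
    dchi2 = chi2_tot - chi2_tot.min()
    # Two-parameter confidence regions: 2.3 (1 sigma), 6.18 (2 sigma), 11.83 (3 sigma).
    region_1sigma = dchi2 <= 2.3
    return i_pf, i_pa, dchi2, region_1sigma
```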
## 6 Polarimetric sensitivity of _Daksha_
The most common way to express the sensitivity of a polarimeter is in terms of the "Minimum Detectable Polarisation" (MDP) that can be achieved with it. Formally, MDP is defined as: "The degree of polarization corresponding to the amplitude of modulation that has only a 1% probability of being detected by chance" [40]. It is computed as follows:
\[\mathrm{MDP}=\frac{4.29}{\mu_{100}R_{s}}\left[\frac{R_{s}+R_{B}}{T}\right]^{1 /2} \tag{5}\]
where \(\mu_{100}\) is the modulation factor for 100% polarised light, \(R_{s}\) is the source count rate, \(R_{B}\) is the background count rate and \(T\) is the total source exposure. However, this is strictly valid only for on-axis incidence as the equation is derived assuming ideal sinusoidal variation [40, for derivation]. As mentioned in Section 5, the observed modulation is not sinusoidal for off-axis incidence and particularly for the pixellated CZT detectors, hence the above equation is not strictly valid.
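For orientation, Eq. (5) can be evaluated directly as below; the modulation factor, count rates, and exposure are placeholder numbers, not _Daksha_ design values.

```python
import numpy as np

def mdp_analytical(mu_100, rate_src, rate_bkg, exposure):
    """Classical MDP at the 99% confidence level (Eq. 5)."""
    return (4.29 / (mu_100 * rate_src)) * np.sqrt((rate_src + rate_bkg) / exposure)

# Hypothetical numbers: modulation factor 0.4, 50 source and 20 background
# Compton counts per second, 30 s exposure.
print(f"MDP = {mdp_analytical(0.4, 50.0, 20.0, 30.0):.2f}")
```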
Therefore to quantify the polarisation measurement sensitivity of _Daksha_, we use a Monte Carlo-based approach which stems from the basic definition of MDP following the approach presented in [39]. In this section, we describe the Monte Carlo method (called MC-MDP hereafter) used to measure the MDP for _Daksha_, show the sensitivity results obtained using the MC-MDP, and verify our sensitivity by injecting two GRBs and recovering their polarisation.
### Minimum Detectable Polarisation using Monte Carlo approach
The definition of MDP states that if we repeatedly measure polarisation for an unpolarised source, in 99% of the cases we should measure the PF less than the MDP. In other words, if we measure PF for a large number of realisations of azimuthal histograms from an unpolarised GRB, the 99\({}^{\rm th}\) percentile of the measured PF distribution gives the value of MDP for a given source direction and spectrum. The MC-MDP uses this definition to measure MDP for a given polarimeter.
Mathematically, a single realisation of an unpolarised observed azimuthal histogram corresponding to a GRB with fixed spectral parameters and duration \(T_{grb}\) can be represented as:
\[H_{i,o}(0)=P(\overline{H}_{i}^{grb}T_{grb})+P(\overline{H}_{i}^{bg}T_{grb})-P( \overline{H}_{i}^{bg}T_{bkg})\frac{T_{grb}}{T_{bkg}} \tag{6}\]
where \(P(\lambda)\) represents a Poisson random variable with mean \(\lambda\), \(\overline{H}_{i}^{grb}\) is the mean (time-averaged, in counts/s) azimuthal histogram (without any background) for an unpolarised GRB of a given fluence, \(T_{grb}\) is the GRB duration, \(\overline{H}_{i}^{bg}\) is the mean (time-averaged, in counts/s) background azimuthal histogram, and \(T_{bkg}\) is the background duration. Using this equation, one can Poisson-sample a large number of azimuthal histograms and measure the PA/PF value for each realisation. The MDP is then obtained from the cumulative distribution of measured PF values. The mean histograms for the GRB and background are obtained by running GEANT4 simulations. The details of the MC-MDP method as applied to the MDP estimate for _Daksha_ are explained in the next section.
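The resampling of Eq. (6) and the percentile read-out can be sketched as follows. Here `mean_grb` and `mean_bkg` are the time-averaged histograms (counts/s per azimuthal bin) obtained from the GEANT4 runs, and `measure_pf` stands in for the template-matching fit of Section 5; the function and variable names are illustrative.

```python
import numpy as np

def mc_mdp(mean_grb, mean_bkg, t_grb, t_bkg, measure_pf, n_real=20000, seed=0):
    """Monte Carlo MDP: 99th percentile of PF measured from unpolarised realisations."""
    rng = np.random.default_rng(seed)
    pf_values = np.empty(n_real)
    for k in range(n_real):
        # Eq. (6): GRB + background over T_grb, minus the scaled background estimate.
        hist = (rng.poisson(mean_grb * t_grb)
                + rng.poisson(mean_bkg * t_grb)
                - rng.poisson(mean_bkg * t_bkg) * (t_grb / t_bkg))
        pf_values[k] = measure_pf(hist)  # template-matching fit returning the best-fit PF
    return np.percentile(pf_values, 99.0)
```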
### Minimum Detectable Polarisation computation for Daksha
To estimate MDP for a given fluence and a given incident direction using MC-MDP, we create 20,000 realisations of the "source" azimuthal histograms. Each realisation is then subjected to the template matching fitting explained in Section 5 to get the measured PA and PF values and the corresponding cumulative histogram to estimate the MDP. For this analysis, we fix the \(T_{grb}\) to be 30 s6 and \(T_{bkg}\) to be 1000 s. The mean GRB and background histograms are computed from GEANT4 simulations as explained below. A schematic representation of the entire process is shown in Figure 5.
Footnote 6: This value corresponds to the approximate peak of the \(\rm T_{90}\) distribution in the _Fermi_/GBM catalogue by [44].
#### 6.2.1 Mean GRB azimuthal histograms
The mean GRB azimuthal histogram is created by shining a large number of _unpolarised_ GRB photons onto the _Daksha_ mass model for a given direction. The input spectra for these simulations follow the Band [14] function with parameters: \(\alpha=-1.08\), \(\beta=-2.14\), \(\rm E_{peak}\) = 196 keV and
norm = 1 ph/cm\({}^{2}\)/s/keV at 100 keV7. We simulate a total of 65,465,240 input photons8 in the 90 - 1000 keV energy range. The input energy range is fixed to 90 - 1000 keV as we are only interested in the 100 - 400 keV range for the polarisation measurements. From this simulation, we extract the azimuthal histograms as explained in Section 3 for each of the top five faces of _Daksha_. These histograms are then scaled appropriately to compute the mean GRB histogram \(\overline{H}_{i}^{grb}\).
Footnote 7: The parameters are taken from the _Fermi_/GBM catalogue by Gruber et. al. 2014 15. The values indicated as “BEST” in Table 4 have been chosen.
Footnote 8: The number is obtained by integrating the Band function with the given parameters over circular planar source by assuming a GRB duration of 100 s.
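For completeness, the photon-count normalisation described in footnotes 7 and 8 can be sketched by integrating the Band function over the simulated band; the Band parametrisation below is its standard form, while the aperture area of the planar source is a hypothetical value (it is not specified in the text).

```python
import numpy as np
from scipy.integrate import quad

def band(E, alpha=-1.08, beta=-2.14, e_peak=196.0, norm=1.0, e_piv=100.0):
    """Band function in ph / cm^2 / s / keV (standard parametrisation)."""
    e0 = e_peak / (2.0 + alpha)
    e_break = (alpha - beta) * e0
    if E < e_break:
        return norm * (E / e_piv) ** alpha * np.exp(-E / e0)
    return (norm * ((alpha - beta) * e0 / e_piv) ** (alpha - beta)
            * np.exp(beta - alpha) * (E / e_piv) ** beta)

flux, _ = quad(band, 90.0, 1000.0)   # photon flux in the 90 - 1000 keV band, ph / cm^2 / s
area_cm2 = 1.0e4                      # hypothetical aperture of the circular planar source
duration_s = 100.0                    # duration assumed in footnote 8
print(f"flux = {flux:.2f} ph/cm^2/s, photons to simulate ~ {flux * area_cm2 * duration_s:.3e}")
```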
#### 6.2.2 Mean background azimuthal histograms
The mean background azimuthal histogram is created by shining a large number of photons onto the _Daksha_ mass model from the inner surface of a model spherical source of radius 5 m with a cosine-biased angular distribution to ensure an isotropic flux. The emission angle of the photons is restricted to between 0\({}^{\circ}\) and 6\({}^{\circ}\) from the surface normal to the sphere to avoid shining input photons that do not reach the mass model [45, see section 4.2 for details]. We only consider photon background components, i.e. the Cosmic X-ray Background (CXB), the reflection of the CXB from Earth's atmosphere (reflection hereafter), and the hard X-ray albedo of Earth (albedo hereafter), as our preliminary study shows that these components are dominant. As the CXB and the reflection plus albedo have different directional origins, we carry out two different simulations, one with only the CXB spectrum as input and one with the reflection plus albedo spectrum as input. We assume that _Daksha_ is pointing away from the Earth, i.e. the top face of _Daksha_ is oriented in the opposite direction to the centre of the Earth, and combine both simulations by weighting them appropriately by
Figure 5: A diagram showing the steps involved in the MC-MDP method that is used to determine MDP for _Daksha_. The steps in green square are repeated 20,000 times to get the measured PF distribution from unpolarised light which is then used to determine the MDP.
expected solid angles. The spectral models for all three components are taken from Ref. [46]. From these simulations, we extract the azimuthal histograms as explained in Section 3 for each of the top five faces of _Daksha_. These histograms are then scaled appropriately to compute the mean background histogram \(\overline{H}_{i}^{bkg}\).
### Polarisation sensitivity results for Daksha
First, to test the accuracy of our method, we compare the MC-MDP with the analytical MDP (Equation 5) for a normal-incidence case. For ease of comparison, this test is performed on a single CZT detector module with an incident fluence of 10\({}^{-3}\)\(\mathrm{erg\ cm^{-2}}\) in the 10 - 1000 keV range. The comparison is shown in Figure 6.
The reason for the two MDP values in the analytical case is that \(\mu_{100}\) is a function of PA for the pixellated CZT detector; it is minimum for PA = 0\({}^{\circ}\) and maximum for PA = 45\({}^{\circ}\) [37, for more details]. It can be seen that the MC-MDP is close to the average of the two analytical values.
The MDP estimate for a near on-axis incidence9 (\(\theta\) = 5\({}^{\circ}\) and \(\phi\) = 0\({}^{\circ}\)) for _Daksha_ is shown in Figure 7. The left panel shows the PA/PF contour plot for one azimuthal histogram realisation, and the right panel shows the final distribution of PA/PF for 20,000 realisations, along with the cumulative histogram of all PF values used to determine the MDP. We estimate an MDP value of 30% for a fluence of 10\({}^{-4}\)\(\mathrm{erg\ cm^{-2}}\) in the 10 - 1000 keV range.
Footnote 9: _Daksha_ coordinate system is defined in Figure 1 and the on-axis direction corresponds to the direction of the Z-axis.
Figure 6: Comparison of MC-MDP (Red line) with analytical MDP (Green and Orange lines) for a normal incidence on a single CZT detector module. The MDP depends on PA for the analytical case because the modulation obtained from the pixellated CZT detectors is sensitive to the incident polarisation angle (see figure 15 in 37). From the figure it can be seen that the MC-MDP is close to the average of the two analytical values (dashed Black line).
The polarisation sensitivity of a polarimeter is usually correlated with source fluence, i.e., MDP decreases with an increasing GRB fluence. The sensitivity can also be a function of incident direction. To check the MDP dependence on incident direction and fluence, we estimate MDP for _Daksha_ for 12 different incident directions (see Table 1) and 10 different fluence values. The 12 directions are chosen in the first half of the first octant of the coordinate sphere (\(\theta\in\) [0\({}^{\circ}\), 90\({}^{\circ}\)] and \(\phi\in\) [0\({}^{\circ}\), 45\({}^{\circ}\)]) such that they span unique directions in that volume. Given the azimuthal symmetry of _Daksha_, the results obtained for these directions can be extrapolated to other \(\theta,\phi\) in the top hemisphere.
\begin{table}
\begin{tabular}{c c} \hline \(\theta\) & \(\phi\) \\ \hline
5\({}^{\circ}\) & 0\({}^{\circ}\) \\
22.5\({}^{\circ}\) & 0\({}^{\circ}\), 45\({}^{\circ}\) \\
45\({}^{\circ}\) & 0\({}^{\circ}\), 22.5\({}^{\circ}\), 45\({}^{\circ}\) \\
67.5\({}^{\circ}\) & 0\({}^{\circ}\), 22.5\({}^{\circ}\), 45\({}^{\circ}\) \\
90\({}^{\circ}\) & 0\({}^{\circ}\), 22.5\({}^{\circ}\), 45\({}^{\circ}\) \\ \hline \end{tabular}
\end{table}
Table 1: Incident directions (in polar coordinates) for which MDP is computed.
Figure 7: MDP estimation for _Daksha_ ME detectors using the MC-MDP method for a near on-axis incidence (\(\theta\) = 5\({}^{\circ}\), \(\phi\) = 0\({}^{\circ}\)). The incidence fluence is 10\({}^{-4}\)\(\mathrm{erg~{}cm^{-2}}\) in the 10 – 1000 keV range. **Left**: The \(\chi^{2}\) distribution in the PA/PF space obtained from the template matching fit for one realisation of the source azimuthal histogram. The measured value, PA = 79\({}^{\circ}\) and PF = 0.06 (corresponding to the minimum \(\chi^{2}\)), is marked with the blue cross. The \(\Delta\chi^{2}\) contours corresponding to 1\(\sigma\), 2\(\sigma\) and 3\(\sigma\) confidence intervals are shown with the black lines. **Right**: Corner plot for measured PA and PF distribution for all 20,000 realisations of source azimuthal histogram. The probability and cumulative distributions of PF values (marginalised over all PAs) are shown in the top panel of the corner plot. The estimated MDP value of 0.30 is marked with a dashed line.
Figure 8 shows the MDP variation against the incident GRB fluence for five different off-axis directions. It shows that the MDP has no strong dependence on the incident direction, thanks to the symmetrical design of the _Daksha_ ME packages. The dependence of MDP on fluence shows the expected behaviour, with the MDP dropping as \(\sim 1/\sqrt{\mathrm{fluence}}\) as the fluence increases.
### Validation of polarization measurements
We verify the estimated polarisation sensitivity for _Daksha_ by injecting two GRBs that were detected by AstroSat/CZTI and are above our MDP and fluence thresholds. The spectral and polarimetric parameters for these GRBs are taken from [24].
To compute the position of these GRBs in the _Daksha_ frame (\(\theta\), \(\phi\) in polar coordinates), we use satellite orbit simulations as per the mission profile and convert the celestial coordinates of the GRB into _Daksha_ frame. The spectral and polarimetric parameters of the GRB as well as the calculated \(\theta\), \(\phi\) are given in Table 2.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Name & \(\theta\), \(\phi\) & \(\alpha\) & \(\beta\) & \(\mathrm{E_{p}}\) & \(\Delta\mathrm{T}\) & Fluence & PF & PA \\ & (deg) & & & (keV) & (s) & (erg cm\({}^{-2}\)) & & (deg) \\ \hline GRB 180103A & 90.44, 239.67 & -1.31 & -2.24 & 273 & 165.83 & \(2.3\times 10^{-4}\) & 0.71 & 122.13 \\ GRB 180914B & 34.16, 124.08 & -0.75 & -2.10 & 453 & 160.04 & \(5.99\times 10^{-4}\) & 0.48 & 68.41 \\ \hline \end{tabular}
\end{table}
Table 2: Injected GRB parameters for the polarisation sensitivity verification.
Figure 8: The variation of MDP against the incident GRB fluence for five different off-axis directions with respect to _Daksha_ pointing. As expected, the MDP drops \(\sim 1/\sqrt{\mathrm{fluence}}\) (i.e. better sensitivity) as the fluence increases. Due to the symmetry of _Daksha_, the MDP does not depend strongly on the incident direction.
The results of this test are shown in Figure 9. It can be seen that, for both GRBs, the measured values of PA and PF are close to the injected values and lie well within the 1-\(\sigma\) confidence interval. In both cases, the injected PF is above the MDP, indicating _Daksha_ can measure the polarisation properties of a GRB accurately, provided the PF is above the MDP threshold.
Photons for any given GRB have different angles of incidence on each ME package of the payload. This provides us with an excellent opportunity to control any systematic effects that may arise from unknown angle-dependent effects, which will likely average out over the multiple packages. For the brightest bursts, we will also be able to measure the PA and PF from individual ME packages and compare the consistency of the results.
### GRB polarisation measurement rates
_Daksha_ is expected to detect about five GRBs per year above the fluence of \(10^{-4}\ \mathrm{erg\ cm^{-2}}\) if GRBs are highly polarised (PF \(>\) 30%). The rate is estimated using the fluence distribution of _Fermi_/GBM-detected GRBs and scaling by the duty cycle of the two _Daksha_ satellites. Compared with _AstroSat_/CZTI, _Daksha_ will be five times more sensitive. Note that although _AstroSat_/CZTI has reported 20 GRBs in five years, only five of them have secure measurements, giving a measurement rate of one per year [24].
## 7 Conclusions
The proposed _Daksha_ mission has high sensitivity for detecting Gamma Ray Bursts and other high energy transients. We can exploit the creation of two-pixel events by Compton scattering of incident photons to measure the polarisation of the source in the 100 - 400 keV range. This ability has already been demonstrated on _AstroSat_/CZTI, and _Daksha_ is poised to surpass CZTI's polarisation sensitivity.
Figure 9: Figure showing measured values of PA and PF for two injected GRBs (GRB 180103A on left and GRB 180914B on right). The solid, dashed and dotted dashed black lines show the 1\(\sigma\), 2\(\sigma\) and 3\(\sigma\) confidence intervals respectively. The injected parameters are given in table 2. For GRB 180103A, the measured PA/PF values are (119\({}^{\circ}\)\(\pm\)6\({}^{\circ}\), 0.71\(\pm\)0.12) and for GRB 180914B, the measured PA/PF values are (67\({}^{\circ}\)\(\pm\)4\({}^{\circ}\), 0.44\(\pm\)0.06). _Daksha_ can recover the PA and PF accurately for both the GRBs.
In this article, we have discussed the method that will be used to measure hard X-ray polarisation using _Daksha_ CZT detectors. The method uses a template matching approach where pre-computed templates (over a grid of PA/PF values) are compared with the observed modulation to measure the polarisation angle (PA) and polarisation fraction (PF) of the source. We have performed detailed GEANT4 simulations using the mass model of _Daksha_ to quantify the polarisation measurement sensitivity of _Daksha_. The sensitivity has been quoted using the standard Minimum Detectable Polarisation (MDP) metric. To account for the non-normal incidence directions of photons, we have adopted a new Monte-Carlo-based approach to compute the MDP. This method gives consistent results with the analytical formula for on-axis cases, but can be readily generalised to other angles.
Our results show that thanks to the symmetrical design of _Daksha_, for a given fluence, MDP does not depend strongly on the incident direction (with respect to the satellite pointing), and hence _Daksha_ will have a near-uniform polarisation measurement sensitivity for half of the sky. For a fluence of \(10^{-4}\ \mathrm{erg\ cm^{-2}}\) in the energy range \(10-1000\) keV, we obtain MDP of 30% for _Daksha_. Given this sensitivity, we predict that if GRBs are highly polarised, _Daksha_ can confidently measure polarisation for at least five GRBs per year; five times better than _AstroSat_/CZTI. _Daksha_ is likely to be operational during the same period as other dedicated GRB polarisation missions such as _POLAR-2_, _COSI_ and _LEAP_. Given the detection sensitivity and all-sky coverage of _Daksha_, it will detect many GRBs simultaneously with these missions, and joint analysis of such GRBs will play an important role in understanding the prompt emission. A more detailed analysis of GRB polarisation measurement statistics with _Daksha_ and its implication on breaking the degeneracy in the proposed physical models for prompt emission will be carried out in subsequent works. Overall, when launched, _Daksha_ will play an important role in the field of GRB polarisation.
### Acknowledgments
We thank the Space Program Office (SPO) of the Indian Space Research Organisation for its Announcement of Opportunity for space astrophysics missions, under which _Daksha_ was proposed. Development of the _Daksha_ Medium Energy Package laboratory model was started with funding support from SPO, and continued with support from all partner organisations. We thank the administrative and support staff at all partner institutes for their help in all _Daksha_-related matters.
We thank Dr. Shabnam Iyyani from IISER Thiruvananthapuram and Dr. Tanmoy Chattopadhyay from the Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, for their valuable inputs, comments and discussions that have helped improve the manuscript. S.M. thanks Dr. Ajay Vibhute and Mr. Dhanraj Borgaonkar of IUCAA, Pune for their help in configuring the High Performance Computing cluster at IUCAA, which was essential for the simulations performed in this article. S.P.T. is a CIFAR Azrieli Global Scholar in the Gravity and Extreme Universe Program, and this work was supported by the CIFAR Azrieli Fellowship Grant.
### Software
Numpy [47], Scipy [48], Matplotlib [49], Astropy [50, 51, [http://www.astropy.org](http://www.astropy.org)], Ephem [[https://pypi.python.org/pypi/pyephem/](https://pypi.python.org/pypi/pyephem/)], GEANT4 [42, [https://geant4.web.cern.ch/](https://geant4.web.cern.ch/)]
### Code, Data, and Materials Availability
The codes are in a continuous state of development and can be made available on a reasonable request to the authors.
|
2302.04672
|
Nucleon relativistic polarization and magnetization distributions
|
As a follow up of our work on the electromagnetic four-current, we study for
the first time the relativistic polarization and magnetization spatial
distributions inside a spin-$\frac{1}{2}$ target within the quantum phase-space
approach. While the polarization-magnetization tensor is usually defined in
terms of the Gordon decomposition of the electromagnetic four-current, a Breit
frame analysis reveals that a physically simpler and more natural picture of
the system arises when the polarization-magnetization tensor is instead defined
in terms of a Sachs decomposition. Relativistic polarization and magnetization
distributions for a moving target are compared with their light-front
counterparts. In particular, we show that the genuine light-front magnetization
distributions are defined in terms of Fourier transforms of the Sachs magnetic
form factor, rather than in terms of the Pauli form factor as suggested earlier
in the literature. We finally illustrate our results in the case of a nucleon
using the electromagnetic form factors extracted from experimental data.
|
Yi Chen, Cédric Lorcé
|
2023-02-09T14:49:13Z
|
http://arxiv.org/abs/2302.04672v2
|
# Nucleon relativistic polarization and magnetization distributions
###### Abstract
As a follow up of our work on the electromagnetic four-current, we study for the first time the relativistic polarization and magnetization spatial distributions inside a spin-\(\frac{1}{2}\) target within the quantum phase-space approach. While the polarization-magnetization tensor is usually defined in terms of the Gordon decomposition of the electromagnetic four-current, a Breit frame analysis reveals that a physically simpler and more natural picture of the system arises when the polarization-magnetization tensor is instead defined in terms of a Sachs decomposition. Relativistic polarization and magnetization distributions for a moving target are compared with their light-front counterparts. In particular, we show that the genuine light-front magnetization distributions are defined in terms of Fourier transforms of the Sachs magnetic form factor, rather than the Pauli form factor as suggested earlier in the literature. We finally illustrate our results in the case of a nucleon using the electromagnetic form factors extracted from experimental data.
## I Introduction
Nucleons (i.e. protons and neutrons) are by far the most abundant bound-state systems in nature and are key for studying quantum chromodynamics (QCD), the fundamental theory of strong interactions. A central goal of modern nuclear physics is to explain how nucleons emerge in QCD from first principles [1; 2]. Due to the complicated non-perturbative dynamics of their quark and gluon degrees of freedom, nucleons inherit particularly rich and intricate internal structures.
Electromagnetic form factors (FFs) encode fundamental information on the internal electromagnetic structure of hadrons [3; 4; 5; 6; 7; 8]. Nucleon electromagnetic FFs in particular have been extensively measured over the past decades with very high precision in various scattering experiments [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34]. On the theory side, _ab initio_ calculations within the lattice QCD approach have also significantly been improved in the last few years [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51]. For recent reviews on electromagnetic FFs, see Refs. [1; 52; 53; 54; 55; 56; 57; 58; 59].
Spatial distributions of charge and magnetization can be defined in the Breit frame (BF) in terms of 3D Fourier transforms of these electromagnetic FFs [60; 61], but they cannot be considered as probabilistic densities due to relativistic recoil corrections [62; 63; 6; 11; 64; 65]. Spatial distributions with probabilistic interpretation can however be defined within the light-front (LF) formalism [66; 67; 68; 69; 70; 71; 72; 73; 74], at the cost of losing one spatial dimension and exhibiting distortions induced by the LF perspective.
Understanding better the relation between 3D BF and 2D LF distributions has been the focus of many recent works, see e.g. [75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85]. The quantum phase-space formalism distinguishes itself by the fact that the requirement of a strict probabilistic interpretation is relaxed and replaced by a milder quasiprobabilistic picture [86; 87; 88]. This approach is quite appealing since it allows one to define in a consistent way relativistic spatial distributions inside a target with arbitrary spin and arbitrary average momentum [89; 90; 91; 92; 93; 94; 95; 96; 97]. In particular, when the average momentum vanishes one recovers the BF picture, while in the limit of infinite average momentum one recovers _essentially_ the LF picture.
In this work, we use the quantum phase-space formalism to study for the first time the relativistic polarization and magnetization spatial distributions inside a spin-\(\frac{1}{2}\) target. The paper is organized as follows. In Sec. II, we first briefly review the description of elastic electron-nucleon scattering in terms of electromagnetic FFs, and then discuss the concept of polarization-magnetization tensor. In Sec. III, we present in detail the quantum phase-space formalism and compare the phase-space picture with the light-front picture. We start our analysis in Sec. IV with the Breit frame distributions of polarization and magnetization for a spin-\(\frac{1}{2}\) target. We argue that the polarization-magnetization tensor suggested by the Sachs decomposition of the electromagnetic four-current is physically more transparent than the one suggested by the Gordon decomposition. We proceed in Sec. V with the elastic frame distributions of polarization and magnetization, study in detail their frame dependence, and derive the analytic expressions for the electric and magnetic dipole
moments. For completeness, we also present in Sec. VI the light-front distributions and multipole moments, and compare them with the infinite-momentum limit of their elastic frame counterparts. In particular, we explain why the genuine light-front magnetization distributions are given by the 2D Fourier transforms of the Sachs magnetic form factor, rather than the Pauli form factor as suggested earlier in the literature. Finally, we summarize our findings in Sec. VII, and provide further discussions about charge radii, relativistic centers and multipole decomposition of polarization and magnetization distributions in three Appendices.
## II Polarization and magnetization for a spin-\(\frac{1}{2}\) target
It was shown long ago that the matrix elements of the electromagnetic four-current operator for a general spin-\(\frac{1}{2}\) system can be parametrized as [98; 6; 99]
\[\langle p^{\prime},s^{\prime}|\hat{j}^{\mu}(0)|p,s\rangle=e\,\overline{u}(p^{ \prime},s^{\prime})\Gamma^{\mu}(P,\Delta)u(p,s) \tag{1}\]
with \(e\) the unit of electric charge (chosen to be that of a proton) and
\[\Gamma^{\mu}(P,\Delta)=\gamma^{\mu}\,F_{1}(Q^{2})+\frac{i\sigma^{\mu\nu}\Delta _{\nu}}{2M}\,F_{2}(Q^{2}), \tag{2}\]
where \(F_{1}(Q^{2})\) and \(F_{2}(Q^{2})\) are Lorentz-invariant functions called Dirac and Pauli form factors (FFs), respectively. For convenience, we introduced the variables \(P=\frac{1}{2}(p^{\prime}+p)\), \(\Delta=p^{\prime}-p\) and \(Q^{2}=-\Delta^{2}\); see, e.g., the tree-level Feynman diagram in Fig. 1. The on-shell conditions \(p^{\prime 2}=p^{2}=M^{2}\) imply in particular \(P\cdot\Delta=0\) and \(P^{2}+\frac{\Delta^{2}}{4}=M^{2}\). There is therefore only one dimensionless Lorentz-invariant variable which we chose as \(\tau=Q^{2}/(4M^{2})\). The initial and final canonical polarizations of the system are denoted by \(s\) and \(s^{\prime}\), respectively.
Figure 1: Feynman diagram of the \(t\)-channel elastic reaction \(e^{-}(k)+N(p)\to e^{-}(k^{\prime})+N(p^{\prime})\) in the one-photon-exchange approximation. The four-momentum transfer is \(\Delta=k-k^{\prime}=p^{\prime}-p\).
In the Breit frame (BF), defined by the condition \(\mathbf{P}=\mathbf{0}\), the amplitudes read [6; 60; 61]
\[\begin{split}\langle p^{\prime}_{B},s^{\prime}_{B}|\hat{j}^{0}(0)|p _{B},s_{B}\rangle&=e\,2M\,\delta_{s^{\prime}_{B}s_{B}}\,G_{E}(Q^{ 2}),\\ \langle p^{\prime}_{B},s^{\prime}_{B}|\hat{\mathbf{j}}(0)|p_{B},s_{B} \rangle&=e\,(\mathbf{\sigma}_{s^{\prime}_{B}s_{B}}\times i\mathbf{\Delta} )\,G_{M}(Q^{2}),\end{split} \tag{3}\]
where \(\mathbf{\sigma}\) are the Pauli matrices and the combinations
\[\begin{split} G_{E}(Q^{2})&=F_{1}(Q^{2})-\tau F_{2 }(Q^{2}),\\ G_{M}(Q^{2})&=F_{1}(Q^{2})+F_{2}(Q^{2}),\end{split} \tag{4}\]
are known as the electric and magnetic Sachs FFs. The spin structure of the amplitudes in the BF turns out to be the same as in the non-relativistic theory. In any other frame, the spin structure becomes more complicated as a result of Wigner rotations [92; 77; 96]. A somewhat related observation is that the differential cross section in the first Born approximation (i.e. one-photon exchange) can be expressed as [6; 8]
\[\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}=\left(\frac{\mathrm{d}\sigma}{ \mathrm{d}\Omega}\right)_{\mathrm{Mott}}f_{\mathrm{recoil}}\left[G_{E}^{2}(Q^ {2})+\frac{\tau}{\epsilon}\,G_{M}^{2}(Q^{2})\right]\frac{1}{1+\tau}, \tag{5}\]
where \(\epsilon=(1+2(1+\tau)\tan^{2}\frac{\theta}{2})^{-1}\) is the virtual photon polarization with \(\theta\) the scattered electron angle in the lab frame. The Mott cross section and the recoil factor are given by
\[\left(\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}\right)_{\mathrm{Mott}}=\frac {\alpha^{2}\cos^{2}\frac{\theta}{2}}{4E^{2}\sin^{4}\frac{\theta}{2}},\qquad f_ {\mathrm{recoil}}=\frac{E^{\prime}}{E}=\frac{1}{1+\frac{2E}{M}\sin^{2}\frac{ \theta}{2}}, \tag{6}\]
where \(\alpha=e^{2}/(4\pi)\approx 1/137\) is the electromagnetic fine structure constant1 and \(E\) (\(E^{\prime}\)) is the initial (final) electron energy in the lab frame. For comparison, in terms of Dirac and Pauli FFs the differential cross section reads [3; 6]
Footnote 1: The convention we used throughout this paper is \(\hbar=c=1\) with \(\mu_{0}=\epsilon_{0}=1\).
\[\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}=\left(\frac{\mathrm{d}\sigma}{ \mathrm{d}\Omega}\right)_{\mathrm{Mott}}f_{\mathrm{recoil}}\left\{F_{1}^{2}(Q^ {2})+\tau\left(F_{2}^{2}(Q^{2})+2\left[F_{1}(Q^{2})+F_{2}(Q^{2})\right]^{2}\tan ^{2}\frac{\theta}{2}\right)\right\}. \tag{7}\]
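As a quick numerical cross-check of the equivalence of Eqs. (5) and (7), the following sketch compares the two reduced cross-section factors using a simple dipole shape for the Sachs FFs. The dipole parametrization and the kinematics are illustrative assumptions only (in particular, this is not the parametrization of Ref. [127]).

```python
import numpy as np

M, mu_p = 0.938272, 2.7928          # nucleon mass (GeV) and proton magnetic moment

def sachs(Q2, Lambda2=0.71):
    GD = 1.0 / (1.0 + Q2 / Lambda2) ** 2      # illustrative dipole shape
    return GD, mu_p * GD                       # G_E, G_M

def dirac_pauli(Q2):
    tau = Q2 / (4 * M**2)
    GE, GM = sachs(Q2)
    return (GE + tau * GM) / (1 + tau), (GM - GE) / (1 + tau)   # F_1, F_2 from Eq. (4)

def reduced_sachs(Q2, theta):
    # bracketed factor of Eq. (5), including the 1/(1+tau)
    tau = Q2 / (4 * M**2)
    eps = 1.0 / (1.0 + 2.0 * (1.0 + tau) * np.tan(theta / 2) ** 2)
    GE, GM = sachs(Q2)
    return (GE**2 + tau / eps * GM**2) / (1.0 + tau)

def reduced_dirac_pauli(Q2, theta):
    # braced factor of Eq. (7)
    tau = Q2 / (4 * M**2)
    F1, F2 = dirac_pauli(Q2)
    return F1**2 + tau * (F2**2 + 2.0 * (F1 + F2) ** 2 * np.tan(theta / 2) ** 2)

Q2, theta = 1.0, np.radians(45.0)
print(reduced_sachs(Q2, theta), reduced_dirac_pauli(Q2, theta))   # identical values
```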
The absence of interference terms in Eq. (5) makes the separate extraction of \(G_{E}\) and \(G_{M}\) easier, and also suggests that they could be considered as the "physical" electromagnetic FFs. A parametrization of Eq. (1) directly in terms of Sachs FFs reads [92; 100]
\[\Gamma^{\mu}(P,\Delta)=\frac{MP^{\mu}}{P^{2}}\,G_{E}(Q^{2})+\frac{i\epsilon^{ \mu\alpha\beta\lambda}\Delta_{\alpha}P_{\beta}\gamma_{\lambda}\gamma_{5}}{2P^{ 2}}\,G_{M}(Q^{2}) \tag{8}\]
with \(\epsilon_{0123}=+1\). It is equivalent to the parametrization (2) on-shell, i.e. once sandwiched between Dirac spinors. A similar expression for the Dirac theory, i.e. with \(G_{E}(Q^{2})=G_{M}(Q^{2})=Z\), has been considered in position space in Ref. [101]. The structure of Eq. (8) is particularly
interesting since it is reminiscent of a classical current in a polarizable medium, giving further support to the interpretation of the Sachs FFs as the "physical" electromagnetic FFs2. Similar observations apply to spin-1 systems [6; 94; 102] and a generalization of Eq. (8) to higher-spin systems has even been proposed in Ref. [103].
Footnote 2: Strictly speaking, both Eqs. (8) and (5) suggest that the actual physical FFs are given by \(\bar{G}_{E,M}(Q^{2})\equiv\frac{M}{\sqrt{P^{2}}}\,G_{E,M}(Q^{2})=\frac{1}{ \sqrt{1+\tau}}\,G_{E,M}(Q^{2})\).
### Convection and polarization currents
In classical electromagnetism, it is customary to decompose the electromagnetic four-current in position space into "convection" and "polarization" currents [104; 105]
\[J^{\mu}(x)=J^{\mu}_{c}(x)+J^{\mu}_{P}(x),\qquad J^{\mu}_{P}(x)=\partial_{ \alpha}P^{\alpha\mu}(x). \tag{9}\]
The basic idea is that in a polarizable medium some of the charges are relatively free to move and constitute the convective part of the current, also known as the "free" current. The remaining charges are confined in compact regions, e.g. around atomic nuclei. Applying an external electromagnetic field to the medium can induce (electric) polarization \(\mathbf{\mathcal{P}}\) and magnetization \(\mathbf{M}\), generating a new contribution to the total current often called the "bound" current. From a relativistic perspective, polarization and magnetization are two sides of the same coin, the polarization-magnetization tensor
\[P^{\mu\nu}=\begin{pmatrix}0&\mathcal{P}_{x}&\mathcal{P}_{y}&\mathcal{P}_{z}\\ -\mathcal{P}_{x}&0&-M_{z}&M_{y}\\ -\mathcal{P}_{y}&M_{z}&0&-M_{x}\\ -\mathcal{P}_{z}&-M_{y}&M_{x}&0\end{pmatrix}, \tag{10}\]
just like the electric and magnetic fields are the two sides of the Faraday tensor \(F^{\mu\nu}\). This means that under a Lorentz boost, polarization and magnetization will mix with each other.
Writing Eq. (9) more explicitly, one obtains
\[\begin{split} J^{0}&=\rho_{c}-\mathbf{\nabla}\cdot\mathbf{\mathcal{P}},\\ \mathbf{J}&=\rho_{c}\mathbf{v}+\mathbf{\nabla}\times\mathbf{M}+\partial_{0}\mathbf{ \mathcal{P}}.\end{split} \tag{11}\]
Assuming as usual that surface terms vanish at spatial infinity, we see that the induced polarization does not change the total charge of the system but simply modifies its spatial distribution. Relativistically, this arises from the fact that the divergence of the polarization four-current vanishes identically \(\partial_{\mu}J^{\mu}_{P}(x)=\partial_{\mu}\partial_{\alpha}P^{\alpha\mu}(x)=0\) owing to the antisymmetry of the polarization-magnetization tensor. In other words, the polarization four-current has the form of what is known in the literature as a _superpotential_.
At the classical level, angular momentum appears only in orbital form. Magnetization arises therefore from loops of charge current, while polarization arises from the separation of electric charges due to the external electric field. At the quantum level, a new form of angular momentum known as _spin_ enters the game. As a result, a spinning charged particle at rest will also present a permanent magnetic dipole moment (MDM). We should therefore distinguish external and internal contributions to the polarization-magnetization tensor due to, respectively, the external electromagnetic fields and the spin degrees of freedom. At the level of one-photon exchange, we are only sensitive to the spin contribution. The external contribution requires at least two photons and is described at linear order in the electromagnetic field in terms of the medium polarizabilities \(P_{\rm ext}^{\mu\nu}=\alpha^{\mu\nu\alpha\beta}F_{\alpha\beta}\)[106]. In this work, we will focus on the internal (or spin) polarization-magnetization tensor.
For a particle at rest, a permanent electric dipole moment (EDM) along the angular momentum breaks time-reversal (T) and hence the combined charge-conjugation and parity (CP) symmetries. In the Standard Model, these symmetries are known to be broken by the weak interactions and the \(\theta\)-term in QCD [107], but the breaking is so small that one can consider to an excellent approximation that these symmetries remain exact when studying the internal structure of hadrons. It follows that the polarization for a point-like particle at rest must vanish. In the non-relativistic limit, Eq. (11) reduces then to
\[\begin{split} J^{0}&\approx\rho_{c},\\ \mathbf{J}&\approx\rho_{c}\mathbf{v}+\mathbf{\nabla}\times\mathbf{M}, \end{split} \tag{12}\]
where the term \(\mathbf{\nabla}\times\mathbf{M}\) is known as the spin current [108; 109; 6; 110] since \(\mathbf{M}\propto\mathbf{S}\) with \(\mathbf{S}\) the spin vector. For systems moving with relativistic velocities, one should also include the contributions from \(\mathbf{\mathcal{P}}\). The latter do not however contain any new intrinsic information since they simply result from the Lorentz boost of the rest-frame magnetization.
### Polarization-magnetization tensor
Let us now come back to the electromagnetic four-current for a spin-\(\frac{1}{2}\) target. In momentum space, the four-divergence turns into a contraction with the four-momentum transfer
\[\langle p^{\prime},s^{\prime}|\partial_{\mu}\hat{O}^{\mu}(x)|p,s\rangle=i\Delta _{\mu}\langle p^{\prime},s^{\prime}|\hat{O}^{\mu}(x)|p,s\rangle, \tag{13}\]
using the translation invariance property. It is then clear that the parametrization (8) exhibits the same structure as the total current (9) in classical electromagnetism [81; 92]. Accordingly, we identify the convection current with the \(G_{E}\) term and the polarization
current with the \(G_{M}\) term. In other words, we write \(\Gamma^{\mu}(P,\Delta)=\Gamma^{\mu}_{c}(P,\Delta)+\Gamma^{\mu}_{P}(P,\Delta)\) with
\[\begin{split}\Gamma^{\mu}_{c}(P,\Delta)&=\frac{MP^{ \mu}}{P^{2}}\,G_{E}(Q^{2}),\\ \Gamma^{\mu}_{P}(P,\Delta)&=\frac{i\epsilon^{\mu \alpha\beta\lambda}\Delta_{\alpha}P_{\beta}\gamma_{\lambda}\gamma_{5}}{2P^{2}} \,G_{M}(Q^{2}).\end{split} \tag{14}\]
This suggests in particular that the polarization-magnetization tensor for a spin-\(\frac{1}{2}\) target is given in momentum space by
\[\widetilde{P}^{\mu\nu}=-\frac{e}{2M}\,\frac{M\,\epsilon^{\mu\nu\beta\lambda}P_ {\beta}}{P^{2}}\,\overline{u}(p^{\prime},s^{\prime})\gamma_{\lambda}\gamma_{5} u(p,s)\,G_{M}(Q^{2}). \tag{15}\]
Since it involves the axial-vector Dirac bilinear, we will refer to it as the \(A\)-type polarization-magnetization tensor.
We point out that the identification of a polarization-magnetization tensor from the electromagnetic four-current alone is in fact ambiguous. One reason is that only the divergence of \(P^{\mu\nu}\) contributes to \(J^{\mu}\) in Eq. (9). As a result, one can alternatively consider the tensor
\[P^{\mu\nu}_{\mathcal{A}}(x)=P^{\mu\nu}(x)+\epsilon^{\mu\nu\alpha\beta}\partial _{\alpha}\mathcal{A}_{\beta}(x) \tag{16}\]
with \(\mathcal{A}^{\beta}\) an arbitrary axial four-vector field assumed to vanish sufficiently fast at infinity. Our choice in Eq. (15) is motivated by its simplicity and by the fact that the relativistic spin appears explicitly in the form of the Dirac axial-vector four-current. An additional ambiguity comes from the equation of motion. Indeed, since Eq. (2) is meant to be sandwiched between free Dirac spinors, we can use the Gordon identity [111]
\[\overline{u}(p^{\prime},s^{\prime})\gamma^{\mu}u(p,s)=\overline{u}(p^{\prime}, s^{\prime})\left[\frac{P^{\mu}}{M}+\frac{i\sigma^{\mu\nu}\Delta_{\nu}}{2M} \right]u(p,s) \tag{17}\]
and write \(\Gamma^{\mu}(P,\Delta)=\Gamma^{\prime\mu}_{c}(P,\Delta)+\Gamma^{\prime\mu}_{P }(P,\Delta)\) with
\[\begin{split}\Gamma^{\prime\mu}_{c}(P,\Delta)&=\frac {P^{\mu}}{M}\,F_{1}(Q^{2}),\\ \Gamma^{\prime\mu}_{P}(P,\Delta)&=\frac{i\sigma^{\mu \nu}\Delta_{\nu}}{2M}\,G_{M}(Q^{2}),\end{split} \tag{18}\]
suggesting another a priori acceptable definition for the polarization-magnetization tensor
\[\widetilde{P}^{\prime\mu\nu}=-\frac{e}{2M}\,\overline{u}(p^{\prime},s^{\prime })\sigma^{\mu\nu}u(p,s)\,G_{M}(Q^{2}). \tag{19}\]
Since it involves the tensor Dirac bilinear, we will refer to it as the \(T\)-type polarization-magnetization tensor. The decomposition of a current into convection and polarization parts
is therefore not unique, and can be understood as a consequence of the on-shell identity
\[\overline{u}(p^{\prime},s^{\prime})i\sigma^{\mu\nu}\Delta_{\nu}u(p,s)=\overline{u} (p^{\prime},s^{\prime})\left[\frac{\Delta^{2}}{2P^{2}}\,P^{\mu}+\frac{Mi\epsilon ^{\mu\alpha\beta\lambda}\Delta_{\alpha}P_{\beta}\gamma_{\lambda}\gamma_{5}}{P^ {2}}\right]u(p,s), \tag{20}\]
which can easily be derived from the relations given in Ref. [112]. As a result of Gordon's work [111], the \(T\)-type definition (19) is often the only one considered in the literature, but we will show later that the \(A\)-type definition (15) turns out in fact to be more natural.
As a last remark, we note that in field theory it is customary to describe the full electromagnetic interaction of particles through the single interaction term
\[S_{\rm int}=\int\mathrm{d}^{4}x\,J^{\mu}(x)A_{\mu}(x), \tag{21}\]
which can be rewritten as follows
\[S_{\rm int}=\int\mathrm{d}^{4}x\,J^{\mu}_{c}(x)A_{\mu}(x)-\frac{1}{2}\int \mathrm{d}^{4}x\,P^{\mu\nu}(x)F_{\mu\nu}(x) \tag{22}\]
using integration by parts. It is then easy to see that the ambiguity mentioned in Eq. (16) exists because of the homogeneous Maxwell equation \(\epsilon^{\mu\nu\alpha\beta}\partial_{\nu}F_{\alpha\beta}=0\), which expresses the absence of magnetic charges. Even though the form (22) makes the physics more transparent, it is in practice easier to consider that all the electromagnetic properties can be described in terms of a single electromagnetic four-current \(J^{\mu}\), rather than by a combination of \(J^{\mu}_{c}\) and \(P^{\mu\nu}\). Opinions differ in the literature about whether \(J^{\mu}\) or \(J^{\mu}_{c}\) should be regarded as the fundamental electromagnetic four-current, just like they differ about whether the (symmetric) Belinfante or the (asymmetric) kinetic energy-momentum tensor should be considered as the fundamental energy-momentum tensor [113]. In particular, if one assumes that all forms of magnetism arise from the sole circulation of charges, the polarization-magnetization tensor
\[P^{\mu\nu}_{J}(x)\equiv-\frac{1}{2}\left[x^{\mu}J^{\nu}(x)-x^{\nu}J^{\mu}(x)\right] \tag{23}\]
would then seem to be a natural choice for a system sitting at the origin, thereby fixing the form of the polarization current to \(J^{\mu}_{P}=\partial_{\alpha}P^{\alpha\mu}_{J}=\frac{1}{2}[J^{\mu}-\partial_{\alpha}(x^{\alpha}J^{\mu})]\) and hence that of the convection current to \(J^{\mu}_{c}=\frac{1}{2}[J^{\mu}+\partial_{\alpha}(x^{\alpha}J^{\mu})]\). We will not discuss this option in detail in the present work.
## III Quantum phase-space formalism
Electromagnetic FFs describe the internal charge and magnetization content of a system. While they are objects defined in momentum space and extracted from experimental data involving particles with well-defined momenta, their physical interpretation actually
resides in position space. It is therefore important to understand how the concept of spatial distribution arises in quantum field theory.
Let us consider a generic local operator \(\hat{O}(x)\). Its expectation value in a physical state can be written as
\[\langle\Psi|\hat{O}(x)|\Psi\rangle=\sum_{s^{\prime},s}\int\frac{\mathrm{d}^{3}p^ {\prime}}{(2\pi)^{3}}\,\frac{\mathrm{d}^{3}p}{(2\pi)^{3}}\,\widetilde{\Psi}^{ *}(\mathbf{p}^{\prime},s^{\prime})\widetilde{\Psi}(\mathbf{p},s)\,\frac{\langle p^{ \prime},s^{\prime}|\hat{O}(x)|p,s\rangle}{2\sqrt{p^{\prime 0}p^{0}}}, \tag{24}\]
with the four-momentum eigenstates normalized as \(\langle p^{\prime},s^{\prime}|p,s\rangle=2p^{0}(2\pi)^{3}\delta^{(3)}(\mathbf{p}^{ \prime}-\mathbf{p})\delta_{s^{\prime}s}\) and the momentum-space wave packet \(\widetilde{\Psi}(\mathbf{p},s)\equiv\langle p,s|\Psi\rangle/\sqrt{2p^{0}}\) normalized as
\[\sum_{s}\int\frac{\mathrm{d}^{3}p}{(2\pi)^{3}}\,|\widetilde{\Psi}(\mathbf{p},s)|^ {2}=1. \tag{25}\]
The four-momenta being on-shell, the energy components are given by \(p^{0}=\sqrt{\mathbf{p}^{2}+M^{2}}\) and \(p^{\prime 0}=\sqrt{\mathbf{p}^{\prime 2}+M^{2}}\).
In a relativistic theory, the Newton-Wigner position operator [114; 115; 116; 117] is the only 3D position operator satisfying usual commutation relations with linear and angular momentum operators, and having mutually commuting components. Although this operator does not transform as part of a Lorentz four-vector, it allows one to localize a relativistic system at a fixed time. The eigenstates of this operator at \(t=0\) are related to momentum eigenstates via Fourier transform
\[|\mathbf{r},s\rangle=\int\frac{\mathrm{d}^{3}p}{(2\pi)^{3}}\,e^{-i\mathbf{p}\cdot\mathbf{ r}}\,\frac{|p,s\rangle}{\sqrt{2p^{0}}} \tag{26}\]
and are normalized as \(\langle\mathbf{r}^{\prime},s^{\prime}|\mathbf{r},s\rangle=\delta^{(3)}(\mathbf{r}^{\prime }-\mathbf{r})\delta_{s^{\prime}s}\). The position-space wave packet at \(t=0\) is then given by
\[\Psi(\mathbf{r},s)\equiv\langle\mathbf{r},s|\Psi\rangle=\int\frac{\mathrm{d}^{3}p}{(2 \pi)^{3}}\,e^{i\mathbf{p}\cdot\mathbf{r}}\,\widetilde{\Psi}(\mathbf{p},s) \tag{27}\]
and satisfies the normalization condition
\[\sum_{s}\int\mathrm{d}^{3}r\,|\Psi(\mathbf{r},s)|^{2}=1. \tag{28}\]
In position space, the expectation value (24) takes then the familiar form
\[\langle\Psi|\hat{O}(x)|\Psi\rangle=\sum_{s^{\prime},s}\int\mathrm{d}^{3}r^{ \prime}\,\mathrm{d}^{3}r\,\Psi^{*}(\mathbf{r}^{\prime},s^{\prime})\Psi(\mathbf{r},s)\, \langle\mathbf{r}^{\prime},s^{\prime}|\hat{O}(x)|\mathbf{r},s\rangle. \tag{29}\]
This construction is very similar to the non-relativistic one and reduces to the latter when \(p^{0}\approx p^{\prime 0}\approx M\).
For a probabilistic interpretation, we need to be able to express the expectation value
\(\langle\Psi|\hat{O}|\Psi\rangle\) in a diagonal form3. In position space this can be achieved in the case of Galilean symmetry since the latter implies invariance of inertia under a change of frame, and hence a decoupling in momentum space of \(\mathbf{P}\)- and \(\mathbf{\Delta}\)-dependences in the matrix elements \(\langle p^{\prime},s^{\prime}|\hat{O}(x)|p,s\rangle/(2\sqrt{p^{\prime 0}p^{0}})\). One can then write in general
Footnote 3: In spin space, one uses a spin density matrix representation where two canonical polarizations are converted into an unpolarized contribution \(\delta_{s^{\prime}s}\) and polarized contributions involving the spin matrices \(\mathbf{S}_{s^{\prime}s}\).
\[\begin{split}&\int\frac{\mathrm{d}^{3}p^{\prime}}{(2\pi)^{3}}\, \frac{\mathrm{d}^{3}p}{(2\pi)^{3}}\,\widetilde{\Psi}^{*}(\mathbf{p}^{\prime},s^{ \prime})\widetilde{\Psi}(\mathbf{p},s)\,f(\mathbf{P})\,g(\mathbf{\Delta})\\ &=\int\frac{\mathrm{d}^{3}P}{(2\pi)^{3}}\,\frac{\mathrm{d}^{3} \Delta}{(2\pi)^{3}}\,\mathrm{d}^{3}r^{\prime}\,\mathrm{d}^{3}r\,\Psi^{*}(\mathbf{ r}^{\prime},s^{\prime})\Psi(\mathbf{r},s)\,e^{-i\mathbf{P}\cdot\mathbf{z}}\,f(\mathbf{P})\,e^{i \mathbf{\Delta}\cdot\mathbf{R}}\,g(\mathbf{\Delta})\\ &=\int\mathrm{d}^{3}R\left[\Psi^{*}(\mathbf{R},s^{\prime})f\Big{(} \tfrac{1}{i}\overset{\leftrightarrow}{\nabla}\Big{)}\Psi(\mathbf{R},s)\right]\int \frac{\mathrm{d}^{3}\Delta}{(2\pi)^{3}}\,e^{i\mathbf{\Delta}\cdot\mathbf{R}}\,g(\mathbf{ \Delta}),\end{split} \tag{30}\]
where \(f\) and \(g\) are two functions, and \(A\overset{\leftrightarrow}{\nabla}B\equiv\frac{1}{2}\left[A(\mathbf{\nabla}B)-B( \mathbf{\nabla}A)\right]\). Note that the average momentum \(\mathbf{P}\) is conjugate to the position shift \(\mathbf{z}=\mathbf{r}-\mathbf{r}^{\prime}\), whereas the momentum transfer \(\mathbf{\Delta}\) is conjugate to the average position \(\mathbf{R}=(\mathbf{r}+\mathbf{r}^{\prime})/2\). The ability to perform the \(\mathbf{P}\)-integration independently of the value of \(\mathbf{\Delta}\) corresponds therefore to the ability to provide a density interpretation in position space.
In a relativistic theory, inertia is a frame-dependent concept and \(\mathbf{P}\) is usually entangled with \(\mathbf{\Delta}\). It is therefore usually not possible to provide a relativistic density interpretation in 3D position space. The only way out is to switch to the light-front (LF) formalism [118] (or consider the infinite-momentum frame), where a Galilean subgroup of the Lorentz group is singled out by choosing a particular LF direction [119; 120], allowing for a density interpretation in impact-parameter space (i.e. the 2D position space orthogonal to the LF direction) [63; 66; 73; 121]. Similar densities were proposed earlier by Fleming [122] using a rescaling of the wave packets. An extension of this method has recently been used to define new 3D densities [79; 80; 82; 85], but concerns about their physical meaning have triggered some discussions [83; 84].
Despite their nice probabilistic interpretation, LF densities in impact-parameter space have however some shortcomings. First, the probabilistic interpretation is limited by the Galilean subgroup. Considering for example the electromagnetic four-current operator \(\hat{j}^{\mu}\), a probabilistic interpretation can be attributed to the LF charge density \(\hat{j}^{+}=(\hat{j}^{0}+\hat{j}^{3})/\sqrt{2}\) but not to the longitudinal LF current \(\hat{j}^{-}=(\hat{j}^{0}-\hat{j}^{3})/\sqrt{2}\), see e.g. Ref. [96]. Second, in the non-relativistic regime it is in general not clear how to relate the LF densities to the standard non-relativistic 3D densities, even when the system is in average at rest. Third, LF densities appear to be distorted for transversely polarized targets [66; 68; 69; 70; 71; 123], a phenomenon which can be understood to some extent as an artifact coming from looking at \(\hat{j}^{+}\) instead of \(\hat{j}^{0}\). Last but not least, even for unpolarized targets the structure of LF densities can sometimes be difficult to conciliate with an intuitive picture of the system. A typical
example is the appearance of an unexpected negative core in the LF charge distribution of a neutron [67]. These additional LF distortions have recently been understood as artifacts caused by the Melosh-Wigner spin rotation4[92, 94, 96].
Footnote 4: Melosh-Wigner rotations are also at the origin of some relations between transverse-momentum dependent parton distributions and orbital angular momentum observed in various models of the nucleon [124, 125].
Because of Lorentz symmetry, the notion of relativistic spatial distribution necessarily depends on the target average momentum \(\mathbf{P}\), hindering therefore in general a probabilistic interpretation in position space. We are therefore naturally led to switch our perspective to a phase-space picture, which is _quasi_probabilistic at the quantum level owing to Heisenberg's uncertainty relations. Following the quantum phase-space formalism [86, 87, 88], one rewrites Eq. (24) as
\[\langle\Psi|\hat{O}(x)|\Psi\rangle=\sum_{s^{\prime},s}\int\frac{\mathrm{d}^{3 }P}{(2\pi)^{3}}\,\mathrm{d}^{3}R\,\rho_{\Psi}^{s^{\prime}s}(\mathbf{R},\mathbf{P}) \,\langle\hat{O}\rangle_{\mathbf{R},\mathbf{P}}^{s^{\prime}s}(x), \tag{31}\]
where
\[\begin{split}\rho_{\Psi}^{s^{\prime}s}(\mathbf{R},\mathbf{P})& \equiv\int\mathrm{d}^{3}z\,e^{-i\mathbf{P}\cdot\mathbf{z}}\,\Psi^{*}(\mathbf{R }-\tfrac{\mathbf{z}}{2},s^{\prime})\Psi(\mathbf{R}+\tfrac{\mathbf{z}}{2},s)\\ &=\int\frac{\mathrm{d}^{3}q}{(2\pi)^{3}}\,e^{-i\mathbf{q}\cdot\mathbf{R} }\,\widetilde{\Psi}^{*}(\mathbf{P}+\tfrac{\mathbf{q}}{2},s^{\prime})\widetilde{\Psi}( \mathbf{P}-\tfrac{\mathbf{q}}{2},s)\end{split} \tag{32}\]
is the Wigner distribution interpreted as the quantum weight (positive or negative) for finding the system at average position \(\mathbf{R}\) with average momentum \(\mathbf{P}\). This construction does not rely particularly on Galilean or Lorentz symmetries, and hence makes the connection with the non-relativistic theory straightforward. Probabilistic densities are recovered upon integration over average position or momentum variables
\[\begin{split}\int\mathrm{d}^{3}R\,\rho_{\Psi}^{s^{\prime}s}(\mathbf{R },\mathbf{P})&=\widetilde{\Psi}^{*}(\mathbf{P},s^{\prime})\widetilde{ \Psi}(\mathbf{P},s),\\ \int\frac{\mathrm{d}^{3}P}{(2\pi)^{3}}\,\rho_{\Psi}^{s^{\prime}s}( \mathbf{R},\mathbf{P})&=\Psi^{*}(\mathbf{R},s^{\prime})\Psi(\mathbf{R},s).\end{split} \tag{33}\]
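A minimal one-dimensional toy example may help make Eqs. (32) and (33) concrete. The sketch below evaluates the Wigner distribution of a Gaussian wave packet numerically, compares it with its known closed form, and checks the position-space marginal; the packet width is an arbitrary illustrative choice (units with \(\hbar=1\)).

```python
import numpy as np

sigma = 1.3                                  # arbitrary packet width

def psi(x):
    # 1D Gaussian wave packet, unit normalized
    return (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2))

z = np.linspace(-20.0, 20.0, 4001)           # relative-coordinate grid
dz = z[1] - z[0]

def rho(R, P):
    """1D analogue of Eq. (32): quasiprobabilistic weight at (R, P)."""
    integrand = np.exp(-1j * P * z) * psi(R - z / 2) * psi(R + z / 2)
    return (integrand.sum() * dz).real

R, P = 0.7, 0.4
print(rho(R, P), 2 * np.exp(-R**2 / sigma**2 - P**2 * sigma**2))   # numerics vs closed form

# Position-space marginal of Eq. (33): integrating over P/(2 pi) gives |psi(R)|^2.
Pgrid = np.linspace(-10.0, 10.0, 2001)
dP = Pgrid[1] - Pgrid[0]
marginal = sum(rho(R, p) for p in Pgrid) * dP / (2 * np.pi)
print(marginal, abs(psi(R)) ** 2)
```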
A compelling feature of the quantum phase-space formalism is that wave-packet details are cleanly factorized in Eq. (31). We can then interpret the phase-space amplitude
\[\langle\hat{O}\rangle_{\mathbf{R},\mathbf{P}}^{s^{\prime}s}(x)=\int\frac{\mathrm{d}^{ 3}\Delta}{(2\pi)^{3}}\,e^{i\mathbf{\Delta}\cdot\mathbf{R}}\,\frac{\langle P+\tfrac{ \Delta}{2},s^{\prime}|\hat{O}(x)|P-\tfrac{\Delta}{2},s\rangle}{2\sqrt{p^{0}p ^{0}}} \tag{34}\]
as the internal distribution associated with a state localized in the Wigner sense around average position \(\mathbf{R}\) and average momentum \(\mathbf{P}\)[90, 91, 93]. Whenever the \(\mathbf{P}\)-dependence of \(\langle\hat{O}\rangle_{\mathbf{R},\mathbf{P}}^{s^{\prime}s}(x)\) is simple (typically when Galilean symmetry is at play), we can extract it and use
\[\int\frac{\mathrm{d}^{3}P}{(2\pi)^{3}}\,\rho_{\Psi}^{s^{\prime}s}(\mathbf{R},\mathbf{ P})f(\mathbf{P})=\Psi^{*}(\mathbf{R},s^{\prime})f\Big{(}\tfrac{1}{i}\overleftrightarrow{ \mathbf{\nabla}}\Big{)}\Psi(\mathbf{R},s) \tag{35}\]
to obtain genuine internal densities (i.e. internal distributions with a probabilistic interpretation), see e.g. Ref. [83] for a recent detailed discussion.
By relaxing the requirement of probabilistic interpretation, the quantum phase-space formalism overcomes the shortcomings associated with the LF densities, shows that the latter are closely related to the instant-form distributions defined in the infinite-momentum frame (IMF), and explains the various LF distortions as a result of relativistic kinematical effects associated with spin [92; 94; 96].
## IV Breit frame distributions
From a phase-space perspective, the BF can be regarded as the average rest frame of the system. Since the energy transfer constrained by \(\Delta^{0}=\mathbf{P}\cdot\mathbf{\Delta}/P^{0}\) vanishes when \(\mathbf{P}=\mathbf{0}\), internal distributions in the BF do not depend on \(x^{0}\). BF distributions are therefore defined as
\[O_{B}(\mathbf{r})\equiv\langle\hat{O}\rangle^{s^{\prime}_{B}s_{B}}_{\mathbf{0},\mathbf{0}} (\mathbf{r})=\int\frac{\mathrm{d}^{3}\Delta}{(2\pi)^{3}}\,e^{-i\mathbf{\Delta}\cdot\bm {r}}\,\frac{\langle p^{\prime}_{B},s^{\prime}_{B}|\hat{O}(0)|p_{B},s_{B} \rangle}{2P^{0}_{B}}, \tag{36}\]
where \(\mathbf{r}=\mathbf{x}-\mathbf{R}\) is the distance relative to the center of the system, \(\mathbf{p}^{\prime}_{B}=-\mathbf{p}_{B}=\mathbf{\Delta}/2\) and \(P^{0}_{B}=p^{\prime 0}_{B}=p^{0}_{B}=M\sqrt{1+\tau}\).
Applying the general definition (36) to the electromagnetic four-current operator, one obtains using the BF amplitudes in Eq. (3) [92; 126; 6]
\[\begin{split} J^{0}_{B}(\mathbf{r})&=e\int\frac{ \mathrm{d}^{3}\Delta}{(2\pi)^{3}}\,e^{-i\mathbf{\Delta}\cdot\mathbf{r}}\,\frac{M}{P^{0 }_{B}}\,G_{E}(\mathbf{\Delta}^{2}),\\ \mathbf{J}_{B}(\mathbf{r})&=e\,\frac{\mathbf{\nabla}\times\mathbf{ \sigma}}{2M}\int\frac{\mathrm{d}^{3}\Delta}{(2\pi)^{3}}\,e^{-i\mathbf{\Delta}\cdot \mathbf{r}}\,\frac{M}{P^{0}_{B}}\,G_{M}(\mathbf{\Delta}^{2}),\end{split} \tag{37}\]
where explicit spin indices have been omitted for better legibility. These relativistic distributions differ from the conventional ones introduced by Sachs [60; 61], where the factor \(M/P^{0}_{B}=1/\sqrt{1+\tau}\) has been removed by hand. A detailed discussion of these BF distributions for a nucleon target can be found in Refs. [92; 96].
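To illustrate what the relativistic factor \(M/P^{0}_{B}=1/\sqrt{1+\tau}\) does in practice, the sketch below evaluates the Breit frame charge distribution of Eq. (37) for a spherically symmetric \(G_{E}\) via the corresponding radial Fourier-Bessel integral. A dipole form factor is used purely as an illustrative stand-in; it is not the parametrization of Ref. [127] used in the figures.

```python
import numpy as np

hbarc = 0.19733                      # GeV fm
M = 0.938272                         # nucleon mass in GeV

def GE(Q2, Lambda2=0.71):
    return 1.0 / (1.0 + Q2 / Lambda2) ** 2      # illustrative dipole stand-in

Delta = np.linspace(1e-6, 20.0, 20000)          # |Delta| grid in GeV
dD = Delta[1] - Delta[0]
tau = Delta**2 / (4 * M**2)

def J0_B(r_fm):
    """Spherical Fourier transform of (M/P0_B) G_E, i.e. Eq. (37) with e = 1."""
    r = r_fm / hbarc                             # fm -> GeV^-1
    j0 = np.sin(Delta * r) / (Delta * r)         # spherical Bessel function j_0
    integrand = Delta**2 * j0 * GE(Delta**2) / np.sqrt(1 + tau)
    return integrand.sum() * dD / (2 * np.pi**2) / hbarc**3    # charge density in e/fm^3

for r in (0.2, 0.5, 1.0, 1.5):
    print(f"r = {r:.1f} fm   J^0_B = {J0_B(r):.4f} e/fm^3")
```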
### \(A\)-type polarization-magnetization tensor
We can now apply the same formalism to the polarization-magnetization tensor \(P^{\mu\nu}\). Evaluating Eq. (15) in the BF leads to
\[\begin{split}\widetilde{\mathcal{P}}^{i}_{B}&= \widetilde{P}^{0i}_{B}=0,\\ \widetilde{M}^{i}_{B}&=-\frac{1}{2}\,\epsilon^{ijk} \widetilde{P}^{jk}_{B}=e\left[\sigma^{i}-\frac{\Delta^{i}(\mathbf{\Delta}\cdot\bm {\sigma})}{4P^{0}_{B}(P^{0}_{B}+M)}\right]G_{M}(Q^{2}).\end{split} \tag{38}\]
The corresponding relativistic 3D distributions are then given by
\[\begin{split}\mathbf{\mathcal{P}}_{B}(\mathbf{r})&=\mathbf{0},\\ \mathbf{M}_{B}(\mathbf{r})&=\frac{e}{2M}\int\frac{\mathrm{d} ^{3}\Delta}{(2\pi)^{3}}\,e^{-i\mathbf{\Delta}\cdot\mathbf{r}}\left[\mathbf{\sigma}-\frac{ \mathbf{\Delta}(\mathbf{\Delta}\cdot\mathbf{\sigma})}{4P_{B}^{0}(P_{B}^{0}+M)}\right] \frac{M}{P_{B}^{0}}\,G_{M}(\mathbf{\Delta}^{2}).\end{split} \tag{39}\]
We see that \(\rho_{P}\equiv-\mathbf{\nabla}\cdot\mathbf{\mathcal{P}}\), the polarization contribution to the charge distribution, vanishes in the BF simply because the BF polarization distribution itself vanishes. The BF magnetization distribution has two terms. Taking the curl eliminates the second term and we find
\[\mathbf{J}_{B}(\mathbf{r})=\mathbf{\nabla}\times\mathbf{M}_{B}(\mathbf{r}), \tag{40}\]
as expected for a system in its average rest frame.
In magnetostatics, it is customary to define an _effective_ magnetic charge distribution
\[\rho_{M}\equiv-\mathbf{\nabla}\cdot\mathbf{M}, \tag{41}\]
by analogy with the polarization charge distribution \(\rho_{P}\). Using the results in Eq. (39), we find that the BF effective magnetic charge distribution is given by
\[\rho_{M,B}(\mathbf{r})=\frac{e}{2M}\int\frac{\mathrm{d}^{3}\Delta}{(2\pi)^{3}}\,e ^{-i\mathbf{\Delta}\cdot\mathbf{r}}\left(i\mathbf{\Delta}\cdot\mathbf{\sigma}\right)\left( \frac{M}{P_{B}^{0}}\right)^{2}G_{M}(\mathbf{\Delta}^{2}). \tag{42}\]
Contrary to the BF charge distribution \(J_{B}^{0}(\mathbf{r})\), the BF effective magnetic charge distribution is spin-dependent and is not spherically symmetric. The target polarization provides a preferred spatial direction which reduces spherical symmetry to axial symmetry.
Discrete spacetime symmetries impose that the total EDM must vanish in the average rest frame. We indeed find
\[\mathbf{d}_{B}=\int\mathrm{d}^{3}r\,\mathbf{r}\,J_{B}^{0}(\mathbf{r})=\int\mathrm{d}^{3}r \,\mathbf{r}\,\rho_{P,B}(\mathbf{r})=\int\mathrm{d}^{3}r\,\mathbf{\mathcal{P}}_{B}(\mathbf{r}) =\mathbf{0}. \tag{43}\]
In contrast, the total MDM in the rest frame is not required to vanish and can be expressed in at least three different but equivalent ways,
\[\mathbf{\mu}_{B}=\int\mathrm{d}^{3}r\,\mathbf{M}_{B}(\mathbf{r})=\int\mathrm{d}^{3}r\,\bm {r}\,\rho_{M,B}(\mathbf{r})=\int\mathrm{d}^{3}r\,\frac{\mathbf{r}\times\mathbf{J}_{B}(\bm {r})}{2}=\mathbf{\sigma}\,G_{M}(0)\,\frac{e}{2M}, \tag{44}\]
provided that the surface terms vanish at infinity. Note however that at the level of spatial distributions the three integrands, namely \(\mathbf{M}_{B}(\mathbf{r})\), \(\mathbf{M}_{\mathrm{eff},B}(\mathbf{r})\equiv\mathbf{r}\rho_{M,B}(\mathbf{r})\) and \(\mathbf{M}_{J,B}(\mathbf{r})\equiv\frac{1}{2}\,\mathbf{r}\times\mathbf{J}_{B}(\mathbf{r})\), look quite different, see Fig. 2. Since these BF spatial distributions are axially symmetric about the polarization axis, it is sufficient to show a section containing the latter. Strictly speaking, \(\mathbf{M}_{J,B}(\mathbf{r})\) should be interpreted as the contribution to the MDM at \(\mathbf{r}=\mathbf{0}\) due to the current element at position \(\mathbf{r}\). Similarly, \(\mathbf{M}_{\mathrm{eff},B}(\mathbf{r})\) corresponds to the contribution
Figure 2: Comparisons between three kinds of magnetization distributions in the Breit frame \(\mathbf{M}_{B}(\mathbf{r})\), \(\mathbf{M}_{\text{eff},B}(\mathbf{r})\equiv\mathbf{r}\rho_{M,B}(\mathbf{r})\) and \(\mathbf{M}_{J,B}(\mathbf{r})\equiv\frac{1}{2}\,\mathbf{r}\times\mathbf{J}_{B}(\mathbf{r})\) inside a proton (left panels) or a neutron (right panels) polarized along the \(z\)-direction. The vector plots give the direction and magnitude of the magnetization distributions, evaluated in the \(r_{y}=0\) plane using the parametrization for the nucleon electromagnetic form factors given in Ref. [127].
to the MDM at \(\mathbf{r}=\mathbf{0}\) due to the effective magnetic charge element at position \(\mathbf{r}\). Only \(\mathbf{M}_{B}(\mathbf{r})\) can be thought of as the genuine spatial distribution of magnetization.
### \(T\)-type polarization-magnetization tensor
For comparison, we consider here the \(T\)-type definition \(P^{\prime\mu\nu}\) for the polarization-magnetization tensor. Evaluating Eq. (19) in the BF gives
\[\begin{split}\widetilde{\mathcal{P}}_{B}^{\prime i}& =\widetilde{P}_{B}^{\prime 0i}=e\,\frac{i\Delta^{i}}{2M}\,G_{M}(Q^{2}),\\ \widetilde{M}_{B}^{\prime i}&=-\frac{1}{2}\,\epsilon ^{ijk}\widetilde{P}_{B}^{\prime jk}=e\left[\sigma^{i}+\frac{\Delta^{i}(\mathbf{ \Delta}\cdot\mathbf{\sigma})}{4M(P_{B}^{0}+M)}\right]G_{M}(Q^{2}),\end{split} \tag{45}\]
and so the corresponding relativistic 3D distributions read
\[\begin{split}\mathbf{\mathcal{P}}_{B}^{\prime}(\mathbf{r})& =\frac{e}{2M}\int\frac{\mathrm{d}^{3}\Delta}{(2\pi)^{3}}\,e^{-i \mathbf{\Delta}\cdot\mathbf{r}}\,\frac{i\mathbf{\Delta}}{2M}\,\frac{M}{P_{B}^{0}}\,G_{M}( \mathbf{\Delta}^{2}),\\ \mathbf{M}_{B}^{\prime}(\mathbf{r})&=\frac{e}{2M}\int\frac{ \mathrm{d}^{3}\Delta}{(2\pi)^{3}}\,e^{-i\mathbf{\Delta}\cdot\mathbf{r}}\left[\mathbf{\sigma }+\frac{\mathbf{\Delta}(\mathbf{\Delta}\cdot\mathbf{\sigma})}{4M(P_{B}^{0}+M)}\right]\frac {M}{P_{B}^{0}}\,G_{M}(\mathbf{\Delta}^{2}).\end{split} \tag{46}\]
This time we have a non-vanishing polarization distribution, but since it is time-independent we still get a pure spin current according to Eq. (11)
\[\mathbf{J}_{B}(\mathbf{r})=\mathbf{\nabla}\times\mathbf{M}_{B}^{\prime}(\mathbf{r}). \tag{47}\]
This can also be seen directly from the expressions for the \(A\)-type and \(T\)-type magnetization distributions since they differ only in the part that does not contribute to the curl.
Remembering that \(\mathbf{p}_{B}^{\prime}=-\mathbf{p}_{B}=\mathbf{\Delta}/2\) and \(p_{B}^{\prime 0}=p_{B}^{0}=P_{B}^{0}\), we recognize in Eq. (45) the characteristic structure of spin defined relative to the center of energy \(\left(\mathbf{\sigma}+\frac{\mathbf{p}_{B}(\mathbf{p}_{B}\cdot\mathbf{\sigma})}{M(p_{B}^{0}+M )}\right)\), while we find in Eq. (38) the characteristic structure of spin defined relative to the center of mass \(\left(\mathbf{\sigma}-\frac{\mathbf{p}_{B}(\mathbf{p}_{B}\cdot\mathbf{\sigma})}{p_{B}^{0}(p_{ B}^{0}+M)}\right)\), see Ref. [93]5. Therefore, part of the ambiguity in the definition of the polarization-magnetization tensor comes from the choice made for the center of the system, which in turn defines the internal angular momentum or spin. Even though the system in the BF is in average at rest, the initial and final momenta are non-zero. Contrary to the center of mass, the position of the center of energy inside a spinning system depends on its momentum [90; 93], see Appendix B. For \(\mathbf{\Delta}\neq\mathbf{0}\) the center of energy is in general shifted relative to the center of mass, but the shift in the initial state is exactly opposite to that in the final state, so that the _average_ position of the center of energy coincides in the BF with that of the center of mass. However, the initial and final shifts affect the appearance of the spatial distributions and imply that \(\mathbf{M}_{B}^{\prime}(\mathbf{r})\neq\mathbf{M}_{B}(\mathbf{r})\).
Footnote 5: Pushing the logic further suggests that the combination \(P^{\prime\prime\mu\nu}=(\sqrt{P^{2}}P^{\mu\nu}+MP^{\prime\mu\nu})/(\sqrt{P^{2} }+M)\) could be interpreted as the polarization-magnetization tensor defined relative to the center of spin.
In Fig. 3, we show the spatial distributions of the \(T\)-type polarization and magnetization in the BF. Except at the origin, the \(T\)-type polarization distribution does not vanish and has the structure of a spherical hedgehog. The \(T\)-type magnetization distribution is indeed similar to but different from the \(A\)-type magnetization distribution shown in Fig. 2.
In the picture based on the \(T\)-type decomposition (18) of the electromagnetic four-current, the total charge distribution
\[J^{0}_{B}(\mathbf{r})=\rho^{\prime}_{c,B}(\mathbf{r})+\rho^{\prime}_{P,B}(\mathbf{r}) \tag{48}\]
Figure 3: Breit frame \(T\)-type polarization and magnetization distributions \(\mathbf{\mathcal{P}}^{\prime}_{B}(\mathbf{r})\) and \(\mathbf{M}^{\prime}_{B}(\mathbf{r})\), see Eq. (46), inside a proton (left panels) or a neutron (right panels) polarized along the \(z\)-direction. The vector plots give the direction and magnitude of the distributions, evaluated in the \(r_{y}=0\) plane using the parametrization for the nucleon electromagnetic form factors given in Ref. [127].
consists in a convection charge distribution driven by the Dirac FF
\[\rho^{\prime}_{c,B}(\mathbf{r})=e\int\frac{\mathrm{d}^{3}\Delta}{(2\pi)^{3}}\,e^{-i \mathbf{\Delta\cdot r}}\,\frac{P_{B}^{0}}{M}\,F_{1}(\mathbf{\Delta}^{2}) \tag{49}\]
and a non-vanishing polarization charge distribution given by
\[\rho^{\prime}_{P,B}(\mathbf{r})=-\mathbf{\nabla}\cdot\mathbf{\mathcal{P}}^{\prime}_{B}=-e \int\frac{\mathrm{d}^{3}\Delta}{(2\pi)^{3}}\,e^{-i\mathbf{\Delta\cdot r}}\,\tau\, \frac{M}{P_{B}^{0}}\,G_{M}(\mathbf{\Delta}^{2}). \tag{50}\]
Both of these contributions are spherically symmetric and are represented in Fig. 4. While the proton charge distribution is dominated by the convection contribution, the neutron charge distribution appears to be globally dominated by the polarization contribution for \(r\lesssim 1.4\) fm and by the convection contribution for \(r\gtrsim 1.4\) fm. We observe in particular a large cancellation between the convection and polarization charge distributions close to the center of the nucleon, suggesting that the \(T\)-type decomposition is not really natural.
Figure 4: Decomposition of the Breit frame charge distribution into \(T\)-type convection and polarization contributions, see Eqs. (49) and (50), inside a proton (left panels) or a neutron (right panels), based on the parametrization for the nucleon electromagnetic form factors given in Ref. [127].
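As a consistency check of the decomposition in Eqs. (48)-(50), one can verify numerically that the convection and polarization pieces add up to the total Breit frame charge distribution of Eq. (37). The sketch below does this at a few radii for a proton-like target described by dipole Sachs form factors; this parametrization is an illustrative assumption and not the one of Ref. [127] used in Fig. 4.

```python
import numpy as np

hbarc, M, mu_p = 0.19733, 0.938272, 2.7928

def sachs(Q2, Lambda2=0.71):
    GD = 1.0 / (1.0 + Q2 / Lambda2) ** 2        # illustrative dipole stand-in
    return GD, mu_p * GD                         # G_E, G_M

Delta = np.linspace(1e-6, 20.0, 20000)           # |Delta| grid in GeV
dD = Delta[1] - Delta[0]
tau = Delta**2 / (4 * M**2)
GE, GM = sachs(Delta**2)
F1 = (GE + tau * GM) / (1 + tau)

def radial_ft(f, r_fm):
    """3D Fourier transform of a spherically symmetric f(|Delta|), in fm^-3."""
    r = r_fm / hbarc
    j0 = np.sin(Delta * r) / (Delta * r)
    return (Delta**2 * j0 * f).sum() * dD / (2 * np.pi**2) / hbarc**3

for r in (0.5, 1.0, 1.5):
    conv = radial_ft(np.sqrt(1 + tau) * F1, r)           # Eq. (49)
    pol = -radial_ft(tau * GM / np.sqrt(1 + tau), r)     # Eq. (50)
    total = radial_ft(GE / np.sqrt(1 + tau), r)          # Eq. (37)
    print(f"r = {r:.1f} fm:  conv + pol = {conv + pol:.4f}  vs  J^0_B = {total:.4f} e/fm^3")
```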
We also find that the \(T\)-type effective magnetic charge distribution
\[\rho^{\prime}_{M,B}(\mathbf{r})=-\mathbf{\nabla}\cdot\mathbf{M}^{\prime}_{B}=\frac{e}{2M}\int \frac{\mathrm{d}^{3}\Delta}{(2\pi)^{3}}\,e^{-i\mathbf{\Delta}\cdot\mathbf{r}}\left(i\bm {\Delta}\cdot\mathbf{\sigma}\right)G_{M}(\mathbf{\Delta}^{2}) \tag{51}\]
differs from the \(A\)-type one (42) by a relativistic kinematical factor \((M/P_{B}^{0})^{2}\), which reflects the difference in the Lorentz boost properties between spin defined relative to the center of mass and spin defined relative to the center of energy [93].
Finally, the \(T\)-type BF EDM and MDM
\[\mathbf{d}^{\prime}_{B} =\int\mathrm{d}^{3}r\,\mathbf{r}\,\rho^{\prime}_{P,B}(\mathbf{r})=\int \mathrm{d}^{3}r\,\mathbf{\mathcal{P}}^{\prime}_{B}(\mathbf{r})=\mathbf{0}, \tag{52}\] \[\mathbf{\mu}^{\prime}_{B} =\int\mathrm{d}^{3}r\,\mathbf{M}^{\prime}_{B}(\mathbf{r})=\int\mathrm{d} ^{3}r\,\mathbf{r}\,\rho^{\prime}_{M,B}(\mathbf{r})=\mathbf{\sigma}\,G_{M}(0)\,\frac{e}{2M},\]
are the same as the \(A\)-type ones, see Eqs. (43) and (44). The reason is that integrating over whole position space amounts to setting \(\mathbf{\Delta}=\mathbf{0}\) in momentum space. As one can see from the on-shell identity [112]
\[\overline{u}(p^{\prime},s^{\prime})\sigma^{\mu\nu}u(p,s)=\overline{u}(p^{ \prime},s^{\prime})\left[\frac{i\Delta^{[\mu}\gamma^{\nu]}}{2M}+\frac{\epsilon ^{\mu\nu\beta\lambda}P_{\beta}\gamma_{\lambda}\gamma_{5}}{M}\right]u(p,s), \tag{53}\]
where we used the shorthand notation \(a^{[\mu}b^{\nu]}\equiv a^{\mu}b^{\nu}-a^{\nu}b^{\mu}\), the difference between \(\widetilde{P}^{\mu\nu}\) and \(\widetilde{P}^{\prime\mu\nu}\) vanishes in the forward limit \(\mathbf{\Delta}\to\mathbf{0}\), and so the \(A\)-type and \(T\)-type polarization-magnetization tensors agree on the integrated quantities but disagree on how these quantities are distributed over space. The results in Eq. (52) should also be expected from the fact that the EDM and MDM can be expressed directly in terms of the electromagnetic four-current, and hence should not depend on how the latter is decomposed into convection and polarization contributions.
In conclusion, even if defining the polarization-magnetization tensor in terms of the tensor Dirac bilinear seems a priori natural, the associated picture turns out to be more complicated than the one based on the axial-vector Dirac bilinear. For this reason, we consider that the \(A\)-type polarization-magnetization tensor gives a more physical picture than the \(T\)-type one.
## V Elastic frame distributions
BF distributions provide our best proxy for picturing a system at rest around the origin. If we are however interested in the internal structure of a moving system, we can use the so-called elastic frame (EF) distributions introduced in Ref. [89]. They are defined as
\[O_{\mathrm{EF}}(\mathbf{b}_{\perp};P_{z})\equiv\int\mathrm{d}r_{z}\,\langle\hat{O} \rangle_{\mathbf{0},\mathbf{P}}^{s^{\prime}s}(r)=\int\frac{\mathrm{d}^{2}\Delta_{\perp }}{(2\pi)^{2}}\,e^{-i\mathbf{\Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\,\frac{\langle p^ {\prime},s^{\prime}|\hat{O}(0)|p,s\rangle}{2P^{0}}\bigg{|}_{\Delta_{z}=0}, \tag{54}\]
where the \(z\)-axis has been chosen for convenience along \(\mathbf{P}=(\mathbf{0}_{\perp},P_{z})\), and \(\mathbf{r}=(\mathbf{b}_{\perp},r_{z})\) is the distance relative to the center of the system, which has been set at the origin \(\mathbf{R}=\mathbf{0}\). Integrating over the longitudinal coordinate amounts to setting the momentum transfer in the longitudinal direction to zero, which in turn implies a vanishing energy transfer \(\Delta^{0}=\mathbf{P}\cdot\mathbf{\Delta}/P^{0}=0\) and hence a time-independent distribution.
At \(P_{z}=0\), the EF distributions coincide with the BF distributions projected onto the transverse plane
\[O_{\text{EF}}(\mathbf{b}_{\perp};0)=\int\mathrm{d}r_{z}\,O_{B}(\mathbf{r}). \tag{55}\]
In the limit \(P_{z}\to\infty\), we obtain the IMF distributions
\[O_{\text{IMF}}(\mathbf{b}_{\perp})\equiv\lim_{P_{z}\to\infty}O_{\text{EF}}(\mathbf{b}_ {\perp};P_{z}) \tag{56}\]
which coincide most of the time with the distributions defined within the light-front (LF) formalism, up to some trivial factors [77, 78, 91, 92, 94, 96, 97]. EF distributions provide therefore a nice and clear interpolation between BF and LF distributions.
To understand how the distributions change with \(P_{z}\), we need to know how matrix elements for different sets of initial and final momenta are related to each other. For the electromagnetic four-current operator, Poincare symmetry implies that [7, 128]
\[\langle p^{\prime},s^{\prime}|\hat{j}^{\mu}(0)|p,s\rangle=\sum_{s^{\prime}_{B},s_{B}}D^{\dagger(j)}_{s^{\prime}s^{\prime}_{B}}(p^{\prime}_{B},\Lambda)D^{(j )}_{s_{B}s}(p_{B},\Lambda)\,\Lambda^{\mu}_{\phantom{\mu}\nu}\,\langle p^{ \prime}_{B},s^{\prime}_{B}|\hat{j}^{\nu}(0)|p_{B},s_{B}\rangle, \tag{57}\]
where \(p^{(\prime)\mu}=\Lambda^{\mu}_{\phantom{\mu}\nu}p^{(\prime)\nu}_{B}\) and \(D^{(j)}\) is the Wigner rotation matrix for spin-\(j\) targets. For the polarization-magnetization tensor, we can write in a similar way
\[(\widetilde{P}^{\mu\nu})_{s^{\prime}s}=\sum_{s^{\prime}_{B},s_{B}}D^{\dagger( j)}_{s^{\prime}s^{\prime}_{B}}(p^{\prime}_{B},\Lambda)D^{(j)}_{s_{B}s}(p_{B}, \Lambda)\,\Lambda^{\mu}_{\phantom{\mu}\alpha}\,\Lambda^{\nu}_{\phantom{\nu} \beta}\,(\widetilde{P}^{\alpha\beta}_{B})_{s^{\prime}_{B}s_{B}}. \tag{58}\]
In the case of a spin-\(\frac{1}{2}\) system in the EF, the Wigner rotation matrix takes the form
\[D^{(1/2)}_{s_{B}s}(p_{B},\Lambda)=D^{\dagger(1/2)}_{s^{\prime}s^{\prime}_{B}} (p^{\prime}_{B},\Lambda)=\begin{pmatrix}\cos\frac{\theta}{2}&-e^{-i\phi_{ \Delta}}\sin\frac{\theta}{2}\\ e^{i\phi_{\Delta}}\sin\frac{\theta}{2}&\cos\frac{\theta}{2}\end{pmatrix}, \tag{59}\]
with \(\mathbf{\Delta}=(Q\cos\phi_{\Delta},Q\sin\phi_{\Delta},0)\), and the Wigner rotation angle \(\theta\) satisfies [96]
\[\cos\theta=\frac{P^{0}+M(1+\tau)}{(P^{0}+M)\sqrt{1+\tau}},\qquad\sin\theta=- \frac{\sqrt{\tau}P_{z}}{(P^{0}+M)\sqrt{1+\tau}}, \tag{60}\]
where the EF energy is given by \(P^{0}=p^{\prime 0}=p^{0}=\sqrt{P_{z}^{2}+M^{2}(1+\tau)}\). When \(P_{z}\neq 0\), the Wigner rotation depends on the momentum transfer \(\mathbf{\Delta}\), and hence distorts the spatial distributions after the Fourier transform [96].
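The size of the Wigner rotation in Eq. (60) is easy to evaluate numerically. The short sketch below computes \(\cos\theta\) and \(\sin\theta\) as functions of \(P_{z}\) at fixed \(Q^{2}\), checks the normalization, and shows that the rotation switches off at \(P_{z}=0\); the chosen kinematics are arbitrary illustrative values.

```python
import numpy as np

M = 0.938272                                     # nucleon mass in GeV

def wigner_angle(Pz, Q2):
    """cos(theta) and sin(theta) of the elastic-frame Wigner rotation, Eq. (60)."""
    tau = Q2 / (4 * M**2)
    P0 = np.sqrt(Pz**2 + M**2 * (1 + tau))
    c = (P0 + M * (1 + tau)) / ((P0 + M) * np.sqrt(1 + tau))
    s = -np.sqrt(tau) * Pz / ((P0 + M) * np.sqrt(1 + tau))
    return c, s

for Pz in (0.0, 0.5, 1.0, 5.0, 100.0):
    c, s = wigner_angle(Pz, Q2=1.0)
    theta = np.degrees(np.arctan2(s, c))
    print(f"Pz = {Pz:6.1f} GeV   theta = {theta:7.2f} deg   "
          f"cos^2+sin^2 - 1 = {c**2 + s**2 - 1:+.1e}")
```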
### Elastic frame polarization and magnetization
Since the BF analysis in the previous section revealed that the \(A\)-type definition of the polarization-magnetization tensor from the physics perspective was more natural than the \(T\)-type one, we will consider only the former in the following. Evaluating Eq. (15) in the generic EF leads to
\[\widetilde{M}_{z,\text{EF}} =e\,\sigma_{z}\,G_{M}(Q^{2}), \tag{61}\] \[\widetilde{\mathbf{M}}_{\perp,\text{EF}} =e\,\gamma\left[\frac{(\mathbf{e}_{z}\times i\mathbf{\Delta})_{\perp}}{| \mathbf{\Delta}_{\perp}|}\left(\frac{(\mathbf{\sigma}\times i\mathbf{\Delta})_{z}}{|\mathbf{ \Delta}_{\perp}|}\,\cos\theta-\sin\theta\right)+\frac{\mathbf{\Delta}_{\perp}(\mathbf{ \Delta}_{\perp}\cdot\mathbf{\sigma}_{\perp})}{\mathbf{\Delta}_{\perp}^{2}\sqrt{1+\tau} }\right]G_{M}(Q^{2}),\] \[\widetilde{\mathbf{\mathcal{P}}}_{\text{EF}} =\mathbf{\beta}\times\widetilde{\mathbf{M}}_{\text{EF}},\]
where \(\gamma=P^{0}/\sqrt{P^{2}}\) and \(\mathbf{\beta}=\mathbf{P}/P^{0}\). We see that the Wigner rotation mixes \((\mathbf{\sigma}_{s^{\prime}s}\times i\mathbf{\Delta})_{z}\) and \(\delta_{s^{\prime}s}\), but leaves \((\sigma_{z})_{s^{\prime}s}\) and \((\mathbf{\Delta}_{\perp}\cdot\mathbf{\sigma}_{s^{\prime}s})\) unchanged6
Footnote 6: This would have been less clear if we had written the transverse magnetization amplitudes as
\[\widetilde{\mathbf{M}}_{\perp,\text{EF}}=e\,\gamma\left[\mathbf{\sigma}_{\perp}\cos \theta-\frac{(\mathbf{e}_{z}\times i\mathbf{\Delta})_{\perp}}{|\mathbf{\Delta}_{\perp}|} \,\sin\theta-\frac{\mathbf{\Delta}_{\perp}(\mathbf{\Delta}_{\perp}\cdot\mathbf{\sigma}_{ \perp})}{4M\sqrt{1+\tau}(P^{0}+M)}\right]G_{M}(Q^{2}).\)
This can be checked explicitly using Eq. (59). Besides the Wigner rotation, we recognize the familiar structure of the Lorentz transformation of a rest-frame MDM (or of a pure magnetic field). Moreover, the expression for the polarization amplitudes is reminiscent of the classical expression for an induced EDM \(\mathbf{d}=\mathbf{v}\times\mathbf{\mu}\). Comparing with the BF amplitudes (38) in the limit \(\Delta_{z}\to 0\), we see that Eq. (61) is fully consistent with the general expectation (58).
Following the general definition (54), the EF polarization and magnetization distributions are given by
\[\mathbf{\mathcal{P}}_{\text{EF}}(\mathbf{b}_{\perp};P_{z}) =\int\frac{\text{d}^{2}\Delta_{\perp}}{(2\pi)^{2}}\,e^{-i\mathbf{ \Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\,\frac{1}{2P^{0}}\,\widetilde{\mathbf{ \mathcal{P}}}_{\text{EF}}(\mathbf{\Delta}_{\perp};P_{z}), \tag{62}\] \[\mathbf{M}_{\text{EF}}(\mathbf{b}_{\perp};P_{z}) =\int\frac{\text{d}^{2}\Delta_{\perp}}{(2\pi)^{2}}\,e^{-i\mathbf{ \Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\,\frac{1}{2P^{0}}\,\widetilde{\mathbf{M}}_{ \text{EF}}(\mathbf{\Delta}_{\perp};P_{z}),\]
which coincide at \(P_{z}=0\) with the projections of the BF polarization and magnetization distributions (39) onto the transverse plane, respectively. The longitudinal components assume a particularly simple form
\[\mathcal{P}_{z,\text{EF}}(\mathbf{b}_{\perp};P_{z}) =0, \tag{63}\] \[M_{z,\text{EF}}(\mathbf{b}_{\perp};P_{z}) =\frac{e}{2M}\,\sigma_{z}\int\frac{\text{d}^{2}\Delta_{\perp}}{(2 \pi)^{2}}\,e^{-i\mathbf{\Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\,\frac{M}{P^{0}}\,G_{ M}(\mathbf{\Delta}_{\perp}^{2}),\]
because they do not mix with other components under a Lorentz boost. Since the polarization distribution vanishes in the BF (39), so does \(\mathcal{P}_{z,\text{EF}}\).
In Fig. 5, we show the EF spatial distributions of transverse polarization and magnetization in the transverse plane from Eq. (62) for a nucleon polarized along the \(x\)-axis and moving with average momentum \(P_{z}=1\) GeV. Note that the vector fields point toward slightly different directions at different positions in the transverse plane as a result of the Wigner rotation, see Appendix C. In addition, the momentum dependence of the axially symmetric longitudinal magnetization distribution inside the longitudinally polarized nucleon is sketched in Fig. 6. As \(P_{z}\) increases, the magnitude of the longitudinal magnetization decreases as a consequence of the relativistic factor \(M/P^{0}\) in Eq. (63).
Figure 5: Elastic frame transverse polarization and magnetization distributions \(\mathbf{\mathcal{P}}_{\perp,\text{EF}}(\mathbf{b}_{\perp};P_{z})\) and \(\mathbf{M}_{\perp,\text{EF}}(\mathbf{b}_{\perp};P_{z})\) in the transverse plane, see Eq. (62), inside a proton (left panels) or a neutron (right panels) polarized along the \(x\)-direction and with momentum \(P_{z}=1~{}\text{GeV}\). Based on the parametrization for the nucleon electromagnetic form factors given in Ref. [127].

A comparison of these results with the EF distributions of the electromagnetic four-current studied in Ref. [96]
\[J_{\rm EF}^{0}(\mathbf{b}_{\perp};P_{z}) =e\int\frac{\mathrm{d}^{2}\Delta_{\perp}}{(2\pi)^{2}}\,e^{-i\mathbf{ \Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\left[\cos\theta+\frac{(\mathbf{\sigma}\times i \mathbf{\Delta})_{z}}{|\mathbf{\Delta}_{\perp}|}\,\sin\theta\right]\frac{G_{E}(\mathbf{ \Delta}_{\perp}^{2})}{\sqrt{1+\tau}} \tag{64}\] \[+e\int\frac{\mathrm{d}^{2}\Delta_{\perp}}{(2\pi)^{2}}\,e^{-i\mathbf{ \Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\,\frac{P_{z}}{P^{0}}\left[-\sin\theta+ \frac{(\mathbf{\sigma}\times i\mathbf{\Delta})_{z}}{|\mathbf{\Delta}_{\perp}|}\,\cos \theta\right]\frac{\sqrt{\tau}\,G_{M}(\mathbf{\Delta}_{\perp}^{2})}{\sqrt{1+\tau}},\] \[J_{z,\rm EF}(\mathbf{b}_{\perp};P_{z}) =e\int\frac{\mathrm{d}^{2}\Delta_{\perp}}{(2\pi)^{2}}\,e^{-i\mathbf{ \Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\,\frac{P_{z}}{P^{0}}\left[\cos\theta+ \frac{(\mathbf{\sigma}\times i\mathbf{\Delta})_{z}}{|\mathbf{\Delta}_{\perp}|}\,\sin \theta\right]\frac{G_{E}(\mathbf{\Delta}_{\perp}^{2})}{\sqrt{1+\tau}}\] \[+e\int\frac{\mathrm{d}^{2}\Delta_{\perp}}{(2\pi)^{2}}\,e^{-i\mathbf{ \Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\left[-\sin\theta+\frac{(\mathbf{\sigma}\times i \mathbf{\Delta})_{z}}{|\mathbf{\Delta}_{\perp}|}\,\cos\theta\right]\frac{\sqrt{\tau}\, G_{M}(\mathbf{\Delta}_{\perp}^{2})}{\sqrt{1+\tau}},\] \[\mathbf{J}_{\perp,\rm EF}(\mathbf{b}_{\perp};P_{z}) =e\,\sigma_{z}\int\frac{\mathrm{d}^{2}\Delta_{\perp}}{(2\pi)^{2}} \,e^{-i\mathbf{\Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\,\frac{(\mathbf{e}_{z}\times i\bm {\Delta})_{\perp}}{2P^{0}}\,G_{M}(\mathbf{\Delta}_{\perp}^{2}),\]
indicates that the EF polarization four-current distributions (given by the \(G_{M}\)-dependent terms) can be expressed as7
Footnote 7: Note that acting with \(\nabla_{z}\) on any 2D EF distribution gives zero.
\[\rho_{P,\rm EF}(\mathbf{b}_{\perp};P_{z}) =-\mathbf{\nabla}\cdot\mathbf{\mathcal{P}}_{\rm EF}(\mathbf{b}_{\perp};P_{z}), \tag{65}\] \[\mathbf{J}_{P,\rm EF}(\mathbf{b}_{\perp};P_{z}) =\mathbf{\nabla}\times\mathbf{M}_{\rm EF}(\mathbf{b}_{\perp};P_{z}).\]
By analogy with the 3D case (42), we can also define a 2D effective magnetic charge distribution as follows
\[\rho_{M,\rm EF}(\mathbf{b}_{\perp};P_{z})\equiv-\mathbf{\nabla}\cdot\mathbf{M}_{\rm EF}( \mathbf{b}_{\perp};P_{z})=\frac{e}{2M}\int\frac{\mathrm{d}^{2}\Delta_{\perp}}{(2 \pi)^{2}}\,e^{-i\mathbf{\Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\,(i\mathbf{\Delta}_{\perp }\cdot\mathbf{\sigma}_{\perp})\,\frac{G_{M}(\mathbf{\Delta}_{\perp}^{2})}{1+\tau}. \tag{66}\]
Figure 6: Elastic frame longitudinal magnetization distribution \(M_{z,\rm EF}(\mathbf{b}_{\perp};P_{z})\), see Eq. (63), inside a longitudinally polarized proton (left panel) or neutron (right panel) for different values of average momentum \(P_{z}\). Based on the parametrization of nucleon electromagnetic form factors given in Ref. [127].
Interestingly, it does not depend on \(P_{z}\) (remember that \(\mathbf{\Delta}_{\perp}\cdot\mathbf{\sigma}\) is invariant under the Wigner rotation) and coincides with the projection of the BF effective magnetic charge distribution (42) onto the transverse plane. In Fig. 7, we show the \(P_{z}\)-independent spatial distribution of the 2D relativistic effective magnetic charge distribution from Eq. (66) inside a transversely polarized nucleon. Likewise, we show in Fig. 8 the \(P_{z}\)-dependent spatial distribution of the 2D relativistic polarization charge distribution from Eq. (65)
\[\rho_{P,\text{EF}}(\mathbf{b}_{\perp};P_{z})=e\int\frac{\mathrm{d}^{2}\Delta_{ \perp}}{(2\pi)^{2}}\,e^{-i\mathbf{\Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\,\frac{P_{z} }{P^{0}}\left[-\sin\theta+\frac{(\mathbf{\sigma}\times i\mathbf{\Delta})_{z}}{|\mathbf{ \Delta}_{\perp}|}\,\cos\theta\right]\frac{\sqrt{\tau}\,G_{M}(\mathbf{\Delta}_{ \perp}^{2})}{\sqrt{1+\tau}} \tag{67}\]
inside a transversely polarized nucleon.
Figure 8: Elastic frame polarization charge distribution \(\rho_{P,\text{EF}}=-\mathbf{\nabla}\cdot\mathbf{\mathcal{P}}_{\text{EF}}\), see Eq. (67), at \(b_{x}=0\) inside a proton (left panel) or a neutron (right panel) polarized along the \(x\)-direction for different values of the average momentum \(P_{z}\). Based on the parametrization for the nucleon electromagnetic form factors given in Ref. [127].
Figure 7: Elastic frame effective magnetic charge distribution \(\rho_{M,\text{EF}}=-\mathbf{\nabla}\cdot\mathbf{M}_{\text{EF}}\), see Eq. (66), at \(b_{y}=0\) inside a proton (left panel) or a neutron (right panel) polarized along the \(x\)-direction. Based on the parametrization for the nucleon electromagnetic form factors given in Ref. [127].
### Elastic frame electric and magnetic dipole moments
The EF MDM is obtained by integrating the EF magnetization distribution over the transverse plane,
\[\mathbf{\mu}_{\rm EF}(P_{z})=\int\mathrm{d}^{2}b_{\perp}\,\mathbf{M}_{\rm EF}(\mathbf{b}_{ \perp};P_{z})=\frac{1}{2E_{P}}\,\widetilde{\mathbf{M}}_{\rm EF}(\mathbf{0}_{\perp};P_{z }), \tag{68}\]
where we remind that \(E_{P}=\sqrt{M^{2}+\mathbf{P}^{2}}\). By analogy with the 3D BF expressions, we can alternatively define the longitudinal EF MDM as
\[\mu_{z,\rm EF}(P_{z})=\int\mathrm{d}^{2}b_{\perp}\,\frac{[\mathbf{b}_{\perp}\times \mathbf{J}_{\rm EF}(\mathbf{b}_{\perp};P_{z})]_{z}}{2}=\sigma_{z}\,\frac{M}{E_{P}}\,G_{ M}(0)\,\frac{e}{2M}, \tag{69}\]
which agrees with the longitudinal component in Eq. (68). A similar expression for the transverse MDM would require a 3D definition of the EF current, which is beyond the scope of the present work. We can however use the 2D effective magnetic charge distribution (66) and alternatively define the transverse EF MDM as
\[\mathbf{\mu}_{\perp,\rm EF}(P_{z})=\int\mathrm{d}^{2}b_{\perp}\,\mathbf{b}_{\perp}\, \rho_{M,\rm EF}(\mathbf{b}_{\perp};P_{z})=\mathbf{\sigma}_{\perp}\,G_{M}(0)\,\frac{e}{ 2M}, \tag{70}\]
which agrees with the transverse components in Eq. (68). A similar expression for the longitudinal MDM would require a 3D definition of the EF effective magnetic charge distribution, which is also beyond the scope of the present work.
From the familiar Lorentz transformation of the magnetic field, one might naively think that a global Lorentz factor \(\gamma_{P}=E_{P}/M\) is missing in the expressions for \(\mathbf{\mu}_{\rm EF}(P_{z})\). It is in fact compensated by the Lorentz contraction factor \(1/\gamma_{P}\) associated with the volume element. We expect that similar expressions should hold for spin-\(j\) targets, namely
\[\begin{split}\mu^{(j)}_{z,\rm EF}(P_{z})&=\Sigma_{z }\,\frac{M}{E_{P}}\,G_{M1}(0)\,\frac{e}{2M},\\ \mathbf{\mu}^{(j)}_{\perp,\rm EF}(P_{z})&=\mathbf{\Sigma}_{ \perp}\,G_{M1}(0)\,\frac{e}{2M},\end{split} \tag{71}\]
where \(G_{M1}(Q^{2})\) is the BF magnetic dipole FF for a spin-\(j\) system [123], and \(\mathbf{\Sigma}_{s^{\prime}s}\) are the generalization of the Pauli matrices to higher spin8.
Footnote 8: The spin matrices for a spin-\(j\) target are generically given by \(\mathbf{S}_{s^{\prime}s}=j\,\mathbf{\Sigma}_{s^{\prime}s}\).
Let us now discuss the (transverse) EF EDM. It is defined as
\[\mathbf{d}_{\perp,\rm EF}(P_{z})=\int\mathrm{d}^{2}b_{\perp}\,\mathbf{b}_{\perp}\,J_{ \rm EF}^{0}(\mathbf{b}_{\perp};P_{z}). \tag{72}\]
For a spin-\(\frac{1}{2}\) target, we find that it is explicitly given by
\[\mathbf{d}_{\perp,\text{EF}}(P_{z})=(\mathbf{e}_{z}\times\mathbf{\sigma})_{\perp}\,\frac{P_{z }}{E_{P}}\left[G_{M}(0)-\frac{E_{P}}{E_{P}+M}\,G_{E}(0)\right]\frac{e}{2M}. \tag{73}\]
This analytic expression agrees with the numerical results for the nucleon obtained in Ref. [77]. The first contribution corresponds to the longitudinal boost of a rest-frame transverse MDM and has the expected form \(\frac{\mathbf{P}}{E_{P}}\times\mathbf{\mu}_{\text{EF}}(0)\). The second contribution comes from the Wigner rotation and can be understood in terms of a sideways shift of the center of spin9, defining the origin of our coordinate system, with respect to the relativistic center of mass in a moving frame [90; 93]. Its magnitude is precisely the relative distance between these two points, see Appendix B, multiplied by the total charge of the system as if the latter were concentrated at the relativistic center of mass. Since the sideways shift is proportional to the spin value, we expect the induced EDM for a spin-\(j\) target to read
Footnote 9: The center of spin is given by the expectation value of the Newton-Wigner operator [115]. It is the point about which the angular momentum coincides with spin in an arbitrary frame.
\[\mathbf{d}_{\perp,\text{EF}}^{(j)}(P_{z})=(\mathbf{e}_{z}\times\mathbf{\Sigma})_{\perp}\, \frac{P_{z}}{E_{P}}\left[G_{M1}(0)-\frac{E_{P}}{E_{P}+M}\,2j\,G_{E0}(0)\right] \frac{e}{2M}, \tag{74}\]
where \(G_{E0}(Q^{2})\) is the BF electric monopole FF for a spin-\(j\) system. This generic expression agrees with the result found for spin-1 targets [94]. In Fig. 9, we show the momentum dependence of the transverse EDM in a transversely polarized nucleon and of the longitudinal MDM in a longitudinally polarized nucleon. The maximum transverse EDM for a proton is reached for \(P_{z}=\frac{M}{2}\sqrt{(k+\sqrt{k^{2}+4k})^{2}-4}\) with \(k=G_{M}^{p}(0)/G_{E}^{p}(0)=1+\kappa_{p}\approx 2.793\).
Figure 9: Transverse electric dipole moment \(d_{y}(P_{z})\), see Eq. (73), inside a nucleon polarized along the \(x\)-axis (left panel) and longitudinal magnetic dipole moment \(\mu_{z}(P_{z})\), see Eq. (69), inside a longitudinally polarized nucleon (right panel), as functions of the nucleon average momentum \(P_{z}\). \(\kappa_{p,n}\equiv G_{M}^{p,n}(0)-G_{E}^{p,n}(0)\) stand for the proton and neutron anomalous magnetic dipole moments. At \(P_{z}\approx 3.22\) GeV, the proton transverse electric dipole moment reaches its maximum value \(d_{y,\text{max}}^{p}\approx 0.203\)\(e\cdot\)fm.
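As a quick numerical cross-check of Eqs. (69) and (73) and of the numbers quoted above, the momentum dependence of \(\mu_{z,\rm EF}\) and \(d_{y,\rm EF}\) can be evaluated with a few lines of Python. Only standard constants are assumed (\(M=0.938~\text{GeV}\), \(G_E^p(0)=1\), \(G_M^p(0)=2.793\), \(\hbar c=0.19733~\text{GeV}\cdot\text{fm}\)).

```python
# Check of the elastic-frame dipole moments of the proton:
#   mu_z,EF(P_z) = (M/E_P) G_M(0) * e/(2M)                           [Eq. (69)]
#   d_y,EF(P_z)  = (P_z/E_P) [G_M(0) - E_P/(E_P+M) G_E(0)] * e/(2M)  [Eq. (73), x-polarized]
# expressed in e*fm via e/(2M) -> hbar*c/(2M).
import numpy as np

hbarc = 0.19733           # GeV*fm
M     = 0.9383            # GeV
GE0, GM0 = 1.0, 2.793     # proton Sachs form factors at Q^2 = 0
muN   = hbarc / (2*M)     # e/(2M) in e*fm

def EP(Pz):   return np.sqrt(M**2 + Pz**2)
def mu_z(Pz): return (M/EP(Pz)) * GM0 * muN
def d_y(Pz):  return (Pz/EP(Pz)) * (GM0 - EP(Pz)/(EP(Pz)+M) * GE0) * muN

Pz = np.linspace(0.0, 20.0, 200001)
i  = np.argmax(d_y(Pz))
k  = GM0 / GE0
Pz_analytic = 0.5*M*np.sqrt((k + np.sqrt(k**2 + 4*k))**2 - 4)
print("numerical maximum :  P_z = %.3f GeV,  d_y = %.3f e*fm" % (Pz[i], d_y(Pz[i])))
print("analytic  maximum :  P_z = %.3f GeV" % Pz_analytic)
print("mu_z at P_z = 0 and 5 GeV : %.3f, %.3f e*fm" % (mu_z(0.0), mu_z(5.0)))
# Both estimates reproduce P_z ~ 3.22 GeV and d_y,max ~ 0.203 e*fm quoted in the text.
```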
## VI Light-front Distributions
For completeness, we finally study the polarization-magnetization distributions within the LF formalism, where LF components are defined as \(x^{\mu}=[x^{+},x^{-},\mathbf{x}_{\perp}]\) with \(x^{\pm}\equiv(x^{0}\pm x^{3})/\sqrt{2}\). As a result, scalar products read \(p\cdot x=p^{+}x^{-}+p^{-}x^{+}-\mathbf{p}_{\perp}\cdot\mathbf{x}_{\perp}\) and the constrained momentum component is then given by \(p^{-}=(\mathbf{p}_{\perp}^{2}+M^{2})/(2p^{+})\).
It is possible to define \(x^{+}\)-independent LF distributions [66; 73; 89] by considering the so-called symmetric LF frame specified by the conditions10\(\mathbf{P}_{\perp}=\mathbf{0}_{\perp}\) and \(\Delta^{+}=0\), which ensure that the LF energy transfer \(\Delta^{-}=(\mathbf{P}_{\perp}\cdot\mathbf{\Delta}_{\perp}-P^{-}\Delta^{+})/P^{+}\) vanishes. Similarly to Eq. (54), the LF distributions are defined as
Footnote 10: One can relax the condition \(\mathbf{P}_{\perp}=\mathbf{0}_{\perp}\) provided that LF distributions are restricted to \(x^{+}=0\), as stressed recently in Refs. [129; 83]. Note however that LF boosts are kinematical operations and so the description at \(\mathbf{P}_{\perp}\neq\mathbf{0}_{\perp}\) can be related in a straightforward way to the description at \(\mathbf{P}_{\perp}=\mathbf{0}_{\perp}\), just like in the non-relativistic theory.
\[O_{\rm LF}(\mathbf{b}_{\perp};P^{+})\equiv\int\frac{\mathrm{d}^{2}\Delta_{\perp}}{ (2\pi)^{2}}\,e^{-i\mathbf{\Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\,\frac{{\rm LF} \langle p^{\prime},\lambda^{\prime}|\hat{O}(0)|p,\lambda\rangle_{\rm LF}}{2P^ {+}}\bigg{|}_{\Delta^{+}=|\mathbf{P}_{\perp}|=0}, \tag{75}\]
where the LF helicity states are related to the canonical spin states via the Melosh rotation \(|p,\lambda\rangle_{\rm LF}=\sum_{s}|p,s\rangle\,\mathcal{M}_{s\lambda}\) with
\[\mathcal{M}_{s\lambda}=\frac{(\sqrt{2}p^{+}+M)\,\delta_{s\lambda}-i(\mathbf{p}_{ \perp}\times\mathbf{\sigma}_{s\lambda})_{z}}{\sqrt{2\sqrt{2}p^{+}(p^{0}+M)}} \tag{76}\]
in the case of a spin-\(\frac{1}{2}\) system [130]. As already discussed in Section III, a key feature of the LF formalism is that the symmetry subgroup associated with the transverse LF plane is Galilean. As a result, LF distributions can in some cases be interpreted as probabilistic densities. The pictures provided by these LF densities cannot however be considered as realistic representations of the system at rest, even when \(P^{-}=P^{+}=M\sqrt{1+\tau}/\sqrt{2}\), because they are distorted by relativistic artefacts caused by the Melosh rotation [92; 96; 94].
### Light-front polarization and magnetization
We have seen in Eq. (10) that polarization and magnetization correspond to the following components of the antisymmetric polarization-magnetization tensor \(P^{\mu\nu}\)
\[\mathcal{P}^{\mu}=P^{0\mu},\qquad M^{\mu}=-\frac{1}{2}\,\epsilon^{\mu\alpha \beta 0}P_{\alpha\beta}. \tag{77}\]
Note that despite what the notation suggests, \(\mathcal{P}^{\mu}\) and \(M^{\mu}\) are not Lorentz four-vectors. In particular, we have by construction \(\mathcal{P}^{0}=M^{0}=0\) in any frame. In the LF formalism, it is
therefore natural to define LF polarization and magnetization components as follows
\[\mathcal{P}^{\mu}_{\rm LF}=P^{+\mu},\qquad M^{\mu}_{\rm LF}=-\frac{1}{2}\,\epsilon ^{\mu\alpha\beta-}P_{\alpha\beta}. \tag{78}\]
More explicitly, we have
\[\mathcal{P}^{+}_{\rm LF}=0,\qquad\mathcal{P}^{i}_{\perp,{\rm LF}}=P^{+i}= \frac{\mathcal{P}^{i}_{\perp}-\epsilon^{ij}_{\perp}M^{j}_{\perp}}{\sqrt{2}}, \qquad\mathcal{P}^{-}_{\rm LF}=P^{+-}=-\mathcal{P}_{z}, \tag{79}\]
and
\[M^{+}_{\rm LF}=-\frac{1}{2}\,\epsilon^{ij}_{\perp}P^{ij}=M_{z},\qquad M^{i}_{ \perp,{\rm LF}}=-\epsilon^{ij}_{\perp}P^{-j}=\frac{M^{i}_{\perp}-\epsilon^{ij} _{\perp}\mathcal{P}^{j}_{\perp}}{\sqrt{2}},\qquad M^{-}_{\rm LF}=0, \tag{80}\]
which is similar to the decomposition of the generalized angular momentum tensor into LF boost and angular momentum operators11[18; 120].
Footnote 11: In the literature, the LF angular momentum operators are unfortunately often defined _without_ the transverse Levi-Civita symbol, missing therefore the axial-vector nature of angular momentum.
For the LF polarization and magnetization amplitudes, the evaluation of Eq. (15) in the symmetric LF frame with LF helicity states gives
\[\widetilde{M}^{+}_{\rm LF} =e\,(\sigma_{z})_{\lambda^{\prime}\lambda}\,G_{M}(Q^{2}), \tag{81}\] \[\widetilde{\mathbf{M}}_{\perp,{\rm LF}} =e\,\frac{P^{-}}{M(1+\tau)}\left[(\mathbf{\sigma}_{\perp})_{\lambda^{ \prime}\lambda}+\delta_{\lambda^{\prime}\lambda}\,\frac{(\mathbf{e}_{z}\times i \mathbf{\Delta})_{\perp}}{2M}\right]G_{M}(Q^{2}),\] \[\widetilde{\mathcal{P}}^{-}_{\rm LF} =0,\] \[\widetilde{\mathcal{P}}^{i}_{\perp,{\rm LF}} =-\frac{P^{+}}{P^{-}}\,\epsilon^{ij}_{\perp}\widetilde{M}^{j}_{ \perp,{\rm LF}}.\]
Similarly to Eq. (62), the LF polarization and magnetization distributions are then obtained by the following 2D Fourier transforms
\[\mathcal{P}^{\mu}_{\rm LF}(\mathbf{b}_{\perp};P^{+}) =\int\frac{\mathrm{d}^{2}\Delta_{\perp}}{(2\pi)^{2}}\,e^{-i\mathbf{ \Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\,\frac{1}{2P^{+}}\,\widetilde{\mathcal{P} }^{\mu}_{\rm LF}(\mathbf{\Delta}_{\perp};P^{+}), \tag{82}\] \[M^{\mu}_{\rm LF}(\mathbf{b}_{\perp};P^{+}) =\int\frac{\mathrm{d}^{2}\Delta_{\perp}}{(2\pi)^{2}}\,e^{-i\mathbf{ \Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\,\frac{1}{2P^{+}}\,\widetilde{M}^{\mu}_{ \rm LF}(\mathbf{\Delta}_{\perp};P^{+}).\]
Based on the expressions in Eq. (81), we observe that the LF polarization distributions do not depend on \(P^{+}\), while the longitudinal (transverse) LF magnetization distribution will be suppressed by one power (two powers) of \(1/P^{+}\). In Fig. 10, we show the 2D LF transverse polarization and (scaled) magnetization distributions in the transverse plane for transversely polarized nucleons. To make the transverse magnetization distributions \(P^{+}\)-independent, a dimensionless factor \((P^{+}/M)^{2}\) has been introduced. While the BF polarization distribution vanishes, the transverse LF polarization distribution is nonzero even for \(P_{z}=0\). This
demonstrates once again that LF distributions provide distorted pictures of the system. A multipole decomposition of these distributions is discussed in Appendix C.
Let us now compare the EF and LF distributions in the IMF. For the polarization distributions, we find that both sets coincide in that limit
\[\lim_{P^{+}\to\infty}\mathcal{P}^{\mu}_{\rm LF}(\mathbf{b}_{\perp};P^{+})=\lim_{P_{ z}\to\infty}\mathcal{P}^{\mu}_{\rm EF}(\mathbf{b}_{\perp};P_{z}). \tag{83}\]
Figure 10: Light-front transverse polarization and (scaled) magnetization distributions \(\mathbf{\mathcal{P}}_{\perp,{\rm LF}}(\mathbf{b}_{\perp};P^{+})\) and \(\frac{(P^{+})^{2}}{M^{2}}\mathbf{M}_{\perp,{\rm LF}}(\mathbf{b}_{\perp};P^{+})\) in the transverse plane, see Eq. (82), inside a proton (left panels) or a neutron (right panels) polarized along the \(x\)-direction. Based on the parametrization for the nucleon electromagnetic form factors given in Ref. [127].

Interestingly, while the longitudinal magnetization distribution vanishes in both cases

\[\lim_{P^{+}\to\infty}M^{+}_{\rm LF}(\mathbf{b}_{\perp};P^{+})=\lim_{P_{z}\to\infty}M_{z,{\rm EF}}(\mathbf{b}_{\perp};P_{z})=0, \tag{84}\]

it turns out that the scaled distributions also coincide
\[\begin{split}\lim_{P^{+}\to\infty}\frac{P^{+}}{M}\,M^{+}_{\rm LF}( \mathbf{b}_{\perp};P^{+})&=\lim_{P_{z}\to\infty}\frac{P_{z}}{M}\,M_{z, \rm EF}(\mathbf{b}_{\perp};P_{z})\\ &=\frac{e}{2M}\,\sigma_{z}\int\frac{\mathrm{d}^{2}\Delta_{\perp} }{(2\pi)^{2}}\,e^{-i\mathbf{\Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\,G_{M}(\mathbf{\Delta }_{\perp}^{2}).\end{split} \tag{85}\]
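The limit in Eq. (85) is also easy to verify numerically. The short sketch below compares \((P_z/M)\,M_{z,\rm EF}(b;P_z)\) at increasing \(P_z\) with the plain 2D Fourier transform of \(G_M\); as before, a dipole ansatz for \(G_M\) is an assumption made for illustration only.

```python
# Numerical illustration of the scaled infinite-momentum limit in Eq. (85):
# (P_z/M) M_z,EF(b;P_z) approaches the plain 2D Fourier transform of G_M as P_z grows.
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

hbarc = 0.19733
M     = 0.9383 / hbarc            # fm^-1
mu_p  = 2.793
L2    = 0.71 / hbarc**2
GM    = lambda Q2: mu_p / (1.0 + Q2/L2)**2   # dipole ansatz (assumption)

def scaled_Mz(b, Pz_GeV):
    Pz = Pz_GeV / hbarc
    f  = lambda Q: Q/(2*np.pi) * j0(Q*b) * (Pz/np.sqrt(M**2 + Q**2/4 + Pz**2)) * GM(Q**2)
    return quad(f, 0.0, 60.0, limit=200)[0]

ft_GM = quad(lambda Q: Q/(2*np.pi) * j0(Q*0.5) * GM(Q**2), 0.0, 60.0, limit=200)[0]
print("2D FT of G_M at b = 0.5 fm       : %.4f" % ft_GM)
for Pz in (1.0, 5.0, 50.0):
    print("(P_z/M) M_z,EF at P_z = %4.0f GeV : %.4f" % (Pz, scaled_Mz(0.5, Pz)))
```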
For the transverse magnetization, we find a relation between the scaled momentum-space amplitudes
\[\begin{split}\lim_{P^{+}\to\infty}\frac{M}{P^{-}}\,\widetilde{\mathbf{ M}}_{\perp,\rm LF}(\mathbf{\Delta}_{\perp};P^{+})&=\lim_{P_{z}\to \infty}\frac{M}{P_{z}}\,\widetilde{\mathbf{M}}_{\perp,\rm EF}(\mathbf{\Delta}_{\perp}; P_{z})\\ &=e\left[\mathbf{\sigma}_{\perp}+\frac{(\mathbf{e}_{z}\times i\mathbf{\Delta} )_{\perp}}{2M}\right]\frac{G_{M}(\mathbf{\Delta}_{\perp}^{2})}{1+\tau}.\end{split} \tag{86}\]
Unfortunately, \(P^{-}\) depends on the momentum transfer and therefore cannot be factored out of the Fourier transform, implying that the above relation does not hold in position space. A similar problem was observed for the \(J^{-}\)-component of the electromagnetic four-current in Ref. [96]. This is of course not too surprising since the longitudinal LF polarization current reads \(J^{-}_{P,\rm LF}=-(\mathbf{\nabla}_{\perp}\times\mathbf{M}_{\perp,\rm LF})_{z}\).
### Light-front electric and magnetic dipole moments
Similarly to Eq. (72), the (transverse) LF EDM is defined as
\[\mathbf{d}_{\perp,\rm LF}(P^{+})=\int\mathrm{d}^{2}b_{\perp}\,\mathbf{b}_{\perp}\,J^{ +}_{\rm LF}(\mathbf{b}_{\perp};P^{+}), \tag{87}\]
and is given for a spin-\(\frac{1}{2}\) target by [131; 66]
\[\mathbf{d}_{\perp,\rm LF}(P^{+})=(\mathbf{e}_{z}\times\mathbf{\sigma})_{\perp}\,F_{2}(0)\, \frac{e}{2M}. \tag{88}\]
This quantity does not depend on \(P^{+}\) and is proportional to the anomalous MDM \(\kappa=F_{2}(0)\).
Since it is well known that objects with MDM in the rest frame display an EDM when viewed from a moving frame [132], LF magnetization distributions were defined in Refs. [133; 73; 134] directly in terms of 2D Fourier transforms of \(F_{2}(Q^{2})\), as suggested by Eq. (88). To explain why in the LF formalism \(F_{2}(Q^{2})\) appears instead of \(G_{M}(Q^{2})\), the authors invoked "relativistic corrections caused by the transverse localization of the wave packet" and referred to [135] for more explanations. In the latter paper, it is argued that Melosh rotations (76) cause a transverse shift12 of the center of \(P^{+}\) (identified with the origin within the LF formalism) relative to the center of mass of the system. The expression in Eq. (88) represents
therefore the EDM defined relative to the center of \(P^{+}\). It coincides with the IMF limit of the EF EDM (73)
\[\mathbf{d}_{\perp,\text{LF}}(P^{+})=\lim_{P_{z}\to\infty}\mathbf{d}_{\perp,\text{EF}}(P_{ z})=(\mathbf{e}_{z}\times\mathbf{\sigma})_{\perp}\left[G_{M}(0)-G_{E}(0)\right]\frac{e}{2M}, \tag{89}\]
since \(G_{M}(0)-G_{E}(0)=F_{2}(0)\). We have seen in Section V.2 that the first term corresponds to the contribution associated with the rest-frame MDM. The second term arises from the sideways shift of the center of spin relative to the center of mass. In the IMF, the center of spin coincides with the center of \(P^{+}\)[90], see Fig. 12 in Appendix B, and we can identify the second term with the shift pointed out in Ref. [135] (equal to one half of the reduced Compton wavelength when the spin-\(\frac{1}{2}\) system is transversely polarized) multiplied by the total charge \(G_{E}(0)\,e\) of the system. Contrary to Refs. [133, 134, 73], we interpret this contribution as a relativistic artifact rather than a "relativistic correction". Genuine LF magnetization distributions should therefore be defined in terms of Fourier transforms of \(G_{M}(Q^{2})\) rather than \(F_{2}(Q^{2})\).
For a spin-\(j\) target, the EF EDM (74) reduces in the IMF limit to
\[\lim_{P_{z}\to\infty}\mathbf{d}^{(j)}_{\perp,\text{EF}}(P_{z})=(\mathbf{e}_{z}\times \mathbf{\Sigma})_{\perp}\left[G_{M1}(0)-2j\,G_{E0}(0)\right]\frac{e}{2M} \tag{90}\]
and coincides with the spin-\(j\) LF EDM derived in Ref. [123]. Interestingly, this EDM vanishes when \(G_{M1}(0)=2j\,G_{E0}(0)\), i.e. when the Landé factor assumes the universal value \(g=2\). The combination \(\kappa\equiv G_{M1}(0)-2j\,G_{E0}(0)\) is then interpreted in general as the anomalous MDM for a spin-\(j\) system. For \(j=\frac{1}{2}\), we recover naturally \(\kappa=F_{2}(0)\).
If we integrate the transverse LF polarization distribution (82) over the impact-parameter space, we find
\[\int\text{d}^{2}b_{\perp}\,\mathbf{\mathcal{P}}_{\perp,\text{LF}}(\mathbf{b}_{\perp}; P^{+})=\frac{1}{2P^{+}}\,\widetilde{\mathbf{\mathcal{P}}}_{\perp,\text{LF}}(\mathbf{0}_{ \perp};P^{+})=(\mathbf{e}_{z}\times\mathbf{\sigma})_{\perp}\,G_{M}(0)\,\frac{e}{2M}. \tag{91}\]
This quantity corresponds to the first term in Eq. (89) since it is simply the LF EDM arising from the polarization part of the LF charge distribution \(J^{+}_{\text{LF}}(\mathbf{b}_{\perp};P^{+})\)[96]
\[\mathbf{d}_{P,\perp,\text{LF}}(P^{+})=\int\text{d}^{2}b_{\perp}\,\mathbf{b}_{\perp}\, \rho_{P,\text{LF}}(\mathbf{b}_{\perp};P^{+})=\int\text{d}^{2}b_{\perp}\,\mathbf{ \mathcal{P}}_{\perp,\text{LF}}(\mathbf{b}_{\perp};P^{+}), \tag{92}\]
where the LF polarization charge distribution \(\rho_{P,\text{LF}}(\mathbf{b}_{\perp};P^{+})\) coincides with the infinite-momentum limit of the corresponding EF polarization charge distribution (67), namely
\[\begin{split}\rho_{P,\text{LF}}(\mathbf{b}_{\perp};P^{+})& =-\mathbf{\nabla}_{\perp}\cdot\mathbf{\mathcal{P}}_{\text{LF}}(\mathbf{b}_{ \perp};P^{+})=\lim_{P_{z}\to\infty}\rho_{P,\text{EF}}(\mathbf{b}_{\perp};P_{z})\\ &=e\int\frac{\text{d}^{2}\Delta_{\perp}}{(2\pi)^{2}}\,e^{-i\mathbf{ \Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\left[\tau+\frac{(\mathbf{\sigma}\!\times\!i \mathbf{\Delta}_{\perp})_{z}}{2M}\right]\frac{G_{M}(\mathbf{\Delta}_{\perp}^{2})}{1+ \tau}.\end{split} \tag{93}\]
The LF magnetization distributions studied in the present work are directly defined in terms of the matrix elements of a polarization-magnetization tensor operator, see Eqs. (78) and (82). These distributions therefore exclude from the beginning any contribution from the convective part of the electromagnetic four-current, and are naturally given by 2D Fourier transforms of \(G_{M}(Q^{2})\) rather than \(F_{2}(Q^{2})\). In particular, longitudinal and transverse LF MDMs are respectively defined as
\[\begin{split}\mu_{z,\mathrm{LF}}(P^{+})&=\frac{1}{ \sqrt{2}}\int\mathrm{d}^{2}b_{\perp}\,M_{\mathrm{LF}}^{+}(\mathbf{b}_{\perp};P^{+} )=\sigma_{z}\,\frac{M}{\sqrt{2}P^{+}}\,G_{M}(0)\,\frac{e}{2M},\\ \mathbf{\mu}_{\perp,\mathrm{LF}}(P^{+})&=\int\mathrm{d}^ {2}b_{\perp}\,\mathbf{M}_{\perp,\mathrm{LF}}(\mathbf{b}_{\perp};P^{+})=\mathbf{\sigma}_{ \perp}\,\frac{M^{2}}{2(P^{+})^{2}}\,G_{M}(0)\,\frac{e}{2M},\end{split} \tag{94}\]
which agree in the rest frame (i.e. when \(P^{+}=M/\sqrt{2}\) with \(\mathbf{\Delta}_{\perp}=\mathbf{0}_{\perp}\) resulting from the integration over the impact-parameter space) with the BF results (44). It may seem a priori surprising that \(\mathbf{\mu}_{\perp,\mathrm{LF}}(\infty)=0\) whereas \(\mathbf{\mu}_{\perp,\mathrm{EF}}(\infty)=\mathbf{\sigma}_{\perp}\,G_{M}(0)\,e/(2M)\). This can however be understood by the fact that \(d_{P,\perp,\mathrm{EF}}^{i}(\infty)=-\epsilon_{\perp}^{ij}\mu_{\perp,\mathrm{ EF}}^{j}(\infty)\), where \(\mathbf{d}_{P,\perp,\mathrm{EF}}(P_{z})=\int\mathrm{d}^{2}b_{\perp}\,\mathbf{\mathcal{P }}_{\mathrm{EF}}(\mathbf{b}_{\perp};P_{z})\) is the polarization part of the transverse EF EDM. Using Eq. (80) we then find that \(\mu_{\perp,\mathrm{LF}}^{i}(\infty)\propto\mu_{\perp,\mathrm{EF}}^{i}(\infty )-\epsilon_{\perp}^{ij}d_{P,\perp,\mathrm{EF}}^{j}(\infty)=0\).
Following the spirit of the LF polarization charge distribution (93), we can likewise define the LF effective magnetic charge distribution via
\[\begin{split}\rho_{M,\mathrm{LF}}(\mathbf{b}_{\perp};P^{+})& =-\mathbf{\nabla}_{\perp}\cdot\mathbf{M}_{\mathrm{LF}}(\mathbf{b}_{\perp};P^{ +})\\ &=\frac{e}{2M}\,\frac{M^{2}}{2(P^{+})^{2}}\int\frac{\mathrm{d}^{2} \Delta_{\perp}}{(2\pi)^{2}}\,e^{-i\mathbf{\Delta}_{\perp}\cdot\mathbf{b}_{\perp}}\,(i \mathbf{\Delta}_{\perp}\cdot\mathbf{\sigma}_{\perp})\,G_{M}(\mathbf{\Delta}_{\perp}^{2}), \end{split} \tag{95}\]
and hence equivalently rewrite the transverse LF MDM as follows
\[\mathbf{\mu}_{\perp,\mathrm{LF}}(P^{+})=\int\mathrm{d}^{2}b_{\perp}\,\mathbf{b}_{\perp }\,\rho_{M,\mathrm{LF}}(\mathbf{b}_{\perp};P^{+}). \tag{96}\]
## VII Summary
In this paper, we extended our study of the relativistic electromagnetic four-current distributions inside a spin-\(\frac{1}{2}\) system and applied the quantum phase-space formalism to the polarization-magnetization tensor operator. In doing so, relativistic polarization and magnetization distributions were for the first time systematically studied in the Breit frame, the elastic frame and on the light-front.
In the literature, the polarization-magnetization tensor is usually motivated by the Gordon decomposition of the electromagnetic four-current and is accordingly defined in terms of the tensor Dirac bilinear. However, we pointed out that a Sachs decomposition of the electromagnetic four-current suggests instead a definition in terms of the axial-vector Dirac bilinear. Axial-vector and tensor Dirac bilinears simply correspond to two natural ways
of describing spin in a relativistic theory, differing by the point about which the internal angular momentum is defined. Through our analysis of the polarization and magnetization distributions in the Breit frame (where the spin structure assumes its simplest form), we observed that the axial-vector description leads to the simplest and physically most natural picture of the polarization and magnetization content of the system.
Relativistic polarization and magnetization distributions are in general frame-dependent. We studied in detail their frame-dependence and compared them in the infinite-momentum frame with the corresponding light-front distributions. We explicitly showed that the genuine light-front magnetization distributions are defined in terms of 2D Fourier transforms of the Sachs magnetic form factor, rather than the Pauli form factor (as suggested earlier in the literature). We explained that the difference results from the transverse shift of the center of light-front momentum relative to the center of mass.
For illustration, we finally applied our results to the case of a nucleon and used the electromagnetic form factors extracted from experimental data. Our analytic expressions and physical interpretations of relativistic polarization and magnetization distributions hold in fact for any physical spin-\(\frac{1}{2}\) targets and can be generalized to higher-spin targets. All that is required from the experimental side is an extraction of the corresponding electromagnetic form factors.
###### Acknowledgements.
Y. C. is grateful to Prof. Qun Wang, Prof. Shi Pu, and the Department of Modern Physics for their very kind hospitality and help during his visit to the University of Science and Technology of China. Y. C. thanks Prof. Qun Wang, Prof. Yang Li, Prof. Dao-Neng Gao and Prof. Ming-Zhe Li for very insightful discussions at the early stage of this work. This work is supported in part by the National Natural Science Foundation of China (NSFC) under Grant Nos. 12135011, 11890713 (a sub-Grant of 11890710), and by the Strategic Priority Research Program of the Chinese Academy of Sciences (CAS) under Grant No. XDB34030102.
## Appendix A Charge radii
In this Appendix, we review the concept of relativistic mean square radii for spatial distributions, apply it to the case of the relativistic charge distribution for a spin-\(\frac{1}{2}\) system, and study in particular the momentum dependence in the 2D case.
### In the 3D Breit frame
The mean square radius of a 3D spatial distribution \(O(\mathbf{r})\) is defined as
\[\langle\mathbf{r}_{O}^{2}\rangle\equiv\frac{\int\mathrm{d}^{3}r\,\mathbf{r}^{2}O(\mathbf{r}) }{\int\mathrm{d}^{3}r\,O(\mathbf{r})}. \tag{104}\]
Applying this definition to the 3D BF charge distribution leads to [60; 6]
\[\langle\mathbf{r}_{\text{ch}}^{2}\rangle=\frac{\int\mathrm{d}^{3}r\,\mathbf{r}^{2}J_{B }^{0}(\mathbf{r})}{\int\mathrm{d}^{3}r\,J_{B}^{0}(\mathbf{r})}=\langle\mathbf{r}_{E}^{2} \rangle+\frac{3}{4M^{2}}, \tag{105}\]
where the first term is the conventional Sachs mean square radius defined as13
Footnote 13: Since \(G_{E}^{n}(0)=0\), the neutron Sachs mean square radius is defined with \(G_{E}^{p}(0)=1\) in the denominator.
\[\langle\mathbf{r}_{E}^{2}\rangle\equiv-\frac{6}{G_{E}(0)}\,\frac{\mathrm{d}G_{E}( Q^{2})}{\mathrm{d}Q^{2}}\bigg{|}_{Q=0}=\frac{1}{G_{E}(0)}\left[-\mathbf{\nabla}_{ \mathbf{\Delta}}^{2}G_{E}(\mathbf{\Delta}^{2})\right]_{\mathbf{\Delta}=\mathbf{0}}, \tag{106}\]
and the second term is known as the Darwin-Foldy term [136; 137; 116]. For purely historical reasons, the Darwin-Foldy term is kept separate in the literature, and so the charge radius of a spin-\(\frac{1}{2}\) system is traditionally defined by \(r_{E}\equiv\sqrt{\langle\mathbf{r}_{E}^{2}\rangle}\)[126; 74]. Similarly, one can consider the mean square radius of the effective magnetic charge distribution, but the result is trivial, viz. \(\int\mathrm{d}^{3}r\,\mathbf{r}^{2}\rho_{M,B}(\mathbf{r})=\int\mathrm{d}^{3}r\,\rho_{ M,B}(\mathbf{r})=0\), because the expression for \(\rho_{M,B}\) in momentum space is odd in \(\mathbf{\Delta}\), see Eq. (42).
If one adopts a \(T\)-type decomposition of the charge density (48), the mean square charge radius can be split as follows
\[\langle\mathbf{r}_{\text{ch}}^{2}\rangle=\langle\mathbf{r}_{\text{ch},c^{\prime}}^{2} \rangle+\langle\mathbf{r}_{\text{ch},P^{\prime}}^{2}\rangle, \tag{107}\]
where the convection and polarization contributions are respectively given by
\[\begin{split}\langle\mathbf{r}_{\text{ch},c^{\prime}}^{2}\rangle& =\langle\mathbf{r}_{D}^{2}\rangle-\frac{3}{4M^{2}},\\ \langle\mathbf{r}_{\text{ch},P^{\prime}}^{2}\rangle&= \frac{3}{2M^{2}}\,\frac{G_{M}(0)}{G_{E}(0)}.\end{split} \tag{108}\]
Beside the Dirac mean square radius
\[\langle\mathbf{r}_{D}^{2}\rangle\equiv-\frac{6}{F_{1}(0)}\,\frac{\mathrm{d}F_{1}( Q^{2})}{\mathrm{d}Q^{2}}\bigg{|}_{Q=0}, \tag{109}\]
we observe in the convection contribution a negative Darwin-Foldy term coming from the factor \(P_{B}^{0}/M\) in Eq. (49), analogous to the positive Darwin-Foldy term in Eq. (105) coming from the factor \(M/P_{B}^{0}\) in Eq. (37). Interestingly, even if the \(T\)-type polarization does not
contribute to the total charge of the system, it does contribute to the charge radius. This is reflected in momentum space by the global factor of \(\tau=Q^{2}/(4M^{2})\) in Eq. (50).
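As a simple numerical illustration for the proton, one can check that the convection and polarization pieces of Eqs. (107)-(108) add up to the full mean square charge radius of Eq. (105). The script below uses the PRad value \(r_E^p=0.831\) fm quoted later in Eq. (116) and \(G_M^p(0)=2.793\); everything else follows from the relations above.

```python
# Cross-check, for the proton, that the convection + polarization split of Eqs. (107)-(108)
# reproduces the full mean square charge radius of Eq. (105), using
# <r_D^2> = <r_E^2> - (3/2M^2) F_2(0)/F_1(0)  [Eq. (115)].
hbarc = 0.19733                  # GeV*fm
M     = 0.9383                   # GeV
lam2  = (hbarc/M)**2             # (reduced Compton wavelength)^2 in fm^2
rE2   = 0.831**2                 # proton Sachs mean square radius in fm^2 (PRad)
GE0, GM0 = 1.0, 2.793
F20   = GM0 - GE0                # anomalous magnetic moment kappa_p

rch2  = rE2 + 0.75*lam2          # Eq. (105)
rD2   = rE2 - 1.5*lam2*F20/1.0   # Eq. (115), with F_1(0) = 1
conv  = rD2 - 0.75*lam2          # convection part, Eq. (108)
polar = 1.5*lam2*GM0/GE0         # polarization part, Eq. (108)

print("Darwin-Foldy term       : %.4f fm^2" % (0.75*lam2))
print("<r_ch^2>  (Eq. 105)     : %.4f fm^2" % rch2)
print("conv + polar (Eq. 107)  : %.4f fm^2" % (conv + polar))
```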
### In the 2D elastic and light-front frames
The mean square transverse radius of a 2D spatial distribution \(O(\mathbf{b}_{\perp})\) is defined similarly to its 3D counterpart (104)
\[\langle\mathbf{b}_{\perp,O}^{2}\rangle\equiv\frac{\int\mathrm{d}^{2}b_{\perp}\, \mathbf{b}_{\perp}^{2}O(\mathbf{b}_{\perp})}{\int\mathrm{d}^{2}b_{\perp}\,O(\mathbf{b}_{ \perp})}. \tag{111}\]
Applying this definition to the 2D EF charge distribution \(J_{\text{EF}}^{0}(\mathbf{b}_{\perp};P_{z})\) in Eq. (64) leads to
\[\begin{split}\langle\mathbf{b}_{\perp,\text{ch}}^{2}\rangle_{\text{ EF}}(P_{z})&=\frac{\int\mathrm{d}^{2}b_{\perp}\,\mathbf{b}_{\perp}^{2}J_{ \text{EF}}^{0}(\mathbf{b}_{\perp};P_{z})}{\int\mathrm{d}^{2}b_{\perp}\,J_{\text{EF} }^{0}(\mathbf{b}_{\perp};P_{z})}\\ &=\frac{2}{3}\,\langle\mathbf{r}_{E}^{2}\rangle+\frac{1}{M^{2}}\left[ \frac{E_{P}}{E_{P}+M}-\frac{E_{P}-M}{E_{P}}\,\frac{G_{M}(0)}{G_{E}(0)}\right] \end{split} \tag{112}\]
with \(E_{P}=\sqrt{P_{z}^{2}+M^{2}}\). In particular, in the BF we have
\[\lim_{P_{z}\to 0}\langle\mathbf{b}_{\perp,\text{ch}}^{2}\rangle_{\text{EF}}(P_{z}) =\frac{2}{3}\,\langle\mathbf{r}_{E}^{2}\rangle+\frac{1}{2M^{2}}, \tag{113}\]
which is consistent with our expectation \(\langle\mathbf{b}_{\perp,\text{ch}}^{2}\rangle_{\text{EF}}(0)=\frac{2}{3}\, \langle\mathbf{r}_{\text{ch}}^{2}\rangle\) for a spherically symmetric BF charge distribution. In the IMF, we find
\[\lim_{P_{z}\to\infty}\langle\mathbf{b}_{\perp,\text{ch}}^{2}\rangle_{\text{EF}}(P _{z})=\frac{2}{3}\,\langle\mathbf{r}_{D}^{2}\rangle, \tag{114}\]
where we used the relation
\[\langle\mathbf{r}_{E}^{2}\rangle=\langle\mathbf{r}_{D}^{2}\rangle+\frac{3}{2M^{2}}\, \frac{F_{2}(0)}{F_{1}(0)} \tag{115}\]
between the Sachs and Dirac mean square radii.
In Fig. 11, we show the EF mean square transverse charge radii \(\langle\mathbf{b}_{\perp,\text{ch}}^{2}\rangle(P_{z})\) of the nucleon as a function of the average momentum \(P_{z}\). The proton and neutron Sachs mean square radii
\[\begin{split}\langle\mathbf{r}_{E}^{2}\rangle^{p}&=(0. 831\pm 0.007_{\text{stat.}}\pm 0.012_{\text{syst.}})^{2}\text{ fm}^{2},\\ \langle\mathbf{r}_{E}^{2}\rangle^{n}&=(-0.1161\pm 0.0022) \text{ fm}^{2},\end{split} \tag{116}\]
are taken from recent measurements by the PRad Collaboration [27; 30] and from the Particle Data Group [138], respectively. Interestingly, we observe that \(\langle\mathbf{b}_{\perp,\text{ch}}^{2}\rangle^{n}(P_{z})\) switches sign from negative to positive around \(P_{z}\approx 1.893\) GeV.
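The momentum dependence in Eq. (112), including the sign change of the neutron radius, can be reproduced with the following short script. For the neutron we follow the convention of footnote 13 and use \(G_E(0)\to 1\) in the denominator, together with the standard value \(G_M^n(0)=\kappa_n\simeq-1.913\); these inputs are assumptions of the sketch, not part of the derivation.

```python
# Sketch of the elastic-frame mean square transverse charge radius, Eq. (112):
#   <b^2>_EF(P_z) = (2/3)<r_E^2> + (1/M^2)[E_P/(E_P+M) - (E_P-M)/E_P * G_M(0)/G_E(0)].
# Neutron inputs follow footnote 13 (G_E(0) -> 1) and kappa_n = -1.913 (assumed).
import numpy as np
from scipy.optimize import brentq

hbarc = 0.19733
nucleons = {  # (mass [GeV], <r_E^2> [fm^2], G_M(0), G_E(0) used in the ratio)
    "proton":  (0.93827,  0.831**2, 2.793, 1.0),
    "neutron": (0.93957, -0.1161,  -1.913, 1.0),
}

def b2_EF(Pz, M, rE2, GM0, GE0):
    E = np.sqrt(M**2 + Pz**2)
    return (2.0/3.0)*rE2 + (hbarc/M)**2 * (E/(E + M) - (E - M)/E * GM0/GE0)

for name, (M, rE2, GM0, GE0) in nucleons.items():
    vals = [b2_EF(Pz, M, rE2, GM0, GE0) for Pz in (0.0, 1.0, 5.0)]
    print(name, " <b^2>_EF at P_z = 0, 1, 5 GeV:", np.round(vals, 4), "fm^2")

M, rE2, GM0, GE0 = nucleons["neutron"]
Pz0 = brentq(lambda Pz: b2_EF(Pz, M, rE2, GM0, GE0), 0.5, 5.0)
print("neutron sign change at P_z = %.3f GeV" % Pz0)   # ~1.89 GeV, cf. the text
```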
Applying now the definition (111) to the 2D LF charge distribution \(J_{\text{LF}}^{+}(\mathbf{b}_{\perp};P^{+})\) leads
to [74]
\[\langle\mathbf{b}_{\perp,\text{ch}}^{2}\rangle_{\text{LF}}(P^{+})=\frac{\int\text{d}^{ 2}b_{\perp}\,\mathbf{b}_{\perp}^{2}J_{\text{LF}}^{+}(\mathbf{b}_{\perp};P^{+})}{\int\text {d}^{2}b_{\perp}\,J_{\text{LF}}^{+}(\mathbf{b}_{\perp};P^{+})}=\frac{2}{3}\,\langle \mathbf{r}_{D}^{2}\rangle=\langle\mathbf{b}_{\perp,\text{ch}}^{2}\rangle_{\text{EF}}( \infty), \tag{113}\]
which is consistent with the fact that \(J_{\text{LF}}^{+}(\mathbf{b}_{\perp};P^{+})=J_{\text{EF}}^{0}(\mathbf{b}_{\perp};\infty)\)[96].
## Appendix B Relativistic centers of the nucleon
In this Appendix, we recall the relations between the positions of the various possible centers of a relativistic spin-\(\frac{1}{2}\) system [90; 93]. For a spin-\(j\) system, it suffices to replace \(\frac{1}{2}\mathbf{S}\) by \(j\mathbf{S}\) in the following expressions.
The position of the center of canonical spin \(\mathbf{R}_{c}\) (the point about which the internal angular momentum takes the same value as in the rest frame) coincides with the average position \(\mathbf{R}\) appearing in the quantum phase-space formalism, namely
\[\mathbf{R}_{c}=\mathbf{R}=\tfrac{1}{2}(\mathbf{r}+\mathbf{r}^{\prime}). \tag{114}\]
Since in the literature one is usually interested only in the internal structure of the target, one often sets \(\mathbf{R}=\mathbf{0}\) for convenience.
The positions of the center of energy (or inertia) \(\mathbf{R}_{E}\) and the center of mass \(\mathbf{R}_{M}\) are respectively given by
\[\begin{split}\mathbf{R}_{E}&=\mathbf{R}+\frac{\mathbf{P}\times \mathbf{S}}{2E_{P}(E_{P}+M)},\\ \mathbf{R}_{M}&=\mathbf{R}-\frac{\mathbf{P}\times\mathbf{S}}{2M(E_{P }+M)},\end{split} \tag{115}\]
where \(\mathbf{S}\) is the unit polarization vector.

Figure 11: Mean-square transverse charge radii \(\langle\mathbf{b}_{\perp,\text{ch}}^{2}\rangle_{\text{EF}}(P_{z})\) of the nucleon in the elastic frame, see Eq. (112), as functions of the nucleon average momentum \(P_{z}\). The proton and neutron Sachs mean square radii \(\langle\mathbf{r}_{E}^{2}\rangle\) are taken from the recent measurements by the PRad Collaboration [27; 30] and the data tables by the Particle Data Group [138], respectively.

For a system at rest (\(\mathbf{P}=\mathbf{0}\)) or longitudinally polarized (\(\mathbf{P}\times\mathbf{S}=\mathbf{0}\)), all these relativistic centers coincide
\[\mathbf{R}_{M}=\mathbf{R}_{E}=\mathbf{R}_{\rm c}=\mathbf{R}. \tag{101}\]
The center of mass is the only one transforming as the spatial part of a Lorentz four-vector, and corresponds therefore to the _true_ center of the system. The shifts
\[\mathbf{R}_{\rm c}-\mathbf{R}_{M} =\frac{\mathbf{P}\times\mathbf{S}}{2M(E_{P}+M)}, \tag{102}\] \[\mathbf{R}_{E}-\mathbf{R}_{M} =\frac{\mathbf{P}\times\mathbf{S}}{2ME_{P}},\]
are a purely relativistic effect. The set of all possible centers of energy forms a disk centered at \(\mathbf{R}_{M}\) and orthogonal to \(\mathbf{S}\), known as Møller's disk [139; 140]. Its radius is equal to half the reduced Compton wavelength
\[R_{\rm Moller}=\frac{1}{2M}, \tag{103}\]
and corresponds to the maximum value of \(|\mathbf{R}_{\rm c,E}-\mathbf{R}_{M}|\) in Eq. (102), reached in the IMF for a purely transverse polarization.
Figure 12: _Left panel_: Illustration of the relative positions in the \(r_{x}=0\) plane of the relativistic centers of mass \(\mathbf{R}_{M}\), energy \(\mathbf{R}_{E}\) and canonical spin \(\mathbf{R}_{\rm c}\) inside a transversely polarized proton viewed in the Breit, elastic and infinite-momentum frames. The light-blue arrows represent the local momentum density. The proton charge radius \(r_{E}^{p}\approx 0.831\,\)fm is taken from recent precision measurements by the PRad Collaboration [27; 30]. The horizontal gray-dashed line corresponds to the maximum shift given by the Møller radius (103). _Right panel_: Momentum dependence of sideways shifts along the \(y\)-axis of the relativistic centers inside a proton. As an example, the vertical dashed red line at \(P_{z}=\sqrt{3}M\approx 1.625\,\)GeV corresponds to the elastic frame case (with Lorentz factor \(\gamma=2\)) in the left panel.

In the LF formalism one identifies the center of the target with the center of \(P^{+}\)[66], whose transverse position is given by [90; 135]
\[\mathbf{R}_{P^{+},\perp}=\mathbf{R}_{M,\perp}+\frac{(\mathbf{e}_{z}\times\mathbf{S})_{\perp}}{2M}. \tag{101}\]
The center of \(P^{+}\) can therefore be identified with the IMF center of energy (or equivalently the IMF center of spin).
The relative positions of the various relativistic centers are illustrated in Fig. 12. The left panel shows a representation of the proton viewed from different Lorentz frames. The right panel shows the momentum dependence of the transverse position of the center of energy, spin and \(P^{+}\) relative to the center of mass. The EF situation represented in the left panel corresponds to the Lorentz factor \(\gamma=2\) (i.e. \(P_{z}=\sqrt{3}M\approx 1.625\) GeV for a proton) and is represented by the vertical dashed line in the right panel.
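For orientation, the sizes of the shifts in Eq. (102) and the Møller radius (103) are easily evaluated numerically for a transversely polarized proton (for which \(|\mathbf{P}\times\mathbf{S}|=P_z\)); the small script below reproduces the scales visible in the right panel of Fig. 12.

```python
# Numerical illustration of the sideways shifts of Eq. (102) and of the Moller radius,
# Eq. (103), for a transversely polarized proton (|P x S| = P_z).
import numpy as np

hbarc = 0.19733        # GeV*fm
M     = 0.9383         # GeV

def shifts_fm(Pz):
    E   = np.sqrt(M**2 + Pz**2)
    dRc = Pz / (2*M*(E + M)) * hbarc    # |R_c - R_M|
    dRE = Pz / (2*M*E)       * hbarc    # |R_E - R_M|
    return dRc, dRE

print("Moller radius 1/(2M) : %.3f fm" % (hbarc/(2*M)))
for Pz in (0.0, np.sqrt(3)*M, 10.0):    # sqrt(3)*M corresponds to gamma = 2 (left panel of Fig. 12)
    dRc, dRE = shifts_fm(Pz)
    print("P_z = %6.3f GeV :  |R_c - R_M| = %.3f fm,  |R_E - R_M| = %.3f fm" % (Pz, dRc, dRE))
# In the infinite-momentum limit both shifts approach the Moller radius ~0.105 fm.
```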
## Appendix C Multipole decomposition of the relativistic polarization and magnetization distributions
In this Appendix, we discuss the multipole decomposition of the relativistic polarization and magnetization distributions in both 3D and 2D cases. Since polarization and magnetization transform as vectors under rotations, their matrix elements for a spin-\(\frac{1}{2}\) system can only consist in monopole, dipole and quadrupole contributions.
### In the 3D Breit frame
In the \(n\)D Euclidean space, the Fourier transform of a quadrupole in \(\mathbf{\Delta}\) can conveniently be expressed as follows
\[\begin{split}\int\frac{\mathrm{d}^{n}\Delta}{(2\pi)^{n}}& \,e^{-i\mathbf{\Delta}\cdot\mathbf{r}}\left(\Delta^{i}\Delta^{j}-\tfrac{1}{n}\, \delta^{ij}\mathbf{\Delta}^{2}\right)f(\mathbf{\Delta}^{2})\\ &=\frac{r^{i}r^{j}-\tfrac{1}{n}\,\delta^{ij}\mathbf{r}^{2}}{r^{2}} \left(\frac{1}{r}\frac{\mathrm{d}}{\mathrm{d}r}-\frac{\mathrm{d}^{2}}{\mathrm{ d}r^{2}}\right)\int\frac{\mathrm{d}^{n}\Delta}{(2\pi)^{n}}\,e^{-i\mathbf{\Delta} \cdot\mathbf{r}}\,f(\mathbf{\Delta}^{2})\end{split} \tag{102}\]
with \(r=|\mathbf{r}|\). In the 3D Euclidean space, we have in particular
\[\int\frac{\mathrm{d}^{3}\Delta}{(2\pi)^{3}}\,e^{-i\mathbf{\Delta}\cdot\mathbf{r}}\left( \Delta^{i}\Delta^{j}-\tfrac{1}{3}\,\delta^{ij}\mathbf{\Delta}^{2}\right)f(\mathbf{ \Delta}^{2})=-\frac{r^{i}r^{j}-\tfrac{1}{3}\,\delta^{ij}\mathbf{r}^{2}}{r^{2}} \int\frac{\mathrm{d}Q}{2\pi^{2}}\,Q^{4}j_{2}(Qr)\,f(Q^{2}), \tag{103}\]
where the \(n\)-th order spherical Bessel function \(j_{n}(x)\) is given by
\[j_{n}(x)=(-1)^{n}x^{n}\left(\frac{1}{x}\frac{\mathrm{d}}{\mathrm{d}x}\right)^ {n}j_{0}(x) \tag{104}\]
with \(j_{0}(x)=\sin x/x\) the zeroth order spherical Bessel function.
It is then straightforward to decompose the BF magnetization distribution in Eq. (39) into two terms \(\mathbf{M}_{B}=\mathbf{M}_{B}^{(M)}+\mathbf{M}_{B}^{(Q)}\), where \(\mathbf{M}_{B}^{(M)}(\mathbf{r})\) corresponds to the monopole contribution
\[\mathbf{M}_{B}^{(M)}(\mathbf{r})=\frac{e}{2M}\,\mathbf{\sigma}\int\frac{\mathrm{d}Q}{2\pi^{ 2}}\,Q^{2}j_{0}(Qr)\,\frac{1}{3}\left[2+\frac{M}{P_{B}^{0}}\right]\frac{M}{P_{ B}^{0}}\,G_{M}(Q^{2}), \tag{104}\]
and \(\mathbf{M}_{B}^{(Q)}(\mathbf{r})\) corresponds to the quadrupole contribution
\[\mathbf{M}_{B}^{(Q)}(\mathbf{r})=\frac{e}{2M}\left[\hat{\mathbf{r}}(\hat{\mathbf{r}}\cdot\mathbf{ \sigma})-\tfrac{1}{3}\,\mathbf{\sigma}\right]\int\frac{\mathrm{d}Q}{2\pi^{2}}\,Q^ {4}j_{2}(Qr)\,\frac{1}{4P_{B}^{0}(P_{B}^{0}+M)}\,\frac{M}{P_{B}^{0}}\,G_{M}(Q ^{2}) \tag{105}\]
with \(\hat{\mathbf{r}}\equiv\mathbf{r}/|\mathbf{r}|\) the unit vector along \(\mathbf{r}\).
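The two radial integrals above are straightforward to evaluate numerically. The sketch below does so with a dipole ansatz for \(G_M\) (an assumption made for illustration; the figures in this work use the parametrization of Ref. [127] instead).

```python
# Numerical sketch of the monopole and quadrupole radial integrals entering the
# Breit-frame decomposition M_B = M_B^(M) + M_B^(Q) above, with a dipole ansatz for G_M.
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

hbarc = 0.19733
M     = 0.9383 / hbarc            # fm^-1
mu_p  = 2.793
L2    = 0.71 / hbarc**2           # fm^-2

GM  = lambda Q2: mu_p / (1.0 + Q2/L2)**2       # assumption, not the fit of Ref. [127]
P0B = lambda Q: np.sqrt(M**2 + Q**2/4)         # Breit-frame energy M*sqrt(1+tau)

def monopole(r):    # radial profile multiplying (e/2M)*sigma, in mu_N/fm^3
    f = lambda Q: Q**2/(2*np.pi**2) * spherical_jn(0, Q*r) * (2 + M/P0B(Q))/3 * M/P0B(Q) * GM(Q**2)
    return quad(f, 0.0, 60.0, limit=300)[0]

def quadrupole(r):  # radial profile multiplying (e/2M)*[rhat(rhat.sigma) - sigma/3]
    f = lambda Q: Q**4/(2*np.pi**2) * spherical_jn(2, Q*r) / (4*P0B(Q)*(P0B(Q)+M)) * M/P0B(Q) * GM(Q**2)
    return quad(f, 0.0, 60.0, limit=300)[0]

for r in (0.2, 0.5, 1.0):
    print("r = %.1f fm :  monopole = %8.4f,  quadrupole = %8.4f  (mu_N/fm^3)" %
          (r, monopole(r), quadrupole(r)))
# The quadrupole piece stays small compared to the monopole one, cf. the discussion of Fig. 13.
```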
In Fig. 13, we show the monopole and quadrupole contributions to the BF magnetization distribution of a proton presented in the upper left panel of Fig. 2. The quadrupole contributions have an interesting structure which we highlighted with streamlines. These contributions are however small, explaining why the BF magnetization distributions presented in the first row of Fig. 2 look essentially like the monopole contribution alone.
Figure 13: Monopole (left panel) and quadrupole (right panel) contributions, see Eqs. (104) and (105), to the Breit frame magnetization distribution inside a proton polarized along the \(z\)-direction in the \(r_{y}=0\) plane. Based on the parametrization for the nucleon electromagnetic form factors given in Ref. [127].
Figure 14: Monopole (upper panels), dipole (middle panels) and quadrupole (lower panels) contributions, see Eqs. (100) and (101), to the elastic frame magnetization (left panels) and polarization (right panels) distributions inside a proton polarized along the \(x\)-direction and with average momentum \(P_{z}=1\) GeV. Based on the parametrization for the nucleon electromagnetic form factors given in Ref. [127].
### In the 2D elastic and light-front frames
In the 2D transverse Euclidean plane, the relation (157) for the Fourier transform of a quadrupole in \(\mathbf{\Delta}\) reduces to
\[\int\frac{\mathrm{d}^{2}\Delta_{\perp}}{(2\pi)^{2}}\,e^{-i\mathbf{\Delta}_{\perp} \cdot\mathbf{b}_{\perp}}\left(\Delta_{\perp}^{i}\Delta_{\perp}^{j}-\tfrac{1}{2}\, \delta_{\perp}^{ij}\mathbf{\Delta}_{\perp}^{2}\right)f(\mathbf{\Delta}_{\perp}^{2})=- \frac{b_{\perp}^{i}b_{\perp}^{j}-\tfrac{1}{2}\,\delta_{\perp}^{ij}\mathbf{b}_{\perp }^{2}}{b^{2}}\int\frac{\mathrm{d}Q}{2\pi}\,Q^{3}J_{2}(Qb)f(Q^{2}), \tag{158}\]
where the \(n\)-th order cylindrical Bessel function \(J_{n}(x)\) is given by
\[J_{n}(x)=(-1)^{n}x^{n}\left(\frac{1}{x}\frac{\mathrm{d}}{\mathrm{d}x}\right)^{ n}J_{0}(x), \tag{159}\]
with \(J_{0}(x)=\tfrac{1}{2\pi}\int_{-\pi}^{\pi}\,\mathrm{d}\theta\,e^{-ix\cos\theta}\) the zeroth order cylindrical Bessel function.
It is then straightforward to decompose the transverse EF magnetization distribution in Eq. (62) into three terms \(\mathbf{M}_{\perp,\mathrm{EF}}=\mathbf{M}_{\perp,\mathrm{EF}}^{(M)}+\mathbf{M}_{\perp, \mathrm{EF}}^{(D)}+\mathbf{M}_{\perp,\mathrm{EF}}^{(Q)}\), where the monopole, dipole and quadrupole contributions are respectively given by
\[\mathbf{M}_{\perp,\mathrm{EF}}^{(M)}(\mathbf{b}_{\perp};P_{z}) =\frac{e}{2M}\,\mathbf{\sigma}_{\perp}\int\frac{\mathrm{d}Q}{2\pi}\, QJ_{0}(Qb)\,\frac{P^{0}+M(1+\tau/2)}{(P^{0}+M)(1+\tau)}\,G_{M}(Q^{2}), \tag{160}\] \[\mathbf{M}_{\perp,\mathrm{EF}}^{(D)}(\mathbf{b}_{\perp};P_{z}) =\frac{e}{2M}\,\frac{P_{z}}{2M}\,(\mathbf{e}_{z}\times\hat{\mathbf{b}}_{ \perp})\int\frac{\mathrm{d}Q}{2\pi}\,Q^{2}J_{1}(Qb)\,\frac{G_{M}(Q^{2})}{(P^{0 }+M)(1+\tau)},\] \[\mathbf{M}_{\perp,\mathrm{EF}}^{(Q)}(\mathbf{b}_{\perp};P_{z}) =\frac{e}{2M}\left[\hat{\mathbf{b}}_{\perp}(\hat{\mathbf{b}}_{\perp}\cdot \mathbf{\sigma}_{\perp})-\tfrac{1}{2}\,\mathbf{\sigma}_{\perp}\right]\int\frac{\mathrm{ d}Q}{2\pi}\,Q^{3}J_{2}(Qb)\,\frac{G_{M}(Q^{2})}{4M(P^{0}+M)(1+\tau)}\]
with \(\hat{\mathbf{b}}_{\perp}\equiv\mathbf{b}_{\perp}/|\mathbf{b}_{\perp}|\) the unit vector along \(\mathbf{b}_{\perp}\). Similarly, the transverse EF polarization distribution can also be decomposed into three terms \(\mathbf{\mathcal{P}}_{\perp,\mathrm{EF}}=\mathbf{\mathcal{P}}_{\perp,\mathrm{EF}}^{( M)}+\mathbf{\mathcal{P}}_{\perp,\mathrm{EF}}^{(D)}+\mathbf{\mathcal{P}}_{\perp,\mathrm{EF}}^{(Q)}\), where the monopole, dipole and quadrupole contributions are respectively given by
\[\mathbf{\mathcal{P}}_{\perp,\mathrm{EF}}^{(M)}(\mathbf{b}_{\perp};P_{z}) =\frac{e}{2M}\,(\mathbf{e}_{z}\times\mathbf{\sigma}_{\perp})\int\frac{ \mathrm{d}Q}{2\pi}\,QJ_{0}(Qb)\,\frac{P_{z}}{P^{0}}\,\frac{P^{0}+M(1+\tau/2)} {(P^{0}+M)(1+\tau)}\,G_{M}(Q^{2}), \tag{161}\] \[\mathbf{\mathcal{P}}_{\perp,\mathrm{EF}}^{(D)}(\mathbf{b}_{\perp};P_{z}) =-\frac{e}{2M}\,\frac{P_{z}}{2M}\,\hat{\mathbf{b}}_{\perp}\int\frac{ \mathrm{d}Q}{2\pi}\,Q^{2}J_{1}(Qb)\,\frac{P_{z}}{P^{0}}\,\frac{G_{M}(Q^{2})}{( P^{0}+M)(1+\tau)},\] \[\mathbf{\mathcal{P}}_{\perp,\mathrm{EF}}^{(Q)}(\mathbf{b}_{\perp};P_{z}) =\frac{e}{2M}\left[(\mathbf{e}_{z}\times\hat{\mathbf{b}}_{\perp})(\hat{\bm {b}}_{\perp}\cdot\mathbf{\sigma}_{\perp})-\tfrac{1}{2}\,(\mathbf{e}_{z}\times\mathbf{ \sigma}_{\perp})\right]\] \[\int\frac{\mathrm{d}Q}{2\pi}\,Q^{3}J_{2}(Qb)\,\frac{P_{z}}{P^{0}} \,\frac{G_{M}(Q^{2})}{4M(P^{0}+M)(1+\tau)}.\]
The expressions in Eqs. (160) and (161) are very similar and follow simply from the relation between the momentum-space amplitudes \(\widetilde{\mathbf{\mathcal{P}}}_{\mathrm{EF}}=\mathbf{\beta}\times\widetilde{\mathbf{M}}_{\mathrm{EF}}\) in Eq. (61), which obviously holds also for the individual multipole contributions. In Fig. 14, we show the multipole decomposition of the transverse EF magnetization and polarization distributions inside a transversely polarized proton with average momentum \(P_{z}=1\,\,\mathrm{GeV}\). As a result of the non-vanishing average momentum, which breaks the \(z\mapsto-z\) symmetry, Wigner rotations generate a dipole contribution on top of the quadrupole contribution. However, discrete spacetime symmetries prevent the appearance of \(\mathbf{\sigma}_{\perp}\) in the dipole contribution, explaining why the latter does not depend on the target polarization.
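For completeness, the three radial integrals of Eq. (160) can be evaluated with the following short script, again with a dipole ansatz for \(G_M\) as an illustrative assumption.

```python
# Numerical sketch of the monopole, dipole and quadrupole radial integrals of Eq. (160),
# i.e. the pieces of the transverse EF magnetization at P_z = 1 GeV.
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

hbarc = 0.19733
M     = 0.9383 / hbarc                 # fm^-1
Pz    = 1.0    / hbarc                 # fm^-1
mu_p  = 2.793
L2    = 0.71 / hbarc**2

GM  = lambda Q2: mu_p / (1.0 + Q2/L2)**2           # assumption, not the fit of Ref. [127]
tau = lambda Q: Q**2 / (4*M**2)
P0  = lambda Q: np.sqrt(M**2 * (1 + tau(Q)) + Pz**2)

def radial(b, n):
    """n = 0, 1, 2 selects the monopole, dipole or quadrupole integrand of Eq. (160)."""
    def f(Q):
        den = (P0(Q) + M) * (1 + tau(Q))
        if   n == 0: w = (P0(Q) + M*(1 + tau(Q)/2)) / den
        elif n == 1: w = (Pz/(2*M)) * Q / den
        else:        w = Q**2 / (4*M*den)
        return Q/(2*np.pi) * jv(n, Q*b) * w * GM(Q**2)
    return quad(f, 0.0, 60.0, limit=300)[0]

for b in (0.3, 0.6, 1.0):
    print("b = %.1f fm :  M = %7.4f,  D = %7.4f,  Q = %7.4f  (mu_N/fm^2)" %
          (b, radial(b, 0), radial(b, 1), radial(b, 2)))
```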
According to the LF expressions in Eqs. (79-81),
\[\mathcal{P}^{+}_{\rm LF}(\mathbf{b}_{\perp};P^{+})=\mathcal{P}^{-}_{\rm LF}(\mathbf{b}_ {\perp};P^{+})=M^{-}_{\rm LF}(\mathbf{b}_{\perp};P^{+})=0. \tag{100}\]
For the transverse LF magnetization distribution in Eq. (82), we apply the same procedure as in the EF and decompose it into two terms \(\mathbf{M}_{\perp,\rm LF}=\mathbf{M}^{(M)}_{\perp,\rm LF}+\mathbf{M}^{(D)}_{\perp,\rm LF}\), where the monopole
Figure 15: Monopole (upper panels) and dipole (lower panels) contributions, see Eqs. (101) and (102), to the (scaled) light-front magnetization (left panels) and polarization (right panels) distributions inside a proton polarized along the \(x\)-direction. Based on the parametrization for the nucleon electromagnetic form factors given in Ref. [127].
and dipole contributions are respectively given by
\[\begin{split}&\mathbf{M}^{(M)}_{\perp,\text{LF}}(\mathbf{b}_{\perp};P^{+})= \frac{e}{2M}\,\frac{M^{2}}{2(P^{+})^{2}}\,\mathbf{\sigma}_{\perp}\int\frac{\text{d} Q}{2\pi}\,QJ_{0}(Qb)\,G_{M}(Q^{2}),\\ &\mathbf{M}^{(D)}_{\perp,\text{LF}}(\mathbf{b}_{\perp};P^{+})=\frac{e}{2M} \,\frac{M^{2}}{2(P^{+})^{2}}\,(\mathbf{e}_{z}\times\hat{\mathbf{b}}_{\perp})\int\frac{ \text{d}Q}{2\pi}\,\frac{Q^{2}}{2M}\,J_{1}(Qb)\,G_{M}(Q^{2}).\end{split} \tag{114}\]
Likewise, the transverse LF polarization distributions from Eq. (82) can be decomposed into two terms \(\mathbf{\mathcal{P}}_{\perp,\text{LF}}=\mathbf{\mathcal{P}}_{\perp,\text{LF}}^{(M)}+ \mathbf{\mathcal{P}}_{\perp,\text{LF}}^{(D)}\), where the monopole and dipole contributions are respectively given by
\[\begin{split}&\mathbf{\mathcal{P}}_{\perp,\text{LF}}^{(M)}(\mathbf{b}_{ \perp};P^{+})=\frac{e}{2M}\,(\mathbf{e}_{z}\times\mathbf{\sigma}_{\perp})\int\frac{ \text{d}Q}{2\pi}\,QJ_{0}(Qb)\,\frac{G_{M}(Q^{2})}{1+\tau},\\ &\mathbf{\mathcal{P}}_{\perp,\text{LF}}^{(D)}(\mathbf{b}_{\perp};P^{+})= -\frac{e}{2M}\,\hat{\mathbf{b}}_{\perp}\int\frac{\text{d}Q}{2\pi}\,\frac{Q^{2}}{2M }\,J_{1}(Qb)\,\frac{G_{M}(Q^{2})}{1+\tau}.\end{split} \tag{115}\]
Like in the EF case, we observe similar structures in the LF magnetization and polarization distributions which follow this time from \(\widetilde{\mathcal{P}}_{\perp,\text{LF}}^{i}=-\frac{P^{+}}{P^{-}}\,\epsilon_ {\perp}^{ij}\widetilde{M}_{\perp,\text{LF}}^{j}\) in Eq. (81). The multipole contributions to the transverse LF polarization distribution in Eq. (115) are \(P^{+}\)-independent and coincide with the IMF limit of the corresponding EF contributions in Eq. (113). In particular, the EF quadrupole contribution vanishes in the IMF in agreement with the absence of LF quadrupole contribution. By contrast, the multipole contributions to the transverse LF magnetization distribution in Eq. (114) differ from the IMF limit of the corresponding EF contributions in Eq. (112) by a factor \(P^{-}/P^{+}\). In Fig. 15, we show the multipole contributions to the transverse LF (scaled) magnetization and polarization distributions inside a transversely polarized proton. As expected, they look similar to the corresponding EF distributions in Fig. 14, albeit with differences in magnitude.
# Quantum Optics as Applied Quantum Electrodynamics is back in town

Philipp Stammer, Maciej Lewenstein

arXiv:2306.07854v1, 2023-06-13, http://arxiv.org/abs/2306.07854v1
###### Abstract
We start this short note by remembering the beginnings of the Warsaw School of Quantum Optics, evidently stimulated by Iwo Bialynicki-Birula at the Warsaw University, and then Centre for Theoretical Physics of Polish Academy of Sciences, and Adam Kujawski and Zofia Bialynicka-Birula at the Institute of Physics of Polish Academy of Sciences. In the theoretical approaches of the Warsaw School Quantum Field Theory was always present, and Quantum Optics was considered to be Applied Quantum Electrodynamics (QED). All of us who grew up in this fantastic community have carried and are still carrying the gospel to others. In particular, now QED began her run on the red carpet of Super Intense Laser Matter Interactions, Attosecond Physics, and Ultrafast Laser Physics, in general. We will elaborate on the recent progress in this direction, and on the open questions towards future investigations. This paper celebrates the 90th birthday of Prof. Iwo Bialynicki-Birula, our QED guru!
## I Introduction
### Memories
On an occasion like this, it is appropriate to start the paper with some personal memories, in this case by M. Lewenstein: One of my best friends, Marek Kus, and I were supposed to do our Diplomas at the Department of Physics of Warsaw University in the academic year 1978-1979. Like many other top theory students our preference was Katedra Metod Matematycznych Fizyki (KMMF), led by Prof. Krzysztof Maurin. I even had a favorite supervisor: Krzysztof Gawedzki. When I asked him about the possibility he told me literally: "Panie Macku, Quantum Field Theory is difficult, and Renormalization Group even harder", and he left Poland starting his Odyssey via Harvard, Princeton, IHES, and ENS Lyon. Still, we wanted to go to KMMF, but the Dean of the Department, Prof. Jerzy Pniewski, issued a rule that there would be no Diplomas in KMMF that year. We had to look for something comparably challenging, and we chose Zaklad Teorii Pola i Fizyki Statystycznej of Prof. Iwo Bialynicki-Birula, the author of the seminal handbook of Quantum Electrodynamics [1]. It was indeed a Mecca of Warsaw statistical physics, with Jarek Piasecki, Lukasz Turski and Bogdan Cichocki, but we were interested in Quantum Field Theory (QFT). And then came two younger and very convincing guys, Kazik Rzazewski and Krzysztof Wodkiewicz, who said: let us do Quantum Optics (QO), which is Applied QED. And we both got seduced.
Indeed, training in QO in Warsaw was heavily biased toward QFT. Master equation approaches were not "allowed"; one used the full Hamiltonian and Heisenberg equations. This taught us very early that there are no Markov processes in Nature: everything must have long-time-tail corrections, and more...
There is another twist to this story related to Strong Laser Field physics. On the desk of Kazik Rzazewski, I found a preprint by Luiz Davidovich that Kazik got when he shared an office with Luiz at the ICTP. I got absolutely fascinated by Keldysh's theory of tunnel ionization, and decided to work on it. In 1979, Pierre Agostini in Saclay published the first results on the so-called Above Threshold Ionization. Zofia Bialynicka-Birula published a seminal paper [2] on the subject in 1984. This is the moment when I decided to join the operation.
The situation of Super Intense Laser Matter Physics is well described in the subsection below. We clearly face a situation in which QED is on the run again. This paper is based on the thesis proposal of Philipp Stammer, a PhD student at ICFO. So, the plan is to present the motivation to bring Quantum Optics as Applied Quantum Electrodynamics back to town, and then various future projects, all related to the QED of Strong Laser Field physics, and thus to the clear heritage of Iwo Bialynicki-Birula.
### Quantum Optics meets Strong Laser Field Physics
For decades, the interaction of intense and short laser pulses with matter has been described successfully with semi-classical methods, in which the quantum nature of the electromagnetic field is not taken into account. The characteristics of the observed features in the spectra for the processes of high harmonic generation (HHG) [3; 4] or above threshold ionization (ATI) [5; 6] were well reproduced within the semi-classical picture. Furthermore, the semi-classical approach to the process of HHG (or even a fully classical one [7]) provides a powerful picture by means of the so-called 3-step model to gain intuition about the electron dynamics. There, (i) an electron tunnel ionizes into the continuum through the barrier formed by the Coulomb potential of the core and the electric field (via dipole coupling), then (ii) the freed electron is driven in the presence of the electric field and can (iii) eventually recombine with the core by emitting the gained energy in terms of radiation. This description has led to fruitful analysis in terms of quantum trajectories [8; 9; 10] within the strong field approximation [11]. The progress of strong-field and attosecond physics based on the semi-classical description has been immense, but neglecting the quantum properties of the field left no language for posing specific questions about the dynamics.
However, including the quantum electrodynamical characteristics of the field can lead to new observations in the radiation field that are inaccessible from the classical perspective, and further allows one to ask questions that were not amenable before, for instance about the quantum state of the field. In fact, recent theoretical and experimental advances have indicated that intense laser-matter interaction can exhibit non-classical features. In particular, quantum optical approaches to the process of high-order harmonic generation asked for the quantum state of the harmonic field modes [12; 13] and studied the back-action on the fundamental driving field [13; 14]. Furthermore, the experimental advances in combining strong field physics with methods known from quantum optics [15; 16] have made it possible to conceive new experiments in which non-classical states of light can be generated from the HHG process [13; 14; 17]. This progress has then triggered subsequent analyses of quantum state engineering of light using intense laser-matter interaction [18; 19; 20]. Nevertheless, and despite using Hilbert space constructs for the electromagnetic field, these investigations have not yet revealed inherent quantum signatures in the emitted radiation from the HHG process itself.
Besides these achievements in the quantum optical description of intense laser driven processes, the full quantum optical properties of the emitted radiation in the process of high harmonic generation have not yet been revealed. The radiation is obtained from classical, dipole-antenna-like sources, and thus exhibits the same characteristics as classical coherent radiation sources. Furthermore, the quantum state of the electromagnetic field is given in terms of product coherent states, which are classical states. Those features originate from the neglected dipole moment correlations in the current theory [13; 18; 19; 21], which, if taken into account, would eventually lead to non-classical contributions in the properties of the emitted harmonic radiation. Thus, further investigation towards accessing this information, with potentially hidden and interesting properties, seems promising for a more detailed understanding of the HHG process and for potential applications in optical technologies. Nevertheless, introducing conditioning measurements on the field after the HHG process leads to the generation of non-classical field states by means of optical Schrodinger cat states with high photon numbers [14; 17; 18; 19]. This suggests the potential applicability of these methods in modern optical quantum technologies, and could provide a new photonic platform for information processing [22; 23]. In particular, since quantum information processing often requires entangled or superposition states as a resource, there is a clear need to generate such states.
The next section provides an introduction to the current quantum optical formulation of the process of high harmonic generation. This sets the stage for introducing the open questions raised by the new formalism, which will then allow us to propose further investigations in this direction. In particular, it highlights the assumptions and approximations used, which are then questioned and analyzed in the proposed future analysis.
### Quantum optical high harmonic generation
In the process of high harmonic generation, coherent radiation at higher order harmonics of the driving laser frequency is generated [4; 24]. The transfer of coherence, and energy, from the intense laser source to the harmonic field modes (initially in the vacuum) is achieved by a highly nonlinear interaction of the driving field with the HHG medium, in which the electron is used as an intermediary between the optical modes. Until recently this was mainly described in semiclassical terms, in which only the electronic degrees of freedom are quantized [4], although there have been early approaches to introduce a fully quantized description of the HHG process [21; 25; 26]. However, recent advances in the quantum optical analysis of HHG have established a new direction in the investigation of strong field physics. This allows one to study the quantum mechanical properties of the harmonic radiation, or to take into account the backaction on the driving field [12; 13; 14; 17; 18; 19; 20; 27]. In particular, it was shown that conditioning procedures on
processes induced by intense laser-matter interaction can lead to the generation of high-photon number controllable non-classical field states in a broad spectral range [13; 14; 17; 18; 19].
What now follows is a brief introduction to the quantum optical description of the process of HHG. We will consider discrete field modes for the sake of simplicity, and would like to refer the reader to the full quantum-electrodynamical description including a continuum of field modes given in [19]. To describe the process of HHG in the single-atom picture (see [21] in which case this is legitimate) we assume that a single active electron is initially in the ground state \(\ket{g}\), and is driven by a strong laser field which is described by a coherent state \(\ket{\alpha}\) in the fundamental driving mode. The harmonic field modes \(q\in\{2,...,N\}\) are initially in the vacuum \(\ket{\{0_{q}\}}=\otimes_{q\geq 2}\ket{0_{q}}\). The interaction Hamiltonian describing the process in the length-gauge, and within the dipole approximation, is given by
\[H_{I}(t)=-\mathbf{d}(t)\cdot\mathbf{E}_{Q}(t), \tag{1}\]
where the electric field operator \(\mathbf{E}_{Q}(t)=-ig\sum_{q=1}^{N}\sqrt{q}\left(b_{q}^{\dagger}e^{iq\omega t} -b_{q}e^{-iq\omega t}\right)\) is coupled to the time-dependent dipole moment operator
\[\mathbf{d}(t)=U_{sc}^{\dagger}(t,t_{0})\mathbf{d}U_{sc}(t,t_{0}). \tag{2}\]
The dipole moment is in the interaction picture of the semi-classical frame \(U_{sc}(t,t_{0})=\mathcal{T}\exp\left[-i\int_{t_{0}}^{t}d\tau H_{sc}(\tau)\right]\), with respect to the Hamiltonian of the electron
\[H_{sc}(t)=H_{A}-\mathbf{d}\cdot\mathbf{E}_{cl}(t). \tag{3}\]
This semi-classical Hamiltonian is the same as traditionally considered in semi-classical HHG theory [4], where \(H_{A}=\mathbf{p}^{2}/2+V(\mathbf{r})\) is the pure electronic Hamiltonian, and
\[\mathbf{E}_{cl}(t)=\mathrm{Tr}[\mathbf{E}_{Q}(t)\ket{\alpha}\!\!\bra{\alpha}] =ig(\alpha e^{-i\omega t}-\alpha^{*}e^{i\omega t}), \tag{4}\]
is the classical part of the driving laser field. A detailed derivation of the interaction Hamiltonian \(H_{I}(t)\) can be found in [19]. It now remains to solve the time-dependent Schrodinger equation (TDSE) for the dynamics of the total system of electron and field. Since we are interested in the quantum optical dynamics of the field, and in particular on the process of HHG, we consider the field evolution conditioned on the electronic ground state (this is because the electron returns to the ground state in the HHG process). We thus project the TDSE on \(\ket{g}\), and it remains to solve
\[i\partial_{t}\ket{\Phi(t)}=-\bra{g}\mathbf{d}(t)\cdot\mathbf{E}_{Q}(t)\ket{ \Psi(t)}, \tag{5}\]
where \(\ket{\Phi(t)}=\langle g|\Psi(t)\rangle\) with the state of the total system \(\ket{\Psi(t)}\). Taking into account that the electron is initially in the ground state, it is equivalent to solve for the operator
\[K_{HHG}=\bra{g}\mathcal{T}\exp\left[i\int_{t_{0}}^{t}dt^{\prime}\mathbf{d}(t ^{\prime})\cdot\mathbf{E}_{Q}(t^{\prime})\right]\ket{g}, \tag{6}\]
which solely acts on the initial field state \(\ket{\Phi_{i}}=\ket{\alpha}\ket{\{0_{q}\}}\). This can be solved exactly when neglecting correlations in the dipole moment of the electron [21; 18], such that we can write
\[K_{HHG}\approx\mathcal{T}\exp\left[i\int_{t_{0}}^{t}dt^{\prime}\bra{g}\mathbf{ d}(t^{\prime})\ket{g}\cdot\mathbf{E}_{Q}(t^{\prime})\right]=\prod_{q=1}^{N}e^{i \varphi_{q}}D(\chi_{q}), \tag{7}\]
where the shift in each mode is given by the respective Fourier component of the time-dependent dipole moment expectation value
\[\chi_{q}=-ig\int_{t_{0}}^{t}dt^{\prime}\,\langle\mathbf{d}(t^{\prime})\rangle\,e^{iq\omega t^{\prime}}. \tag{8}\]
Thus, the solution to (7) is given by a displacement operation acting on the field modes
\[\ket{\Phi}=K_{HHG}\ket{\Phi_{i}}=K_{HHG}\ket{\alpha}\otimes_{q\geq 2}\ket{0_{q}} =\ket{\alpha+\chi_{1}}\otimes_{q\geq 2}\ket{\chi_{q}}. \tag{9}\]
That the harmonic modes are described by coherent states is due to the fact that the source for the coherent radiation is related to the electron dipole moment expectation value \(\bra{\mathbf{d}(t)}=\bra{g}\mathbf{d}(t)\ket{g}\), which acts as a classical charge current
coupled to the field operator. It thus represents only the coherent contribution to the harmonic radiation field, and no genuine quantum signature is found. Furthermore, the fact that the final state is a product coherent state over all modes is a consequence of the approximation of neglecting the dipole moment correlations. Otherwise, going beyond the linear order in \(\mathbf{E}_{Q}(t)\), the field operators for different modes would mix when evaluating the exact propagator in (6) (see section II.3). Nevertheless, a phenomenological approach to take into account the entanglement between the field modes was performed by the authors in [17; 18].
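As a rough numerical illustration of Eqs. (8) and (9), the following sketch (our own toy example; the model dipole and all parameter values are assumptions, not taken from the cited works) integrates a model dipole expectation value against \(e^{iq\omega t}\) to obtain the coherent shifts \(\chi_{q}\) of the harmonic modes:

```python
import numpy as np

# Toy dipole expectation value <d(t)>: oscillation at the drive frequency plus
# a weak third-harmonic component (a real HHG dipole contains many harmonics).
omega = 1.0                      # driving frequency (arbitrary units)
g = 1e-3                         # field coupling constant (arbitrary units)
t = np.linspace(0.0, 40.0 * np.pi / omega, 40000)   # 20 optical cycles
dt = t[1] - t[0]
d_t = np.cos(omega * t) + 0.05 * np.cos(3.0 * omega * t)

def chi(q):
    """Coherent shift of harmonic mode q, cf. Eq. (8): chi_q = -i g * integral of <d(t)> e^{i q w t}."""
    return -1j * g * np.sum(d_t * np.exp(1j * q * omega * t)) * dt

for q in range(1, 6):
    print(f"q = {q}: |chi_q| = {abs(chi(q)):.3e}")
# Only the harmonics actually present in <d(t)> acquire appreciable shifts, so the
# final field state is the product of displaced coherent states of Eq. (9).
```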
However, we can employ conditioning schemes on certain field modes which allow for quantum state engineering of light with non-classical properties [18; 19]. In particular, it was shown experimentally that a conditioning procedure on the process of HHG can lead to coherent state superposition (CSS) states in the driving laser mode (in the IR regime), in close analogy to optical cat states [13; 14]. The experimental configuration is schematically shown in Fig. 1, in which the conditioning on HHG is carried out and a homodyne detection measurement of the fundamental driving field is performed [13; 19]. To formally describe the generation of those optical CSS via a conditioning operation on the HHG state \(\ket{\Phi}=\ket{\alpha+\chi_{1}}\otimes_{q\geq 2}\ket{\chi_{q}}\) from (9), M. Lewenstein recognized that it can be obtained through the projection onto \(P=\openone-\ket{\alpha}\!\!\bra{\alpha}\). This projector was phenomenologically introduced in [13], and leads to the CSS state
\[\ket{\psi}=\ket{\alpha+\chi_{1}}-\langle\alpha|\alpha+\chi_{1}\rangle\ket{\alpha}. \tag{10}\]
It was then shown by P. Stammer in [17; 18] how this projector follows from a projective measurement on the harmonic field modes when further taking into account the correlations between the field modes, who also derived the actual measurement operation \(M_{\alpha}^{X}=\openone-e^{-\sum_{q\geq 2}|\chi_{q}|^{2}}\ket{\alpha}\!\!\bra{\alpha}\), which converges to the projector \(M_{\alpha}^{X}\simeq P=\openone-\ket{\alpha}\!\!\bra{\alpha}\) since \(\sum_{q\geq 2}|\chi_{q}|^{2}\) is of order \(\mathcal{O}(1/N)\). The completeness relation of the associated positive operator-valued measure for the measurement operator was shown in [18] within the framework of the quantum theory of measurement. To reconstruct the quantum state of the coherent state superposition in (10), a homodyne detection measurement is performed (see Fig. 1), and the Wigner function of the state is reconstructed. The Wigner function corresponding to the CSS in (10) is shown in Fig. 2 for two different values of the displacement \(\chi_{1}\). The possibility of experimentally varying the displacement \(\chi_{1}\), for instance by changing the gas density in the HHG interaction region, allows one to change the CSS from an optical "kitten" state for small displacement (a displaced first Fock state) to an optical "cat" state for larger displacement, as shown in Fig. 2 (a) and (b), respectively. This allows control over the non-classical properties of the generated CSS in order to generate high-photon-number optical cat states from the infrared to the extreme ultraviolet regime [13; 17]. We note that the displacement \(\chi_{1}\) cannot be arbitrarily large, since it would destroy the superposition in (10) due to the pre-factor of the second term, which is given by the overlap of the two states in the superposition. However, since \(\alpha\) is the initial amplitude of the coherent state, it corresponds to a very high photon number, and thus the optical cat and kitten states can live far away in phase space while the two states in the superposition are not distinguishable.
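To make Eq. (10) and the Wigner functions of Fig. 2 concrete, a minimal sketch (assuming the QuTiP package is available; the amplitudes are small toy values, far below the experimental photon numbers) that constructs the coherent state superposition and evaluates its Wigner function:

```python
import numpy as np
from qutip import coherent, wigner

N = 60            # Fock-space cutoff (kept small; the experimental |alpha| is far larger)
alpha = 3.0       # initial coherent amplitude of the fundamental mode
chi1 = 1.0        # HHG-induced displacement; try 0.1 for the "kitten" regime of Fig. 2(a)

# Coherent state superposition of Eq. (10): |psi> = |alpha+chi1> - <alpha|alpha+chi1> |alpha>
shifted = coherent(N, alpha + chi1)
ref = coherent(N, alpha)
psi = (shifted - ref.overlap(shifted) * ref).unit()

# Wigner function on a phase-space grid around alpha; negative regions signal non-classicality.
xvec = np.linspace(alpha - 3.0, alpha + 4.0, 201)
W = wigner(psi, xvec, xvec)
print("most negative Wigner value:", W.min())
```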
Figure 1: Schematic illustration of the HHG conditioning experiment performed to generate optical cat states with controllable quantum features. An intense laser field drives the process of HHG, in which an entangled state of the fundamental mode and all harmonics is generated. A conditioning measurement on the harmonic field modes in the quantum spectrometer (QS) leads to a coherent state superposition in the driving field of the form (10), which is measured with a homodyne detection scheme after overlapping with a local oscillator of varying phase delay \(\varphi\). The reconstructed Wigner functions of the homodyne measurement are shown in Fig. 2.
## II Open questions about quantum optics of high harmonic generation
In the previous section we have outlined the current state of the art of our efforts to obtain a quantum optical description of the process of high harmonic generation. However, there we have made assumptions about the experimental boundary conditions and performed approximations by neglecting particular contributions. Those need to be tested. Furthermore, the quantum optical description of the light-matter interaction has not yet revealed any genuinely quantum mechanical feature in the HHG emission process itself. It turned out that the states of the harmonic field modes \(\{q\}\) are described by product coherent states \(\left|\chi_{q}\right\rangle\), which are purely classical. Non-classical signatures, by means of the optical cat state, emerged through the conditioning process. However, we believe that the emitted radiation in the process of HHG contains non-classical signatures once the incoherent contribution from the dipole moment correlations is taken into account, and furthermore, that the field state will be entangled.
In the following we will outline some open questions in the description of the process of high order harmonic generation from a quantum optical point of view, and provide a motivation for why this should be a matter of interest for future investigations.
### On the role of the optical phase in high harmonic generation
To describe the experimental conditions of the HHG experiment, we have assumed that the radiation field which drives the process can be described by a single-mode coherent state \(\left|\alpha\right\rangle\). This would imply that the source emits continuous coherent laser light in a single mode with a well defined phase (coherent in the sense of having non-vanishing off-diagonal density matrix elements in the photon number basis). However, standard HHG experiments are performed using a pulsed source of radiation. On the one hand, this automatically requires a multi-mode description in the frequency domain due to the finite duration of the pulses (they are not just finite, but extremely short, in the femtosecond regime), and we have thus extended the theory to a continuum of modes in Ref. [19]. Furthermore, assuming a pure coherent state description implies that the field has a well defined phase, and would thus require a phase-stabilized laser system, such that the carrier wave and the envelope of the pulse have a fixed phase relation from shot to shot (CEP stabilization [28]). Otherwise, for non-phase-stabilized driving lasers, where the phase varies from shot to shot, one has to average over all possible phases and take into account a proper mixed initial state
\[\rho_{\left|\alpha\right|}=\frac{1}{2\pi}\int_{0}^{2\pi}d\varphi\left|\alpha e ^{i\varphi}\right\rangle\!\!\left\langle\alpha e^{i\varphi}\right|=e^{-\left| \alpha\right|^{2}}\sum_{n}\frac{\left|\alpha\right|^{2n}}{n!}\left|n\!\right \rangle\!\!\left\langle n\right|. \tag{11}\]
In particular, the experiments in Refs. [13; 14], which use the process of HHG to generate optical cat states, do not use CEP-stable driving fields. Analyzing the process of HHG, and the conditioning experiment introduced in [13], without the assumption of a pure coherent initial state leads, within the current quantum optical description, to formal difficulties and to interpretational inconsistencies with the well accepted picture of the HHG process.
The difficulty arising in the formal analysis is that the semi-classical frame from the interaction picture of the Hamiltonian \(H_{I}(t)\) (see section I.3) is not well defined for mixed initial states. Within a fixed semi-classical frame, which is defined via the unitary transformation \(D(\alpha)\), we have seen that HHG effectively leads to a shift in the field
Figure 2: Wigner function of the coherent state superposition in (10) for different displacement of (a) \(\chi_{1}=0.1\) (b) \(\chi_{1}=1.0\), which shows features of an optical ”kitten”-state and a ”cat”-state, respectively.
modes, i.e. \(\rho_{0}\to K_{HHG}\rho_{0}K_{HHG}^{\dagger}\) (see Eq.(9)). However, for the mixed state \(\rho_{|\alpha|}\) there is no well defined semi-classical frame defined from a unique displacement operation \(D(\alpha)\). This can also be seen from the fact that the classical part of the driving field vanishes
\[\mathbf{E}_{cl}(t)=\langle\mathbf{E}_{Q}(t)\rangle=\mathrm{Tr}\big{[}\mathbf{E }_{Q}(t)\rho_{|\alpha|}\big{]}=0, \tag{12}\]
which implies a vanishing mean electric field amplitude. Hence, this conflicts with the traditionally used powerful picture of HHG in terms of the 3-step model introduced in section I.2. In this picture the presence of a non-vanishing electric field amplitude is crucial for describing the tunnel ionization process and the electron dynamics in the continuum driven by the field. The underlying physical property behind the fact that the semi-classical frame is only uniquely defined for a pure coherent initial state \(|\alpha\rangle\) is the phase of the field. A coherent state has a well defined phase, which implies that the semi-classical frame exists via
\[\mathbf{E}_{cl}(t)=\langle\mathbf{E}_{Q}(t)\rangle=\mathrm{Tr}[\mathbf{E}_{Q} (t)\,|\alpha\rangle\!\langle\alpha|]=\langle\alpha|\,\mathbf{E}_{Q}(t)\,| \alpha\rangle\propto\sin(\omega t) \tag{13}\]
and the classical picture of an electric field driving the electron process holds. However, it is now natural to ask whether the process of high harmonic generation requires non-vanishing field amplitudes, as suggested by the 3-step model, and whether harmonics can be generated from driving fields without optical coherence, such as the phase-randomized state in (11), which is diagonal in the photon number basis. Such a state, with vanishing off-diagonal density matrix elements in the photon number basis, does not exhibit optical coherence, and we thus ask if optical coherence in the driving field is a necessary requirement to generate high-order harmonics. For instance, the electric field expectation value of the mixed state (11) vanishes, \(\langle\mathbf{E}_{Q}\rangle=\mathrm{Tr}\big{[}\mathbf{E}_{Q}\rho_{|\alpha|}\big{]}=\mathbf{E}_{cl}=0\), due to the totally arbitrary phase, and thus there is no well defined semi-classical frame. This ultimately leads to the question of whether processes driven by sufficiently large photon number states \(|n\rangle\), which have a completely random phase due to the well defined photon number, allow for the generation of high-order harmonics. Or, even more generally, whether incoherent radiation can be used to drive the parametric process of HHG, as recently observed for spontaneous parametric down-conversion in [29].
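A small numerical check of Eqs. (11) and (12) (a sketch assuming QuTiP, with a discretized phase average standing in for the exact integral), showing that the phase-averaged coherent state is diagonal in the photon-number basis and has a vanishing mean field:

```python
import numpy as np
from qutip import coherent_dm, destroy, expect

N = 40                      # Fock-space cutoff
amp = 2.0                   # fixed |alpha|
phases = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)

# Discretized version of Eq. (11): average |alpha e^{i phi}><alpha e^{i phi}| over phi.
rho = sum(coherent_dm(N, amp * np.exp(1j * phi)) for phi in phases) / len(phases)

a = destroy(N)
E = 1j * (a.dag() - a)      # single-mode field quadrature operator (up to the constant g)
off_diag = rho.full() - np.diag(rho.diag())
print("|<E>|                      :", abs(expect(E, rho)))        # ~ 0, cf. Eq. (12)
print("largest off-diagonal entry :", np.max(np.abs(off_diag)))   # ~ 0: Fock-diagonal state
```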
In many optical experiments the presence of optical coherence is not required to explain the measurement results, and the question of the requirement of optical coherence was first posed in [30]. It is thus natural to ask whether the process of HHG requires optical coherence (in the sense of a non-diagonal density matrix in the photon number basis), or whether an optical field with a vanishing mean electric field amplitude is sufficient to drive the HHG process. If optical coherence is not required, and we can generate high-order harmonics with incoherent light, how do the harmonic radiation properties differ? And furthermore, how can the powerful picture of the 3-step model be understood for driving fields with vanishing mean field amplitude? These questions suggest that there is a need for further theoretical investigation of the role of the optical phase in the HHG process, and further of whether the conditioning experiment in [13] is sensitive to the phase of the field or not. From an experimental perspective, we are eager to observe the reconstruction of the Wigner function for CEP-stabilized driving laser fields. From the theoretical point of view, the first question to answer in order to describe the experimental boundary conditions is: _What is the quantum state of an ultrashort few-cycle (CEP stable) laser pulse?_ One way to approach this question could be to follow arguments similar to [31; 32] or [33], just for pulses of radiation with and without CEP stabilization.
### Theory of quantum optical coherence of high harmonic generation
In the derivation of the field state after the process of HHG we have thus far always neglected the correlations in the dipole moment of the electron, i.e. approximated (6) with (7). Consequently, we only considered a classical charge (by virtue of the dipole moment expectation value) coupled to the field operator, and therefore only the coherent contribution to the harmonic radiation field. This has the advantage of being exactly solvable. However, as is commonly known [34], the incoherent contribution of the emitted radiation can exhibit non-classical signatures and can lead to interesting observations such as photon antibunching [35]. This incoherent contribution originates from the correlations in the dipole moment. In order to access the full properties of the harmonic radiation we should not perform the approximation of neglecting the dipole moment correlations. Including those correlations, one can obtain the complete properties of the light field in the process of HHG, which further allows one to obtain a detailed _theory of quantum optical coherence for the process of high harmonic generation_. Furthermore, including those correlations allows one to ask for the actual quantum state of the field after HHG, going beyond the product coherent states in (9). Taking into account terms beyond linear order in \(\mathbf{E}_{Q}(t)\) would lead to a coupling of different field modes, and thus to entanglement and squeezing.
All the previous analysis was performed in the Schrodinger picture (or, more precisely, in the interaction picture). However, computing the observables of the field, such as the spectra or two-time correlation functions, and eventually
finding non-classical signatures, does not necessarily require knowledge of the field state after the interaction. We therefore switch to the Heisenberg picture, making the field operators time-dependent, which allows us to obtain two-time averages including the dipole moment correlations. We will start with the Hamiltonian of the intense-laser matter interaction (here in 1D for linear polarization)
\[H=\sum_{q}\omega_{q}b_{q}^{\dagger}b_{q}+H_{A}-dE_{Q}, \tag{14}\]
where \(H_{A}\) is the atomic Hamiltonian, and the electric field operator is given by \(E_{Q}=-ig\sum_{q}\sqrt{q}(b_{q}^{\dagger}-b_{q})\). First, we have to transform the field operator into the Heisenberg picture
\[b_{q}(t)=b_{q}e^{-i\omega_{q}t}+\sqrt{q}g\int_{0}^{t}dt^{\prime}d(t^{\prime})e^ {-i\omega_{q}(t-t^{\prime})}. \tag{15}\]
We will first compute the first order correlation function [34]
\[G(t,t+\tau)=\left\langle b_{q}^{\dagger}(t)b_{q}(t+\tau)\right\rangle=qg^{2}e^ {i\omega_{q}\tau}\int_{0}^{t}dt_{1}e^{-i\omega_{q}t_{1}}\int_{0}^{t+\tau}dt_{2 }e^{i\omega_{q}t_{2}}\left\langle g\right|d(t_{1})d(t_{2})\left|g\right\rangle, \tag{16}\]
such that we can use the Wiener-Khinchin theorem [36], stating that the auto-correlation function of a stationary random process and the spectral density of this process are a Fourier-transform pair in the ensemble average, to obtain the power spectrum given by
\[S(\omega)=\frac{1}{\pi}\operatorname{Re}\left[\int_{0}^{\infty}d\tau\lim_{t \rightarrow\infty}\left\langle b_{q}^{\dagger}(t)b_{q}(t+\tau)\right\rangle e ^{i\omega\tau}\right]. \tag{17}\]
It turns out that the power spectral density \(S(\omega)\) consists of two terms, the coherent part, and an incoherent contribution coming from the dipole moment correlations
\[G^{(1)}(t,t+\tau)= G^{(1)}_{coh}(t,t+\tau)+qg^{2}e^{i\omega_{q}\tau}\int_{0}^{t}dt_{ 1}e^{-i\omega_{q}t_{1}}\int_{0}^{t+\tau}dt_{2}e^{i\omega_{q}t_{2}}\int dp \left\langle g\right|d(t_{1})\left|p\right\rangle\left\langle p\right|d(t_{2} )\left|g\right\rangle, \tag{18}\]
where the coherent contribution (first term) comes from the dipole moment expectation value. In the stationary limit this term reads
\[\lim_{t\rightarrow\infty}G^{(1)}_{coh}(t,t+\tau)=g^{2}q|\langle d\rangle\left( \omega_{q}\right)|^{2}e^{-i\omega_{q}\tau}, \tag{19}\]
such that the coherent contribution to the power spectrum is given by
\[S_{coh}(\omega)=g^{2}q|\langle d\rangle\left(\omega_{q}\right)|^{2}\delta( \omega-\omega_{q}). \tag{20}\]
It shows that the HHG spectrum consists of peaks at frequencies \(\omega_{q}=q\omega\) (when properly taking into account the finite duration of the driving pulse, the harmonic peaks will have a finite width), with the weight of each harmonic given by the Fourier transform of the time-dependent dipole moment expectation value, and it remains to compute the incoherent contribution. However, it also needs to be carefully analyzed whether the Wiener-Khinchin theorem (WKT) can be used, since it only holds for a stationary random process in the ensemble average (see the discussion about time-dependent spectra in [37; 38]). One should also analyze whether HHG is an ergodic process, which would then allow the use of the WKT, since for an ergodic process the ensemble and time averages agree and the autocorrelation function in (17) only depends on the temporal difference (stationarity in the ensemble or temporal average is not sufficient for ergodicity). Furthermore, we then want to compute the second order correlation function
\[g^{(2)}(\tau)=\lim_{t\rightarrow\infty}\frac{\left\langle b_{q}^{\dagger}(t)b_ {q}^{\dagger}(t+\tau)b_{q}(t+\tau)b_{q}(t)\right\rangle}{\left\langle b_{q}^{ \dagger}(t)b_{q}(t)\right\rangle\left\langle b_{q}^{\dagger}(t+\tau)b_{q}(t+ \tau)\right\rangle}, \tag{21}\]
since this would provide insights into possible anti-bunching signatures, i.e. \(g^{(2)}(0)<g^{(2)}(\tau)\). However, we imagine that the coherent contribution dominates the incoherent contribution, and one needs to conceive clever experiments to either separate the two processes for individual harmonics or to find the conditions in which the two contributions are on the same order of magnitude. This could eventually be realized with a two-color driving field (\(\omega\) and its second harmonic \(2\omega\)), which leads to the appearance of even harmonics in the spectrum. By varying the phase between the two driving fields, the amplitude of the even harmonics can be altered, such that there might be a regime in which the coherent and incoherent contribution can compete.
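As a rough illustration of the coherent contribution to the spectrum, Eq. (20), the sketch below (a toy example of our own; a real calculation would use \(\langle d(t)\rangle\) obtained within the strong field approximation) Fourier transforms a model dipole expectation value and reads off the harmonic peaks:

```python
import numpy as np

# Model <d(t)> containing the fundamental and two odd harmonics of the drive.
omega = 1.0
t = np.linspace(0.0, 400.0 * np.pi / omega, 2 ** 16)   # 200 optical cycles
dt = t[1] - t[0]
d_t = (np.cos(omega * t)
       + 0.10 * np.cos(3.0 * omega * t)
       + 0.05 * np.cos(5.0 * omega * t))

# Coherent contribution, Eq. (20): S_coh(omega_q) ~ q |<d>(omega_q)|^2.
d_w = np.fft.rfft(d_t) * dt                          # discrete Fourier transform of <d(t)>
freqs = 2.0 * np.pi * np.fft.rfftfreq(t.size, d=dt)  # angular frequencies
S_coh = (freqs / omega) * np.abs(d_w) ** 2

for q in (1, 2, 3, 4, 5):
    idx = np.argmin(np.abs(freqs - q * omega))
    print(f"q = {q}: S_coh = {S_coh[idx]:.3e}")
# Peaks appear only at the harmonics present in <d(t)>; the incoherent part coming
# from the dipole correlations in Eq. (18) is absent in this classical-current picture.
```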
### Entanglement and squeezing in high harmonic generation
Thus far we have found that the field state of the harmonic modes is given by a product of coherent states over all field modes (9). This is a consequence of the approximation performed in (7) (neglecting the dipole moment correlations), which effectively leads to a linear expression in the field operators \(b_{q}^{(\dagger)}\). While the commutator of the exact interaction Hamiltonian \(H_{I}(t)=-d(t)E_{Q}(t)\) at different times is an operator in the total Hilbert space of atom plus field
\[[H_{I}(t_{1}),H_{I}(t_{2})]\in\mathcal{H}_{\mathcal{A}}\otimes\mathcal{H}_{F}, \tag{22}\]
the approximate interaction Hamiltonian \(H_{I}^{app}(t)=-\left\langle d(t)\right\rangle E_{Q}(t)\) has a commutator that is just a complex number, i.e. \([H_{I}^{app}(t_{1}),H_{I}^{app}(t_{2})]\in\mathds{C}\), and thus when solving (7) the modes do not mix. Going beyond the linear term in the field operator \(E_{Q}(t)\) would lead, for instance, to squeezing in the field modes. Furthermore, all field modes will become entangled due to the mixing of the field operators \(b_{q}^{(\dagger)}\) of the different modes. We can thus start to evaluate the commutator of the exact interaction Hamiltonian at different times, yielding
\[[H_{I}(t_{1}),H_{I}(t_{2})]= -g^{2}\sum_{qp}\sqrt{qp}\sum_{ijk}\ket{i}\bra{j}\left[d_{ik}(t_{1})d_{kj}(t_{2})-d_{ik}(t_{2})d_{kj}(t_{1})\right]\left[b_{q}^{\dagger}b_{p}^{\dagger}e^{i\omega_{q}t_{1}}e^{i\omega_{p}t_{2}}-b_{q}^{\dagger}b_{p}e^{-i\omega_{p}t_{2}}e^{i\omega_{q}t_{1}}+\text{h.\,c.}\right]\] \[+g^{2}\sum_{q}q\sum_{ijk}[d_{ik}(t_{1})d_{kj}(t_{2})e^{-i\omega_{q}(t_{1}-t_{2})}-d_{ik}(t_{2})d_{kj}(t_{1})e^{i\omega_{q}(t_{1}-t_{2})}]\ket{i}\bra{j}, \tag{23}\]
where we have used a discrete basis for the atomic degree of freedom, \(\mathds{1}=\sum_{i}\ket{i}\bra{i}\), and introduced the transition dipole matrix elements \(d_{ij}(t)=\bra{i}d(t)\ket{j}\). Note that the approximation of neglecting the dipole moment correlations and taking the expectation value in the electronic ground state leads to \(\sum_{k}d_{gk}(t_{1})d_{kg}(t_{2})\simeq\left\langle d(t_{1})\right\rangle\left\langle d(t_{2})\right\rangle\), and thus the first line in (23) vanishes (which is where the squeezing and mixing of modes would come from), while the second line reduces to what one would get from \([H_{I}^{app}(t_{1}),H_{I}^{app}(t_{2})]\). However, for the exact interaction Hamiltonian \(H_{I}(t)=-d(t)E_{Q}(t)\), we observe that the different field modes mix, which would lead to squeezing and entanglement. One could, for instance, already observe first signatures of such non-classical states due to the higher order terms of \(E_{Q}(t)\) when taking into account terms up to quadratic order in the coupling \(g\propto\sqrt{\omega/V_{eff}}\), with the quantization volume \(V_{eff}\). Thus, when solving (6) by using the Baker-Campbell-Hausdorff formula for infinitesimal time steps, one obtains an approximate solution up to quadratic order in \(g\) when only taking into account \([H_{I}(t_{1}),H_{I}(t_{2})]\propto g^{2}\), and the time-dependent transition dipole matrix elements \(d_{ij}(t)\) can be computed within the strong field approximation [11].
## III Conclusion
Motivated by recent studies on the quantum optical description of the process of high harmonic generation from intense-laser-driven atoms, we have identified current challenges and how these can lead to future investigations. With the proposed studies we anticipate that more complete insights into the process of HHG will be obtained, and that the full characteristics of the radiation field will be found. The current quantum optical framework treats the source of the scattered field as a classical charge current, similar to a dipole antenna, and thus only the coherent contribution is obtained through the dipole moment expectation value. Thus, the radiation properties, as well as the final field state, do not indicate genuine quantum signatures in the HHG process. Only via conditioning experiments, through a post-selection procedure, did we obtain non-classical signatures in the reconstructed Wigner function. It would thus be of great interest to see whether, already at the level of the HHG process itself, without conditioning, non-classical observations can be obtained in the radiation properties of the scattered field. Besides the approaches proposed in this manuscript, there exist further efforts in this direction. For instance, there are the following options to achieve such situations:
* So far we have considered high-order harmonics generated in atomic systems. Alternatively, one can consider HHG from solid state targets. Even in the case of "trivial" solid state systems, such as electrons in the Wannier-Bloch picture [39], one can obtain electron-field entanglement [40], since the electron can undergo a transition at one site in the lattice but might recombine at another site. A similar mechanism, of semiconductors driven by strong coherent radiation, is studied in the recent paper [41], where the potential for generating non-classical light fields is discussed.
* Another option, besides driving HHG in simple uncorrelated solid state targets, is to look for HHG from laser-driven strongly correlated materials, such as high temperature superconductors [42]. For a simple yet pedagogical model of such a mechanism, see [43; 44].
* Finally, one can use non-classical light, for instance squeezed light, to drive the HHG process in atoms, which leaves its fingerprints in field observables such as the HHG spectra [45]. Based on this observable, however, the harmonic radiation again does not display non-classical signatures.
## IV Acknowledgement
ICFO group acknowledges support from: ERC AdG NOQIA; Ministerio de Ciencia y Innovation Agencia Estatal de Investigaciones (PGC2018-097027-B-I00/10.13039/501100011033, CEX2019-000910-S/10.13039/501100011033, Plan National FIDEUA PID2019-106901GB-I00, FPI, QUANTERA MAQS PCI2019-111828-2, QUANTERA DYNAMITE PCI2022-132919, Proyectos de I+D+I "Retos Colaboracion" QUSPIN RTC2019-007196-7); MICIN with funding from European Union NextGenerationEU(PRTR-C17.11) and by Generalitat de Catalunya; Fundacio Cellex; Fundacio Mir-Puig; Generalitat de Catalunya (European Social Fund FEDER and CERCA program, AGAUR Grant No. 2021 SGR 01452, QuantumCAT U16-011424, co-funded by ERDF Operational Program of Catalonia 2014-2020); Barcelona Supercomputing Center MareNostrum (FI-2022-1-0042); EU Horizon 2020 FET-OPEN OPTOlogic (Grant No 899794); EU Horizon Europe Program (Grant Agreement 101080086 -- NeQST), National Science Centre, Poland (Symfonia Grant No. 2016/20/W/ST4/00314); ICFO Internal "QuantumGaudi" project; European Union's Horizon 2020 research and innovation program under the Marie-Sklodowska-Curie grant agreement No 101029393 (STREDCH) and No 847648 ("La Caixa" Junior Leaders fellowships ID100010434: LCF/BQ/PI19/11690013, LCF/BQ/PI20/11760031, LCF/BQ/PR20/11770012, LCF/BQ/PR21/11840013). Views and opinions expressed in this work are, however, those of the author(s) only and do not necessarily reflect those of the European Union, European Climate, Infrastructure and Environment Executive Agency (CINEA), nor any other granting authority. Neither the European Union nor any granting authority can be held responsible for them. P.S. acknowledges funding from The European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 847517.
|
2308.07320
|
PM-Gati Shakti: Advancing India's Energy Future through Demand
Forecasting -- A Case Study
|
PM-Gati-Shakti Initiative, integration of ministries, including railways,
ports, waterways, logistic infrastructure, mass transport, airports, and roads.
Aimed at enhancing connectivity and bolstering the competitiveness of Indian
businesses, the initiative focuses on six pivotal pillars known as
"Connectivity for Productivity": comprehensiveness, prioritization,
optimization, synchronization, analytical, and dynamic. In this study, we
explore the application of these pillars to address the problem of "Maximum
Demand Forecasting in Delhi." Electricity forecasting plays a very significant
role in the power grid as it is required to maintain a balance between supply
and load demand at all times, to provide a quality electricity supply, for
Financial planning, generation reserve, and many more. Forecasting helps not
only in Production Planning but also in Scheduling like Import / Export which
is very often in India and mostly required by the rural areas and North Eastern
Regions of India. As Electrical Forecasting includes many factors which cannot
be detected by the models out there, We use Classical Forecasting Techniques to
extract the seasonal patterns from the daily data of Maximum Demand for the
Union Territory Delhi. This research contributes to the power supply industry
by helping to reduce the occurrence of disasters such as blackouts, power cuts,
and increased tariffs imposed by regulatory commissions. The forecasting
techniques can also help in reducing OD and UD of Power for different regions.
We use the Data provided by a department from the Ministry of Power and use
different forecast models including Seasonal forecasts for daily data.
|
SujayKumar Reddy M, Gopakumar G
|
2023-07-30T09:36:42Z
|
http://arxiv.org/abs/2308.07320v1
|
# PM-Gati Shakti: Advancing India's Energy Future through Demand Forecasting - A Case Study
###### Abstract
The PM-Gati-Shakti Initiative is an integration of ministries, including railways, ports, waterways, logistic infrastructure, mass transport, airports, and roads. Aimed at enhancing connectivity and bolstering the competitiveness of Indian businesses, the initiative focuses on six pivotal pillars known as "Connectivity for Productivity": comprehensiveness, prioritization, optimization, synchronization, analytical, and dynamic. In this study, we explore the application of these pillars to the problem of "Maximum Demand Forecasting in Delhi." Through a detailed case study, we seek to comprehend and formalize the use cases associated with this crucial forecasting task, illuminating the potential and impact of the PM-Gati Shakti scheme in shaping India's energy landscape and driving sustainable growth. Electricity forecasting plays a very significant role in the power grid, as it is required to maintain a balance between supply and load demand at all times, to provide a quality electricity supply, and for financial planning, generation reserve, and more. Forecasting helps not only in production planning but also in scheduling, such as import/export, which occurs very often in India and is mostly required by rural areas and the North Eastern regions of India. As electrical forecasting involves many factors that cannot be captured by existing models, we use classical forecasting techniques to extract the seasonal patterns from the daily data of maximum demand for the Union Territory of Delhi. This research contributes to the power supply industry by helping to reduce the occurrence of disasters such as blackouts, power cuts, and increased tariffs imposed by regulatory commissions. The forecasting techniques can also help in reducing OD and UD of power for different regions. We use the data provided by a department of the Ministry of Power and apply different forecast models, including seasonal forecasts for daily data.
Machine Learning Time-Series Forecasting Demand Forecasting PM Gati-Shakti Ministry of Power Delhi
## 1 Introduction
The Indian energy grid is maintained by two bodies. The first is POWERGRID [18], whose objectives include running the grid efficiently and installing transmission lines. The second is the National Load Dispatch Centre (NLDC) [14], which concentrates on supervision of the Regional Load Dispatch Centres; scheduling and dispatch of electricity over inter-regional links in accordance with grid standards specified by the Authority and the grid code specified by the Central Commission, in coordination with the Regional Load Dispatch Centres; monitoring of the operations and grid security of the National Grid; and so on. This research mainly focuses on the NLDC, which is a division of the Ministry of Power.
Fig 1 [15] shows the yearly installed power capacity in Delhi. The highest installed capacity was 8,346.72 MW, in fiscal year 2016. The grid is responsible for sending energy from stations to sub-stations, on to distribution companies (DISCOMs), and then to homes, industries, commercial establishments, and so on. As of 21-06-2023, the sector-wise installed capacity data [16] give an overview of the types of thermal plants present in Delhi and also the types of energy resources available in Delhi. There are 11 thermal stations in Delhi, with 4 substations at 400 kV and 42 substations at 200 kV [17].
### Delhi Yearly Power Statistics
To further analyse Delhi's power situation, this paper uses the RBI handbook [15], which contains yearly state-wise data, to compare Delhi with all-India figures. Fig 2(a) shows per-capita power consumption, which rises to 1974.4 kWh in Delhi in 2018-2019 and to 1115.3 kWh for all India in 2021-2022. Fig 2(b) shows the availability of power, which rises to 3308 net crore units for Delhi in 2019-2020 and 137402 net crore units for all India in 2021-2022. Fig 2(c) shows the total installed capacity of renewable energy sources (RES), which rises to 245 MW in 2021 for Delhi and 94434 MW in 2021 for all India. Fig 2(d) depicts the power requirement, which peaks at 3309 net crore units for Delhi in 2019-2020 and 137981 net crore units for all India in 2021-2022.
Figure 1: Installed Capacity
### Relation to Demand Forecasting
These variables are correlated with the electricity consumption of Delhi [19]. Consumption by the consumers is reflected in the maximum demand recorded per day at the National Load Dispatch Centre (NLDC). As all these entities are highly correlated, we restrict ourselves to a univariate analysis of the daily maximum demand met, as produced by the Delhi consumers.
Figure 2: Statistics on the Delhi’s Power
### PM-Gati Scheme Initiative
Statistically speaking, many papers have demonstrated that Economic Factors are valuable data for Maximum Demand Forecasting. Although these findings have been successful, we present a baseline model focusing solely on one variable to capture the Auto-Correlations between days, weeks, months, and even years. This approach aims to create baseline performance models due to Seasonal Dependence. To achieve this, we employ various pre-processing techniques and utilize a selection of Machine Learning and Time Series Forecasting models that have been previously applied to create our Baseline model.
The "Integration of Ministries" initiative combines seven drivers: railways, ports, waterways, logistic infrastructure, mass transport, airports, and roads. The primary goal is to enhance connectivity and boost the competitiveness of Indian businesses. This integration is known as "Gati-Shakti," and it rests on six pillars of Connectivity for Productivity.
1. Comprehensiveness
2. Prioritization
3. Optimization
4. Synchronization
5. Analytical
6. Dynamic
Given these pillars, we try to formalize and study the use cases associated with this problem statement, "Maximum Demand Forecasting in Delhi". Here is a case study to illustrate this.
Let us say that the Ministry of Steel [20] creates an initiative to increase domestic production, which would lower dependence on imported steel and result in considerable savings of foreign exchange. While this seems reasonable, there are pros and cons: an obvious pro is less foreign exchange spent, and a con is increased electricity consumption. The data shared from the Ministry of Coal with the Ministry of Power in order to increase the installed capacity or power generation can be captured through Comprehensiveness.
As demand for electricity increases, the Ministry of Power tries to solve this problem; the transparency between ministries that PM-Gati-Shakti provides helps to consolidate the increase in demand. For example, the Ministry of Railways can optimize routes on weekends, and in particular the metro trains in Delhi can be optimized to accommodate this increase in demand, which can be captured through Prioritization and Optimization while maintaining a holistic approach. Note that this is one scenario that can be predicted, but many such scenarios are possible, all of which motivate maximum demand forecasting.
## 2 Related Work
This section consists of a background analysis, i.e., a literature review of the demand forecasting techniques adopted, together with an analysis of recent reports by government ministries. It focuses on the many different attributes that need to be considered, as identified in previous research.
### Deep Learning
Fig 3(a) gives an overview of the deep learning papers, mostly by Indian authors, published in IEEE venues. Anil et al. [1] use an ANN with the Levenberg-Marquardt backpropagation algorithm for day-ahead short-term load forecasting for the state of Uttar Pradesh, trained on hourly data, with an average MAPE of 3.05; this work suggests testing the ANN model on our dataset as well. Navneet et al. [2] use the New Delhi ADEL data to forecast the load using different neural network architectures, of which the ELMANN neural network architecture gives good accuracy. Dharmoju et al. [3] consider the residential buildings sector with a United States dataset, using an LSTM (Long Short Term Memory) model for monthly forecasting. Shaswat et al. [4] use a Temporal Fusion architecture to capture the interactions, with data scaled between 0 and 1, for daily data, achieving 4.15% more than the existing models; this is for the whole of India and is not region specific. Saravanan et al. [5] use economic factors like GDP, national income, consumer price index, etc., applying Principal Component Analysis followed by an ANN, which gives the highest accuracy with a MAPE score of 0.43. Vishnu et al. [6] concentrate on renewable energy resources and devise two major LSTM (Long Short Term Memory) models.
### Machine Learning
Fig 3(b) gives an overview of the machine learning papers and their analysis across all the papers (as ids). Christos et al. [7] also forecast the peak demand on the producers' side of the electrical sector. They use a Netherlands dataset from 3 regions and compare against existing forecasting methods such as ARIMA, ridge, and lasso regression, and the results favour the bi-directional LSTM. Saravanan et al. [8] formalize a set of 64 if-else statements, with variables including per-capita GDP, population, and import/export, and achieve a MAPE of 2.3. Mannish et al. [9] devise an ensemble approach for the DISCOMs in the Delhi region for the post-Covid scenario; their proposed model combines the XGBoost, LightGBM, and CatBoost algorithms and achieves an average MAPE of 5.0. Banga et al. [10] compare many machine learning algorithms on a dataset with 29 attributes, and the Facebook Prophet model outscores the others on the daily and hourly datasets with MAPE scores of 0.4 and 0.2.
### Regression Based Learning
Fig 3(c) gives an overview of the regression or auto-regression papers compared with all the papers (as ids). Note that these papers are based entirely on univariate (single-variable) datasets. Carlos et al. [11] analyze a time series dataset for Brazilian electricity demand forecasting, divide Brazil into 2 regions, and forecast the electricity demand accordingly using ARIMA models. Kakoli et al. [12] forecast the electricity demand for the state of Assam in the Northeast region, and the results suggest using a seasonal ARIMA model, SARIMA(0,1,1)(0,1,1,7), with a MAPE of 10.7. Srinivasa et al. [13] provide a forecasting method formulated monthly for the whole of India without considering the states and regions. It has been found that the MSARIMA model outperforms CEA forecasts in both in-sample static and out-of-sample dynamic forecast horizons in all five regional grids in India.
Figure 3: Literature Survey Framework Analysis
### Summary of Literature Survey
## 3 Methodology
### Data Overview
The features of the data are listed below:
1. Date (DD/MM/YYYY)
2. Max.Demand met during the day (MW)
3. Shortage during maximum Demand (MW)
4. Energy Met (MU)
5. Drawal Schedule (MU)
6. OD(+)/UD(-) (MU)
7. Max OD (MW)
8. Energy Shortage (MU)
The Energy Shortage (MU) feature is not available for every day; it has been recorded from 2017-05-09 onwards, as per the PSP reports generated by POSOCO. The data are available at [21]. To keep the analysis simple, this paper focuses on univariate analysis, considering Max. Demand met during the day (MW) as a single column.
The column "Max. Demand met during the day" is the major feature that we consider. The data span 2013-04-01 to 2023-05-31, which corresponds to 3713 days, but only 3640 data points are available; to handle the missing data we use imputation-based datasets and a non-imputation dataset (no_null).
For imputation, this paper considers mean, median, mode, and linear interpolation imputation. Combined with the non-imputed dataset, this generates 5 datasets, on which the models are applied in order to compare which imputation performs best; a construction sketch is given below.
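A minimal pandas sketch of how these five datasets could be constructed; the file name and the exact column label are placeholders (assumptions) standing in for the actual POSOCO/PSP export:

```python
import pandas as pd

# Placeholder file for the daily PSP reports; replace with the actual export.
df = pd.read_csv("delhi_psp_daily.csv", parse_dates=["Date"], index_col="Date")
demand = df["Max.Demand met during the day (MW)"]

datasets = {
    "dropna": demand.dropna(),                        # non-imputation dataset (no_null)
    "mean": demand.fillna(demand.mean()),             # mean imputation
    "median": demand.fillna(demand.median()),         # median imputation
    "mode": demand.fillna(demand.mode().iloc[0]),     # mode imputation
    "linear": demand.interpolate(method="linear"),    # linear-interpolation imputation
}

for name, series in datasets.items():
    print(f"{name:>7}: {len(series)} points, {int(series.isna().sum())} still missing")
```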
Table 1 lists the datasets created from the univariate data taken from one of the features of the dataset ("Max. Demand met during the day (MW)"). As explained in the Methodology section, each dataset is divided into train and test sets to compute the MAPE score. For ARIMA models we generate both train MAPE and test MAPE to check for overfitting, together with AIC, BIC, log(p), etc.
## 4 Forecasting Models
This paper develops ARIMA (Auto-Regressive Integrated Moving Average) time series forecasting models. The models include AR, MA, ARMA, and ARIMA, and a model list is developed from these regression types using their parameters. The major parameters of the ARIMA model are p, d, and q, where p is the autoregressive order (how many past days are correlated with today's value), d is the order of differencing, and q is the moving-average order; a minimal fitting sketch follows below.
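The sketch below assumes statsmodels and uses a synthetic daily series (so that it runs on its own) in place of the actual NLDC data; it produces the kind of quantities reported in the result tables (train/test MAPE, AIC, BIC):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Synthetic stand-in for the daily "Max. Demand met during the day (MW)" series.
rng = np.random.default_rng(0)
idx = pd.date_range("2013-04-01", periods=3640, freq="D")
values = (4500.0
          + 1500.0 * np.sin(2.0 * np.pi * np.asarray(idx.dayofyear) / 365.25)
          + rng.normal(0.0, 200.0, len(idx)))
series = pd.Series(values, index=idx)

train, test = series[:-365], series[-365:]            # hold out roughly one year
result = ARIMA(train, order=(1, 0, 0)).fit()          # order = (p, d, q)
print("AIC:", round(result.aic, 1), " BIC:", round(result.bic, 1))
print("train MAPE:", round(mape(train, result.fittedvalues), 2))
print("test  MAPE:", round(mape(test, result.forecast(steps=len(test))), 2))
```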
\begin{table}
\begin{tabular}{|c|c|} \hline
1 & dropna-dataset \\ \hline
2 & mean Imputation dataset \\ \hline
3 & median Imputation dataset \\ \hline
4 & mode Imputation dataset \\ \hline
5 & linear-Interpolation Imputation dataset \\ \hline \end{tabular}
\end{table}
Table 1: Datasets names which are created from the reports
Real-world data tend to be non-stationary. A signal is said to be stationary if its statistical properties, such as mean, standard deviation, and trend, do not change over time. To check whether the time series is stationary, we use the Augmented Dickey-Fuller (ADF) test, whose null hypothesis is that "the time series contains a unit root and is non-stationary". The results of the Augmented Dickey-Fuller test for each of the imputation datasets are given in the sections below; a minimal sketch of the test procedure follows.
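The snippet below (assuming statsmodels, with a synthetic stand-in series) applies the ADF test to the level series and to its first and second differences, mirroring the reports in the next subsection:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def adf_report(x, label):
    """Print the ADF test statistic and p-value for a series."""
    stat, pvalue, *_ = adfuller(x, autolag="AIC")
    print(f"{label:<45} test statistic = {stat:8.3f}, p-value = {pvalue:.3g}")

# Synthetic stand-in for one of the imputed Max-Demand series.
rng = np.random.default_rng(1)
days = np.arange(3640)
series = 4500.0 + 1500.0 * np.sin(2.0 * np.pi * days / 365.25) + rng.normal(0.0, 200.0, days.size)

adf_report(series, "level series")
adf_report(np.diff(series), "first difference")
adf_report(np.diff(series, n=2), "second difference (risk of over-differencing)")
```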
### Datasets Analysis and ADF test results
The Figure below gives a plot of the whole dataset without dividing into train and test.
The ADF test for the no-imputation dataset gives test statistic = -5.45 and p-value = 2.55e-06. For the first difference, taken to try to make the time series stationary, test statistic = -10.403 and p-value = 1.88e-18. For the second difference, which is not suggested as the p-value is zero (over-differencing), test statistic = -21.152 and p-value = 0.0.
This figure below depicts the Mean Imputation dataset Plot.
The ADF test for the mean imputation dataset gives test statistic = -5.393 and p-value = 3.49e-06. For the first difference, taken to try to make the time series stationary, test statistic = -10.073 and p-value = 1.23e-17. For the second difference, which is not suggested as the p-value is zero (over-differencing), test statistic = -21.617 and p-value = 0.0.
This figure below depicts the Median Imputation dataset Plot.
The ADF test for the median imputation dataset gives test statistic = -5.363 and p-value = 4.042e-06. For the first difference, taken to try to make the time series stationary, test statistic = -10.075 and p-value = 1.223e-17. For the second difference, which is not suggested as the p-value is zero (over-differencing), test statistic = -21.686 and p-value = 0.0.
This figure below depicts the Mode Imputation dataset Plot.
The ADF test for the mode imputation dataset gives test statistic = -5.227 and p-value = 7.73e-06. For the first difference, taken to try to make the time series stationary, test statistic = -10.258 and p-value = 4.31e-18. For the second difference, which is not suggested as the p-value is zero (over-differencing), test statistic = -22.121 and p-value = 0.0.
This figure below depicts the Linear Interpolation Imputation dataset Plot.
The ADF test for the linear interpolation imputation dataset gives test statistic = -5.390 and p-value = 3.53e-06. For the first difference, taken to try to make the time series stationary, test statistic = -10.072 and p-value = 1.24e-17. For the second difference, which is not suggested as the p-value is zero (over-differencing), test statistic = -21.44 and p-value = 0.0.
## 5 Analysis using ACF and PACF plots
### No Imputation
The autocorrelation function (ACF) and partial autocorrelation function (PACF) graphs for the original dataset are shown below.
The ADF test is applied to the first difference to try to make the time series stationary; a plotting sketch is given below.
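A minimal sketch (assuming statsmodels and matplotlib are available, and using a synthetic stand-in series rather than the NLDC data) of how the ACF/PACF plots in this and the following subsections could be produced:

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Synthetic stand-in for the (no-imputation) daily Max-Demand series.
rng = np.random.default_rng(2)
days = np.arange(3640)
series = 4500.0 + 1500.0 * np.sin(2.0 * np.pi * days / 365.25) + rng.normal(0.0, 200.0, days.size)

fig, axes = plt.subplots(2, 2, figsize=(10, 6))
plot_acf(series, lags=40, ax=axes[0, 0], title="ACF, level series")
plot_pacf(series, lags=40, ax=axes[0, 1], title="PACF, level series")
plot_acf(np.diff(series), lags=40, ax=axes[1, 0], title="ACF, first difference")
plot_pacf(np.diff(series), lags=40, ax=axes[1, 1], title="PACF, first difference")
fig.tight_layout()
plt.show()
```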
### Mean Imputation
The autocorrelation function (ACF) and partial autocorrelation function (PACF) graphs for the original dataset are shown below.
The ADF test is applied to the first difference to try to make the time series stationary.
### Median Imputation
The autocorrelation function (ACF) and partial autocorrelation function (PACF) graphs for the original dataset are shown below.
The ADF test is applied to the first difference to try to make the time series stationary.
### Mode Imputation
The autocorrelation function (ACF) and partial autocorrelation function (PACF) graphs for the original dataset are shown below.
The ADF test is applied to the first difference to try to make the time series stationary.
### Interpolation Imputation
The autocorrelation function (ACF) and partial autocorrelation function (PACF) graphs for the original dataset are shown below.
The ADF test is applied to the first difference to try to make the time series stationary.
### Auto Regression and Moving Average model
In time series forecasting, the autoregressive moving average model of order \((p,q)\), denoted as ARMA(\(p,q)\), is a popular approach. The ARMA(\(p,q\)) model combines the autoregressive (AR) model of order \(p\) and the moving average (MA) model of order \(q\). The ARMA(\(p,q\)) model assumes that the value of the time series at a given point is linearly dependent on the previous \(p\) values of the series and the previous \(q\) error terms. The formula for the ARMA(\(p,q\)) model is as follows:
\[X_{t}=c+\phi_{1}X_{t-1}+\phi_{2}X_{t-2}+\ldots+\phi_{p}X_{t-p}+\theta_{1} \varepsilon_{t-1}+\theta_{2}\varepsilon_{t-2}+\ldots+\theta_{q}\varepsilon_{t -q}+\varepsilon_{t}\]
In this formula:
* \(X_{t}\) represents the value of the time series at time \(t\).
* \(c\) is the intercept or constant term.
* \(\phi_{1},\phi_{2},\ldots,\phi_{p}\) are the coefficients of the autoregressive terms that capture the relationship between the current and previous values.
* \(X_{t-1},X_{t-2},\ldots,X_{t-p}\) represent the lagged values of the time series.
* \(\theta_{1},\theta_{2},\ldots,\theta_{q}\) are the coefficients of the moving average terms that capture the relationship between the current value and the previous error terms.
* \(\varepsilon_{t-1},\varepsilon_{t-2},\ldots,\varepsilon_{t-q}\) represent the lagged error terms of the time series.
* \(\varepsilon_{t}\) is the error term at time \(t\), which represents the random fluctuations or noise in the series.
To estimate the parameters (\(\phi_{1},\phi_{2},\ldots,\phi_{p},\theta_{1},\theta_{2},\ldots,\theta_{q}\)) and the intercept (\(c\)) of the ARMA(\(p,q\)) model, various estimation techniques can be used, such as maximum likelihood estimation.
Once the parameters are estimated, the ARMA(\(p,q\)) model can be used for forecasting by substituting the lagged values and lagged error terms of the time series into the formula to predict future values.
Note that the ARMA(\(p,q\)) model assumes stationarity of the time series, and it is a flexible model that can capture both autoregressive and moving average components in the data.
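A minimal Python sketch of fitting and scoring an ARMA(p, q) model in this way is given below; it assumes a `train`/`test` split of the series already exists and uses statsmodels' ARIMA class with d = 0, so it is an illustration rather than the exact pipeline used for the comparison tables.

```python
# Minimal sketch: fit ARMA(p, q) via ARIMA with d = 0 and score it with
# test MAPE, AIC and BIC, mirroring the columns of the comparison tables.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def fit_and_score(train, test, order=(1, 0, 0)):
    model = ARIMA(train, order=order).fit()
    forecast = np.asarray(model.forecast(steps=len(test)))
    actual = np.asarray(test)
    mape = np.mean(np.abs((actual - forecast) / actual)) * 100
    return {"order": order, "test_MAPE": mape, "AIC": model.aic, "BIC": model.bic}

# e.g. print(fit_and_score(train, test, order=(2, 0, 0)))
```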
### Seasonal Auto-Regressive Models
The Seasonal Autoregressive Integrated Moving Average (SARIMA) model is a time series forecasting model that extends the Autoregressive Integrated Moving Average (ARIMA) model to account for seasonality. SARIMA combines the components of ARIMA with seasonal differencing and seasonal autoregressive and moving average terms.
The SARIMA(p, d, q)(P, D, Q, s) model is defined by the following equations:
Autoregressive (AR) component: AR(p): \(Y_{t}=\phi_{1}Y_{t-1}+\phi_{2}Y_{t-2}+\ldots+\phi_{p}Y_{t-p}+\varepsilon_{t}\)
Integrated (I) component: I(d): \(Y_{t}^{\prime}=(1-B)^{d}Y_{t}\), where \(B\) is the backshift operator (\(BY_{t}=Y_{t-1}\))
Moving Average (MA) component: MA(q): \(Y_{t}=\theta_{1}\varepsilon_{t-1}+\theta_{2}\varepsilon_{t-2}+\ldots+\theta_{q}\varepsilon_{t-q}+\varepsilon_{t}\)
Seasonal Autoregressive (SAR) component: SAR(P): \(Y_{t}=\Phi_{1}Y_{t-s}+\Phi_{2}Y_{t-2s}+\ldots+\Phi_{P}Y_{t-Ps}+\varepsilon_{t}\)
Seasonal Moving Average (SMA) component: SMA(Q): \(Y_{t}=\Theta_{1}\varepsilon_{t-s}+\Theta_{2}\varepsilon_{t-2s}+\ldots+\Theta_{ Q}\varepsilon_{t-Qs}+\varepsilon_{t}\)
where:
* \(Y_{t}\) is the observed time series at time \(t\);
* \(\varepsilon_{t}\) is the error term (residual) at time \(t\);
* \(p,d,q\) are the non-seasonal AR, I, and MA orders, respectively;
* \(P,D,Q\) are the seasonal AR, I, and MA orders, respectively;
* \(s\) is the seasonal period or frequency (e.g., 12 for monthly data, 4 for quarterly data);
* \(\phi_{1},\phi_{2},\ldots,\phi_{p}\) are the non-seasonal autoregressive coefficients;
* \(\theta_{1},\theta_{2},\ldots,\theta_{q}\) are the non-seasonal moving average coefficients;
* \(\Phi_{1},\Phi_{2},\ldots,\Phi_{P}\) are the seasonal autoregressive coefficients;
* \(\Theta_{1},\Theta_{2},\ldots,\Theta_{Q}\) are the seasonal moving average coefficients.
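Analogously, a hedged sketch of fitting a SARIMA(p, d, q)(P, D, Q, s) model with a weekly season (s = 7, as in the comparison tables) could look as follows; the convergence options and default orders are illustrative assumptions, not the exact settings used to produce the tables.

```python
# Minimal sketch: fit SARIMA(p, d, q)(P, D, Q, s) with statsmodels' SARIMAX and
# rank candidate configurations by test MAPE / AIC / BIC.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_sarima(train, test, order=(0, 0, 0), seasonal_order=(1, 1, 1, 7)):
    model = SARIMAX(train, order=order, seasonal_order=seasonal_order,
                    enforce_stationarity=False, enforce_invertibility=False).fit(disp=False)
    forecast = np.asarray(model.forecast(steps=len(test)))
    actual = np.asarray(test)
    mape = np.mean(np.abs((actual - forecast) / actual)) * 100
    return {"order": order, "seasonal_order": seasonal_order,
            "test_MAPE": mape, "AIC": model.aic, "BIC": model.bic}
```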
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Models & Order & test\_MAPE & train\_MAPE & AIC & BIC \\ \hline AR/MA models & 1,0,0 & 19.483 & 26.125 & 50534.716 & 50553.290 \\ & 2,0,0 & 19.516 & 26.126 & 50534.362 & 50559.127 \\ & 1,1,0 & 21.903 & 26.602 & 50575.065 & 50587.447 \\ & 1,2,0 & 90.594 & 27.274 & 52837.437 & 52829.454 \\ & 0,0,1 & 16.617 & 21.619 & 56254.968 & 56273.542 \\ & 0,1,1 & 21.811 & 26.594 & 50572.554 & 50584.936 \\ & 0,2,1 & 21.9037 & 26.602 & 50578.459 & 50590.841 \\ \hline ARMA models & 8,0,8 & 18.157 & 26.332 & 50067.375 & 50178.816 \\ & 8,1,8 & 18.814 & 26.453 & 50148.885 & 50254.131 \\ & 9,0,7 & 18.201 & 26.226 & 50067.729 & 50179.170 \\ & 9,1,7 & 18.332 & 26.453 & 50055.038 & 50160.283 \\ & 8,0,9 & 18.135 & 26.280 & 50068.705 & 50186.337 \\ & 8,1,9 & 18.274 & 26.464 & 50059.330 & 50170.766 \\ \hline auto-arima & 5,1,3 & 18.844 & 26.454 & 50190.242 & 50245.961 \\ \hline \end{tabular}
\end{table}
Table 2: ARIMA Model Comparison Results for dropna
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Models & Order & test\_MAPE & train\_MAPE & AIC & BIC \\ \hline AR/MA models & 1,0,0 & 19.203 & 25.740 & 52180.550 & 52199.184 \\ & 2,0,0 & 19.363 & 25.748 & 52142.980 & 52167.825 \\ & 1,1,0 & 21.636 & 26.268 & 52187.390 & 52199.812 \\ & 1,2,0 & 86.201 & 27.038 & 54527.913 & 54540.351 \\ & 0,0,1 & 16.646 & 21.271 & 57600.941 & 57619.575 \\ & 0,1,1 & 21.336 & 26.241 & 52170.583 & 52183.005 \\ & 0,2,1 & 21.636 & 26.268 & 52239.944 & 52252.365 \\ \hline ARMA models & 9,0,8 & 18.288 & 26.032 & 51635.740 & 51753.753 \\ & 9,1,8 & 18.115 & 26.121 & 51681.885 & 51793.682 \\ & 8,0,8 & 18.506 & 25.958 & 51674.400 & 51786.202 \\ & 8,1,8 & 18.421 & 26.104 & 51641.077 & 51746.663 \\ & 8,0,9 & 18.407 & 25.939 & 51668.680 & 51786.693 \\ & 8,1,9 & 18.502 & 26.152 & 51701.538 & 51813.335 \\ \hline auto-arima & 5,1,3 & 18.676 & 26.098 & 51805.777 & 51861.675 \\ \hline \end{tabular}
\end{table}
Table 4: ARIMA model Comparison Results for Median Imputation
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Models & Order & test\_MAPE & train\_MAPE & AIC & BIC \\ \hline AR/MA models & 1,0,0 & 19.186 & 25.749 & 52184.042 & 52202.676 \\ & 2,0,0 & 19.352 & 25.756 & 52144.192 & 52169.037 \\ & 1,1,0 & 21.628 & 26.276 & 52188.319 & 52200.741 \\ & 1,2,0 & 86.037 & 27.046 & 54529.566 & 54542.004 \\ & 0,0,1 & 16.616 & 21.290 & 57600.735 & 57619.368 \\ & 0,1,1 & 21.322 & 26.248 & 52171.044 & 52183.466 \\ & 0,2,1 & 21.628 & 26.276 & 52243.646 & 52256.067 \\ \hline ARMA models & 9,0,8 & 18.266 & 26.027 & 51635.298 & 51753.311 \\ & 9,1,8 & 18.069 & 26.135 & 51687.745 & 51799.541 \\ & 8,0,9 & 18.400 & 25.965 & 51663.391 & 51781.404 \\ & 8,1,9 & 18.366 & 26.173 & 51695.921 & 51807.718 \\ & 8,0,8 & 18.518 & 25.977 & 51670.186 & 51781.988 \\ & 8,1,8 & 18.324 & 26.093 & 51637.395 & 51742.981 \\ \hline auto-arima & 5,1,3 & 18.563 & 26.085 & 51820.469 & 51876.368 \\ \hline \end{tabular}
\end{table}
Table 3: ARIMA model Comparison Results for Mean Imputation
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Models & Order & test\_MAPE & train\_MAPE & AIC & BIC \\ \hline AR/MA models & 1,0,0 & 19.552 & 26.051 & 51429.997 & 51448.631 \\ & 2,0,0 & 19.573 & 26.051 & 51430.941 & 51455.786 \\ & 1,1,0 & 21.936 & 26.516 & 51472.149 & 51484.571 \\ & 1,2,0 & 90.903 & 27.171 & 53740.953 & 53753.391 \\ & 0,0,1 & 16.678 & 21.549 & 57337.414 & 57356.048 \\ & 0,1,1 & 21.870 & 26.510 & 51470.598 & 51483.020 \\ & 0,2,1 & 21.936 & 26.516 & 51473.203 & 51485.624 \\ \hline ARMA models & 8,0,8 & 18.409 & 26.271 & 50801.251 & 50913.053 \\ & 8,1,8 & 18.683 & 26.385 & 50838.995 & 50944.581 \\ & 9,0,8 & 18.223 & 26.175 & 50814.366 & 50932.379 \\ & 9,1,8 & 18.440 & 26.385 & 50807.144 & 50918.941 \\ & 9,0,7 & 18.157 & 26.114 & 50836.848 & 50948.650 \\ & 9,1,7 & 19.432 & 26.363 & 50840.746 & 50946.332 \\ \hline auto-arima & 5,1,4 & 19.114 & 26.389 & 51004.208 & 51066.318 \\ \hline \end{tabular}
\end{table}
Table 6: ARIMA model Comparison Results for Linear Interpolation Imputation
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline order & Seasonal-order & train\_MAPE & test\_MAPE & AIC & BIC \\ \hline (1, 0, 0) & (3,0,6,7) & 26.673 & 20.554 & 50233.507 & 50301.610 \\ (0,0,0) & (1,0,1,7) & 26.301 & 17.40 & 54957.193 & 54975.766 \\ (0,0,0) & (1,1,1,7) & 26.461 & 16.882 & 54823.939 & 54842.507 \\ (0,0,0) & (3,0,6,7) & 25.505 & 15.670 & 54736.140 & 54798.052 \\ (0,0,0) & (3,1,6,7) & 26.826 & 15.854 & 54735.359 & 54797.251 \\ \hline \end{tabular}
\end{table}
Table 7: SARIMA model Comparison Results for dropna
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Models & Order & test\_MAPE & train\_MAPE & AIC & BIC \\ \hline AR/MA models & 1,0,0 & 19.025 & 25.966 & 52851.514 & 52870.148 \\ & 2,0,0 & 19.256 & 25.977 & 52780.739 & 52805.583 \\ & 1,1,0 & 21.531 & 26.566 & 52831.433 & 52843.854 \\ & 1,2,0 & 85.088 & 27.498 & 55261.639 & 55274.078 \\ & 0,0,1 & 16.796 & 21.587 & 57849.604 & 57868.238 \\ & 0,1,1 & 21.013 & 26.513 & 52795.815 & 52808.237 \\ & 0,2,1 & 21.531 & 26.566 & 52925.747 & 52938.168 \\ \hline ARMA models & 9,0,8 & 18.004 & 26.273 & 52357.307 & 52475.320 \\ & 9,1,8 & 22.428 & 26.522 & 52691.287 & 52803.084 \\ & 9,0,9 & 18.237 & 26.157 & 52376.970 & 52501.194 \\ & 9,1,9 & 16.651 & 26.512 & 52356.583 & 52474.591 \\ & 8,0,9 & 18.343 & 26.220 & 52412.001 & 52530.014 \\ & 8,1,9 & 18.698 & 26.342 & 52578.576 & 52690.373 \\ \hline auto-arima & 3,1,4 & 18.401 & 26.340 & 52506.970 & 52562.869 \\ \hline \end{tabular}
\end{table}
Table 5: ARIMA model Comparison Results for Mode Imputation
## 6 Results and Conclusion
Tables 1 to 11 present the results of the classical time-series forecasting methods. According to the MAPE scores on the test data, SARIMA(0,0,0)(6,1,3,7) is the best model. Building on these baseline models, we aim to integrate more data into the PM Gati-Shakti Scheme to validate the case study given in the Introduction. Future work also focuses on using Reinforcement Learning for model selection on much larger data with integrated ministries in the Union Territory of Delhi.
In conclusion, our study highlights the importance of integrating more data into the PM Gati-Shakti Scheme in order to validate the findings presented in the Introduction section. The base models provide a preliminary understanding of the scheme's potential, but further data incorporation is crucial for robust conclusions. By expanding the scope of our analysis to encompass a wider range of variables and factors, we can enhance the accuracy and reliability of the case study.
Furthermore, our future work will focus on employing Reinforcement Learning techniques for model selection. This approach is particularly relevant when dealing with a larger dataset that integrates ministries within the Union Territory of Delhi. Reinforcement Learning algorithms can effectively evaluate and select the most suitable models by considering the complex interactions and dependencies between different variables. By leveraging the power of machine learning and advanced analytics, we can make informed decisions that lead to better outcomes and enhanced efficiency within the PM Gati-Shakti Scheme.
In summary, our research emphasizes the need for data integration and the application of Reinforcement Learning in the context of the PM Gati-Shakti Scheme. These steps will contribute to a more comprehensive understanding of the scheme's impact and enable evidence-based decision-making for the integration of ministries in the Union Territory of Delhi. By continuously improving our analytical approaches, we can enhance the effectiveness of the scheme and drive positive socio-economic outcomes.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline order & Seasonal-order & test\_MAPE & train\_MAPE & AIC & BIC \\ \hline (1, 0, 0) & (3,0,6,7) & 20.554 & 26.673 & 50233.507 & 50301.610 \\ (0,0,0) & (1,0,1,7) & 17.407 & 26.301 & 54957.193 & 54975.766 \\ (0,0,0) & (1,1,1,7) & 16.882 & 26.461 & 54823.939 & 54842.507 \\ (0,0,0) & (3,0,6,7) & 15.670 & 25.505 & 54736.140 & 54798.052 \\ (0,0,0) & (3,1,6,7) & 15.854 & 26.826 & 54735.359 & 54797.251 \\ \hline \end{tabular}
\end{table}
Table 10: SARIMA model Comparison Results for Mode Imputation
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline order & Seasonal-order & train\_MAPE & test\_MAPE & AIC & BIC \\ \hline (1, 0, 0) & (6,0,2,7) & 25.884 & 17.775 & 51748.896 & 51811.008 \\ (0,0,0) & (1,0,1,7) & 25.973 & 17.382 & 56160.584 & 56179.218 \\ (0,0,0) & (1,1,1,7) & 26.141 & 16.831 & 56026.752 & 56045.380 \\ (0,0,0) & (6,0,2,7) & 25.355 & 15.398 & 55932.485 & 55988.386 \\ (0,0,0) & (6,1,2,7) & 26.404 & 16.173 & 55954.966 & 56010.850 \\ \hline \end{tabular}
\end{table}
Table 8: SARIMA model Comparison Results for Mean Imputation
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline order & Seasonal-order & test\_MAPE & train\_MAPE & AIC & BIC \\ \hline (1, 0, 0) & (3,0,6,7) & 20.491 & 26.316 & 51825.927 & 51894.250 \\ (0,0,0) & (1,0,1,7) & 17.379 & 25.965 & 56152.138 & 56170.772 \\ (0,0,0) & (1,1,1,7) & 16.830 & 26.132 & 56018.390 & 56037.018 \\ (0,0,0) & (3,0,6,7) & 15.717 & 25.119 & 55924.073 & 55986.185 \\ (0,0,0) & (3,1,6,7) & 15.950 & 26.439 & 55951.882 & 56013.975 \\ \hline \end{tabular}
\end{table}
Table 9: SARIMA model Comparison Results for Median Imputation
## Acknowledgments
This project was supported by the National Institute of Technology, Calicut, a center of the PM Gati-Shakti Scheme initiated by the National Institute of Industrial Engineering.
|
2303.05645
|
Classifying Protoplanetary disks Infrared Spectrum and Analysis by
c-C$_3$H$_2$ C$_5$H$_5$ C$_9$H$_7$ C$_{12}$H$_8$ C$_{23}$H$_{12}$ and
C$_{53}$H$_{18}$ to be Capable Template for Biological Molecule
|
Protoplanetary disk around a just born young star contains a lot of cosmic
dust. especially polycyclic-aromatic-hydrocarbon (PAH), which would become
basic component to create biological organics. This study classified many
astronomically observed infrared spectra of protoplanetary disks to three
typical spectra. Type-A show well known astronomical bands of 6.2, 7.8, 8.6 and
11.3 micrometer. Whereas Type-B included unknown complex bands. Type-(A+B) was
their mixed type. We tried to find specific molecule by Density Functional
Theory (DFT) calculation. We found that Type-A could be explained by large PAH
molecules of (C$_{23}$H$_{12}$) and (C$_{53}$H$_{18}$), which are
hexagon-pentagon combined molecular structure. Background molecule of Type-B
was smaller ones of (c-C$_3$H$_2$), (C$_5$H$_5$), (C$_9$H$_7$) and
(C$_{12}$H$_8$). Type-(A+B) was reproduced well by mixing those molecules of A
and B. Astronomical detailed observation shows that central star of Type-A has
larger mass and higher temperature than that of Type-B. This suggests that at
very early stage of our solar system, our protoplanetary disk had been made up
by Type-B molecules. It was interesting that (C$_5$H$_5$) and (C$_9$H$_7$) of
Type-B molecules has similar molecular structure with biological nucleic-acid
on our earth. Type-B molecules was supposed to become the template for
synthesizing biological organics and finally for creating our life.
|
Norio Ota, Aigen Li
|
2023-03-10T01:32:10Z
|
http://arxiv.org/abs/2303.05645v1
|
Classifying Protoplanetary-disk's Infrared Spectrum and Analysis by c-C\({}_{3}\)H\({}_{2}\), C\({}_{5}\)H\({}_{5}\), C\({}_{9}\)H\({}_{7}\), C\({}_{12}\)H\({}_{8}\), C\({}_{23}\)H\({}_{12}\) and C\({}_{53}\)H\({}_{18}\) to be Capable Template for Biological Molecules
###### Abstract
The protoplanetary disk around a just-born young star contains a lot of cosmic dust, especially polycyclic aromatic hydrocarbons (PAHs), which could become basic components for creating biological organics. This study classified many astronomically observed infrared spectra of protoplanetary disks into three typical spectra. Type-A shows the well-known astronomical bands at 6.2, 7.8, 8.6 and 11.3 micrometer, whereas Type-B includes unknown complex bands; Type-(A+B) is their mixed type. We tried to identify specific molecules by Density Functional Theory (DFT) calculation. We found that Type-A could be explained by the large PAH molecules (C\({}_{23}\)H\({}_{12}\)) and (C\({}_{53}\)H\({}_{18}\)), which have hexagon-pentagon combined molecular structures. The background molecules of Type-B were smaller ones: (c-C\({}_{3}\)H\({}_{2}\)), (C\({}_{5}\)H\({}_{5}\)), (C\({}_{9}\)H\({}_{7}\)) and (C\({}_{12}\)H\({}_{8}\)). Type-(A+B) was reproduced well by mixing the molecules of A and B. Detailed astronomical observation shows that the central star of Type-A has larger mass and higher temperature than that of Type-B. This suggests that, at the very early stage of our solar system, our protoplanetary disk had been made up of Type-B molecules. It is interesting that (C\({}_{5}\)H\({}_{5}\)) and (C\({}_{9}\)H\({}_{7}\)) of the Type-B molecules have molecular structures similar to the biological nucleic acids on our earth. The Type-B molecules are supposed to have become the template for synthesizing biological organics and finally for creating our life.
PAH, protoplanetary disk, infrared spectrum, DFT
## 1 Introduction
A protoplanetary disk is the circumstellar dust cloud of a just-born star younger than 10 million years. The central stars are of Herbig Ae/Be or T Tauri type. Such protoplanetary disks include polycyclic aromatic hydrocarbons (PAHs), which may have been basic components for creating biological organics such as nucleic acids and amino acids. In this paper, we try to classify the many observed infrared spectra of protoplanetary disks compiled by Seok and Li[1]. We also aim to find, by Density Functional Theory (DFT), specific PAHs that give infrared spectra identical to the astronomically observed ones.
In our recent paper[2], we found specific PAHs that explain the astronomically well-observed mid-infrared spectrum by DFT calculation. Astronomical PAHs would be floating in interstellar and circumstellar space under the ultra-low-density condition of 1-100 molecules/cm\({}^{3}\). This is a suitable situation for DFT calculation, which gives solutions for such almost isolated molecules[3][4]. The interstellar gas and dust show common mid-infrared emission at 3.3, 6.2, 7.6, 7.8, 8.6, 11.2, and 12.7\(\mu\)m, which are ubiquitous peaks observed at many astronomical objects[3, 4, 5, 6, 7, 8, 9, 10]. The current common understanding is that these astronomical spectra come from the vibrational modes of PAHs. There are many spectroscopy data[12, 13, 14, 15, 16, 17, 18, 19, 20, 21] and DFT analyses[10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. However, despite long-term efforts, no specific PAH has been identified until now. Our interest was the void-defect induced PAH, which is deformed to a featured structure having a few hydrocarbon pentagon rings among the hexagon networks. It was surprising that the calculated bands of the specific molecules (C\({}_{23}\)H\({}_{12}\)) and (C\({}_{53}\)H\({}_{18}\)) coincided well with the astronomically observed bands. Our recent paper[2] would be the first report to indicate a specific PAH in space.
Basic biological molecules may come from interstellar space. To check this idea, we classified the observed infrared spectra of protoplanetary disks. In this paper, we will classify the astronomically observed spectra[1] into three typical spectra: Type-A, Type-B and their mixed Type-(A+B). After that, we will try to identify specific PAHs by DFT calculation. In particular, the small model molecules C\({}_{6}\)H\({}_{6}\), C\({}_{10}\)H\({}_{8}\), and C\({}_{13}\)H\({}_{9}\) will give us important information on the background molecules. It will be an interesting discussion that void-induced and deeply ionized small PAHs may have molecular structures similar to biological molecules and may play the role of template molecules for creating nucleic acids and amino acids.
## 2 Classification of observed infrared spectrum
Recently, just born young stars' infrared spectrum attracts many scientists, because it is an analogy of baby age of our solar system and planets. We like to understand how planet system would be created in the Universe. Moreover, it would be key to suppose where and when our biological basic material as like deoxyribonucleic acid and amino acid come from and how synthesized. Astronomically observed infrared spectrum of many protoplanetary disks around the Herbig Ae/Be and T Tauri stars were reported by Acke et al. in 2010[22]. Also in 2017, Seok and Li reported more than 60 observed spectra[1]. Typical examples are noted again in Table 1. Here in this study, we newly classified those spectra to three types, Type-A, Type-B and their mixed type of Type-(A+B) as shown in Fig.1.
In Type-A, we can notice the typical astronomical PAH bands, marked by blue letters as 6.2\(\mu\)m (a), 7.8\(\mu\)m (b), 8.6\(\mu\)m (c), 11.2\(\mu\)m (d) and 12.7\(\mu\)m (e). Each band wavelength is marked by a blue dotted line. On the top panel, we can see a typical example, HD97300 (name of the central star). In our previous paper[23], those bands could be identified with hydrocarbon pentagon-hexagon combined molecules such as (C\({}_{23}\)H\({}_{12}\))\({}^{2+}\) and/or (C\({}_{53}\)H\({}_{18}\))\({}^{1+}\).
Figure 1: Typical observed infrared spectra for Type-A, Type-B and a mixed type of Type-(A+B).
It is interesting to map those IR types in relation of central star's mass (solar unit) and effective temperature Teff (K). Type-A (blue dot) takes a wide range of Mass and Teff as surrounded by blue circle, whereas Type-B has lighter and lower temperature central star. Our early solar system would be Type-B as an object of 1 solar mass and effective temperature of 4000\(-\)5000K.
## 3 Candidates for Background Molecules
### Model molecules
This paper focuses on finding the background molecules that explain the spectra of Type-B and Type-(A+B). It is commonly understood that, as illustrated in Fig. 3, a just-born young star emits high-energy electrons, protons and photons, which may attack the hydrocarbon molecules hidden in the cloud of the protoplanetary disk.
Model molecules hidden in the protoplanetary disk are illustrated in Fig. 4. The starting mother molecules are the typical PAHs (C\({}_{6}\)H\({}_{6}\)), (C\({}_{10}\)H\({}_{8}\)), (C\({}_{13}\)H\({}_{9}\)), (C\({}_{24}\)H\({}_{12}\)), and (C\({}_{54}\)H\({}_{18}\)), which have hydrocarbon hexagon rings. In this paper, we apply one assumption of a single void defect on the initial molecule. In interstellar and circumstellar space, the high-speed stellar wind, mainly protons and electrons, may attack PAHs. As illustrated on top of Fig. 3, a high-speed particle attacks the mother molecule (C\({}_{6}\)H\({}_{6}\)) and kicks out one carbon atom. An initial void is created, which immediately makes a void-induced molecule such as (C\({}_{5}\)H\({}_{5}\)). For every size of initial molecule, DFT calculation suggested a serious structure change. For example, in the case of (C\({}_{24}\)H\({}_{12}\)), two carbon pentagon rings are created among the hexagon-ring network. For those hydrocarbon pentagon-hexagon combined molecules, we also supposed photoionization by the central star, which induces deep photoionization.
### Calculation methods
In calculation, we used DFT[22, 20] with the unrestricted B3LYP functional[24]. We utilized the Gaussian09 software package[23] employing an atomic orbital 6-31G basis set[26]. Unrestricted DFT calculation was done to have the spin dependent atomic structure. The required convergence of the root-mean-square density matrix was 10\({}^{-8}\). Based on such optimized molecular configuration, fundamental vibrational modes were calculated, such as C-H and C-C stretching modes, C-H bending modes and so on, using the same Gaussian09 software package. This calculation also gives harmonic vibrational frequency and intensity in infrared region. The standard scaling is applied to the frequencies by employing a scale factor of 0.965 for PAH from the laboratory experimental value on coronene (C\({}_{24}\)H\({}_{12}\))[27]. Correction due to anharmonicity was not applied to avoid uncertain fitting parameters. To each spectral line, we
Figure 4: Creation of model molecules. Mother molecule will be attacked by high energy particle from the central star making initial void. Void induced molecule will be illuminated by central star and makes photoionized cations.
Figure 3: Image of young central star, emitting electron, proton and photon to attack on PAH molecules in the cloud of protoplanetary disk.
Figure 2: Classifying IR types in central star’s mass (solar unit) vs. effective temperature.
assigned a Gaussian profile with a full width at half maximum (FWHM) of 4cm\({}^{-1}\).
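As an illustrative sketch (not the authors' post-processing code), the conversion of such a DFT line list into a smooth spectrum, using the 0.965 scale factor and 4 cm\({}^{-1}\) FWHM Gaussian profiles mentioned above, could be written as follows; the wavenumber grid limits are assumptions chosen only for plotting.

```python
# Minimal sketch: broaden a DFT harmonic line list (frequencies in cm^-1, IR
# intensities) with Gaussian profiles of FWHM = 4 cm^-1 after scaling by 0.965,
# then convert the grid to wavelength in micrometers for comparison with observations.
import numpy as np

def broadened_spectrum(freqs_cm1, intensities, scale=0.965, fwhm=4.0):
    freqs = np.asarray(freqs_cm1) * scale
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> Gaussian sigma
    grid = np.linspace(400.0, 4000.0, 20000)            # assumed wavenumber grid (cm^-1)
    spectrum = np.zeros_like(grid)
    for f, inten in zip(freqs, intensities):
        spectrum += inten * np.exp(-0.5 * ((grid - f) / sigma) ** 2)
    wavelength_um = 1.0e4 / grid                         # cm^-1 -> micrometers
    return wavelength_um, spectrum
```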
## 4 Photoionization from (C\({}_{5}\)H\({}_{5}\)) to (c-C\({}_{3}\)H\({}_{2}\)) and chain-C\({}_{3}\)
The astrochemical evolution steps of a small PAH starting from benzene (C\({}_{6}\)H\({}_{6}\)) are illustrated in Fig. 5. One astronomical hypothesis is a high-speed particle attack on the mother molecule (C\({}_{6}\)H\({}_{6}\)) creating a void defect. The void-induced open configuration would be suddenly transformed to a cyclic hydrocarbon pentagon (C\({}_{5}\)H\({}_{5}\)). The next hypothesis is photo-irradiation from the central star introducing deep photoionization of (C\({}_{5}\)H\({}_{5}\)). Neutral (C\({}_{5}\)H\({}_{5}\)) is excited to the mono-cation (C\({}_{5}\)H\({}_{5}\))\({}^{1+}\) by 8.5eV, to the di-cation (C\({}_{5}\)H\({}_{5}\))\({}^{2+}\) by 23.6eV, and to the tri-cation (C\({}_{5}\)H\({}_{5}\))\({}^{3+}\) by 46.1eV. Calculation shows that from n=0 to 3 of (C\({}_{5}\)H\({}_{5}\))\({}^{n+}\), the molecule keeps the pentagon configuration. At ionization step n=4, dehydrogenation occurs, giving the cyclic pure carbon (C\({}_{5}\)) and five hydrogen atoms. At n=6, shown in a green frame, there happens the surprising creation of cyclic (c-C\({}_{3}\)H\({}_{2}\)), marked by a red ellipse, together with separation into two carbon atoms and three hydrogen atoms. (c-C\({}_{3}\)H\({}_{2}\)) is the smallest PAH.
The calculated infrared spectra of (C\({}_{5}\)H\({}_{5}\))\({}^{n+}\) (n=0, 1, and 2) are illustrated in Fig. 6. The stable spin state was Sz=1/2 for the neutral (C\({}_{5}\)H\({}_{5}\))\({}^{0}\) and the di-cation (C\({}_{5}\)H\({}_{5}\))\({}^{2+}\), whose spin clouds are shown by red for up-spin and blue for down-spin at a density surface of 10e/nm\({}^{3}\). As shown in Fig. 7, deeper photoionization of (c-C\({}_{3}\)H\({}_{2}\)) creates chain-C\({}_{3}\)H\({}_{2}\), which finally decomposes into pure carbon C\({}_{3}\) and two hydrogen atoms.
Figure 5: Deep photoionization of void induced molecule (C\({}_{5}\)H\({}_{3}\)) to create the smallest PAH (c-C\({}_{3}\)H\({}_{2}\))
Figure 6: Atomic configuration and infrared spectrum of ionized (C\({}_{5}\)H\({}_{5}\))\({}^{n+}\) for n=0,1 and 2. Spin density was illustrated by red for up-spin and blue for down spin.
Figure 7: Deeper ionization from c-C\({}_{3}\)H\({}_{2}\) to chain-C3
## 5 Large PAHs: (C\({}_{9}\)H\({}_{7}\)), (C\({}_{12}\)H\({}_{8}\)), (C\({}_{23}\)H\({}_{12}\)) and (C\({}_{53}\)H\({}_{18}\))
To find PAHs explaining the Type-B infrared spectrum, we extend the DFT calculation to molecules of various sizes.
(1) (C\({}_{9}\)H\({}_{7}\))
Starting from the mother molecule (C\({}_{10}\)H\({}_{8}\)), the void-induced molecule (C\({}_{9}\)H\({}_{7}\)) was studied. As illustrated in Fig. 8, the deeply ionized molecule shows dehydrogenation at the step of (C\({}_{9}\)H\({}_{7}\))\({}^{6+}\), and finally decomposes into several chain hydrocarbons and atoms. The calculated atomic configuration and infrared spectrum of ionized (C\({}_{9}\)H\({}_{7}\))\({}^{n+}\) for n=0, 1 and 2 are shown in Fig. 9.
(2) (C\({}_{12}\)H\({}_{8}\))
The void-induced molecule (C\({}_{12}\)H\({}_{8}\)) was created from the mother molecule (C\({}_{13}\)H\({}_{9}\)). As illustrated in Fig. 10, the deeply ionized molecule shows dehydrogenation and decomposition into complex chain hydrocarbons. Fig. 11 shows the calculated atomic configuration, spin distribution and infrared spectra.
(3) (C\({}_{23}\)H\({}_{12}\))
The void-induced molecule (C\({}_{23}\)H\({}_{12}\)) was created from the mother molecule (C\({}_{24}\)H\({}_{12}\)), which was well studied and reported in a previous paper[20]. Fig. 12 shows the calculated atomic configuration and infrared spectra of (C\({}_{23}\)H\({}_{12}\))\({}^{n+}\) from n=0 to 2.
Detailed infrared bands were analyzed as shown in Fig. 14. For example, atomic vibrational mode of 6.2\(\upmu\)m band was carbon-to-carbon stretching mode and for 7.6\(\upmu\)m band carbon-to-hydrogen bending mode.
## 6 Molecules for reproducing Type-A spectrum.
We tried to find suitable molecules for reproducing the astronomically observed Type-A spectrum. Fig. 15 shows a comparison between the Type-A observed spectrum of HD97300 (central star) and the calculated spectra of molecules of various sizes. For easy comparison, red dotted arrows mark wavelengths where a Type-A observed band and a DFT-calculated band do not coincide well. In the cases of (c-C\({}_{3}\)H\({}_{2}\))\({}^{0}\) and (C\({}_{5}\)H\({}_{5}\))\({}^{0}\), we cannot find any coincidence between observation and calculation. In the case of the medium-size molecules (C\({}_{9}\)H\({}_{7}\))\({}^{1+}\) and (C\({}_{12}\)H\({}_{8}\))\({}^{2+}\), we noticed many non-coincident bands, as checked by red arrows, but found some coincident bands, for example at 6.2, 7.6 and 8.6\(\upmu\)m. It should be noted that the larger molecule (C\({}_{23}\)H\({}_{12}\))\({}^{2+}\) shows good coincidence between the calculated bands and the observed Type-A bands. In particular, the largest molecule, (C\({}_{53}\)H\({}_{18}\))\({}^{1+}\), shows very good coincidence with Type-A. Fig. 16 shows the detailed charge-state variation of the calculated spectrum of (C\({}_{53}\)H\({}_{18}\))\({}^{n+}\) for n=0, 1 and 2. The observed spectrum may be some weighted sum of those charge states.
Figure 14: Detailed atomic configuration and spin density map of the mono-cation (C\({}_{53}\)H\({}_{18}\))\({}^{1+}\). For every band, the vibrational modes were analyzed as C-C stretching or C-H bending, as marked by arrows.
Figure 12: Calculated atomic configuration, spin density and infrared spectrum of ionized (C\({}_{23}\)H\({}_{12}\))\({}^{n+}\).
Figure 13: Calculated atomic configuration, spin density and infrared spectrum of ionized (C\({}_{53}\)H\({}_{18}\))\({}^{n+}\).
## 7 Molecules reproducing Type-B spectrum.
We tried to find suitable molecules for reproducing the astronomically observed Type-B spectrum. Fig. 17 shows a comparison between the Type-B spectrum of AKSco (central star) and the calculated spectra of molecules of various sizes. In the cases of (c-C\({}_{3}\)H\({}_{2}\))\({}^{0}\) and (C\({}_{5}\)H\({}_{5}\))\({}^{0}\), we found that the four or five calculated bands reproduced the Type-B bands well. It should be noted that in the case of the medium-size molecules (C\({}_{9}\)H\({}_{7}\))\({}^{1+}\) and (C\({}_{12}\)H\({}_{8}\))\({}^{2+}\), the calculated bands show a complete reproduction of the 12 observed Type-B bands. For the larger molecules (C\({}_{23}\)H\({}_{12}\))\({}^{2+}\) and (C\({}_{53}\)H\({}_{18}\))\({}^{1+}\), on the other hand, we noticed many non-coincident bands, as marked by red dotted arrows. It is obvious that the background molecules of Type-B were a sum of smaller and medium-size molecules. Fig. 18 shows the detailed charge-state variation of the calculated spectrum of (C\({}_{12}\)H\({}_{8}\)). The observed spectrum may be some weighted sum of those charge states.
Figure 16: Observed infrared spectrum of Type A of star HD97300 on a top panel, which were compared with calculated infrared spectrum of (C\({}_{53}\)H\({}_{18}\))\({}^{n+}\)(n=0, 1 and 2).
Figure 15: Comparison with calculated infrared bands with Type-A observed bands. Red arrow shows no good coincident band.
## 8 Reproducing Type-(A+B) spectrum.
As noted in Section 2, the astronomically observed Type-(A+B) would be a sum of observed Type-A and Type-B. We tried to reproduce Type-(A+B) by using two typical molecules, that is, (C\({}_{23}\)H\({}_{12}\))\({}^{2+}\) for Type-A and (C\({}_{12}\)H\({}_{8}\))\({}^{2+}\) for Type-B. On the top panel of Fig. 19, the observed spectrum of HD72106 could be well reproduced by a sum of 60% of A and 40% of B. Also, HD142527 was well reproduced by a sum of 40% of A and 60% of B. Finally, HD37806 was reproduced by a sum of 20% of A and 80% of B.
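A minimal sketch of this mixing step is shown below (an assumption about the post-processing, not the authors' code); `spectrum_c23h12_2plus` and `spectrum_c12h8_2plus` are placeholder arrays holding the two calculated template spectra on a common wavelength grid.

```python
# Minimal sketch: reproduce a Type-(A+B) spectrum as a weighted sum of a Type-A
# template and a Type-B template, e.g. 60% A + 40% B as described above.
import numpy as np

def mix_spectra(spec_a, spec_b, weight_a=0.6):
    spec_a = spec_a / spec_a.max()          # normalize each template before mixing
    spec_b = spec_b / spec_b.max()
    return weight_a * spec_a + (1.0 - weight_a) * spec_b

# e.g. mixed = mix_spectra(spectrum_c23h12_2plus, spectrum_c12h8_2plus, weight_a=0.6)
```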
## 9 Capable template molecules for creating primitive biological components.
Small molecules of (C\({}_{5}\)H\({}_{5}\)) and (C\({}_{12}\)H\({}_{8}\)) would be background molecules for Type-B spectrum. Our solar system had been Type-B at the baby age. On the very early stage on the earth, biological basic molecules would be created by some chemical revolution mechanism. Here, we could suppose one interesting hypothesis that cosmic PAH dust may become the template for creating primitive biological molecules as like Cytosine:(C\({}_{4}\)H\({}_{5}\)N\({}_{3}\)O) and Guanine:(C\({}_{5}\)H\({}_{5}\)N\({}_{5}\)O). As illustrated in Fig. 19, we could notice resemblance of molecular configuration between cosmic dust PAH
Figure 17: Comparison with calculated infrared bands with Type-B observed bands. Red arrow shows no good coincident band.
Figure 18: Charge state variation of calculated infrared spectrum of (C\({}_{12}\)H\({}_{8}\)).
and biological nucleic acid. In the water of pond/sea, we could suppose chemical revolution step by introducing some contents of NH\({}_{3}\)OH under high temperature environment of hot springs and/or hot spot. Of course, we need detailed experiment on earth and study to find astronomical evidence.
## 10 Conclusion
It is important to find specific molecule included in just born star's protoplanetary disks, especially specific PAH, which would become primitive component to create biological organics.
(1) Many astronomically observed infrared spectra of protoplanetary disks by Seok & Li\({}^{10}\) were classified to three typical spectra. Type-A show well known bands of 6.2, 7.8, 8.6 and 11.3 micrometer. Whereas Type-B was unknown complex bands and Type-(A+B) their mixed bands.
(2) We tried to find specific molecule by Density Functional Theory (DFT). Model molecules were various size mother PAHs starting from C\({}_{5}\)H\({}_{5}\), C\({}_{10}\)H\({}_{8}\), C\({}_{13}\)H\({}_{9}\), C\({}_{23}\)H\({}_{12}\) and C\({}_{54}\)H\({}_{18}\). By the central star, those mother molecules would be attacked by high energy particles and photon, to be deeply ionized pentagon hexagon combined PAH.
(3) We found that Type-A could be explained by large molecules of (C\({}_{23}\)H\({}_{12}\)) and (C\({}_{53}\)H\({}_{18}\)).
(4) The background molecules of Type-B were smaller ones: (c-C\({}_{3}\)H\({}_{2}\)), (C\({}_{5}\)H\({}_{5}\)), (C\({}_{9}\)H\({}_{7}\)) and (C\({}_{12}\)H\({}_{8}\)). Type-(A+B) was well reproduced by mixing those molecules.
(5) Central star's mass vs. effective temperature was mapped. Star of Type-A show larger mass and higher temperature than that of Type-B. At very early stage of our solar system (Teff\(\sim\)5000K, 1 solar mass) may be Type-B.
(6) It is interesting that (C\({}_{5}\)H\({}_{5}\)) and (C\({}_{9}\)H\({}_{7}\)) have molecular structures similar to the biological nucleic acids on our earth. The background molecules of Type-B would become the template molecules for synthesizing
Figure 19: Reproduction of Type-(A+B) observed spectrum by a sum of two typical molecules of (C\({}_{23}\)H\({}_{12}\))\({}^{2+}\) for A, and (C\({}_{12}\)H\({}_{8}\))\({}^{2+}\) for B.
Figure 20: Image of creating biological components on very early planet. Cosmic PAH dust would become the template molecule for creating basic biological molecules.
biological organics and finally creating our life.
## Acknowledgement
Aigen Li is supported in part by NSF AST-1311804 and NASA NNX14AF68G.
|
2304.01484
|
Mapping Degeneration Meets Label Evolution: Learning Infrared Small
Target Detection with Single Point Supervision
|
Training a convolutional neural network (CNN) to detect infrared small
targets in a fully supervised manner has gained remarkable research interests
in recent years, but is highly labor expensive since a large number of
per-pixel annotations are required. To handle this problem, in this paper, we
make the first attempt to achieve infrared small target detection with
point-level supervision. Interestingly, during the training phase supervised by
point labels, we discover that CNNs first learn to segment a cluster of pixels
near the targets, and then gradually converge to predict groundtruth point
labels. Motivated by this "mapping degeneration" phenomenon, we propose a label
evolution framework named label evolution with single point supervision (LESPS)
to progressively expand the point label by leveraging the intermediate
predictions of CNNs. In this way, the network predictions can finally
approximate the updated pseudo labels, and a pixel-level target mask can be
obtained to train CNNs in an end-to-end manner. We conduct extensive
experiments with insightful visualizations to validate the effectiveness of our
method. Experimental results show that CNNs equipped with LESPS can well
recover the target masks from corresponding point labels, {and can achieve over
70% and 95% of their fully supervised performance in terms of pixel-level
intersection over union (IoU) and object-level probability of detection (Pd),
respectively. Code is available at https://github.com/XinyiYing/LESPS.
|
Xinyi Ying, Li Liu, Yingqian Wang, Ruojing Li, Nuo Chen, Zaiping Lin, Weidong Sheng, Shilin Zhou
|
2023-04-04T02:55:57Z
|
http://arxiv.org/abs/2304.01484v3
|
Mapping Degeneration Meets Label Evolution: Learning Infrared Small Target Detection with Single Point Supervision
###### Abstract
Training a convolutional neural network (CNN) to detect infrared small targets in a fully supervised manner has gained remarkable research interests in recent years, but is highly labor expensive since a large number of per-pixel annotations are required. To handle this problem, in this paper, we make the first attempt to achieve infrared small target detection with point-level supervision. Interestingly, during the training phase supervised by point labels, we discover that CNNs first learn to segment a cluster of pixels near the targets, and then gradually converge to predict groundtruth point labels. Motivated by this "mapping degeneration" phenomenon, we propose a label evolution framework named label evolution with single point supervision (LESPS) to progressively expand the point label by leveraging the intermediate predictions of CNNs. In this way, the network predictions can finally approximate the updated pseudo labels, and a pixel-level target mask can be obtained to train CNNs in an end-to-end manner. We conduct extensive experiments with insightful visualizations to validate the effectiveness of our method. Experimental results show that CNNs equipped with LESPS can well recover the target masks from corresponding point labels, and can achieve over 70% and 95% of their fully supervised performance in terms of pixel-level intersection over union (\(IoU\)) and object-level probability of detection (\(P_{d}\)), respectively. Code is available at [https://github.com/XinyiYing/LESPS](https://github.com/XinyiYing/LESPS).
## 1 Introduction
Infrared small target detection has been a longstanding, fundamental yet challenging task in infrared search and tracking systems, and has various important applications in civil and military fields [49, 57], including traffic monitoring [24, 54], maritime rescue [52, 53] and military surveillance [7, 47]. Due to the rapid response and robustness to fast-moving scenes, single-frame infrared small target (SIRST) detection methods have always attracted much more attention, and numerous methods have been proposed. Early methods, including filtering-based [9, 40], local contrast-based [3, 16] and low rank-based [11, 43] methods, require complex handcrafted features with carefully tuned hyper-parameters. Recently, compact deep learning has been introduced in solving the problem of SIRST detection [24, 45, 54]. However, there are only a few attempts, and its potential remains locked, unlike the extensive explorations of deep learning for natural images. This is mainly due to potential reasons, including lack of large-scale, accurately annotated datasets and high stake application scenarios.
Infrared small targets are usually of very small size, weak, shapeless and textureless, and are easily submerged in diverse complex background clutters. As a result, directly adopting existing popular generic object detectors like RCNN series [13, 14, 39, 19], YOLO series [25, 37, 38] and SSD [29] to SIRST detection cannot produce satisfactory performance. Realizing this, researchers have been focusing on developing deep networks tailored for
Figure 1: An illustration of mapping degeneration under point supervision. CNNs always tend to segment a cluster of pixels near the targets with low confidence at the early stage, and then gradually learn to predict GT point labels with high confidence.
infrared small targets by adequately utilizing the domain knowledge. However, most existing deep methods for SIRST detection [8, 24, 54] are fully supervised, which usually requires a large dataset with accurate target mask annotations for training. Clearly, this is costly [5, 26].
Therefore, a natural question arises: _Can we develop a new framework for SIRST detection with single point supervision_? In fact, to substantially reduce the annotation cost for object detection tasks, weakly supervised object detection methods with point supervision [56, 4, 26, 5] have been studied in the field of computer vision. Although these weakly supervised methods achieve promising results, they are not designed for the problem of SIRST detection, and the class-agnostic labels (_i.e.,_ only foreground and background) of infrared small targets hinder their applications [58, 42]. Therefore, in this work, we intend to conduct the first study of weakly supervised SIRST detection with single-point supervision.
A key motivation of this work comes from an interesting observation during the training of SIRST detection networks. That is, with single point labels serving as supervision, CNNs always tend to segment a cluster of pixels near the targets with low confidence at the early stage, and then gradually learn to predict groundtruth (GT) point labels with high confidence, as shown in Fig. 1. It reveals the fact that region-to-region mapping is the intermediate result of the final region-to-point mapping1. We attribute this "mapping degeneration" phenomenon to the special imaging mechanism of infrared system [24, 54], the local contrast prior of infrared small targets [8, 3], and the easy-to-hard learning property of CNNs [44], in which the first two factors result in extended mapping regions beyond the point labels, and the last factor contributes to the degeneration process.
Footnote 1: “region-to-region mapping” represents the mapping learned by CNNs from target regions in images to a cluster of pixels near the targets, while “region-to-point mapping” represents the mapping from target regions in images to the GT point labels.
Based on the aforementioned discussion, in this work, we propose a novel framework for the problem of weakly supervised SIRST detection, dubbed label evolution with single point supervision (LESPS). Specifically, LESPS leverages the intermediate network predictions in the training phase to update the current labels, which serve as supervision until the next label update. Through iterative label update and network training, the network predictions can finally approximate the updated pseudo mask labels, and the network can be simultaneously trained to achieve pixel-level SIRST detection in an end-to-end2 manner.
Footnote 2: Different from generic object detection [59, 32], “end-to-end” here represents achieving point-to-mask label regression and direct pixel-level inference in once training.
Our main contributions are summarized as: (1) We present the first study of weakly supervised SIRST detection, and introduce LESPS that can significantly reduce the annotation cost. (2) We discover the mapping degeneration phenomenon, and leverage this phenomenon to automatically regress pixel-level pseudo labels from the given point labels via LESPS. (3) Experimental results show that our framework can be applied to different existing SIRST detection networks, and enable them to achieve over 70% and 95% of its fully supervised performance in terms of pixel-level intersection over union (\(IoU\)) and object-level probability of detection (\(P_{d}\)), respectively.
## 2 Related Work
**SIRST Detection.** In the past decades, various methods have been proposed, including early traditional paradigms (_e.g.,_ filtering-based methods [40, 9], local contrast-based methods [3, 15, 16, 17, 33, 34], low rank-based methods [50, 6, 28, 43, 51, 11]) and recent deep learning paradigms [24, 20, 21, 45, 52, 53, 7, 54, 4, 4]. Compared to traditional methods, which require delicately designed models and carefully tuned hyper-parameters, convolutional neural networks (CNNs) can learn the non-linear mapping between input images and GT labels in a data-driven manner, and thus generalize better to real complex scenes. As the pioneering work, Wang _et al._[45] first employed a generative adversarial network to achieve a better trade-off between miss detection and false alarm. Recently, more works focus on customized solutions of infrared small target. Specifically, Dai _et al._[7] specialized an asymmetric contextual module, and further incorporated local contrast measure [8] to improve the target contrast. Li _et al._[24] preserved target information by repetitive feature fusion. Zhang _et al._[54] aggregated edge information to achieve shape-aware SIRST detection. Zhang _et al._[53, 52] explored cross-level correlation and transformer-based method [18] to predict accurate target mask. Wu _et al._[46] customized a UIU-Net framework for multi-level and multi-scale feature aggregation. In conclusion, existing works generally focus on compact architectural designs to pursue superior performance in a fully supervised manner. However, due to the lack of a large number of public datasets [24, 7, 45] with per-pixel annotations, the performance and generalization of CNNs are limited. In addition, per-pixel manual annotations are time-consuming and labor-intensive. Therefore, we focus on achieving good pixel-level SIRST detection with weaker supervision and cheaper annotations.
**Weakly Supervised Segmentation with Points.** Recently, point-level annotation has raised more attention in dense prediction tasks such as object detection [56, 4, 12], crowd counting [1, 48, 23, 30] and image segmentation [55, 10, 26, 31, 2, 10]. We mainly focus on image segmentation in this paper. Specifically, Bearman _et al._[2] made the first attempt to introduce an objectiveness potential into a pointly supervised training
loss function to boost segmentation performance. Qian _et al._[36] leveraged semantic information of several labeled points by a distance metric loss to achieve scene parsing. Zhang _et al._[55] proposed an inside-outside guidance approach to achieve instance segmentation by five elaborate clicks. Cheng _et al._[5] designed to provide ten randomly sampled binary point annotations within box annotations for instance segmentation. Li _et al._[26] encoded each instance with kernel generator for panoptic segmentation to achieve 82% of fully-supervised performance with only twenty randomly annotated points. In contrast to these approaches employing complicated prior constraints to segment large generic objects with rich color and fine textures by several elaborate points, we fully exploit the local contrast prior of infrared small target to progressively evolve pseudo masks by single coarse point without any auxiliaries in an end-to-end manner.
## 3 The Mapping Degeneration Phenomenon
In this section, we first describe the mapping degeneration phenomenon together with our intuitive explanation. Then we conduct experiments under single-sample and many-sample training schemes to demonstrate the generality of degeneration, and investigate the influence of generalization on degeneration.
As shown in Fig. 1, given an input image and the corresponding GT point label, we employ U-Net [41] as the baseline SIRST detection network for training. It can be observed that, in the early training phase, network predicts a cluster of pixels near the targets with low confidence. As training continues, the network prediction finally approximates GT point label with gradually increased confidence. We name this phenomenon as "mapping degeneration", and attribute the following reasons to this phenomenon. 1) _Special imaging mechanism of infrared systems_[24, 54]: Targets only have intensity information without structure and texture details, resulting in highly similar pixels within the target region. 2) _High local contrast of infrared small targets_[3, 8]: Pixels within the target region are much brighter or darker with high contrast against the local background clutter. 3) _Easy-to-hard learning property of CNNs_[44]: CNNs always tend to learn simple mappings first, and then converge to difficult ones. Compared with region-to-point mapping, region-to-region mapping is easier, and thus tends to be the intermediate result of region-to-point mapping. In conclusion, the unique characteristics of infrared small targets result in extended mapping regions beyond point labels, and CNNs contribute to the mapping degeneration process.
It is worth noting that the mapping degeneration phenomenon is a general phenomenon in various scenes with infrared small targets. Specifically, we use the training datasets (including 1676 images and their corresponding centroid point label, see details in Section 5.1) to train U-Net under a single-sample training scheme (_i.e.,_ training one CNN on each image). For quantitative analyses, we employ the \(IoU\) results between positive pixels in predictions (_i.e.,_ pixels with confidence higher than half of its maximum value) and GT mask label. Average \(IoU\) results of 1676 CNNs at each epoch are shown by the blue curve in Fig. 2(a), while the number of training samples with maximum \(IoU\) during training phase falling in a threshold range of \([i,i+0.1],(i=0,0.1,\cdots,0.9)\) is illustrated via blue bars in Fig. 2(b). It can be observed from the zoom-in curve and bars that mapping degeneration is a general phenomenon with point supervision, and U-Net can achieve \(IoU>0.5\) on more than 60% of the training images.
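For clarity, the \(IoU\) measurement used above (positive pixels defined as those with confidence above half of the prediction's maximum value) can be sketched as follows; this is an illustration, not the released code.

```python
# Minimal sketch: IoU between "positive" prediction pixels (confidence > half of
# the prediction's maximum) and the ground-truth mask, both given as 2-D arrays.
import numpy as np

def positive_pixel_iou(prediction, gt_mask):
    positive = prediction > 0.5 * prediction.max()
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(positive, gt).sum()
    union = np.logical_or(positive, gt).sum()
    return intersection / max(union, 1)
```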
In addition, we conduct experiments to train U-Net under a many-sample training scheme (_i.e.,_ training one CNN using all images which contain abundant targets with various sizes and shapes) to investigate the effect of generalization on mapping degeneration. Average \(IoU\) results of 1676 images are shown by orange curve in Fig. 2(a). It can be observed that many-sample training scheme needs more time to converge. Moreover, Fig. 2(b) shows that orange bars are slightly lower than blue ones on larger \(IoU\) values (_i.e.,_ 0.5-1.0). It is demonstrated that generalization decelerates but aggravates mapping degeneration. Figure 2(c) shows some zoom-in target regions of images and their predictions under these two training schemes. It can be observed that CNNs can effectively segment a cluster of target pixels under both training schemes in a size-aware manner.
Therefore, an intuitive assumption arises: Can we leverage the intermediate results of CNNs to regress masks? A simple early stopping strategy seems to be a positive answer but is indeed unpractical since mapping degeneration is influenced by various factors, including target intensity, size, shape, and local background clutter (see details in Section 5.2.1). Consequently, there is no fixed optimal stopping epoch for all situations. These
Figure 2: Quantitative and qualitative illustrations of mapping degeneration in CNNs.
observations motivate us to design a special label evolution framework to well leverage the mapping degeneration for pseudo mask regression.
## 4 The Label Evolution Framework
Motivated by mapping degeneration, we propose a label evolution framework named label evolution with single point supervision (LESPS) to leverage the intermediate network predictions in the training phase to update labels. As training continues, the network predictions approximate the updated pseudo mask labels, and network can simultaneously learn to achieve pixel-level SIRST detection in an end-to-end manner. Here, we employ a toy example of 1D curves for easy understanding. As shown in Fig. 3, sub-figures on the left of the dotted line represent the network predictions. Note that, the black curves denote the intermediate predictions within LESPS, while the gray curves represent virtual results produced by the network without label update. On the right of the dotted line, the first and second columns of sub-figures represent current labels and updated labels, respectively, and black arrows represent each round of label update. The overall framework can be summarized as follows. With point label serving as supervision, in the \(1^{st}\) round label update after initial training, the predictions are used to update the current point label to generate the \(1^{st}\) updated label, which is then used to supervise the network training until the \(2^{nd}\) round label update. Through iterative label updates and network training, CNNs can incorporate the local contrast prior to gradually recover the mask labels. From another viewpoint, label evolution consistently updates the supervision to prevent mapping degeneration, and promotes CNNs to converge to the easy region-to-region mapping.
Taking the \(n^{th}\) update as an example, given the current label \(L_{n}\) and the network prediction \(P_{n}\), we perform label update for each target, which consists of three steps: candidate pixel extraction, false alarm elimination, and weighted summation between candidate pixels and current labels. Specifically, the \(d\times d\) local neighborhoods of the \(i^{th}\) target in label \(L_{n}\) and prediction \(P_{n}\) are cropped based on the centroid of the positive pixels3 in label (_i.e.,_\(\hat{L}_{n}\)). Then to reduce error accumulation for label update (see Section 5.2.2 for details), we employ an adaptive threshold (the red dotted line in Fig. 3) to extract the local neighborhood candidate pixels (predictions higher than the red dotted line in Fig. 3). The process can be defined as:
Footnote 3: The value of a pixel is higher than 0.5, which represents that the pixel is more likely to be positive than negative [8, 24]
\[C_{n}^{i}=P_{n}^{i}\odot(P_{n}^{i}>T_{adapt}), \tag{1}\]
where \(C_{n}^{i}\) is the candidate pixels, and \(\odot\) represents element-wise multiplication. \(T_{adapt}\) is the adaptive threshold that correlated to the current prediction \(P_{n}^{i}\) and the positive pixels in label \(\hat{L}_{n}^{i}\), and can be calculated according to:
\[T_{adapt}=max(P_{n}^{i})(T_{b}+k(1-T_{b})\hat{L}_{n}^{i}/(hwr)), \tag{2}\]
where \(h\), \(w\) are the height and width of input images, and \(r\) is set to 0.15% [7, 24]. As shown in Fig. 4 (a), \(T_{b}\) is the minimum threshold, and \(k\) controls the threshold growth rate. An increasing number of \(\hat{L}_{n}^{i}\) leads to the increase of the threshold, which can reduce error accumulation of low contrast targets and strong background clutter.
To eliminate false alarms by local neighborhood noise, we exclude the eight connective regions of candidate pixels that have no intersection with positive pixels of labels, as shown in Fig. 4 (b). This process is defined as:
\[E_{n}^{i}=C_{n}^{i}\odot F_{n}^{i}, \tag{3}\]
where \(E_{n}^{i}\) is the candidate pixels after false alarm elimination, and \(F_{n}^{i}\) is the mask against false alarm pixels.
We then perform average weighted summation between candidate pixels \(E_{n}^{i}\) and current label \(L_{n}^{i}\) to achieve label update. The process can be formulated as:
\[L_{n+1}^{i}=L_{n}^{i}\odot(1-N_{n}^{i})+\frac{L_{n}^{i}+E_{n}^{i}}{2}\odot N_ {n}^{i}, \tag{4}\]
where \(L_{n+1}^{i}\) is the updated label in the \(n^{th}\) round, which serves as new supervision for training in the \(n+1^{th}\) round,
Figure 4: (a) Adaptive threshold \(T_{adapt}\) with respect to positive pixels \(\hat{L}_{n}^{i}\) and hyper-parameters \(k\), \(T_{b}\). Pink dotted line represents the constant \(hwr\). (b) An illustration of false alarm elimination. Red circle and dot represent positive pixels and centroid point of label. Orange circle represents false alarms.
Figure 3: An illustration of label evolution with single point supervision (LESPS). During training, intermediate predictions of CNNs are used to progressively expand point labels to mask labels. Black arrows represent each round of label updates.
and \(N_{n}^{i}=(P_{n}^{i}>T_{adapt})\odot F_{n}^{i}\). Note that, the first term represents GT labels below red dotted lines, and the second term represents the average weighted summation between predictions and GT labels above red dotted lines.
It is worth noting that we provide three conditions to ensure network convergence: 1) Average weighted summation between predictions and labels promotes CNNs to converge as predictions approximate labels. 2) Pixel-adaptive threshold increases with the increase of positive pixels in updated labels, which slows down or suspends the label update. 3) As label evolution introduces more target information for training, CNNs grow to mature, and learn to distinguish targets from backgrounds.
We start label evolution after initial evolution epoch \(T_{epoch}\), and perform label update every \(f\) epoch until the end of training. Note that, our epoch-based threshold \(T_{epoch}\) is a coarse threshold to ensure that networks attend to targets instead of background clutter.
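A minimal sketch of one round of this label update (Eqs. (1)-(4)) for a single target neighborhood is given below; the official implementation is available at https://github.com/XinyiYing/LESPS, so this function is only an illustration, and the default values of \(T_{b}\) and \(k\) are placeholders rather than the settings used in the paper.

```python
# Minimal sketch of one label-update round for one target neighborhood. `pred` and
# `label` are 2-D arrays cropped around the target centroid; `num_positive` is the
# number of positive pixels in the current label; h, w are the full-image size.
import numpy as np
from scipy import ndimage

def update_target_label(pred, label, num_positive, h, w, Tb=0.5, k=0.5, r=0.0015):
    # Eq. (2): adaptive threshold grows with the number of positive label pixels.
    T_adapt = pred.max() * (Tb + k * (1.0 - Tb) * num_positive / (h * w * r))
    # Eq. (1): candidate pixels above the adaptive threshold.
    candidates = pred * (pred > T_adapt)
    # Eq. (3): keep only 8-connected components that touch current positive labels.
    components, n = ndimage.label(candidates > 0, structure=np.ones((3, 3)))
    keep = np.zeros(candidates.shape, dtype=bool)
    for i in range(1, n + 1):
        region = components == i
        if np.logical_and(region, label > 0.5).any():
            keep |= region
    # Eq. (4): average-weighted summation of current label and kept candidates.
    mask = np.logical_and(pred > T_adapt, keep)
    return np.where(mask, 0.5 * (label + candidates), label)
```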
## 5 Experiments
In this section, we first describe the implementation details, and then make comprehensive analyses of the mapping degeneration phenomenon and our label evolution framework. In addition, we apply our method to the state-of-the-art SIRST detection methods with point supervision, and make comparisons with their fully supervised counterparts. Moreover, we make comparisons of our dynamic updated pseudo labels with fixed pseudo labels, and discuss the calculation of loss function.
### Implementation Details
Three public datasets NUAA-SIRST [8], NUDT-SIRST [24], and IRSTD-1K [54] are used in our experiments. We followed [24] to split the training and test sets of NUAA-SIRST and NUDT-SIRST, and followed [54] to split IRSTD-1K. We employed two pixel-level metrics (_i.e.,_ intersection over union (\(IoU\)) and pixel accuracy (\(PA\))) and two target-level metrics (_i.e.,_ probability of detection (\(P_{d}\)) and false-alarm rate (\(F_{a}\))) for performance evaluation.
During training, all images were normalized and randomly cropped into patches of size 256\(\times\)256 as network inputs. We augmented the training data by random flipping and rotation. Due to the extreme positive-negative sample imbalance (less than 10 vs. more than 256\(\times\)256) in SIRST detection with point supervision, we employed focal loss4[27] to stabilize the training process. All the networks were optimized by the Adam method [22]. Batch size was set to 16, and learning rate was initially set to 5\(\times\)10\({}^{-4}\) and reduced by ten times at the 200\({}^{th}\) and 300\({}^{th}\) epochs. We stopped training after 400 epochs. All models were implemented in PyTorch [35] on a PC with an Nvidia GeForce 3090 GPU.
Footnote 4: Focal loss is calculated between current evolved and GT labels to supervise the network training until the next round label update.
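A minimal PyTorch sketch of this optimization setup is shown below; the focal-loss \(\alpha\) and \(\gamma\) values are placeholders rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, alpha=0.75, gamma=2.0):
    """Binary focal loss; the alpha/gamma values here are placeholders, not the paper's."""
    ce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * target + (1 - p) * (1 - target)        # probability assigned to the true class
    a_t = alpha * target + (1 - alpha) * (1 - target)
    return (a_t * (1 - p_t) ** gamma * ce).mean()

def make_optimizer(model):
    """Adam with lr 5e-4, divided by ten at epochs 200 and 300 (training stops at 400)."""
    opt = torch.optim.Adam(model.parameters(), lr=5e-4)
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[200, 300], gamma=0.1)
    return opt, sched
```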
### Model Analyses
#### 5.2.1 Analyses of Mapping Degeneration
We use synthetic images (simulated targets and real backgrounds [24]) to investigate the mapping degeneration phenomenon with respect to different characteristics of targets (_i.e.,_ intensity and size5) and point labels (_i.e.,_ locations and numbers). We employ U-Net [41] as the baseline detection network, and use centroid point as GT label if not specified. We calculate \(IoU\) between positive pixels in predictions and GT mask labels of each epoch for quantitative analyses. In addition, we visualize the zoom-in target regions of simulated images with GT point labels (_i.e.,_ red dots) and corresponding CNN predictions (in the epoch reaching maximum \(IoU\)). To reduce training randomness, we show the average \(IoU\) results and visualization results over 100 runs.
Footnote 5: Shape, and local background clutter are investigated in supplemental material.
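A simple sketch of how such a synthetic target can be generated is shown below; the mapping from the quoted radius to the Gaussian width (\(\sigma=\) radius\(/3\)) and the flat background are our assumptions, not the exact simulation protocol of [24].

```python
import numpy as np

def gaussian_target(background, peak, radius, center=None):
    """Add a Gaussian-shaped extended target to a background patch.
    The width mapping sigma = radius / 3 is an assumption, not the paper's exact protocol."""
    h, w = background.shape
    cy, cx = center if center is not None else (h // 2, w // 2)
    yy, xx = np.mgrid[0:h, 0:w]
    sigma = radius / 3.0
    return background + peak * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))

# e.g. a peak-500, radius-13 target on a flat background, with the point label at the centroid
img = gaussian_target(np.full((64, 64), 30.0), peak=500, radius=13)
point_label = np.zeros(img.shape, dtype=np.uint8)
point_label[32, 32] = 1
```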
**Target Intensity.** We simulate Gaussian-based extended targets with different peak values (_i.e.,_ 10, 50, 100, 200, 500) to investigate the influence of target intensity on mapping degeneration. Quantitative results in Fig. 5(a) show that intensity higher than 100 leads to a positive correlation between intensity and maximum \(IoU\), while lower intensity leads to a negative one. In addition, curve "intensity10" reaches maximum \(IoU\) at around epoch 150 while the others peak before epoch 50, which demonstrates that overly small intensity decelerates degeneration. Visualization results show that our method can highlight target regions well under various intensities.
Figure 5: \(IoU\) and visualization results of mapping degeneration with respect to different characteristics of targets (_i.e.,_ (a) intensity, (b) size) and point labels (_i.e.,_ (c) locations and (d) numbers). We visualize the zoom-in target regions of input images with GT point labels (_i.e.,_ red dots in images) and corresponding CNN predictions (in the epoch reaching maximum \(IoU\)).
**Target Size.** We simulate Gaussian-based extended targets with different radii (_i.e.,_ 3, 5, 7, 9, 13) to investigate the influence of target size on mapping degeneration. Quantitative results in Fig. 5(b) show that a larger target size leads to a lower maximum \(IoU\) and a shorter time to reach it. That is because the size discrepancy between targets and GT point labels increases as target size increases, which aggravates and accelerates mapping degeneration. Visualization results show that CNNs can predict a cluster of pixels in a size-aware manner, and the peak values of predictions decrease as target size increases.
**Locations of Point Label.** We simulate a Gaussian-based extended target (with intensity 500 & radius 13), and place point labels at different distances away from the target centroid to investigate the influence of point label locations on mapping degeneration. Results in Fig. 5(c) show that small perturbations of label locations (less than 3 pixels) have a minor influence on the maximum \(IoU\) results. However, severe location perturbations (larger than 3 pixels) can lead to a significant maximum \(IoU\) drop, and the drop is more obvious when the point label is close to the edge. Note that, the same targets with different label locations reach maximum \(IoU\) at the same time, which demonstrates that the speed of mapping degeneration is irrelevant to the position of labels.
**Number of Points in Label.** We simulate a Gaussian-based extended target (with intensity 500 & radius 13) and set different numbers of points in labels to investigate its influence on mapping degeneration. Quantitative results in Fig. 5(d) show that as the number of points increases, CNNs can learn more target information to achieve higher maximum \(IoU\) results. In addition, the speed of mapping degeneration is irrelevant to the point number. Visualization results show that peak values of predictions increase as the number of points increases, which demonstrates that stronger supervision alleviates mapping degeneration. The conclusion well supports our label evolution framework.
#### 5.2.2 Analyses of the Label Evolution Framework
In this subsection, we conduct experiments to investigate the effectiveness and the optimal parameter settings of our label evolution framework (_i.e.,_ LESPS). We employ \(PA\) and \(IoU\) between the positive pixels in updated labels and the GT mask labels to quantitatively evaluate the accuracy and expansion degree of the current label.
**Effectiveness.** We compare the average results of NUAA-SIRST [8], NUDT-SIRST [24], and IRSTD-1K [54] datasets achieved by DNA-Net with (_i.e.,_ w/) and without (_i.e.,_ w/o) LESPS under centroid and coarse point supervision, respectively. Note that, the results of DNA-Net w/o LESPS are calculated under extremely low threshold6 (_i.e.,_ 0.15) while those of DNA-Net w/ LESPS are calculated under the standard threshold (_i.e.,_ 0.5 [24, 54]). As shown in Table 1, as compared to full supervision, the results of DNA-Net w/o LESPS are extremely low, which demonstrates that SIRST detection on single point supervision is a challenging task. In contrast, DNA-Net w/ LESPS can achieve significant performance improvement under both point supervisions in terms of all the metrics, which approximate the performance of their fully supervised counterparts. Note that, \(P_{d}\) results of DNA
\begin{table}
\begin{tabular}{|l|c c c|c c c|c c c|} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Centroid} & \multicolumn{3}{c|}{Coarse} & \multicolumn{3}{c|}{Full} \\ \cline{2-10} & \(IoU\) & \(P_{d}\) & \(F_{a}\) & \(IoU\) & \(P_{d}\) & \(F_{a}\) & \(IoU\) & \(P_{d}\) & \(F_{a}\) \\ \hline w/o LESPS & 5.12 & 89.19 & 0.68 & 2.96 & 49.89 & 0.30 & \multirow{2}{*}{75.67} & \multirow{2}{*}{96.18} & \multirow{2}{*}{22.94} \\ w/ LESPS & 57.34 & 91.87 & 20.24 & 56.18 & 91.49 & 18.32 & & & \\ \hline \end{tabular}
\end{table}
Table 1: Average \(IoU\) (\(\times 10^{2}\)), \(P_{d}\) (\(\times 10^{2}\)) and \(F_{a}\) (\(\times 10^{6}\)) values on NUAA-SIRST [8], NUDT-SIRST [24] and IRSTD-1K [54] achieved by DNA-Net with (w/) and without (w/o) LESPS under centroid and coarse point supervision, together with full supervision.
Figure 6: \(PA\) (P) and \(IoU\) (I) results of LESPS with respect to (a) initial evolution epoch \(T_{epoch}\), (b) \(T_{b}\) and (c) \(k\) of evolution threshold, and (d) evolution frequency \(f\).
Figure 7: Quantitative and qualitative results of evolved target masks.
Net w/o LESPS under coarse point supervision are over half lower than those under the centroid ones, while the results of DNA-Net w/ LESPS under these two kinds of point supervision are comparable. It demonstrates that LESPS can generalize well to manual annotation errors.
In addition, we evaluate the evolved target masks during training. Quantitative results in Fig. 7(a) show average \(IoU\) values between positive pixels of evolved target masks and GT labels over 20 training runs, which demonstrates that the quality of pseudo target masks consistently increases during training. Qualitative results in Fig. 7(b) demonstrate that networks can effectively learn to expand point labels to mask labels. Furthermore, we visualize the labels regressed by our LESPS during training together with some network predictions during inference in Figs. 8(a) and (b). As shown in Fig. 8(a), compared with GT mask labels, the evolved labels are more closely aligned with the objects in the images (_e.g.,_ GT masks of Misc_4, XDU113 exceed the target regions due to visual edge ambiguity), which demonstrates that LESPS can alleviate manual annotation errors. Figure 8(b) shows that DNA-Net w/ LESPS can effectively achieve accurate pixel-level SIRST detection in an end-to-end manner. Please refer to the supplemental materials for more visual results.
**Initial Evolution Epoch.** We investigate the optimal value of the epoch-based threshold \(T_{epoch}\). Figure 6(a) shows that a small initial evolution epoch results in a significant difference between \(PA\) and \(IoU\) (_i.e.,_ 0.94 vs. 0.04 with \(T_{epoch}\)=10). That is because early label evolution introduces many error pixels, resulting in severe error accumulation and network convergence failure. Increasing the initial evolution epoch can reduce error accumulation and promote network convergence (0.60 vs. 0.54 with \(T_{epoch}\)=50). However, an overly large initial evolution epoch (_i.e.,_ a high degree of mapping degeneration) results in inferior performance (0.21 vs. 0.21 with \(T_{epoch}\)=100). Therefore, \(T_{epoch}\) is set to 50 in our method.
**Evolution Threshold.** We investigate the optimal values of \(k\) and \(T_{b}\) in the evolution threshold. \(T_{b}\) is the minimum threshold, and controls the evolution speed and the degree of error accumulation. \(k\) determines the maximum threshold, and controls the growth rate of the threshold. As shown in Figs. 6(b) and 6(c), both overly large and overly small values of \(T_{b}\) and \(k\) result in inferior performance. Therefore, we choose \(k\)=1/2 and \(T_{b}\)=0.5 in our method.
**Evolution Frequency.** We investigate the optimal value of evolution frequency \(f\). Figure 6(d) shows that evolution frequency is positively correlated to \(PA\) and \(IoU\). However, high evolution frequency needs more time for label updates. To balance performance and efficiency, we choose \(f\)=5 in our method. Interestingly, higher frequency (_i.e.,_ \(f\)=2) does not cause severe error accumulation, which also demonstrates the effectiveness of the convergence conditions of our LESPS. Please refer to the supplemental materials for more discussions of the convergence issue.
### Comparison to State-of-the-art Methods
**Comparison to SIRST detection methods.** We apply our LESPS to several state-of-the-art CNN-based methods, including ACM [7], ALCNet [8] and DNA-Net [24]. For fair comparisons, we retrained all models on NUAA-SIRST [8], NUDT-SIRST [24], and IRSTD-1K [54] datasets with the same settings. In addition, we add the results of six fully supervised CNN-based methods (MDvsFA [45], ACM [7], ALCNet [8], DNA-Net [24], ISNet [54], UIU-Net [46]) and six traditional methods (Top-Hat [40], RLCM [15], TLLCM [16], MSPCM [17], IPI [11], PSTNN [51]) as the baseline results.
Quantitative results in Table 2 show that CNN-based methods equipped with LESPS can outperform all the traditional methods. In addition, they can also achieve 71-75% \(IoU\) results and comparable \(P_{d}\) and \(F_{a}\) results of their fully supervised counterparts. Qualitative results in Fig. 9 show that CNN-based methods equipped with LESPS can produce outputs with precise target mask and low false alarm rate, and can generalize well to complex scenes. Please refer to supplemental materials for more quantitative
Figure 8: Visualizations of regressed labels during training and network predictions during inference with centroid and coarse point supervision.
Figure 9: Visual detection results of different methods achieved on NUAA-SIRST [8], NUDT-SIRST [24] and IRSTD-1K [54] datasets. Correctly detected targets and false alarms are highlighted by red and orange circles, respectively.
and qualitative results.
**Comparison to other pseudo labels.** We compare our dynamically updated pseudo labels with fixed pseudo labels generated by input intensity thresholding and local contrast-based methods [15, 16, 34]. Specifically, given a GT point label, we only preserve the eight-connected regions of the detection results that share pixels with the GT point label. Then, we employ the pseudo labels to retrain DNA-Net [24] from scratch. As shown in Table 3, compared with fixed pseudo labels, the dynamically updated pseudo labels from LESPS achieve the highest \(IoU\) values with comparable \(P_{d}\) and a reasonable \(F_{a}\) increase.
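A minimal sketch of this component-filtering step (using SciPy's 8-connected labeling; function and variable names are ours):

```python
import numpy as np
from scipy import ndimage

def point_filtered_pseudo_label(detection, point_label):
    """Keep only the 8-connected components of a detection map that contain a GT point."""
    comps, _ = ndimage.label(detection > 0, structure=np.ones((3, 3)))   # 8-connectivity
    keep_ids = np.unique(comps[point_label > 0])
    keep_ids = keep_ids[keep_ids > 0]
    return np.isin(comps, keep_ids).astype(np.uint8)
```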
### Discussion of Loss Function
In this subsection, we investigate loss functions that compute the negative loss on different sets of background points. Average results of different baseline methods under centroid point supervision are shown in Table 4. Extremely limited handcrafted background points7 lead to many false alarms. Random sampling8 introduces more background points and better alleviates the class imbalance, resulting in large performance improvements. However, the above simple variants introduce a huge number of false alarms (34-1.8K times that of using all points), which is not practical for real applications, but motivates further thorough investigation in the future.
Footnote 7: Points are sampled near targets, and are fixed during training.
## 6 Conclusion
In this paper, we presented the first work to achieve weakly-supervised SIRST detection using single-point supervision. We discovered the mapping degeneration phenomenon and proposed a label evolution framework named label evolution with single point supervision (LESPS) to automatically achieve point-to-mask regression. Through LESPS, networks can be trained to achieve SIRST detection in an end-to-end manner. Extensive experiments and insightful visualizations have fully demonstrated the effectiveness and superiority of our method. In addition, our method can be applied to different networks to achieve over 70% and 95% of their fully supervised performance on pixel-level \(IoU\) and object-level \(P_{d}\), respectively. We hope our interesting findings and promising results can inspire researchers to rethink the feasibility of achieving state-of-the-art performance in SIRST detection with much weaker supervision.
\begin{table}
\begin{tabular}{|l|c|c c|c c|c c|c c|c c|} \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{ACM} & \multicolumn{3}{c|}{ALCNet} & \multicolumn{3}{c|}{DNA-Net} \\ \cline{2-13} & \(IoU\) & \(P_{d}\) & \(F_{a}\) & \(IoU\) & \(P_{d}\) & \(F_{a}\) & \(IoU\) & \(P_{d}\) & \(F_{a}\) & \(IoU\) & \(P_{d}\) & \(F_{a}\) \\ \hline hand\({}_{1}\) & 0.54 & 95.79 & 47.06 & 0.16 & 95.19 & 26.26 & 1.43 & 97.80 & 26.91 \\ hand\({}_{2}\) & 0.12 & **97.24** & 295.17 & 0.15 & 96.62 & 246.08 & 5.41 & **98.18** & 8.48 \\ hand\({}_{5}\) & 0.11 & 96.36 & 316.73 & 0.08 & **97.29** & 363.25 & 3.68 & 98.13 & 7.29 \\ rand\({}_{1}\) & 8.06 & 93.45 & 3.56 & 8.57 & 92.97 & 3.21 & 18.74 & 94.69 & 0.58 \\ rand\({}_{2}\) & 10.78 & 92.72 & 2.22 & 10.78 & 91.16 & 92.71 & 22.85 & 94.81 & 0.42 \\ rand\({}_{3}\) & **13.39** & 92.66 & 1.35 & **11.87** & 93.26 & 0.89 & **24.80** & 95.00 & 3.4 \\ All (Ours) & 3.95 & 87.15 & **0.02** & 4.08 & 88.93 & **0.02** & 5.12 & 89.18 & **0.01** \\ \hline \end{tabular}
\end{table}
Table 4: Average \(IoU\)(\(\times 10^{2}\)), \(P_{d}\)(\(\times 10^{2}\)), \(F_{a}\)(\(\times 10^{3}\)) values on NUAA-SIRST [8], NUDT-SIRST [24] and IRSTD-1K [54] datasets of ACM, ALCNet and DNA-Net under centroid point supervision, with the negative loss computed on handcrafted (hand), randomly sampled (rand), and all background points.
\begin{table}
\begin{tabular}{|l|c|c c c|c c c|c c c|c c c|} \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Description} & \multicolumn{3}{c|}{NUAA-SIRST [3]} & \multicolumn{3}{c|}{NUDT-SIRST [24]} & \multicolumn{3}{c|}{IRESTD-1K [54]} & \multicolumn{3}{c|}{Average} \\ \cline{3-14} & & \(IoU\) & \(P_{d}\) & \(F_{a}\) & \(IoU\) & \(P_{d}\) & \(F_{a}\) & \(IoU\) & \(P_{d}\) & \(F_{a}\) & \(IoU\) & \(P_{d}\) & \(F_{a}\) & \(IoU\) & \(P_{d}\) & \(F_{a}\) \\ \hline Top-Har [40] & Filtering & 7.14 & 79.84 & 1012.00 & 20.72 & 78.41 & 166.70 & 10.06 & 75.11 & 1432.00 & 12.64 & 77.79 & 870.23 \\ RLCM [15] & Local Contrast & 21.02 & 80.61 & 199.15 & 15.14 & 66.35 & 163.00 & 14.62 & 65.66 & 17.95 & 16.06 & 68.70 & 98.77 \\ TLLCM [16] & Local Contrast & 11.03 & 79.47 & 7.27 & 7.06 & 62.01 & 46.12 & 5.36 & 63.97 & 4.93 & 7.22 & 65.45 & 21.42 \\ MSPCN [34] & Local Contrast & 12.38 & 83.27 & 17.77 & 5.86 & 55.87 & 115.96 & 7.33 & 60.27 & 15.24 & 7.23 & 61.53 & 55.13 \\ IPI [11] & Low Rank & 25.67 & 85.55 & 11.47 & 17.76 & 74.49 & 41.23 & 27.92 & 81.37 & 16.18 & 23.78 & 80.47 & 22.96 \\ PSTN [51] & Low Rank & 22.40 & 77.95 & 29.11 & 14.85 & 66.13 & 41.72 & 42.57 & 71.99 & 35.26 & 20.61 & 72.02 & 36.18 \\ \hline MDysFai [45] & CNN Full & 61.77 & 92.40 & 64.90 & 45.38 & 86.03 & 200.71 & 35.40 & 88.56 & 99.22 & 47.52 & 88.10 & 121.61 \\ ISNet [54] & CNN Full & 72.04 & 94.68 & 42.46 & 71.27 & 96.93 & 96.84 & 60.61 & 94.28 & 61.28 & 67.97 & 95.30 & 66.86 \\ UIU-Net [46] & CNN Full & 69.90 & 95.82 & 51.20 & 75.91 & 96.83 & 18.61 & 61.11 & 92.93 & 26.87 & 68.97 & 95.19 & 32.23 \\ \hline CNN Full & 64.92 & 90.87 & 12.76 & 57.42 & 91.75 & 37.73 & 57.49 & 91.58 & 43.36 & 59.94 & 91.40 & 32.12 \\ \hline ACM [7] & CNN CentroGrid+ & 49.23 & 89.35 & 40.95 & 42.09 & 91.11 & 38.24 & 41.48 & 88.89 & 60.46 & 44.25 & 89.78 & 46.55 \\ CNN Coarse+ & 47.81 & 88.21 & 40.75 & 40.64 & 81.11 & 49.45 & 40.37 & 92.59 & 64.81 & 42.94 & 87.30 & 51.67 \\ \hline CNN Full & 67.91 & 92.78 & 37.04 & 61.78 & 91.32 & 36.36 & 62.03 & 90.91 & 42.46 & 63.91 & 91.67 & 38.62 \\ ALCNet [8] & CNN CentroGrid+ & 50.62 & 92.02 & 36.84 & 41.58 & 92.28 & 67.01 & 44.90 & 90.57 & 84.68 & 45.70 & 91.62 & 62.84 \\ CNN Coarse+ & 51.00 & 90.87 & 42.40 & 44.14 & 92.80 & 32.10 & 46.75 & 92.26 & 66.34 & 43.70 & 91.98 & 46.27 \\ \hline CNN Full & 76.86 & 96.96 & 22.5 & 87.42 & 98.31 & 24.5 & 62.73 & 93.27 & 21.81 & 75.67 & 96.1
|
2308.01989
|
Coupling the thermal acoustic modes of a bubble to an optomechanical
sensor
|
We report experimental observations of the volume acoustic modes of air
bubbles in water, including both the fundamental Minnaert breathing mode and a
family of higher-order modes extending into the megahertz frequency range.
Bubbles were placed on or near optomechanical sensors having a noise floor
substantially determined by ambient medium fluctuations, and which are thus
able to detect thermal motions of proximate objects. Bubble motions could be
coupled to the sensor through both air (i.e., with the sensor inside the
bubble) and water, verifying that sound is radiated by the high-order modes. We
also present evidence for elastic-Purcell-effect modifications of the sensor's
vibrational spectrum when encapsulated by a bubble, in the form of
cavity-modified linewidths and line shifts.
|
K. G. Scheuer, F. B. Romero, R. G. DeCorby
|
2023-08-03T18:59:23Z
|
http://arxiv.org/abs/2308.01989v3
|
# Observation of the thermal acoustic breathing modes of a bubble
###### Abstract
We report experimental observations of the volume acoustic modes of air bubbles in water, including both the fundamental Minnaert breathing mode and a family of higher-order modes extending into the megahertz frequency range. Bubbles were placed on or near optomechanical sensors having a noise floor substantially determined by ambient medium fluctuations, and thus able to detect thermal Brownian motions of proximate objects. Bubble motions could be coupled to the sensor through both air (_i.e._, with the sensor inside the bubble) and water, verifying that sound is radiated by the high-order modes. In some cases, we also observed hybridization between the mechanical modes of the sensor and those of nearby bubbles. Finally, and more speculatively, we found evidence for Purcell-effect modification of the sensor vibrational spectrum when encapsulated by a bubble.
Owing in large part to their inherent beauty and transient nature, bubbles have long been a source of fascination [1]. They also host rich physics, with important technological implications [2], especially regarding their interactions with acoustic waves. Natural oscillations of entrained gas bubbles produce audible signals, such as the familiar sound of running water [2, 3]. When they are actively driven by an external pressure wave, the cyclic collapse of bubbles can result in extremely energetic processes such as the cavitation-induced damage of solid objects [2]. Other phenomena associated with bubble cavitation include the emission of light (_i.e._, sonoluminescence [4]) and the catalysis of reactions (_i.e._, sonochemistry [5]). Moreover, there is growing interest in bubble-mediated nonlinear interactions between phonons and photons [6]. For such a commonplace object, they continue to attract a remarkable amount of research.
Many acoustic properties of bubbles can be explained in terms of the well-known Minnaert 'breathing mode' [1, 3, 7]. For a spherical air-bubble in water at atmospheric pressure, for example, the resonant frequency of this mode can be approximated as \(f_{\rm M}\sim 3.3/R\), where \(R\) is the radius of the bubble. It follows that the associated acoustic wavelength (in both air and water) is much larger than the bubble dimensions (\(\lambda>\!>R\)) [3]. In fact, the Minnaert breathing mode is the lowest-order solution of the so-called Rayleigh-Plesset (RP) equation [7], which (_a priori_) assumes that the pressure inside the bubble is a spatially uniform function of time only. The RP equation also permits higher-order solutions - known as'shape modes' [2, 8] - which are essentially capillary modes mediated by surface tension. These shape modes do not interact strongly with an acoustic radiation field [2], at least not within a linear approximation [9].
Minnaert's original derivation effectively treated the bubble as a mass-spring system, where the compression/rarefaction of the confined gas plays the role of the spring and the inertia of the surrounding liquid contributes the mass. However, from a higher-level perspective, a bubble might be viewed as an elastic body (the gas sphere) bounded by a viscous, compressible fluid. Alternatively, one can think of the bubble as an enclosure or "room" [10] within a homogeneous fluid medium. In either case, one would then expect a bubble to support a family of volume acoustic modes, for which the pressure (and related parameters) inside the bubble are functions of both time _and position_. The volume modes of small bubbles are expected to occur at very high ultrasonic frequencies and would thus be subject to higher acoustic attenuation. Perhaps as a result, they have not been widely studied in the literature.
A notable exception is the theoretical work by Devaud _et al._[3], who solved for a family of radial breathing modes, drawing an analogy to the Fabry-Perot modes of a spherical-mirror optical cavity. Remarkably, their work demonstrated that the Minnaert resonance is in fact the lowest-order mode within this set. Its low resonant frequency (_i.e._, a wavelength much larger than the bubble dimensions) is attributed to the dispersion imparted by the curved bubble interface. An analogous effect can occur in optical cavities with dispersive mirrors [11].
While the higher-order acoustic modes are anticipated, and confirmed by Devaud's work, to date there is a scarcity of experimental evidence. Here, we report direct observation of these high-order volume breathing modes in the \(\sim\)1-10 MHz ultrasonic range, for single tethered bubbles coupled to on-chip optomechanical sensors [12]. The noise floor of these sensors is determined mainly by thermomechanical noise, and in fact they are very nearly ambient-medium-noise-limited [13, 14, 15, 16] over a wide frequency range. As a result, resonant vibrational energy confined within the acoustic modes of the bubble, and driven only by Brownian motion, is imprinted directly on the noise of the sensor. The measured acoustic mode spectrum, for a range of bubble sizes, is shown to be in good agreement with numerical predictions. We also provide evidence that these high-frequency modes are coupled to propagating acoustic waves.
To provide context, we start by considering the archetypal case of a spherical air bubble in a water medium. As mentioned, Devaud _et al._[3] provided a quasi-analytical derivation of the radially symmetric volume modes for this system. Inside the air bubble, these modes can be expressed as \(p(r)=(\lambda/r)\cdot\sin(q_{\rm a}\cdot r)\), where \(p\) is pressure, \(r\) is the radial coordinate, \(\lambda\) is a constant amplitude, \(q_{\rm a}=(\omega/c)\) is the wave number, and \(c\) is the sound velocity in air. They furthermore showed that the resonant frequencies for a bubble of radius \(R\) can be expressed:
\[x_{a}=q_{a}R\approx 0.0623,4.49,7.73,...\approx\frac{(2n+1)\pi}{2}\;\;. \tag{1}\]
The lowest-order resonance is the well-known Minnaert breathing mode, characterized by a nearly homogeneous internal pressure. The others are higher-order, radially symmetric breathing modes. Notably, the first higher-order (radially symmetric) resonance is predicted to lie at \(\sim\) 72x higher frequency than the fundamental Minnaert resonance. As an example, for a typical millimeter-scale air bubble in water with Minnaert frequency \(f_{\rm M}\sim\) 10 kHz, this places the next radial resonance at \(\sim\) 720 kHz.
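As a quick numeric illustration of Eq. (1), scaling the higher eigenvalues from the Minnaert frequency (the radius below is simply the example value implied by \(f_{\rm M}\sim 10\) kHz):

```python
# Radial-mode frequencies implied by Eq. (1), scaled from the Minnaert frequency f_M ~ 3.3/R.
R = 0.33e-3                        # bubble radius in metres (example value giving f_M ~ 10 kHz)
x = [0.0623, 4.49, 7.73]           # eigenvalues x_a = q_a * R from Eq. (1)
f_M = 3.3 / R                      # fundamental (Minnaert) frequency, Hz
freqs = [x_n / x[0] * f_M for x_n in x]
print([round(f / 1e3) for f in freqs])   # -> [10, 721, 1241] kHz
```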
A simpler model would treat the bubble as an "acoustic room" with hard boundaries [10], an approximation which can be justified by the large acoustic impedance mismatch between the air and the water, and between the air and the underlying substrate for a tethered bubble. While Flanagan's solutions are for a hemispherical room [10], they can be extended to the full sphere by invoking image theory since the equatorial plane is a perfectly reflecting hard boundary. In this simple system, the air cavity hosts pure eigen-modes with pressure distribution given by:
\[p(r,\theta,\varphi,t)\sim\left\{\begin{matrix}\sin(m\varphi)\\ \cos(m\varphi)\end{matrix}\right\}\cdot L_{l}^{m}(\cos\theta)\cdot J_{l}\!\left(\frac{\omega_{ln}\cdot r}{c}\right)\;, \tag{2}\]
where \(\varphi\) and \(\theta\) are the azimuthal and polar angles, respectively, \(L_{l}\)\({}^{m}\) is a Legendre function of the first kind (degree \(l\), order \(m\)) defined in Ref. [10], and \(J_{l}\) is a spherical Bessel function. The resonance frequencies are determined by the positive integers \(l\) and \(n\), and the modes are degenerate for values of the positive integer \(m\) (\(m\leq l\)). Solutions include a subset of purely radial modes:
\[p_{n}(r)\sim J_{0}\!\left(\frac{\omega_{0n}\cdot r}{c}\right)=\frac{\sin(\omega_{0n}r/c)}{\omega_{0n}r/c}\;. \tag{3}\]
We find good overall agreement between the numerical and 'acoustic room' [10] models, including the ordering and pressure profiles of the modes. The Minnaert resonance, which is not captured by the hard-boundary model, is well-predicted by the other two models.
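A small sketch of the hard-boundary ('acoustic room') estimate is given below, assuming the usual rigid-wall condition \(\partial p/\partial r=0\) at \(r=R\), so that the radial eigenvalues are zeros of the spherical Bessel derivative; for \(l=0\) this reproduces \(x\approx 4.49,7.73,\ldots\) of Eq. (1), and the Minnaert mode is absent by construction.

```python
import numpy as np
from scipy.special import spherical_jn
from scipy.optimize import brentq

def rigid_sphere_modes(l, R, c=343.0, n_modes=3):
    """Acoustic eigenfrequencies of an air-filled sphere with a hard wall:
    roots of d/dx[j_l(x)] = 0 with x = 2*pi*f*R/c (for l = 0, tan(x) = x)."""
    f = lambda x: spherical_jn(l, x, derivative=True)
    roots, x = [], 0.5
    while len(roots) < n_modes:
        if f(x) * f(x + 0.1) < 0:
            roots.append(brentq(f, x, x + 0.1))
        x += 0.1
    return [r * c / (2 * np.pi * R) for r in roots]

print(rigid_sphere_modes(l=0, R=0.33e-3))   # ~0.74, 1.28, 1.80 MHz for a 0.33 mm bubble
```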
We turn our attention next towards the experimental observation of these breathing modes. This was achieved by placing a tethered bubble over an optomechanical sensor, effectively positioning a sensor inside a bubble, as illustrated in Fig. 2. Sensor chips were first covered with a "puddle" of DI water and then air bubbles were injected using a syringe and needle [8]. A technique was developed for accurately positioning a bubble over an individual sensor, by first tethering it to the substrate plane and then dragging it with the dispensing needle. A video demonstrating this process has been included as supplementary information.
The optomechanical sensors used here are 100-\(\upmu\)m-diameter 'buckled dome' Fabry-Perot cavities, described in detail elsewhere [12]. The upper buckled mirror functions as the mechanical oscillator, and its vibrational motion is imprinted on the light reflected from an interrogation laser tuned near an optical cavity resonance. As mentioned above, these sensors operate at the thermomechanical noise limit [14] with a significant contribution from ambient medium noise [16]. As a result, they are exquisitely sensitive to their local environment [17, 18]. For the measurements described below, the reflected laser light was delivered to a high-speed photodetector and power spectral density (PSD) plots were generated from sampled noise signals. The laser power is sufficiently low to ensure that back-action effects can be neglected, so that the laser is simply a passive probe of the vibrational motion of the mirror [18]. Further details concerning the experimental measurement scheme and data processing are provided in the supplementary information file.
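The PSD estimation itself can be done with standard tools; a minimal sketch is shown below, where the sampling rate, record length, and segment size are placeholders rather than the values used in the experiments (those details are in the supplementary information).

```python
import numpy as np
from scipy.signal import welch

fs = 50e6                               # sampling rate in Hz (placeholder value)
x = np.random.randn(1_000_000)          # stand-in for the sampled photodetector noise record

f, Pxx = welch(x, fs=fs, nperseg=1 << 16)     # one-sided PSD estimate via Welch's method
psd_db = 10 * np.log10(Pxx)                   # PSD in dB re 1 V^2/Hz (if x is in volts)
```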
Results for a relatively small bubble (\(R\sim 95\)\(\upmu\)m) are shown in Fig. 3. The interrogation laser was coupled to a particular sensor, and sensor noise spectra were recorded at fixed laser power but with variations in the medium external to the sensor. The blue curve in Fig. 3(a) is the spectrum measured in "bulk" air, and it shows a typical [12] series of resonant peaks (e.g. at \(\sim\)2.5 and 6 MHz) associated with the inherent vibrational modes of the buckled mirror. The red curve is the spectrum measured with a small, tethered air bubble positioned over the sensor as shown in Fig. 3(c). Clearly, the noise spectrum is modified relative to the bulk case, most apparently by the appearance of several new resonant peaks. These peaks are well correlated with the acoustic-mode eigen-frequencies of the tethered bubble as predicted by a COMSOL numerical model and indicated by the dashed vertical lines. The correspondingly predicted spatial pressure distributions are also shown, evincing strong analogies with the higher-order modes solved for the spherical bubble case above.
Slight discrepancies between the experimental and theoretical eigen-frequencies can be attributed to uncertainties in estimating bubble dimensions from the microscope images, and also to the neglect of hybridization between sensor and bubble vibrational modes in the numerical model. Nevertheless, the global alignment is very good, and was consistently observed across multiple bubble-sensor combinations (see Fig. 4 below and the SI file for additional examples), allowing us to confidently assert that these measurements are in fact revealing the high-frequency volume modes of the bubbles. While not easily seen in the wide-range plot, the Minnaert resonance was also imprinted on the noise spectrum of the sensor as shown in the zoomed-in plot of Fig. 3(b). The position of this resonance is in excellent agreement with the COMSOL model and also with analytical predictions [19] for a tethered bubble (see the SI file for additional discussion), and assuming a contact angle of 30 degrees as estimated from the microscope images. In this case, \(f_{\rm{TM}}\sim 0.83\)\(f_{\rm{M}}\)[19], where \(f_{\rm{TM}}\) is the fundamental breathing mode of the tethered bubble of radius \(R\).
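As a rough cross-check of these numbers, using the approximate free-bubble relation \(f_{\rm M}\sim 3.3/R\) and the tethered-bubble factor 0.83 quoted above (both are approximations, so only order-of-magnitude agreement is expected):

```python
# Estimated fundamental frequencies for the two bubbles studied (radii from Figs. 3 and 4).
for R in (95e-6, 313e-6):              # bubble radius in metres
    f_M = 3.3 / R                      # free spherical bubble (Minnaert), Hz
    f_TM = 0.83 * f_M                  # tethered bubble with ~30 degree contact angle [19]
    print(f"R = {R*1e6:.0f} um: f_M ~ {f_M/1e3:.1f} kHz, f_TM ~ {f_TM/1e3:.1f} kHz")
# -> ~28.8 kHz and ~8.8 kHz, consistent with the low-frequency features in Figs. 3(b) and 4(b).
```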
Aside from the clear signatures of bubble acoustic modes, the data in Fig. 3 also contain compelling evidence that the vibrational spectrum of the optomechanical sensor is itself strongly modified. We attribute this to a significant modification of the local density of states (LDOS) in the acoustic environment of the sensor [20-22]. To draw an analogy with electromagnetic systems, the optomechanical sensor can be viewed as the "acoustic emitter" while the bubble represents an acoustic resonant cavity encapsulating the emitter.
Figure 2: Schematic cross-sectional view of a tethered bubble centered over a buckled dome Fabry-Perot optomechanical sensor. The motion of the flexible upper mirror is imprinted onto the laser light reflected from the optical cavity. The motion of the bubble is detected via the effect it has on the mirror motion.
Notably, the data reveal: i. a suppression/inhibition of background acoustic emission over nearly the entire \(\sim\)1-10 MHz range, except at bubble resonances, ii. a narrowing of the dominant "emission lines" at \(\sim\) 2.5 and 6 MHz, corresponding to an increase in the associated lifetimes, and iii. a slight red-shift of the "emitter" frequencies consistent with a cavity-modified Lamb-shift [22]. We observed similar behavior for multiple small-bubble/sensor combinations, and a more detailed example is provided in the SI file. These observations are consistent with a redistribution of the vibrational energy of the sensor, caused by the modified acoustic environment. The degree of modification is quite remarkable given the fact that the mechanical oscillator is coupled to other thermal baths, in particular the underlying substrate. It is further evidence that the noise floor of these sensors is substantially limited by their external medium [16]. A more complete study is left for future work.
Analogous results for a larger bubble (\(R\sim\) 313 \(\upmu\)m) are shown in Fig. 4. Compared to the smaller bubble results, a much denser spectrum of higher-order acoustic modes is revealed (see Fig. 4(a)). As above, the noise spectrum contains a signature of the low-frequency Minnaert resonance, and its location is again in good agreement with predictions (see Fig. 4(b)). The acoustic mode spectrum of the bubble is further revealed by normalizing the data obtained with the bubble in place to the data obtained for the bulk air case, as shown in Fig. 4(c). The numerically predicted eigen-frequencies of the six lowest-order acoustic modes, including the Minnaert resonance, are indicated by the dashed vertical lines, and show excellent alignment with the experimental peaks. Evidence of acoustic modes extending up to nearly 10 MHz is apparent, but assignment of individual modes becomes difficult due to the high density of modes predicted above \(\sim\) 1 MHz. The quality factor of the lower-order modes is as high as \(\sim\) 70 (see the SI file for further discussion), in good agreement with theoretical predictions considering radiation, thermal, and viscous damping [3]. We speculate that the lower quality of the modes above 1 MHz might in part be attributable to the dramatic rise of ultrasound attenuation in air for frequencies in this range [23].
Finally, we mention experiments aimed at observing the radiation of energy into the water medium. To this end, air bubbles were either tethered to the chip surface near (but not overtop of) a sensor of interest, or suspended from the needle in close proximity to the sensor of interest. In both cases, signatures of bubble acoustic modes were observed in the sensor noise spectrum (see the SI file for examples). These features are less prominent than those shown above, which is expected because only a fraction of the energy circulating inside the bubble is coupled externally. Nevertheless, the results confirm that these higher-order modes are exchanging energy with the external radiation field [3].
Figure 3: (a) Power spectral densities for the same sensor in air (blue curve) and encapsulated by a bubble (red curve). The shot noise floor for the same average received power at the photodetector is also shown (green curve). The vertical dashed lines indicate numerically predicted eigen-frequencies for the lowest-order bulk acoustic modes of the bubble. Corresponding acoustic pressure distributions at the bubble/substrate interface are also shown. The innermost concentric circle is the contact line of the bubble, and the adjacent circle is its outer circumference. A top-down-view microscope image of the bubble and surrounding sensors is shown as the inset (scale bar – 200 \(\upmu\)m). (b) A zoomed-in version of the data presented in (a), showing the Minnaert resonance near 30 kHz. Theoretical and numerical predictions are also plotted as solid and dashed vertical lines, respectively. The error bar represents a \(\pm\) 10% deviation in the estimated bubble diameter. (c) A side profile microscope image of the tethered bubble (scale bar – 500 \(\upmu\)m).
In summary, we described an experimental study of the acoustic resonances of air bubbles in water, for frequencies extending into the megahertz ultrasound range. The results illustrate that the vibrational properties of a bubble go beyond the Minnaert breathing mode and capillary modes predicted by the Rayleigh-Plesset equation. It would be fair to consider if these higher-order volume modes are mainly of theoretical interest, or rather might have practical implications. Undoubtedly, the situations where they are expected to be manifest are fewer than that for the Minnaert resonance, simply because of the increased attenuation of sound with frequency, especially in air. Moreover, the Minnaert resonance is unique in the sense that it involves prominent motion of the relatively massive water medium, which ties it more directly to the dramatic effects associated with acoustic cavitation. For the higher-order modes, their characteristic oscillation frequencies are such that the dominant motion is that of the gas molecules inside the bubble (_i.e._, a standing pressure wave).
Nevertheless, the internal state and dynamics of a gas bubble is an incredibly complex physical problem, especially in scenarios involving the collapse of oscillating bubbles [7]. It seems plausible that any complete description will need to include consideration of the higher-order acoustic modes. Notably, energy storage by acoustic modes of a bubble has been posited as a potential contributor to the extreme conditions leading to single-bubble sonoluminescence [24], although the same authors subsequently discounted this theory [25] due to a lack of experimental corroboration (and consistent with the conclusions outlined in Geers _et al._[26]). Notwithstanding this point of view, the role of acoustic modes remains a matter of ongoing debate [27]. Moreover, it is possible that higher-order acoustic breathing modes could play a role in emerging fields, including phonon-photon interactions mediated by bubbles [6; 7] and the use of bubbles in biosensing and related applications [28; 29; 30]. In any case, it seems likely that the acoustics of bubbles will continue to yield new surprises.
Figure 4: Analogous results to those presented in Fig. 3 but for a larger bubble. (a) Power spectral densities showing the sensor in air (blue), encapsulated by a bubble (red), and the shot noise (green). The grey bands represent regions dominated by mechanical modes inherent to the sensor. (b) A zoomed version of the data presented in (a), showing the Minnaert resonance near 10 kHz. Theoretical and numerical predictions are also plotted as solid and dashed vertical lines, respectively. The error bar represents a \(\pm\) 10% deviation in the estimated bubble diameter. (c) A normalized version of the data in part (a) is shown, to further isolate the acoustic modes of the bubble. The first 6 numerically predicted eigenfrequencies are also plotted as vertical dashed lines with their corresponding acoustic pressure distributions shown below the plot. (d) Microscope images showing the top and side views of the bubble (scale bars – 200 μm and 500 μm, respectively).
This research was funded by the Government of Alberta (Innovation Catalyst Grant), Alberta Innovates, the Natural Sciences and Engineering Research Council of Canada (CREATE 495446-17), and the Alberta EDT Major Innovation Fund (Quantum Technologies).
|
2308.08013
|
Minimal zero entropy subshifts can be unrestricted along any sparse set
|
We present a streamlined proof of a result essentially present in previous
work of the author, namely that for every set $S = \{s_1, s_2, \ldots\} \subset
\mathbb{N}$ of zero Banach density and finite set $A$, there exists a minimal
zero-entropy subshift $(X, \sigma)$ so that for every sequence $u \in
A^\mathbb{Z}$, there is $x_u \in X$ with $x_u(s_n) = u(n)$ for all $n \in
\mathbb{N}$. Informally, minimal deterministic sequences can achieve completely
arbitrary behavior upon restriction to a set of zero Banach density.
As a corollary, this provides counterexamples to the Polynomial Sarnak
Conjecture which are significantly more general than some recently provided in
work of Kanigowski, Lemańczyk, and Radziwiłł and of Lian and Shi, and
shows that no similar result can hold under only the assumptions of minimality
and zero entropy.
|
Ronnie Pavlov
|
2023-08-15T20:04:36Z
|
http://arxiv.org/abs/2308.08013v1
|
# Minimal zero entropy subshifts can be unrestricted along any sparse set
###### Abstract.
We present a streamlined proof of a result essentially present in [5], namely that for every set \(S=\{s_{1},s_{2},\ldots\}\subset\mathbb{N}\) of zero Banach density and finite set \(A\), there exists a minimal zero-entropy subshift \((X,\sigma)\) so that for every sequence \(u\in A^{\mathbb{Z}}\), there is \(x_{u}\in X\) with \(x_{u}(s_{n})=u(n)\) for all \(n\in\mathbb{N}\). Informally, minimal deterministic sequences can achieve completely arbitrary behavior upon restriction to a set of zero Banach density.
As a corollary, this provides counterexamples to the Polynomial Sarnak Conjecture of [1] which are significantly more general than some recently provided in [3] and [4] and shows that no similar result can hold under only the assumptions of minimality and zero entropy.
Key words and phrases: Sarnak conjecture, minimal, subshift, zero entropy. 2020 Mathematics Subject Classification: Primary: 37B10; Secondary: 37B05. The author gratefully acknowledges the support of a Simons Foundation Collaboration Grant.
## 1. Introduction
The well-known **Sarnak conjecture** states that the Mobius function \(\mu\) is uncorrelated with all deterministic sequences. A sequence is called **deterministic** if it is the image under a continuous function of the trajectory of a point in a **topological dynamical system** with zero **entropy** (see Section 2 for definitions of this and other concepts not defined in this introduction). More formally,
**Conjecture 1** (Sarnak Conjecture).: _If \((X,T)\) is a topological dynamical system with zero entropy, \(x_{0}\in X\), and \(f\in C(X)\), then_
\[\frac{1}{N}\sum_{n=1}^{N}\mu(n)f(T^{n}x_{0})\to 0.\]
Although this problem is still open, there are many recent works on the topic, which have made significant progress and resolved it for some classes of dynamical systems. In [1], a potentially stronger 'polynomial' (meaning that only polynomial iterates of \(x_{0}\) are taken rather than all) version of the Sarnak Conjecture was proposed. In order to rule out some degenerate examples, the assumption of **minimality** was imposed on \((X,T)\), meaning that for every \(x\in X\), the set \(\{T^{n}x\}\) is dense.
**Conjecture 2** (Polynomial Sarnak Conjecture ([1], Conjecture 2.3)).: _If \((X,T)\) is a minimal topological dynamical system with zero entropy, \(x_{0}\in X\), \(f\in C(X)\), and \(p:\mathbb{N}\to\mathbb{N}_{0}\) is a polynomial, then_
\[\frac{1}{N}\sum_{n=1}^{N}\mu(n)f(T^{p(n)}x_{0})\to 0.\]
This conjecture is now known to be false; recently Kanigowski, Lemańczyk, and Radziwiłł ([3]) and Lian and Shi ([4]) have separately provided counterexamples. However, these counterexamples are specific to the case \(p(n)=n^{2}\) (though they could perhaps be generalized) and make strong use of the nice arithmetic properties of this function. The first is a skew product and the second is a symbolically defined dynamical system called a Toeplitz subshift.
The purpose of this note is to show that even much weaker versions of Conjecture 2 are false, because minimal zero entropy systems can achieve **any** possible behavior (i.e., not just correlation with \(\mu\)) along **any** prescribed set \(S\subset\mathbb{N}\) of zero **Banach density** (i.e., not just the image of a polynomial). One such result had already been proved by the author in [5], which immediately refutes the Polynomial Sarnak Conjecture.
**Theorem 3** ([5], Corollary 5.1).: _Assume that \(d\in\mathbb{N}\), \((w_{n})\) is an increasing sequence of positive integers where \(w_{n+1}<(w_{n+1}-w_{n})^{d+1}\) for large enough \(n\), and \((z_{n})\) is any sequence in \(\mathbb{T}:=\mathbb{R}/\mathbb{Z}\). Then there exists a totally minimal, totally uniquely ergodic, topologically mixing zero entropy map \(S\) on \(\mathbb{T}^{2d+4}\) so that, if \(\pi\) is projection onto the final coordinate, \(\pi(S^{w_{n}}\mathbf{0})=z_{n}\) for sufficiently large \(n\)._
(We don't further work with the properties of unique ergodicity and topological mixing, and so don't provide definitions here. However, we do note that Theorem 3 shows that even adding these hypotheses to Conjecture 2 would not make it true.) We note that the entropy of the transformation \(S\) was never mentioned in [5]. However, \(S\) is defined as a suspension flow of a product of a toral rotation and a skew product \(T\) under a roof function \(1<g<3\). The skew product \(T\) is of the form \((x_{1},x_{2},x_{3},\ldots,x_{m})\mapsto(x_{1}+\alpha,x_{2}+f(x_{1}),x_{3}+x_{2 },\ldots,x_{m}+x_{m-1})\) for a continuous self-map \(f\) of \(\mathbb{T}\). Since its first coordinate is an irrational rotation, known to have zero entropy, the map \(T\) also has zero entropy by Abramov's skew product entropy formula. Then \(S\) has zero entropy as well, by Abramov's suspension flow entropy formula.
**Remark 4**.: Here are a few more relevant facts about the construction from [5]:
1. The map \(S\) is **distal**, meaning that for all \(x\neq y\), \(\{d(T^{n}x,T^{n}y)\}_{n}\) is bounded away from \(0\).
2. The roof function \(g\) is \(C^{\infty}\) and the function \(f\), though not \(C^{\infty}\) as constructed in [5], can easily be made so; it is just a uniformly convergent infinite series of 'bump functions,' which can easily be chosen \(C^{\infty}\).
The second fact may be of interest since the authors of [3] prove a positive result for convergence along prime iterates of similar skew products \((x,y)\mapsto(x+\alpha,y+f(x))\) under the assumption that the function \(f\) is real analytic, provide some counterexamples with continuous \(f\), and ask whether this assumption could be weakened to \(C^{\infty}\). Though the constructions are not exactly the same, and though the primes absolutely do not satisfy the assumption of Theorem 3, (2) might suggest that \(C^{\infty}\) is not always sufficient for good averaging of skew products along sparse sequences.
We note that Theorem 3 clearly applies to any sequence \(w_{n}=p(n)\) for a non-constant polynomial \(p:\mathbb{N}\to\mathbb{N}_{0}\) (possibly omitting finitely many terms), and so,
by simply defining \(z_{n}\) to be \(\frac{1}{2}\) when \(\mu(n)=1\) and \(0\) otherwise, one achieves
\[\frac{1}{N}\sum_{n=1}^{N}\mu(n)\pi(S^{p(n)}\mathbf{0})=\frac{0.5|\mu^{-1}(\{1\}) \cap\{1,\ldots,N\}|}{N},\]
which does not approach \(0\) as \(N\to\infty\), disproving the Polynomial Sarnak Conjecture for every nonconstant \(p\). The same is true of any function \(p\) with polynomial growth, even for degree less than \(2\), e.g. \(p(n)=\lfloor n^{1.01}\rfloor\). However, Theorem 3 does not apply to more slowly growing \(p\) such as \(\lfloor n\ln n\rfloor\). The author proved a different result (Corollary 3.1) in [5] using **subshifts**; a subshift is a closed shift-invariant subset of \(A^{\mathbb{Z}}\) (for some finite alphabet \(A\)) endowed with the left-shift transformation. Corollary 3.1 of [5] states that given any sequence of zero Banach density (regardless of growth rate), there exists a minimal subshift whose points can achieve arbitrary behavior along that sequence. However, entropy was not mentioned there, and although the proof there can indeed yield a zero entropy subshift, it's not easy to verify; the construction is quite complicated in order to achieve \((X,T)\) which is totally minimal, totally uniquely ergodic, and topologically mixing.
In this note, we present a streamlined self-contained proof of the following result, which shows that minimal zero entropy subshifts can realize arbitrary behavior along any sequence of zero Banach density.
**Theorem 5**.: _For any \(S=\{s_{1},s_{2},\ldots\}\subset\mathbb{N}\) with \(d^{*}(S)=0\) and any finite alphabet \(A\), there exists a minimal zero entropy subshift \(X\subset A^{\mathbb{Z}}\) so that for every \(u\in A^{\mathbb{N}}\), there is \(x_{u}\in X\) where \(x_{u}(s_{n})=u(n)\) for all \(n\in\mathbb{N}\)._
We note that this proves that even with substantially weaker hypotheses, nothing in the spirit of the Polynomial Sarnak Conjecture can hold under only the assumptions of minimality and zero entropy. Even if \(p\) is only assumed to have range of zero Banach density and \(\rho:\mathbb{N}\to\mathbb{Z}\) is only assumed to have \(\limsup\frac{1}{N}\sum_{n=1}^{N}|\rho(n)|>0\) (equivalently, \(\rho\) takes nonzero values on a set of positive upper density), one can define a subshift \(X\) on \(\{-1,0,1\}\) and \(x_{u}\in X\) as in Theorem 5 for \(u(n)=\operatorname{sgn}(\rho(n))\). Then, for \(f\in C(X)\) defined by \(x\mapsto x(0)\), the limit supremum of the averages
\[\frac{1}{N}\sum_{n=1}^{N}\rho(n)f(\sigma^{p(n)}x_{u})=\frac{1}{N }\sum_{n=1}^{N}\rho(n)x_{u}(p(n))=\frac{1}{N}\sum_{n=1}^{N}\rho(n)u(n)=\\ \frac{1}{N}\sum_{n=1}^{N}\rho(n)\mathrm{sgn}(\rho(n))=\frac{1}{ N}\sum_{n=1}^{N}|\rho(n)|\]
is positive by assumption.
We remark that when \(\rho=\mu\) is the Mobius function, this means that
\[\frac{1}{N}\sum_{n=1}^{N}\mu(n)f(\sigma^{p(n)}x_{u})\]
can be made to approach \(\frac{6}{\pi^{2}}\) (for \(x_{u}\) in a minimal zero-entropy subshift), a slight improvement of [4] which showed that it could attain values arbitrarily close to \(\frac{6}{\pi^{2}}\).
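A small numeric illustration of this remark, independent of the subshift itself (which only has to realize \(u(n)=\operatorname{sgn}(\mu(n))\) along \(p(n)\)): the averages reduce to the density of squarefree integers, \(6/\pi^{2}\approx 0.6079\). The sieve below is a standard construction, not taken from [5].

```python
import numpy as np

def mobius_sieve(N):
    """Compute mu(0..N); mu[n] is the Moebius function for n >= 1 (mu[0] is unused)."""
    mu = np.ones(N + 1, dtype=np.int64)
    is_prime = np.ones(N + 1, dtype=bool)
    is_prime[:2] = False
    for p in range(2, N + 1):
        if is_prime[p]:
            is_prime[2 * p::p] = False        # sieve of Eratosthenes
            mu[p::p] *= -1                    # one prime factor p flips the sign
            mu[p * p::p * p] = 0              # p^2 | n  =>  mu(n) = 0
    return mu

N = 10**6
mu = mobius_sieve(N)
u = np.sign(mu)                               # u(n) = sgn(mu(n)), realized along p(n) by some x_u
print((mu[1:] * u[1:]).mean())                # ~ 0.6079 ~ 6 / pi^2
```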
## 2. Definitions
A **topological dynamical system**\((X,T)\) is defined by a compact metric space \(X\) and homeomorphism \(T:X\to X\). A **subshift** is a topological dynamical
system defined by some finite set \(A\) (called the **alphabet**) and the restriction of the **left shift** map \(\sigma:A^{\mathbb{Z}}\to A^{\mathbb{Z}}\) defined by \((\sigma x)(n)=x(n+1)\) to some closed and \(\sigma\)-invariant \(X\subset A^{\mathbb{Z}}\) (with the induced product topology). A subshift \((X,\sigma)\) is **minimal** if for every \(x\in X\), \(\{\sigma^{n}x\}_{n\in\mathbb{Z}}\) is dense in \(X\).
A **word** over \(A\) is any finite string of symbols from \(A\); a word \(w=w(1)\ldots w(n)\) is said to be a **subword** of a word or infinite sequence \(x\) if there exists \(i\) so that \(w(1)\ldots w(n)=x(i+1)\ldots x(i+n)\). The **language**\(L(X)\) of a subshift \((X,\sigma)\) is the set of all subwords of sequences in \(X\), and for any \(n\in\mathbb{N}\) we denote \(L_{n}(X)=L(X)\cap A^{n}\). For two words \(u=u(1)\ldots u(m)\) and \(v=v(1)\ldots v(n)\), denote by \(uv\) their **concatenation**\(u(1)\ldots u(m)v(1)\ldots v(n)\).
We do not give a full definition of **topological entropy** here, but note that it is a number \(h(X,T)\in[0,\infty]\) associated to any TDS \((X,T)\) which is conjugacy-invariant. We will only need the following definition for subshifts: for any \((X,\sigma)\),
\[h(X,\sigma)=\lim\frac{\ln|L_{n}(X)|}{n}.\]
The **Banach density** of a set \(S\subset\mathbb{N}\) is
\[d^{*}(S):=\lim_{n\to\infty}\sup_{k\in\mathbb{N}}\frac{|S\cap\{k,\ldots,k+n-1\} |}{n}.\]
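For intuition, here is a small numeric check of the inner supremum for the zero-Banach-density set of perfect squares; the window lengths and truncation range are arbitrary choices.

```python
import numpy as np

def max_window_density(S, n, limit):
    """max_k |S ∩ [k, k+n)| / n over windows inside [0, limit] (S truncated to [1, limit])."""
    ind = np.zeros(limit + 1, dtype=np.int64)
    ind[[s for s in S if s <= limit]] = 1
    csum = np.concatenate(([0], np.cumsum(ind)))
    counts = csum[n:] - csum[:-n]             # element counts of every length-n window
    return counts.max() / n

squares = [i * i for i in range(1, 1001)]     # S = {n^2}, a set with d*(S) = 0
print([max_window_density(squares, n, 10**6) for n in (10**2, 10**4, 10**6)])
# -> densities shrink toward 0 (about 0.1, 0.01, 0.001), consistent with d*(S) = 0
```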
## 3. Proof of Theorem 5
Proof.: As in [5], we adapt the block-concatenation construction of Hahn and Katznelson ([2]).
We construct \(X\) iteratively via auxiliary sequences \(m_{k}\) of odd positive integers, \(A_{k}\subset A^{m_{k}}\), and \(w_{k}\in A_{k}\). Define \(m_{0}=1\), \(A_{0}=A\), and \(w_{0}=0\) (which we assume without loss of generality to be in \(A\)). Now, suppose that \(m_{k}\), \(A_{k}\), and \(w_{k}\) are defined. Define \(m_{k+1}>\max(3m_{k}|A_{k}|,12(\ln 2)(4/3)^{k+1})\) to be an odd multiple of \(3m_{k}\) large enough that \(|S\cap I|/|I|<(3m_{k})^{-1}\) for all intervals \(I\) of length \(m_{k+1}\) (using the fact that \(d^{*}(S)=0\)). Define \(A_{k+1}\) to be the set of all concatenations of \(\frac{m_{k+1}}{m_{k}}\) words in \(A_{k}\) in which every word in \(A_{k}\) is used at least once and in which at least one-third of the concatenated words are equal to \(w_{k}\). Define \(Y_{k}\) to be the set of shifts of biinfinite (unrestricted) concatenations of words in \(A_{k}\), define \(Y=\bigcap_{k}Y_{k}\), and define \(X\) to be the subshift of \(Y\) consisting of sequences in which every subword is a subword of some \(w_{k}\).
We claim that \((X,\sigma)\) is minimal. Indeed, consider any \(x\in X\) and \(w\in L(X)\). By definition, \(w\) is a subword of \(w_{k}\) for some \(k\). By definition, \(w_{k}\) is a subword of every word in \(A_{k+1}\). Finally, \(x\) is a shift of a concatenation of words in \(A_{k+1}\), each of which contains \(w_{k}\), and therefore \(w\). So, \(x\) contains \(w\), and since \(w\in L(X)\) was arbitrary, the orbit of \(x\) is dense. Since \(x\in X\) was arbitrary, \((X,\sigma)\)**is minimal**.
We also claim that \((X,\sigma)\) has zero entropy. We see this by bounding \(|A_{k}|\) from above. For every \(k\), each word in \(A_{k+1}\) is defined by an ordered \((m_{k+1}/m_{k})\)-tuple of words in \(A_{k}\), where at least one-third are \(w_{k}\). The number of such tuples can be bounded from above by
\[\binom{m_{k+1}/m_{k}}{m_{k+1}/3m_{k}}|A_{k}|^{2m_{k+1}/3m_{k}}\leq 2^{m_{k+1}/m_{ k}}|A_{k}|^{2m_{k+1}/3m_{k}}.\]
Therefore,
\[\frac{\ln|A_{k+1}|}{m_{k+1}}\leq\frac{\ln 2}{m_{k}}+\frac{2}{3}\frac{\ln|A_{k} |}{m_{k}}.\]
Now, it's easily checked that \(\frac{\ln|A_{k}|}{m_{k}}\leq\ln|A|(3/4)^{k}\) for all \(k\) by induction. The base case \(k=0\) is immediate. For the inductive step, if we assume that \(\frac{\ln|A_{k}|}{m_{k}}\leq\ln|A|(3/4)^{k}\), then recalling that \(m_{k}>12(\ln 2)(4/3)^{k}\),
\[\frac{\ln|A_{k+1}|}{m_{k+1}}<\frac{1}{12}(3/4)^{k}+\frac{2}{3}\ln|A|(3/4)^{k} \leq\frac{\ln|A|}{12}(3/4)^{k}+\frac{2}{3}\ln|A|(3/4)^{k}=\ln|A|(3/4)^{k+1}.\]
Therefore, for all \(k\), \(|A_{k}|\leq e^{\ln|A|(3/4)^{k}m_{k}}\). Finally, we note that every word in \(L_{m_{k}}(X)\) is a subword of a concatenation of a pair of words in \(A_{k}\), so determined by such a pair and by the location of the first letter. Therefore, \(|L_{m_{k}}(X)|\leq m_{k}|A_{k}|^{2}<m_{k}e^{2\ln|A|(3/4)^{k}m_{k}}\). This clearly implies that
\[h(X)=\lim_{k\to\infty}\frac{\ln|L_{m_{k}}(X)|}{m_{k}}\leq\limsup_{k\to\infty} \frac{\ln m_{k}}{m_{k}}+2\ln|A|(3/4)^{k}=0,\]
i.e. **X has zero entropy**.
It remains, for \(u\in A^{\mathbb{N}}\), to construct \(x_{u}\in X\) with \(x_{u}(s_{n})=u(n)\) for all \(s_{n}\in S\). The construction of \(x_{u}\) proceeds in steps, where it is continually assigned letters from \(A\) on portions of \(\mathbb{Z}\), with undefined portions labeled by \(*\). Formally, define \(x^{(0)}\in(A\sqcup\{*\})^{\mathbb{Z}}\) by \(x^{(0)}(s_{n})=u(n)\) for all \(s_{n}\in S\) and \(*\) at all other locations.
Now partition \(\mathbb{Z}\) into the intervals \(((i-0.5)m_{1},(i+0.5)m_{1})\) (herein, all intervals are assumed to be intersected with \(\mathbb{Z}\)). For every \(i\) for which \(S\cap((i-0.5)m_{1},(i+0.5)m_{1})\neq\varnothing\), consider the \(m_{1}\)-letter word \(x^{(0)}(((i-0.5)m_{1},(i+0.5)m_{1}))\). By definition of \(m_{1}\), \(|S\cap((i-0.5)m_{1},\ldots,(i+0.5)m_{1})|<m_{1}/3m_{0}=m_{1}/3\), and so at most one-third of the letters in this word are non-\(*\). Fill the remaining locations by assigning the first \(m_{1}/3\) as \(w_{0}=0\). At least \(m_{1}/3\) letters remain, which is larger than \(|A_{0}|=|A|\) by definition of \(m_{1}\). Fill those in an arbitrary way which uses all letters from \(A\) at least once. The resulting \(m_{1}\)-letter word is in \(A_{1}\) by definition, call it \(w_{i}^{(1)}\). Now, define \(x^{(1)}\) by setting \(x^{(1)}(((i-0.5)m_{1},(i+0.5)m_{1}))=w_{i}^{(1)}\) for all \(i\) as above (i.e. those for which \(S\cap((i-0.5)m_{1},(i+0.5)m_{1})\neq\varnothing\)) and \(*\) elsewhere. Note that \(x^{(1)}\) is an infinite concatenation of words in \(A_{1}\) and blocks of \(*\) of length \(m_{1}\) and that \(x^{(1)}\) contains \(*\) on any interval \(((i-0.5)m_{1},(i+0.5)m_{1})\) which is disjoint from \(S\).
Now, suppose that \(x^{(k)}\) has been defined as an infinite concatenation of words in \(A_{k}\) and blocks of \(*\) of length \(m_{k}\) which contains \(*\) on any interval \(((i-0.5)m_{k},(i+0.5)m_{k})\) which is disjoint from \(S\). We wish to extend \(x^{(k)}\) to \(x^{(k+1)}\) by changing some \(*\) symbols to letters in \(A\). Consider any \(i\) for which \(S\cap((i-0.5)m_{k+1},\ldots,(i+0.5)m_{k+1})\neq\varnothing\). The portion of \(x^{(k)}\) occupying that interval is a concatenation of words in \(A_{k}\) and blocks of \(*\) of length \(m_{k}\) (we use here the fact that \(m_{k+1}\) is odd), and the number which are words in \(A_{k}\) is bounded from above by the number of \(j\in((i-0.5)m_{k+1}/m_{k},(i+0.5)m_{k+1}/m_{k})\) for which \(((j-0.5)m_{k},(j+0.5)m_{k})\) is not disjoint from \(S\), which in turn is bounded from above by \(|S\cap((i-0.5)m_{k+1},(i+0.5)m_{k+1})|\), which by definition of \(m_{k+1}\) is less than \(m_{k+1}/3m_{k}\). Therefore, at least two-thirds of the concatenated \(m_{k}\)-blocks comprising \(x^{(k)}(((i-0.5)m_{k+1},(i+0.5)m_{k+1}))\) are blocks of \(*\). Fill the first \(m_{k+1}/3m_{k}\) of these with \(w_{k}\). Then at least \(m_{k+1}/3m_{k}\) blocks remain, which is more than \(|A_{k}|\) by definition of \(m_{k+1}\). Fill these in an arbitrary way which uses each word in \(|A_{k}|\) at least once. By definition, this creates a word in \(A_{k+1}\), which we denote by \(w_{i}^{(k+1)}\). Define \(x^{(k+1)}(((i-0.5)m_{k+1},(i+0.5)m_{k+1}))=w_{i}^{(k+1)}\) for any \(i\) as above (i.e. those for which \(S\cap((i-0.5)m_{k+1},(i+0.5)m_{k+1})\neq\varnothing\)) and as \(*\) elsewhere. Note that \(x^{(k+1)}\)
is an infinite concatenation of words in \(A_{k+1}\) and blocks of \(*\) of length \(m_{k+1}\) which contains \(*\) on any interval \(((i-0.5)m_{k+1},(i+0.5)m_{k+1})\) which is disjoint from \(S\).
We now have defined \(x^{(k)}\in(A\sqcup\{*\})^{\mathbb{Z}}\) for all \(k\in\mathbb{N}\). Since each is obtained from the previous by changing some \(*\)s to letters from \(A\), they approach a limit \(x_{u}\) which agrees with \(x^{(0)}\) on all locations where \(x^{(0)}\) had letters from \(A\), i.e. \(x_{u}(s_{n})=u(n)\) for all \(n\in\mathbb{N}\). Since \(S\neq\varnothing\), \(S\cap(-0.5m_{k},0.5m_{k})\neq\varnothing\) for all large enough \(k\), and so \(x^{(k)}((-0.5m_{k},0.5m_{k}))\) has no \(*\), meaning that \(x_{u}\in A^{\mathbb{Z}}\).
It remains only to show that \(x_{u}\in X\). By definition, \(x_{u}\) is a concatenation of words in \(A_{k}\) for every \(k\), so \(x_{u}\in Y=\bigcap_{k}Y_{k}\) as in the definition of \(X\). Finally, every subword \(w\) of \(x_{u}\) is contained in \(x_{u}((-0.5m_{k},0.5m_{k}))\) for large enough \(k\), and this word is in \(A_{k}\) by definition. Since all words in \(A_{k}\) are subwords of \(w_{k+1}\), \(w\) is also. Therefore by definition, \(\mathbf{x_{u}}\in\mathbf{X}\) and \(\mathbf{x_{u}}(\mathbf{s_{n}})=\mathbf{u}(\mathbf{n})\) for all \(n\), completing the proof.
**Remark 6**.: We observe that the assumption of zero Banach density cannot be weakened in Theorem 5. Assume for a contradiction that \(S\subset\mathbb{N}\) has \(d^{*}(S)=\alpha>0\), and that every \(u\in A^{\mathbb{N}}\) could be assigned \(x_{u}\) as in Theorem 5. By definition of Banach density, there exist intervals \(I_{n}\) with lengths approaching infinity so that \(|S\cap I_{n}|/|I_{n}|>\alpha/2\) for all \(n\). For every \(n\), since all possible assignments of letters from \(A\) to locations in \(S\cap I_{n}\) give rise to sequences in \(X\), \(|L_{|I_{n}|}(X)|\geq|A|^{|S\cap I_{n}|}>|A|^{\alpha|I_{n}|/2}\). Then,
\[h(X)=\lim_{n}\frac{\ln|L_{|I_{n}|}(X)|}{|I_{n}|}\geq\limsup\frac{\ln|A|^{ \alpha|I_{n}|/2}}{|I_{n}|}=\alpha(\ln|A|)/2>0.\]
Therefore, no such \(X\), minimal or otherwise, can have zero entropy.
|
2303.12339
|
Chemosensitivity testing of revived fresh-frozen biopsies using digital
speckle holography
|
Enrolling patients in clinical trials to obtain fresh tumor biopsies to
profile anticancer agents can be slow and expensive. However, if flash-frozen
biopsies can be thawed to produce viable living tissue with relevant biodynamic
profiles, then a large reservoir of tissue-banked samples could become
available for phenotypic library building. Here, we report biodynamic profiles
acquired from revived flash-frozen canine B-cell lymphoma biopsies using
digital speckle holography. We compared the thawed-tissue drug-response
spectrograms to spectrograms from fresh tissues in a study of canine B-cell
lymphoma. By compensating for tissue trauma in the thawed sample, patient
clustering of both the fresh and thawed samples was found to be in general
agreement with clinical outcomes. This study indicates that properly frozen
tumor specimens are a viable proxy for fresh specimens in the context of
chemosensitivity testing, and that thawed samples from tissue banks contain
sufficient viable cells to evaluate phenotypic drug response.
|
Zhen Hua, John Turek, Mike Childress, David Nolte
|
2023-03-22T06:32:07Z
|
http://arxiv.org/abs/2303.12339v1
|
# Chemosensitivity testing of revived fresh-frozen biopsies using digital speckle holography
###### Abstract
Enrolling patients in clinical trials to obtain fresh tumor biopsies to profile anticancer agents can be slow and expensive. However, if flash-frozen biopsies can be thawed to produce viable living tissue with relevant biodynamic profiles, then a large reservoir of tissue-banked samples could become available for phenotypic library building. Here, we report biodynamic profiles acquired from revived flash-frozen canine B-cell lymphoma biopsies using digital speckle holography. We compared the thawed-tissue drug-response spectrograms to spectrograms from fresh tissues in a study of canine B-cell lymphoma. By compensating for tissue trauma in the thawed sample, patient clustering of both the fresh and thawed samples was found to be in general agreement with clinical outcomes. This study indicates that properly frozen tumor specimens are a viable proxy for fresh specimens in the context of chemosensitivity testing, and that thawed samples from tissue banks contain sufficient viable cells to evaluate phenotypic drug response.
**Keywords:** Tissue Dynamics Spectroscopy, Digital Holography, Coherence-domain imaging, Optical Coherence Tomography, Dynamic Light Scattering, Doppler Spectroscopy, Intracellular dynamics, Flash-frozen tissue, Tissue trauma. +
Footnote †: journal: [email protected]
## 1 Introduction
### Digital holographic optical coherence imaging
Biodynamic profiling is an optical imaging technology related to _en face_ OCT [1] using partially coherent speckle generated by broad-area illumination with coherence detection through digital holography [2]. Biodynamic profiling penetrates up to 1 mm into living tissue and returns high-content information in the form of dynamic light scattering across a broad spectral range. The fluctuation frequencies relate to Doppler frequency shifts caused by light scattering from subcellular constituents that are in motion [3]. The speeds of intracellular dynamics range across nearly four orders of magnitude from nanometers per second (cell membrane motion) to tens of microns per second (organelles and vesicles movement). For a near-infrared backscattering geometry these speeds correspond to Doppler frequencies from 0.01 Hz to 10 Hz. Dynamic light scattering in living tissues has been used to identify intracellular transport signatures of diffusive relative to directed motion [4], for the detection of apoptosis [5], and extracellular restructuring [6].
### Intracellular dynamics of living tissue
Intracellular dynamics in living tissue are dominated by active transport driven by bioenergetic processes far from thermal equilibrium [7]. Cells are highly dynamic systems that continuously undergo internal reconfiguration through random and/or coordinated molecular and mechanical responses. Intracellular dynamics are fundamental processes that support a broad range of functions such as cell migration and division [8]. These intracellular processes are derived from, and often influence, physiological conditions of the cells. Quantitative measurement of intracellular processes would thus aid in building a better understanding of the underlying mechanisms of cellular states and functions.
Dynamic light scattering combined with coherence-gated optical sectioning has led to the development of biodynamic imaging (BDI) [9] and related techniques such as tissue dynamics spectroscopy (TDS) [10]. Biodynamic profiling techniques are sensitive probes of the response of living tissue to applied drugs and therapeutics [9; 11], and this capability has been extended to profiling how living biopsy samples respond to standard-of-care anticancer treatments. Biodynamic studies of chemosensitivity in patients obtain living biopsy samples through a conventional diagnostic process. However, a canine B-cell lymphoma and a human ovarian cancer trial required several years to enroll approximately 20 subjects in each study [12; 13]. This slow rate of enrollment limits the number of samples that can be obtained. To identify biodynamic signatures of drug sensitivity or resistance in the face of sample-to-sample and
patient-to-patient heterogeneity requires phenotypic profiles of at least 50 independent samples depending on the variability of the biodynamic spectral fingerprints.
If flash-frozen biopsies could be revived and measured, then a large reservoir of tissue-banked samples could become available for phenotypic library building. In this paper, we demonstrate that fresh-frozen biopsy samples can be thawed, and their health stabilized sufficiently, to measure biodynamic spectral signatures of their responses to applied therapeutics. The tissue studied here is canine B-cell lymphoma, for which we have both fresh and frozen tissues as well as the patient clinical outcomes. Intracellular biodynamic processes in the thawed tissues do not match those in fresh tissue, but these effects can be partially compensated to allow a comparison between fresh and thawed tissues in the case of the canine lymphoma.
## 2 Biodynamic profiling system and experimental methods
### Biodynamic profiling system
The experimental setup of the biodynamic imaging system is shown in Fig 1 based on a Mach-Zehnder interferometer and off-axis digital holography. The bandwidth of the light source (Superlum, Cork, Ireland) is 50 nm, the wavelength is 840 nm, and the coherence length is \(\sim\)15 \(\upmu\)m. The scattering from the specimen serves as the signal while the reflection from the first beam splitter serves as the reference arm. The crossing angle between the reference beam and the signal beam is two degrees and can be changed by tuning the orientation of the final beam splitter or the optical path delay (mirror system). A neutral density (ND) filter is placed on the reference arm to reduce the intensity of the reference. The CCD camera is placed on the focal plane of the third lens.
### Power spectrogram format
A sequence of OCI frames is captured, representing one observation set of the living target. By capturing several sequences, changes in the time-dependent behavior of the target are detected over many hours. For a sequence of OCI frames, the temporal normalized standard deviation (NSD) of the intensity \(I\) is defined at each pixel (x,y) as
\[NSD(x,y)=\frac{\sigma_{I}(x,y)}{\langle I(x,y)\rangle}=\frac{\sqrt{\langle I^{2}\rangle-\langle I\rangle^{2}}}{\langle I\rangle} \tag{1}\]
The NSD map is called the motility contrast image (MCI). Different biological processes happen at characteristic speeds. All these processes result in local fluctuations in the index of refraction of cells and tissues and cause dynamic changes in the scattered speckle. The autocorrelation of the intensity I of a pixel is given by
Figure 1: Experimental Setup and Dataflow. (a) Optical coherence imaging (OCI) system configuration with low-coherence light. (L1-3: lenses. FP: Fourier plane. IP: image plane. CCD: charge-coupled device digital camera). (b) A single hologram captured on the camera plane, which is also a FP. (c) the blow-up of hologram. (d) the Optical coherence image (OCI) reconstruction and its phase conjugate produced by a two-dimensional fast Fourier transform (FFT). (e) the blow-up of OCI reconstruction.
\[A(\tau)=\langle I(0)\;I(\tau)\rangle=\langle I\rangle^{2}+[\langle I^{2}\rangle- \langle I\rangle^{2}]\;exp\left\{-\frac{\tau}{\tau_{C}}\right\} \tag{2}\]
where \(\tau_{C}\) is the correlation time of the process. For diffusion and backscatter,
\[1/\tau_{C}=q^{2}D=4Dk_{i}^{2} \tag{3}\]
with k\({}_{i}\) being the magnitude of the wavevector of the incident light in the medium and D being the diffusion coefficient. The autocorrelation can be written as
\[A(\tau)=1+(NSD)^{2}\;exp\left\{-\frac{\tau}{\tau_{C}}\right\} \tag{4}\]
The Fourier transform is a Lorentzian
\[S(\omega)=\frac{(NSD)^{2}\;\tau_{C}}{(\omega\;\tau_{C})^{2}+1} \tag{5}\]
In log-frequency space, S(\(\omega\)) has a distinct shape and exhibits a knee frequency at
\[\omega_{C}=\frac{1}{\tau_{C}}=q^{2}D \tag{6}\]
The observed process in living tissues is not strictly diffusive, presenting evidence of Levy statistics and heavy tails [16] and hence the power spectrum can be approximated as
\[S(\omega)=\frac{(NSD)^{2}/\omega_{C}}{(\omega\;/\omega_{C})^{s}+1} \tag{7}\]
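As a minimal illustration of how Eqs. (1) and (7) are applied in practice, the following sketch computes the motility contrast (NSD) map and a spatially averaged fluctuation spectrum from a stack of reconstructed OCI frames and then fits the heavy-tailed spectral model. The array shapes, frame period, and the use of SciPy's `curve_fit` are illustrative assumptions rather than the actual processing pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def motility_contrast(frames):
    """Eq. (1): temporal standard deviation over temporal mean of the intensity, per pixel."""
    return frames.std(axis=0) / (frames.mean(axis=0) + 1e-12)

def fluctuation_spectrum(frames, dt):
    """Temporal FFT of intensity fluctuations per pixel, averaged over the field of view."""
    fluct = frames - frames.mean(axis=0, keepdims=True)
    spec = np.abs(np.fft.rfft(fluct, axis=0))**2
    freqs = np.fft.rfftfreq(frames.shape[0], d=dt)
    return freqs, spec.mean(axis=(1, 2))

def heavy_tail_spectrum(w, nsd2, w_c, s):
    """Eq. (7): S(w) = (NSD^2 / w_c) / ((w / w_c)^s + 1)."""
    return (nsd2 / w_c) / ((w / w_c) ** s + 1.0)

# Placeholder OCI stack (time, y, x) and frame period; real data would come from the holographic reconstruction
frames = np.random.rand(256, 32, 32)
mci = motility_contrast(frames)
freqs, spec = fluctuation_spectrum(frames, dt=0.04)

# Fit the heavy-tailed model to a synthetic spectrum with a known knee frequency
w = np.linspace(0.01, 10.0, 200)                        # Hz
target = heavy_tail_spectrum(w, 0.04, 0.5, 1.8)
noisy = target * (1.0 + 0.05 * np.random.randn(w.size))
popt, _ = curve_fit(heavy_tail_spectrum, w, noisy, p0=[0.01, 1.0, 2.0])
print("fitted knee frequency:", popt[1], "Hz, exponent:", popt[2])
```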
### Tissue dynamics spectroscopy
The central data format of biodynamic profiling of intracellular dynamics inside living tissue is the drug-response spectrogram defined as
\[D(\omega,t;r)=\log S(\omega,t;r)-\log S_{0}(\omega,t_{0};r) \tag{8}\]
Figure 2: Spectra and spectrograms. (a), The power spectrum is the sum of all motion processes inside the tissues. (b), Fluctuation power spectra are acquired by performing temporal FFTs over reconstructed time series of optical coherence images of dynamic speckle monitored repeatedly over many hours. (c), The spectrogram is defined as the time dependent difference of the spectra between the baseline and the post-dose measurement. X axis is frequency, Y axis is time. The baseline measurement (prior to drug application) occurs in the first 4 hours. Post-drug-application measurement spans 10 hours
where \(S(\omega,t;r)\) is the spectral power density at time \(t\) for the voxel located at \(r=(x,y,z)\). The spectrogram is referenced to the baseline at time \(t_{0}\) prior to the application of the drug. Spectrograms are typically taken at a fixed depth \(z\) (usually near the midsection of the biopsy) and averaged over \((x,y)\) to yield an average spectrogram for the sample.
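A schematic implementation of the drug-response spectrogram of Eq. (8) is sketched below. Here the baseline \(S_{0}\) is taken as the average of the pre-dose spectra (an assumption; Eq. (8) references a single baseline time \(t_{0}\)), and the time grid, frequency grid, and mock spectra are placeholders chosen only to show the bookkeeping, not the acquisition settings used in this study.

```python
import numpy as np

def drug_response_spectrogram(spectra, times, t_dose):
    """Eq. (8): D(w, t) = log S(w, t) - log S0(w), with S0 the average pre-dose (baseline) spectrum.
    `spectra` has shape (n_times, n_freqs); rows with times < t_dose form the baseline."""
    spectra = np.asarray(spectra, dtype=float)
    baseline = spectra[np.asarray(times) < t_dose].mean(axis=0)
    return np.log(spectra + 1e-20) - np.log(baseline + 1e-20)

# Illustrative numbers: 14 hourly measurements, drug applied after 4 hours
times = np.arange(14) + 0.5                              # hours
freqs = np.logspace(-2, 1, 60)                           # 0.01-10 Hz
spectra = np.tile(1.0 / (freqs + 0.1), (14, 1))          # placeholder spectra
spectra[times > 4.0] *= np.exp(-0.2 * freqs)             # mock post-dose high-frequency suppression
D = drug_response_spectrogram(spectra, times, t_dose=4.0)
print(D.shape)                                           # (14, 60): rows = time points, columns = frequencies
```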
## 3 Materials and methods
Original data on Fresh samples were collected from canine B-cell lymphoma patients. The common treatment applies a CHOP regimen therapy, which is a combination of four different cancer drugs (doxorubicin, cyclophosphamide, prednisolone, vincristine). Progression-free survival (PFS) defines the chemotherapy response sensitivity (Table 1). Progression-free survival is defined as the length of time during and after the treatment that a patient lives with the disease without progression. A PFS longer than 180 days is considered as a chemotherapy sensitive response for the canine patients.
The frozen tissues were snap-frozen in liquid nitrogen within 10-15 minutes of collection from the animal. All the tissue samples were kept frozen in liquid nitrogen in a large tank in the biorepository until the time of thawing/use. Upon retrieval, the samples were rapidly thawed by agitation in a 37\({}^{\circ}\)C water bath and suspended in 37\({}^{\circ}\)C RPMI medium containing 10% fetal bovine serum and 100 U/mL penicillin-100 \(\mu\)g/mL streptomycin. Small 1 mm\({}^{3}\) pieces were then assayed as previously described. For each day's experiment, sixteen canine B-cell lymphoma samples are placed in a 96-well plate. There are four negative-control wells treated with 0.1% DMSO carrier and twelve wells treated with the drugs with duplicates (CHOP, prednisolone, vincristine) and triplicates (doxorubicin, cyclophosphamide).
## 4 Results

The negative-control (DMSO) spectrograms for the fresh and thawed samples are shown in Fig 3. The spectrogram for the thawed samples shows a strong inhibition across the full bandwidth. This stronger suppression is caused by the tissue damage associated with the freeze-thaw process. The difference of the thawed DMSO spectrogram relative to the fresh shows the strongest relative suppression at high and low frequencies for the thawed tissues.
The averaged drug-response spectrograms for the Fresh and Thawed samples are shown in Fig 4. These spectrograms have had the average negative control response subtracted. They are arranged in four groups according to clinical outcomes: 1) Fresh drug-Resistant group, 2) Thawed drug-Resistant group, 3) Fresh drug-Sensitive group, 4) Thawed drug-Sensitive group for the four drug treatments. When comparing the trend for overall spectral response between the Fresh and Thawed cohorts with respect to the chemotherapy response phenotype, there are notable differences between the Fresh and Thawed spectrograms.
Figure 4: Relative spectrogram comparison of Fresh/Thawed and drug Sensitive/Resistant for the BDI results. Each column stands for a relative spectrogram (negative control removed) comparison based on one specific drug response under four different conditions: Fresh drug-Resistant biopsy, Thawed drug-Resistant biopsy, Fresh drug-Sensitive biopsy, Thawed drug-Sensitive biopsy. X-axis is Frequency, Y-axis is measurement time. Drug has been applied after 4 hours. All the drug response spectrograms have already subtracted the corresponding negative control response spectrogram.
Figure 3: Negative control (DMSO-based response) spectrograms. X-axis is frequency; Y-axis is measurement time. The 0.1% DMSO in growth medium is applied after 4 hours. **(a)** Spectrograms for fresh biopsy, **(b)** Spectrograms for thawed sample, **(c)** The difference of spectrogram between thawed samples and fresh biopsies.
The differences between the averaged Thawed relative to the Fresh spectrograms are shown in Fig 5. These figures show a rough correspondence of spectrograms within each cohort. For instance, all the drug-Resistant spectrograms show a strong enhancement in low frequency, while all the drug-Sensitive spectrograms show a strong enhancement in high frequency. These data show a consistent trend for overall enhanced spectral responses in short-PFS phenotypes relative to long-PFS phenotypes when under treatment.
## 5 Machine learning and data clustering
The central goal of biodynamic phenotyping is to establish libraries of spectral fingerprints for living tissue response to chemotherapies that show different phenotypes between patients who are sensitive or resistant to therapy. These spectral libraries can then be used in machine-learning classifiers to predict whether prospectively-enrolled patients will be sensitive or resistant to a selected chemotherapy. Because fresh tissue samples are relatively expensive to acquire, the ability to build libraries from frozen tissues in tissue banks would be a significant resource, if the tissue damage caused by the freeze/thaw can be characterized and removed.
A key goal of this preclinical study is to construct a chemoresistance classifier that takes a set of treatment spectrograms for a single patient and predicts whether the patient will have a sensitive response to a selected treatment regimen. To accomplish this, the drug-response spectrograms for each patient are deconstructed into a set of mathematical features, each capturing either local or global patterns. The construction of the classification algorithm is based on linear separability in a feature space. The time-frequency drug-response spectrograms for each patient are deconstructed into a set of features, each capturing either local or global spectrogram patterns. Examples include overall enhancement/suppression over all frequencies (ALLF, ALLFT); localized low, mid and high (HI) frequencies; red shifts or blue shifts (SDIP); and different time dependences in response to the applied treatment (SDIP vs. SDIP2 and ALLF vs. ALLFT). Over 40 biodynamic feature vectors are defined, described in previous publications on biodynamic profiling [9; 14; 15].
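The feature-based clustering step can be sketched roughly as follows; the random feature matrix, the use of the Pearson correlation coefficient as the patient-to-patient similarity, and SciPy's average-linkage clustering are stand-ins for the actual biodynamic feature extraction and classifier described in the cited references.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
n_patients, n_features = 16, 32                        # e.g., 32 spectrogram biomarkers per patient
features = rng.normal(size=(n_patients, n_features))   # placeholder feature matrix

# Similarity matrix: correlation coefficient between the biomarker vectors of each pair of patients
similarity = np.corrcoef(features)

# Convert similarity to a distance, cluster with average linkage, and cut into two groups
distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)
labels = fcluster(linkage(squareform(distance, checks=False), method="average"),
                  t=2, criterion="maxclust")
print(labels)                                          # predicted group (1 or 2) for each patient
```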
Fig 6 shows the combined clustering result for all Fresh and Thawed samples. Many of the patients from the drug-Sensitive group are clustered with the drug-Resistant group. The classification accuracy is low, as expected, because of the freezing/thawing trauma.
Figure 5: Relative spectrograms of Thawed groups relative to fresh drug responses grouped by cohort. The relative spectrograms are generated by subtracting the spectrogram of the corresponding Fresh biopsies from the spectrogram of the Thawed samples, grouped within the same chemotherapy and cohort. X-axis is Frequency and Y-axis is measurement time. Drug is applied after 4 hours.
The thawing trauma can be partially eliminated by subtracting the averaged excess negative-control response. Fig 7 shows the result for all Fresh and Thawed samples after the compensation. Overall, 12 out of 14 samples are classified correctly. These clustering results show a clearer distinction between the different chemotherapy response groups. They also show that the compensation can partially remove the excess effect of the thawing damage.
Figure 6: Clustering analysis. **(a)** Selected biomarkers, which are the most important features for distinguishing the two different cohorts. **(b)** Clustered similarity matrix for all dogs (1-7 represent the Resistant group, 8-16 represent the Sensitive group). Each grid cell stands for the correlation coefficient of 32 biomarkers between the two corresponding dog samples. The matrix shows a weak block-diagonal pattern with two main groups. **(c)** Similarity network for all dogs; blue dots stand for the drug-response-resistant cohort, while red dots represent the drug-response-sensitive cohort. The clique coefficient is defined as the accuracy of clustering into the correct group.
## 6 Discussion & Conclusion
The challenge for many clinical drug studies is the time necessary (sometimes months to years) to follow patient response to determine effectiveness. The use of frozen banked tissue with known clinical outcome represents a possible source of material for testing, provided enough cells survive the freezing process and maintain their phenotype. The goal of this project was to determine if thawed samples from neoadjuvant patients could be used to accurately assess drug response phenotype. Many cells in a thawed sample will be damaged due to ice crystal formation and will not survive the thaw process. However, some percentage of cells does survive, and cells can be grown out from thawed tissues. In addition to the problem of cryo-damage to the cells, there is also the question of whether the surviving cells would maintain the drug response phenotype as noted in the pathology report.
This study demonstrates that sufficient viable cells exist in a thawed sample to assess drug-response phenotype using biodynamic profiling. The relatively short revival time for biodynamic profiling analysis may be essential for this success. A significant advantage of biodynamic profiling is that the tissue can be monitored in the multiwell plate relatively soon after thawing. A more prolonged processing and/or culture time may not provide similarly consistent data since degradative processes (apoptosis, necrosis) can be triggered by the freeze-thaw stress.
## Acknowledgements
This research was supported by NSF CBET-2200186.
|
2301.04322
|
Cooperation and Competition in Synchronous Open Quantum Systems
|
Synchronization between limit cycle oscillators can arise through entrainment
to an external drive or through mutual coupling. The interplay between the two
mechanisms has been studied in classical synchronizing systems, but not in
quantum systems. Here, we point out that competition and cooperation between
the two mechanisms can occur due to phase pulling and phase repulsion in
quantum systems. We study their interplay in collectively driven degenerate
quantum thermal machines and show that these mechanisms either cooperate or
compete depending on the working mode of the machine (refrigerator or engine).
The entrainment-mutual synchronization interplay persists with an increase in
the number of degenerate levels, while in the thermodynamic limit of
degeneracy, mutual synchronization dominates. Overall, our work investigates
the effect of degeneracy and multilevel scaling of quantum synchronization and
shows how different synchronizing mechanisms can cooperate and compete in
quantum systems.
|
Taufiq Murtadho, Sai Vinjanampathy, Juzar Thingna
|
2023-01-11T06:17:56Z
|
http://arxiv.org/abs/2301.04322v1
|
# Cooperation and Competition in Synchronous Open Quantum Systems
###### Abstract
Synchronization between limit cycle oscillators can arise through entrainment to an external drive or through mutual coupling. The interplay between the two mechanisms has been studied in classical synchronizing systems, but not in quantum systems. Here, we point out that competition and cooperation between the two mechanisms can occur due to phase pulling and phase repulsion in quantum systems. We study their interplay in collectively driven degenerate quantum thermal machines and show that these mechanisms either cooperate or compete depending on the working mode of the machine (refrigerator or engine). The entrainment-mutual synchronization interplay persists with an increase in the number of degenerate levels, while in the thermodynamic limit of degeneracy, mutual synchronization dominates. Overall, our work investigates the effect of degeneracy and multilevel scaling of quantum synchronization and shows how different synchronizing mechanisms can cooperate and compete in quantum systems.
_Introduction.-_ Synchronization is a ubiquitous phenomenon in which stable phase relations emerge between multiple limit cycle oscillators [1]. There are two main mechanisms that give rise to synchronization: i. _Entrainment_ that refers to synchronization of an oscillator by unidirectional coupling to a periodic external drive [2], and ii. _Mutual synchronization_ which refers to the adjustment of rhythms of two or more mutually coupled oscillators, such as in the widely-known Kuramoto model [3]. These two mechanisms may coexist in some systems [4; 5; 6; 7] and their interplay has also been experimentally studied in globally coupled electrochemical oscillators [8].
In the same spirit as classical synchronization, quantum synchronization is often studied through entrainment [9; 10; 11; 12; 13; 14] or mutual coupling [15; 16; 17; 18; 19; 20; 21] and has been experimentally observed recently [22; 23; 24]. However, unlike classical synchronization, the coexistence and the interplay between these two mechanisms in the quantum regime has not been investigated. Understanding this interplay is crucial in the control of various quantum technologies where both driving and interaction are important such as in superradiant lasers [16], coupled time-crystals [25], and coupled heat engines [26; 27; 28; 29].
In this work, we show that the phases of steady-state coherences follow a phase synchronization model, where the external entraining drive competes with the mutually coupled phases. This opens up the possibility of observing well-studied classical phenomena, such as synchronization-anti-synchronization transition [30] and chimera [31; 32], in the quantum regime. Our framework applies to generic quantum systems, with one or more external drives that couple the coherences that themselves are mutually coupled, either coherently or dissipatively, leading to an interplay between entrainment and mutual synchronization.
As a concrete example, we consider a degenerate multilevel generalization of the Scovil-Schulz-DuBois maser heat engine [33], where the external collective drive connects transitions between the degenerate manifold and the first-excited state [34]. The states within the degenerate manifold mutually interact to form a _stable_ collective symmetric (in-phase) and anti-symmetric (out-of-phase) superposition (mutual synchronization). At the same time, the external drive causes the phases within the degenerate manifold to be aligned in-phase with the drive (entrainment). In the engine regime, stimulated emission consumes the collective symmetric superposition state thereby enhancing the population of the _anti-symmetric_ state. Thus, there is competition between entrainment (in-phase) and mutual synchronization (out-of-phase). In the refrigerator regime, the stimulated absorption enhances the population of the collective _symmetric_ superposition state thereby always cooperating with entrainment. Our work sheds light on the synergistic interplay between entrainment and mutual synchronization in quantum systems.
_Quantum synchronization in \(D\)-level systems.-_ Quantum synchronization has been studied in systems with continuous degrees of freedom such as oscillators [9; 10; 11; 13; 15; 17; 35] and discrete degrees of freedom such as spin-1 systems [12; 14; 20]. A wide variety of measures, based on various physical and mathematical motivations such as phase-space based measures [9; 12; 20], correlation measures [36], and information-theoretic measures [37] has been used to quantify synchronization.
In this work, we use the phase-space based measure built on the Husimi-Q phase space representation [38; 39] of the steady-state \(\rho^{ss}\) with respect to \(SU(D)\) coherent state [39; 40] defined as
\[Q[\rho^{ss}]=\frac{D!}{\pi^{D-1}}\left\langle\alpha_{D}|\rho^{ss}|\alpha_{D} \right\rangle, \tag{1}\]
where \(|\alpha_{D}\rangle=\sum_{n=1}^{D}\alpha_{n}\left|n\right\rangle\) is the \(SU(D)\) coherent state with coefficients
\[\alpha_{n}=\begin{cases}e^{i\phi_{n}}\cos\theta_{n}\prod_{k=1}^{n-1}\sin\theta _{k}&1\leq n<D\\ e^{i\phi_{D}}\prod_{k=1}^{D-1}\sin\theta_{k}&n=D,\end{cases} \tag{2}\]
where it is implicitly assumed that the product term is unity for \(n=1\) and the reference phase is \(\phi_{1}=0\). The synchronization measure is obtained by integrating out the angles \(\theta_{k}\), which correspond to the population degrees of freedom, and subtracting the same integral of the uniform measure:
\[S(\phi_{1},\cdots,\phi_{D-1}) =\int Q[\rho^{ss}]d\Theta-\frac{1}{(2\pi)^{D-1}}\] \[=\frac{1}{2^{D+1}\pi^{D-2}}\sum_{n\neq m}\rho_{nm}^{ss}e^{i(\phi _{m}-\phi_{n})}, \tag{3}\]
which lives on a \(D-1\) dimensional torus (see Append. A). The distribution \(S(\phi_{1},\cdots,\phi_{D-1})\) is zero everywhere for a diagonal steady-state, which is interpreted as a limit cycle [37] possessing stable amplitudes (fixed diagonal elements) but free phases. The notion of free phase in such a diagonal limit cycle is analogous to a classical stochastic limit cycle whose phase distribution approaches a uniform distribution in the steady-state [1; 13; 14; 41; 42].
We associate the peak of \(S(\phi_{1},\cdots,\phi_{D-1})\) as a phase-space synchronization measure [12; 20; 42],
\[S_{max}=\max_{\phi_{1},\cdots,\phi_{D-1}}\frac{1}{2^{D+1}\pi^{D-2}}\sum_{n\neq m }\rho_{nm}^{ss}e^{i(\phi_{m}-\phi_{n})}. \tag{4}\]
The synchronization measure, \(S_{max}\) only depends on the steady-state coherences. However, we note that a high value of \(S_{max}\) requires all phase preferences \(\Phi_{ij}=\arg(\rho_{ij}^{ss})\) to be compatible, i.e., \(\Phi_{ij}-\Phi_{jk}=\Phi_{ik}\;\forall i\neq j\neq k\), a condition that is stronger than the mere presence of coherences.
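To make the bookkeeping behind Eqs. (3)-(4) concrete, the sketch below evaluates the phase quasi-distribution on a grid of the \(D-1\) free phases for a given steady-state density matrix and reads off \(S_{max}\). The example density matrix is arbitrary and only illustrates the procedure; it is not the maser steady state derived below.

```python
import numpy as np

def phase_distribution(rho_ss, phi_grid):
    """Eq. (3) for a D-level steady state: S over the D-1 free phases, with the reference phase set to zero.
    `phi_grid` has shape (n_points, D-1)."""
    D = rho_ss.shape[0]
    prefactor = 1.0 / (2 ** (D + 1) * np.pi ** (D - 2))
    values = np.empty(len(phi_grid))
    for p, phis in enumerate(phi_grid):
        phi = np.concatenate(([0.0], phis))
        total = 0.0
        for n in range(D):
            for m in range(D):
                if n != m:
                    total += np.real(rho_ss[n, m] * np.exp(1j * (phi[m] - phi[n])))
        values[p] = prefactor * total
    return values

# Arbitrary 3-level density matrix with small coherences (illustration only)
rho = np.array([[0.50, 0.05j, 0.05j],
                [-0.05j, 0.25, 0.02],
                [-0.05j, 0.02, 0.25]])
g = np.linspace(0.0, 2.0 * np.pi, 60)
grid = np.array([(a, b) for a in g for b in g])
S = phase_distribution(rho, grid)
print("S_max =", S.max(), "at phases", grid[S.argmax()])
```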
_Degenerate thermal maser.-_ Entrainment in quantum systems is the result of an interplay between coherent driving and dissipation [10; 12]. The system we consider is depicted in Fig. 1 and consists of \((N+2)\) levels whose bare Hamiltonian is given by,
\[H_{0}=\omega_{1}\left|1\right\rangle\left\langle 1\right|+\sum_{j=2}^{N+1} \omega_{j}\left|j\right\rangle\left\langle j\right|, \tag{5}\]
with \(\omega_{j+1}>\omega_{j},\omega_{0}=0\). The upper \(N\) levels are degenerate with \(\omega_{2}=\omega_{3}=\cdots=\omega_{N+1}\). Although we work in the limit of exact degeneracy, our main results hold even in the near-degenerate scenario and will be considered in detail in an accompanying Ref. [43].
This system is driven by a monochromatic drive \(V(t)\) of frequency \(\Omega\) given by
\[V(t)=\sum_{j=2}^{N+1}\lambda_{j}e^{i\Omega t}\left|1\right\rangle\left\langle j \right|+\text{h.c.} \tag{6}\]
This drive can be rewritten as a coupling to a collective-transition mode \(\left|1\right\rangle\leftrightarrow\left|J\right\rangle=(1/\lambda_{eff}) \sum_{j}\lambda_{j}\left|j\right\rangle\) with \(\lambda_{eff}=\sqrt{\sum_{j}\left|\lambda_{j}\right|^{2}}\) being the effective coupling strength. Such collective drives are realizable in an ensemble of atoms driven by light, if the inter-atomic distance is much smaller than the wavelength of the light, such as in the case of Dicke superradiance [44].
The system is acted upon by a dissipator
\[\mathcal{D}[\rho]=\sum_{\mu=1}^{2}\left[\Gamma_{c_{\mu}}\mathcal{L}[c_{\mu}] \rho+\sum_{j=2}^{N+1}\Gamma_{h_{\mu}}\mathcal{L}[h_{\mu}^{j}]\rho\right], \tag{7}\]
which leads to a multilevel generalization of the Scovil-Schulz-DuBois maser heat engine [33; 34]. The dissipator \(\mathcal{L}[X]\rho=2X\rho X^{\dagger}-\left\{X^{\dagger}X,\rho\right\}\) is of the Lindblad form such that the hot (cold) bath with jump operators \(h_{1}^{j}=h_{2}^{j\dagger}=\left|0\right\rangle\left\langle j\right|\) (\(c_{1}=c_{2}^{\dagger}=\left|0\right\rangle\left\langle 1\right|\)) induce transitions between the ground state and the degenerated manifold (first-excited state). The associated rates follow local-detailed balance and are given by \(\Gamma_{h_{1}(c_{1})}=\gamma_{h(c)}(1+n_{h(c)})\) and \(\Gamma_{h_{2}(c_{2})}=\gamma_{h(c)}n_{h(c)}\) with \(\gamma_{h(c)}\) being the effective system-bath coupling strength and \(n_{h(c)}=\left[\exp(\beta_{h(c)}\omega_{2(1)})-1\right]^{-1}\) being the Bose-Einstein distribution at inverse temperature
Figure 1: Schematic of the degenerate quantum thermal maser, which is a generalization of the standard Scovil–Schulz-DuBois three-level thermal maser [33]. Here, \(N\) is the number of states in the degenerate manifold and here we focus on the case \(\Delta=0\). The near-degenerate case where \(\Delta\neq 0\) is discussed in the accompanying manuscript [43]
\(\beta_{h(c)}\). The action of the heat baths leads to a population inverted steady state between the first-excited state \(\ket{1}\) and the degenerated manifold \(\{\ket{j}\), \(\forall j=2,\cdots,N+1\}\) if \(n_{h}>n_{c}\). If there is population inversion, the system behaves as a maser heat engine [45]. However, if \(n_{h}<n_{c}\), population inversion is lost and the system behaves as a refrigerator by attenuating the drive [45]. We can rewrite the Hamiltonian in a frame co-rotating with the drive as \(\tilde{H}=(\Omega/2)(\sum_{j=2}^{N+1}\ket{j}\bra{j}-\ket{1}\bra{1})\) giving us the rotating frame quantum master equation,
\[\frac{d\tilde{\rho}}{dt}=-i[H_{0}-\tilde{H}+\tilde{V},\tilde{\rho}]+\mathcal{D }[\tilde{\rho}], \tag{8}\]
where \(\tilde{O}\equiv e^{-i\tilde{H}t}Oe^{i\tilde{H}t}\) (\(O=\rho,V\)) is an operator in the rotated frame with \(\tilde{V}=\sum_{j=2}^{N+1}\lambda_{j}\ket{1}\bra{j}+\text{h.c.}\).
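A numerical sketch of the rotating-frame master equation (8) for \(N=2\) using QuTiP is given below. The mapping of the rates in Eq. (7) onto collapse operators assumes the factor-of-two convention of that dissipator (each rate \(\Gamma\) enters as \(\sqrt{2\Gamma}\)), and the parameter values are chosen loosely in the spirit of the Fig. 2 caption rather than reproducing any specific data set.

```python
import numpy as np
from qutip import basis, steadystate

# Levels: |0> ground, |1> first excited, |2>,|3> degenerate manifold (N = 2)
dim = 4
ket = [basis(dim, i) for i in range(dim)]

w1 = 1.0
w2 = 3.0 * w1                          # degenerate-manifold frequency
Omega = w2 - w1                        # resonant drive
lam = 0.1 * w1                         # homogeneous drive strength lambda_j
gamma_c, gamma_h = 0.2 * w1, 0.05 * w1
n_c, n_h = 0.5, 2.0                    # n_h > n_c: engine regime

# Rotating-frame Hamiltonian H0 - H_tilde + V_tilde (Eq. 8), on resonance
H = (w1 + 0.5 * Omega) * ket[1] * ket[1].dag()
for j in (2, 3):
    H += (w2 - 0.5 * Omega) * ket[j] * ket[j].dag()
    H += lam * (ket[1] * ket[j].dag() + ket[j] * ket[1].dag())

# Collapse operators; sqrt(2*Gamma) maps the convention of Eq. (7) onto QuTiP's Lindblad form
c_ops = [np.sqrt(2 * gamma_c * (1 + n_c)) * ket[0] * ket[1].dag(),
         np.sqrt(2 * gamma_c * n_c) * ket[1] * ket[0].dag()]
for j in (2, 3):
    c_ops += [np.sqrt(2 * gamma_h * (1 + n_h)) * ket[0] * ket[j].dag(),
              np.sqrt(2 * gamma_h * n_h) * ket[j] * ket[0].dag()]

rho_ss = steadystate(H, c_ops).full()
print("rho_12 =", rho_ss[1, 2])        # entrainment-induced coherence, cf. Eq. (9)
print("rho_23 =", rho_ss[2, 3])        # mutual-coupling coherence, cf. Eq. (10)
```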
_Competition vs cooperation.-_ Equation (8) can be solved analytically for the case of homogeneous driving strength \(\lambda_{j}=\lambda\) (\(\forall j=2,\cdots,N+1\)) and resonant driving \(\Omega=\omega_{2}-\omega_{1}\). In this case, the steady-state coherences are given by
\[\tilde{\rho}_{1j}^{ss}=i\frac{\lambda(n_{c}-n_{h})\gamma_{c} \gamma_{h}(1+n_{h})}{F(N,n_{h},n_{c},\gamma_{c},\gamma_{h},\lambda)}, \tag{9}\] \[\tilde{\rho}_{jl}^{ss}=\frac{\lambda^{2}\gamma_{c}(n_{c}-n_{h})}{ F(N,n_{h},n_{c},\gamma_{h},\gamma_{c},\lambda)}, \tag{10}\]
where \(j,l=2,\cdots,N+1\), \(j\neq l\) and the function \(F(N,n_{h},n_{c},\gamma_{c},\gamma_{h},\lambda)=AN^{2}+BN+C\) with \(A,B\), and \(C\) being positive constants that depend on all remaining parameters (see Append. B for the explicit expressions for these constants).
The _non-degenerate coherences_ (\(\tilde{\rho}_{1j}\)) are directly induced (i.e., \(\propto\lambda\)) by the drive whereas the _degenerate coherences_ (\(\tilde{\rho}_{jl}\)) are an indirect consequence (\(\propto\lambda^{2}\)) of the collective nature of the drive. Their differences are clear as one transforms back to the original frame in which \(\rho_{1j}=\tilde{\rho}_{1j}e^{-i\Omega t}\) and \(\rho_{jl}=\tilde{\rho}_{jl}\). The phase preferences induced by \(\rho_{1j}\) rotate with the driving frequency while that of \(\rho_{jl}\) remain stationary in the original frame. Both of these coherences affect the phase distributions of the states within the degenerate manifold. For these reasons, we infer that there are two synchronization mechanisms at play in this system, entrainment induced directly by the drive and mutual coupling that occurs due to the presence of a degenerate manifold. Entrainment induces phases relative to driving whose effect is the emergence of stable non-degenerate coherences \(\tilde{\rho}_{1j}^{ss}\). On the other hand, mutual coupling induces a relative phase between states in the degenerated manifold independent of the driving phase, which is reflected by stable degenerate coherences \(\tilde{\rho}_{jl}^{ss}\).
Recall that we have denoted \(\Phi_{ij}=\arg(\tilde{\rho}_{ij}^{ss})\) as the steady-state phase preferences. When there are multiple of such preferences, synchronization requires all the phase relations to be compatible, i.e. \(\Phi_{ij}-\Phi_{jk}=\Phi_{ik}\) (\(i\neq j\neq k\)). However, we find that in our system such a condition is only satisfied in the refrigerator regime where \(\Phi_{1j}=\pi/2\) (\(\forall j\)) and \(\Phi_{jl}=0\) (\(j\neq l\)). In the engine regime, we have \(\Phi_{1j}=-\pi/2\) (\(\forall j\)) and yet \(\Phi_{jl}=\pi\) (\(j\neq l\)). We interpret this as a result of an interplay between entrainment and mutual coupling. We find that entrainment always pulls the degenerate states to be _in-phase_ (Fig. 2**a**). Mutual coupling prefers _out-of-phase_ configuration in the engine regime (Fig. 2**b**), and in-phase configuration in the refrigerator regime. Consequently, we expect entrainment and mutual coupling to cooperate in the refrigerator regime and compete in the engine regime.
The competition and cooperation are obvious when we calculate the phase space synchronization measure \(S_{max}\) [see Eq. (4)]. In general, this requires optimization over
Figure 2: Interplay between entrainment and mutual coupling for \(N=2\). Panels **a** and **b** show phase quasi-distribution function \(S(\varphi_{21},\varphi_{31})\) [Eq. (3)] where \(\varphi_{ij}=\phi_{i}-\phi_{j}\) in the engine regime (\(n_{h}/n_{c}=100\)). For \(k=3\), \(S(\varphi_{21},\varphi_{31})\) shows a localized maximum when the phases are in-phase (\(\varphi_{21}-\varphi_{31}\approx 0\) in the red-region in **a**, entrainment-dominant). Whereas for \(k=0.75\), when \(S(\varphi_{21},\varphi_{31})\) is maximized the phases do not localize but their difference is out-of-phase (\(\varphi_{21}-\varphi_{31}\approx\pi\) in the red-region in **b**, mutual coupling dominant). Panel **c** shows \(S_{max}\) (solid circle) as a function of \(n_{h}/n_{c}\) with the solid line representing the analytic prediction of Eq. (11). The dashed line is the entrainment contribution to \(S_{max}\), i.e., (\(|\rho_{12}|+|\rho_{13}|\))/16\(\pi^{2}\). The vertical dotted line represents the boundary between refrigerator (\(n_{h}/n_{c}<1\)) and engine (\(n_{h}/n_{c}>1\)) regimes. Panel **d** shows \(S_{max}\) (solid circle) and (\(|\rho_{12}|+|\rho_{13}|\))/16\(\pi^{2}\) (dashed line) plotted against inhomogeneous driving strength ratio \(|\lambda_{2}/\lambda_{3}|\leq 1\) in the engine (red) and refrigerator (blue) regimes indicating competition (cooperation) between entrainment and mutual coupling is robust in the engine (refrigerator) regime. The other parameter values are \(\omega_{2}=\omega_{3}=3\omega_{1}\), \(\Omega=\omega_{2}-\omega_{1}\), \(\gamma_{c}=0.2\omega_{1}\), \(\gamma_{h}=0.05\omega_{1}\), \(n_{c}=0.5\), and \(\lambda_{2}=0.1\omega_{1}\)
\(N\) variables which we calculate analytically for \(N=2\) (see Append. C)
\[S_{max}=\frac{1}{16\pi^{2}}\times\begin{cases}|\tilde{\rho}_{12}^{ss}|+|\tilde{\rho}_{13}^{ss}|+|\tilde{\rho}_{23}^{ss}|&\text{if }n_{h}<n_{c}\\ |\tilde{\rho}_{12}^{ss}|+|\tilde{\rho}_{13}^{ss}|-|\tilde{\rho}_{23}^{ss}|&\text{if }n_{h}>n_{c}\;\&\;k>2\\ \left(1+\frac{k^{2}}{2}\right)|\tilde{\rho}_{23}^{ss}|&\text{if }n_{h}>n_{c}\;\&\;k<2,\end{cases} \tag{11}\]
where \(k=\gamma_{h}(1+n_{h})/\lambda=|\tilde{\rho}_{12}^{ss}|/|\tilde{\rho}_{23}^{ss}| =|\tilde{\rho}_{13}^{ss}|/|\tilde{\rho}_{23}^{ss}|\) is the _dissipation-to-driving_ ratio. The set of optimal phases \((\varphi_{21}^{opt},\varphi_{31}^{opt})\equiv(\varphi_{21},\varphi_{31})|_{S= S_{max}}\) evaluated in Append. C are given by,
\[(\varphi_{21}^{opt},\varphi_{31}^{opt})=\begin{cases}\left(-\dfrac{\pi}{2},-\dfrac{\pi}{2}\right)&\text{if }n_{h}<n_{c}\\ \left(\dfrac{\pi}{2},\dfrac{\pi}{2}\right)&\text{if }n_{h}>n_{c}\;\&\;k>2\\ \left(\chi,\pi-\chi\right)\;\&\;(\pi-\chi,\chi)&\text{if }n_{h}>n_{c}\;\&\;k<2,\end{cases} \tag{12}\]
where \(\varphi_{ij}=\phi_{i}-\phi_{j}\) and \(\chi=\arcsin(k/2)\). Equations (11)-(12) show the effect of the coherent drive and bath couplings on the synchronous dynamics of the system. Cooperation in the refrigerator regime (\(n_{c}>n_{h}\)) is reflected by the fact that each component of the magnitude of coherence adds up in the synchronization measure \(S_{max}\), whereas in the engine case there is competition since the mutual coupling component \(|\rho_{23}^{ss}|\) reduces the effect of the entrainment contribution \(|\rho_{12}^{ss}|+|\rho_{13}^{ss}|\). In other words, the phases are either equal in some cases or they are arranged antipodally in other cases, as shown in Eq.(12).
In the engine regime, \(S_{max}\) is also divided into regimes where entrainment is dominant (\(k>2\)) and where the mutual coupling is dominant (\(k<2\)). For the entrainment-dominant regime, the competition is apparent from the negative contribution of \(|\rho_{23}^{ss}|\) to \(S_{max}\). Note that this is different from the previously reported phenomenon of synchronization blockade [14; 46]: in our case, \(S_{max}\) cannot vanish except for \(\lambda=0\) or \(n_{h}=n_{c}\), where the steady-state is diagonal (see Append. D). The transition from the entrainment- to the mutual-coupling-dominant regime is shown in Figs. 2**a**-**b**, where we plot the phase distribution \(S(\varphi_{21},\varphi_{31})\) for different \(k\) values. In particular, we see that as we cross \(k=2\), the relative phases go from in-phase to out-of-phase. Moreover, the localization pattern changes from a point localization to a ring localization (on a torus), where in the latter only the relative phase \(\varphi_{23}=\varphi_{21}-\varphi_{31}\) is fixed, indicating that entrainment is lost.
The competition and cooperation observed are also robust with respect to all values of the individual driving strength ratio \(\lambda_{2}/\lambda_{3}\), as shown in Fig. 2**d**. Interestingly, \(S_{max}\) is symmetric with respect to a transformation \(\lambda_{j}\rightarrow-\lambda_{j}\) which transforms \(\tilde{\rho}_{jl}^{ss}\rightarrow-\tilde{\rho}_{jl}^{ss}\) for all \(l\neq j\). This can be intuitively explained by \(S_{max}\) depending only on the norms of the coherences. In this case, the phase preference of both entrainment and mutual coupling is reversed, i.e., both prefer _out-of-phase_ in the refrigerator regime, while in the engine regime mutual coupling (entrainment) prefers _in-phase_ (_out-of-phase_).
_Scaling with \(N\).-_ Calculating \(S_{max}\) boils down to performing \(N\)-variable optimization which in general is difficult for \(N>2\). However, in the refrigerator regime, assuming homogeneous driving \(\lambda_{j}=\lambda\) the problem simplifies and one can show that \(S(\{\varphi_{1j}\})\) saturates the \(l_{1}\)-norm bound [37] (see Append. A for a proof). Thus, we conclude that in the refrigerator regime \(S_{max}\propto C_{l_{1}}=\sum_{i<j}^{N+1}|\tilde{\rho}_{ij}|\) for any \(N\).
The fact that \(S_{max}\) is always proportional to the \(l_{1}\)-norm in the refrigerator regime demonstrates that entrainment and mutual coupling are always in cooperation for any \(N\) in this case. The cooperation can also be seen numerically from Fig. 3**b**, where we observe that the scaled synchronization measure \(\mathbb{S}_{max}\equiv(2\pi)^{N}S_{max}\) always exceeds the contribution from entrainment for any \(N\), even if we relax the assumption of homogeneous driving. Moreover, as \(N\rightarrow\infty\), the gap between \(\mathbb{S}_{max}\) and the entrainment contribution grows, which means mutual coupling is the dominant synchronization mechanism in the large-\(N\) limit. This is evident since the number of terms contributing to the degenerate coherences (\(\tilde{\rho}_{jk}\)) scales as \(N^{2}\) while the number contributing to the non-degenerate coherences (\(\tilde{\rho}_{1j}\)) scales linearly with \(N\). Additionally, the normalization of the density matrix induces an additional \(N^{-2}\) scaling for all coherences [see Eqs. (9)-(10)]. Thus, overall we predict that in the refrigerator regime, in the limit of \(N\to\infty\), \(\mathbb{S}_{max}\) is mutual coupling dominant (see Append. C) and reads,
\[\mathbb{S}_{max} = \lim_{N\to\infty}(2\pi)^{N}S_{max} \tag{13}\] \[\underset{n_{c}>n_{h}}{=} \frac{\gamma_{c}(n_{c}-n_{h})}{8n_{h}[\gamma_{c}(1+n_{c})+\gamma_{h }(1+n_{h})]}.\]
The asymptotic scaled \(\mathbb{S}_{max}\) above depends only on the bath properties and is independent of the drive strength. Furthermore, as shown in Fig. 3**b**, \(S_{max}\) follows a sublinear power-law behavior, and all the optimum phases \(\{\varphi_{j1}\}|_{S=S_{max}}\) coalesce to a single phase \(3\pi/2\) (Fig. 3**d**).
In the engine case, it is difficult to find an analytic closed-form expression for \(S_{max}\). However, we numerically observe in Fig. 3**a** that the competition between entrainment and mutual coupling persists for any \(N\), since \(\mathbb{S}_{max}\) is smaller than the entrainment contribution, causing \(\mathbb{S}_{max}\to 0\). This decay is due to phase repulsiveness arising from mutual coupling, as shown in Fig. 3**c**. Thus, in the large-\(N\) limit, the qualitative behavior of this model is analogous to the Kuramoto model with phase-repulsive coupling, where the mean-field synchronization order parameter approaches zero [47].
_Summary.-_ We have shown that there exists an interplay between entrainment and mutual coupling in a collectively driven-dissipative degenerate thermal maser. The interplay depends on the thermodynamic functionality of the maser, i.e., they compete in the engine regime and cooperate in the refrigerator regime. The results rely on two key ingredients: i. a coherent drive that collectively couples to the degenerate manifold causing entrainment and mutual coupling to coexist and ii. a dissipative mechanism that causes a population inversion between the non-degenerated and degenerated manifolds to observe the competition.
We demonstrate our findings using a minimal model of a generalized Scovil-Schulz-DuBois maser heat engine and show that in the thermodynamic limit (\(N\to\infty\)) the dominance of mutual coupling leads to phase repulsiveness, causing the engine's working substance to be asynchronized (\(\mathbb{S}_{max}=0\)). On the other hand, since there is cooperation in the refrigerator case, the phases coalesce to \(3\pi/2\), giving a finite \(\mathbb{S}_{max}\) that is independent of system properties. In other words, as the system size increases, in order for the working substance to be synchronized the external drive needs to perform work on the system.
Our work not only contributes to the growing field of quantum synchronization by adding valuable insights into the case where distinct synchronizing mechanisms coexist, but also helps in understanding quantum heat engines from a synchronization perspective.
_Acknowledgments.-_ This research was supported by the Institute for Basic Science in South Korea (IBS-R024-Y2). S.V. acknowledges support from a Government of India DST-QUEST grant number DST/ICPS/QuST/Theme-4/2019. The authors would like to thank V. Singh for the useful discussions.
|
2303.15456
|
Simulation of Wave in Hypo-Elastic-Plastic Solids Modeled by Eulerian
Conservation Laws
|
This paper reports a theoretical and numerical framework to model nonlinear
waves in elastic-plastic solids. Formulated in the Eulerian frame, the
governing equations employed include the continuity equation, the momentum
equation, and an elastic-plastic constitutive relation. The complete governing
equations are a set of first-order, fully coupled partial differential
equations with source terms. The primary unknowns are velocities and deviatoric
stresses. By casting the governing equations into a vector-matrix form, we
derive the eigenvalues of the Jacobian matrix to show the wave speeds. The
eigenvalues are also used to calculate the Courant number for numerical
stability. The model equations are solved using the Space-Time Conservation
Element and Solution Element (CESE) method. The approach is validated by
comparing our numerical results to an analytical solution for the special case
of longitudinal wave motion.
|
Lixiang Yang, Robert L Lowe
|
2023-03-11T01:24:37Z
|
http://arxiv.org/abs/2303.15456v1
|
# Simulation of Wave in Hypo-Elastic-Plastic Solids Modeled by Eulerian Conservation Laws
###### Abstract
This paper reports a theoretical and numerical framework to model nonlinear waves in elastic-plastic solids. Formulated in the Eulerian frame, the governing equations employed include the continuity equation, the momentum equation, and an elastic-plastic constitutive relation. The complete governing equations are a set of first-order, fully coupled partial differential equations with source terms. The primary unknowns are velocities and deviatoric stresses. By casting the governing equations into a vector-matrix form, we derive the eigenvalues of the Jacobian matrix to show the wave speeds. The eigenvalues are also used to calculate the Courant number for numerical stability. The model equations are solved using the Space-Time Conservation Element and Solution Element (CESE) method. The approach is validated by comparing our numerical results to an analytical solution for the special case of longitudinal wave motion.
**Keywords: Elastic-plastic wave; Conservation laws; Hyperbolic differential equation; Space-Time CESE method**
## 1 Introduction
General theories of elastic-plastic waves have been treated extensively and reviewed from various points of view in the literature, e.g., Nowacki [1], Clifton [2], Cristescu [3], Craggs [4], Herrmann [5]. Numerical simulations of wave motion in elastic-plastic media were reported by Buchar et al. [6] with the aid of finite element analysis. Transenstein and Collella [7] studied finite deformation in elastic-plastic solids by using a hyper-elastic constitutive relation and a kinematic evolution equation. By using a higher-order Godunov scheme, they mainly studied wave propagation in hyper-elastic solids in one-dimensional simulations. Miller and Collella [8] extended this work in hyper-elasticity and visco-plasticity to multiple dimensions. Hill et al. [9] used a hybrid of the weighted essentially non-oscillatory
schemes combined with explicit centered differences to solve the equations of motion expressed in an Eulerian formulation. This formulation allows for a wide range of constitutive relations. Giese [10] studied elastic-plastic waves in three space dimensions. Since the governing equations are composed of two parts, he solved the flux equation by using the method of transport and integrated the stress-strain relationship in time with a high-order ODE solver. In order to understand the formation of the plastic zone at the crack tip, Lin and Ballmann [11] used a characteristic-based difference method to simulate elastic-plastic wave propagation in a two-dimensional anisotropic plane-strain domain. Their studies focus on small plastic deformation problems. Tran and Udaykumar [12] developed an Eulerian, sharp-interface, Cartesian grid method to simulate impact and detonation. Since energy equations are considered, the Mie-Gruneisen equation of state is used to obtain pressure. An essentially non-oscillatory scheme is employed to capture shocks, and sharp immersed boundaries are captured by using a hybrid particle level set technique. Wang et al. [13] used an improved CESE method to model the impact problems of multi-material elastic-plastic flows. Eulerian governing equations are adopted. Projectile nose and tail velocities are compared with experimental results, which confirms the high accuracy of the CESE scheme. Sambasivan et al. [14] built a sharp-interface Cartesian grid-based code to solve impact and collision problems in elasto-plastic solid media. Ghost states are introduced to treat material-to-material, material-to-void and material-to-rigid-surface interfaces. The Johnson-Cook material model is used in the computation. By using a 10th-order compact finite difference scheme for spatial discretization and a 4th-order Runge-Kutta time marching method, Ghaisas [15] developed a high-order Eulerian method to simulate elastic-plastic deformation as well as fluid flow. In their governing equations, the derivative of the inverse deformation gradient tensor is written as a hyperbolic differential equation with a source term. A multi-medium Riemann solver was proposed by Li et al. [16] to study impact dynamics between solid and fluid. A hydro-elastoplastic constitutive material model and the Mie-Gruneisen equation of state are used to close the governing equations. Recently, Cheng et al. [17] developed a cell-centered Lagrangian scheme to model elastic-plastic flow in a two-dimensional medium. Detail is given on how to construct a Riemann solver for contact and elasto-plastic waves. Since wave propagation in a solid medium can be mathematically described as a set of coupled first-order hyperbolic partial differential equations, Bonet et al. [18] cast the linear momentum, the deformation gradient tensor and its cofactors into a first-order partial differential equation system in the Lagrangian frame. Detailed mathematical properties such as hyperbolicity, stability, and convergence are studied. Boscheri et al. [19] wrote the deformation gradient and thermal impulse density into hyperbolic form and set the total energy to be a general summation of fluid and solid contributions. Atta et al. [20] suggested shifted Chebyshev polynomials of the fifth kind as basis functions to get their approximate solutions. Classical large-deformation elastic-plastic problems were also attempted by using the virtual element method [21].
Large-deformation elastic-plastic dynamics were also solved by diffuse-interface methods, which were originally used to solve multiphase fluid flows [22]. In the governing equations, two separate transport equations for the elastic and plastic deformation tensors are combined with other conservation laws. Liu et al. [23] modeled one-dimensional multi-material elastic-plastic flow using a Riemann solver. By using a finite volume method with stationary grids in the Eulerian frame, conservation laws and a modified hypoelastic Wilkins model were solved to find the cause of adiabatic shear band formation [24]. To understand seismic wave propagation, Sripanich et al. [25] studied third-order elasticity. They stated that a consistent description of conservation laws and the right choice of constitutive relationship as well as elastic moduli need to be used in practical scenarios. Xiong et al. [26] investigated elastic-plastic energy storage and dissipation of crystals under different strain-rate impacts by molecular dynamics simulations. Nonlinear elastic-plastic wave propagation in polymers due to the action of intense energy flows was studied by Boykov et al. [27]. A finite difference method is used to solve the Lagrangian description of the conservation laws and a hypo-elastic-plastic material relationship which is taken from the older literature. Recently, numerical modeling of wave propagation in a one-dimensional elastic-plastic medium was investigated by using a cell-centered Lagrangian scheme [28]. Four types of elastic-plastic problems such as impact, tensile, piston-like and Wilkins' problems were studied. By writing the conservation of linear momentum and three geometric conservation laws (the deformation gradient, its cofactor and its determinant) into first-order hyperbolic form, de Campos et al. [29] used a new Updated Reference Lagrangian Smooth Particle Hydrodynamics algorithm to analyze three-dimensional large-deformation elasto-plasticity problems. In order to investigate shear band propagation, Eremin et al. [30] used a micro-structure-based finite-difference method to model plastic flow in low-carbon steel. Heuze and Stainier [31] created a variational framework which lets hyperbolic conservation laws work with thermo-hyperelastic-viscoplastic constitutive equations [32]. Aided by a high-order and pressure-oscillation-free scheme, an Eulerian description of conservation laws as well as a hyper-elastic framework were also used to understand multi-material elastic-plastic flow [33; 34]. Similar elastic-plastic wave propagation problems were also investigated with slightly different forms of conservation laws, transport equations, or constitutive models in the Eulerian frame [35; 36; 37]. Based on a multi-material diffuse interface method, Wallis et al. [38] studied elasto-plastic-rigid body interaction using Eulerian conservation laws for solid-fluid interaction which were derived by Barton [39]. In order to simulate large deformation and penetration problems of elasto-plastic solids, Yeom [40] numerically solved Eulerian multi-material and multi-phase flow conservation laws using a high-resolution computational fluid dynamics technique on Cartesian grids. Without considering large plastic deformation and discontinuities between multiple materials or interfaces between solid and fluid, the elasto-plastic constitutive model can be combined with finite element analysis to study all kinds of solid mechanics problems [41; 42].
In this paper, we will employ the Conservation Element and Solution Element (CESE) method [43; 44], an explicit space-time finite-volume scheme, to solve a system of nonlinear elasto-dynamic model equations. As a special finite-volume method, the CESE method has previously been used to solve dynamics and combustion problems, including detonations, cavitation, and flows with complex shock structures [45, 46, 47]. Recently, Yang [48] used an updated CESE scheme to study hypervelocity asteroid impacts based on an elasto-plastic flow model. During the past thirty years, many different finite-volume methods were continually applied to study solid mechanics problems [49, 50]. So was the CESE method: it has been employed to solve many dynamical and vibration problems in solid structures, for instance [51, 52, 53, 54, 55, 56, 57, 58]. Since the application we are interested in is the ultrasonic welding process, thermal effects are ignored, as demonstrated by experiments. In this paper, we will report a novel theoretical and numerical approach to model elastic-plastic wave motion in solids without the energy equation included. We extend the isothermal model for stress wave propagation in elastic-plastic media by using a suitable constitutive equation to model the dynamical response of elastic-plastic materials.
The rest of the present paper is organized as follows. Section 2 presents the basic formulation of the constitutive relation for plasticity of solids. Section 3 summarizes the model equations, including the continuity, momentum, and constitutive relations; the model equations are cast into a vector-matrix form, and the Jacobian matrix of the one-dimensional equations is analyzed to obtain the eigenvalues, which represent the wave speeds. Linearization of the nonlinear wave equation is given in Section 4. The numerical method is discussed in Sections 5 and 6. In Section 7, a numerical example is validated against results from the literature, and the nonlinearity and unloading profiles of the elastic-plastic wave are further studied with our numerical method. We then offer the limitations and concluding remarks, followed by a list of cited references.
## 2 Hypo-Elastic-Plastic Solids
A constitutive equation for hypo-elastic-plastic media is developed in this section. We first develop the constitutive equation based on the infinitesimal theory, then generalize this model to accommodate finite deformations. The medium of interest is assumed isotropic, homogeneous, non-porous, and metallic. We also adopt the customary assumptions of incompressibility of the plastic strain, yield insensitivity to the hydrostatic part of the stress, and, for the sake of simplicity, strain-rate-independent response.
Based on experimental observation, infinitesimal plasticity admits the additive decomposition
\[d\varepsilon_{ij}\,=\,d\varepsilon_{ij}^{e}\,+\,d\varepsilon_{ij}^{p},\]
where \(d\varepsilon_{ij}^{e}\) and \(d\varepsilon_{ij}^{p}\) are the infinitesimal elastic and plastic strain increments, respectively. The elastic strain increment can be obtained from isotropic linear elasticity:
\[d\varepsilon_{ij}^{e}\,=\,\frac{1+\nu}{E}\,d\sigma_{ij}\,-\,\frac{\nu}{E}\,d \sigma_{kk}\,\delta_{ij},\]
where \(d\sigma_{ij}\) is the stress increment, \(\delta_{ij}\) is the Kronecker delta, \(E\) is Young's modulus, and \(\nu\) is Poisson's ratio.
To determine the plastic strain increment \(d\varepsilon_{ij}^{p}\), we consider the equation of the yield surface of a material that undergoes strain-rate-independent isotropic
strain-hardening:
\[F(S_{ij},\bar{\varepsilon}^{\,p})=0, \tag{2.1}\]
where \(F(S_{ij},\bar{\varepsilon}^{\,p})\) is a scalar-valued yield function whose form is made explicit by the yield criterion, e.g., von Mises or Tresca. \(S_{ij}\) is the deviatoric part of stress, i.e.,
\[S_{ij}\,=\,\sigma_{ij}\,-\,\frac{1}{3}\,\sigma_{kk}\,\delta_{ij}.\]
That the yield function \(F\) depends on only the deviatoric part of the stress reflects the customary assumption of yield insensitivity to hydrostatic pressure. Note that the yield surface is the union of all points in deviatoric stress space that satisfy Eq. (2.1). The effective plastic strain \(\bar{\varepsilon}^{\,p}\), defined here as
\[\bar{\varepsilon}^{\,p}=\int d\bar{\varepsilon}^{\,p},\hskip 28.452756ptd\bar{\varepsilon}^{\,p}\,\stackrel{{\rm def}}{{=}}\,\sqrt{\frac{2}{3}\,d\varepsilon^{p}_{ij}\,d\varepsilon^{p}_{ij}}\,,\]
quantifies the plastic strain accumulated during the deformation history. It represents the scalar hardening parameter in (2.1).
The associated flow rule
\[d\varepsilon^{p}_{ij}\,=\,d\lambda\;\frac{\partial F}{\partial S_{ij}} \tag{2.2}\]
implies normality of the plastic strain increment to the yield surface defined in deviatoric stress space. In Eq. (2.2), \(d\lambda\) is a scalar function that, loosely speaking, represents the magnitude of the plastic strain increment. The loading criteria are
\[F < 0\] elastic deformation \[F = \,0,\hskip 56.905512pt\frac{\partial F}{\partial S_{ij}}\,dS_{ij }\,>\,0\hskip 56.905512pt\text{plastic loading}\] \[F = \,0,\hskip 56.905512pt\frac{\partial F}{\partial S_{ij}}\,dS_{ij }\,=\,0\hskip 56.905512pt\text{neutral loading}\] \[F = \,0,\hskip 56.905512pt\frac{\partial F}{\partial S_{ij}}\,dS_{ij }\,<\,0\hskip 56.905512pt\text{elastic unloading}\]
Plastic strain only accrues during plastic loading; otherwise, the plastic strain increment \(d\varepsilon^{p}_{ij}\) vanishes. Hence, our adoption of the plastic loading criteria is tacit throughout the remainder of this section.
As strain-rate-insensitive materials harden during plastic deformation, points on the original yield surface remain on all subsequent yield surfaces. This observation, together with a first-order Taylor series expansion of \(F(S_{ij},\bar{\varepsilon}^{\,p})\), imply the consistency condition
\[dF\,=\,\frac{\partial F}{\partial S_{ij}}\,dS_{ij}\,+\,\frac{\partial F}{ \partial\bar{\varepsilon}^{\,p}}\,d\bar{\varepsilon}^{\,p}\,=\,0. \tag{2.3}\]
Use of Eq. (2.2) in (2.3) leads to
\[d\lambda=-\frac{\frac{\partial F}{\partial S_{ij}}\,dS_{ij}}{\frac{\partial F}{\partial\bar{\varepsilon}^{\,p}}\left(\frac{2}{3}\frac{\partial F}{\partial S_{kl}}\frac{\partial F}{\partial S_{kl}}\right)^{\frac{1}{2}}}\,. \tag{2.4}\]
We employ the von Mises yield criterion, i.e., \(J_{2}\)-flow theory, which makes the yield function \(F(S_{ij},\bar{\varepsilon}^{\,p})\) in Eq. (2.1) explicit:
\[F(S_{ij},\bar{\varepsilon}^{\,p})\,=\,\frac{1}{2}\,S_{ij}S_{ij}\,-\,\frac{1}{ 3}\left[\sigma^{y}(\bar{\varepsilon}^{\,p})\right]^{2}\,=\,0, \tag{2.5}\]
where \(J_{2}=S_{ij}S_{ij}/2\) is the second invariant of the deviatoric stress (related to the energy of distortion), and \(\sigma^{y}(\bar{\varepsilon}^{\,p})\) is the yield stress in uniaxial tension, which evolves with effective plastic strain as the material hardens during plastic deformation. For choice (2.5), it follows that
\[\frac{\partial F}{\partial S_{ij}}=S_{ij},\hskip 28.452756ptS_{ij}\,dS_{ij}= \frac{2}{3}\,\sigma^{y}d\sigma^{y},\hskip 28.452756pt\frac{\partial F}{ \partial\bar{\varepsilon}^{\,p}}=-\frac{2}{3}\,\sigma^{y}\frac{d\sigma^{y}}{ d\bar{\varepsilon}^{\,p}},\]
and use of these results in Eq. (2.4) leads to
\[d\varepsilon^{p}_{ij}\,=\,\frac{3}{2}\frac{d\sigma^{y}}{\sigma^{y}\,\frac{d \sigma^{y}}{d\bar{\varepsilon}^{\,p}}}\,S_{ij}.\]
For a linear strain-hardening material, the tensile yield stress increases linearly with the effective plastic strain, i.e.,
\[\sigma^{y}(\bar{\varepsilon}^{\,p})\,=\,\sigma^{y}_{o}\,+\,B_{{}_{SH}}\,\bar{ \varepsilon}^{\,p},\]
where the initial tensile yield stress \(\sigma^{y}_{o}\) and the strength coefficient \(B_{{}_{SH}}\) are material-dependent constants. It follows that the plastic strain increment is
\[d\varepsilon^{p}_{ij}\,=\,\frac{3}{2}\,\frac{d\bar{\sigma}}{B_{{}_{SH}}\,\bar {\sigma}}\,S_{ij},\]
where we have introduced the effective stress
\[\bar{\sigma}\stackrel{{\rm def}}{{=}}\sqrt{\frac{3}{2}\,S_{ij} \,S_{ij}}\,.\]
Thus, the total strain increment (elastic + plastic) is
\[d\varepsilon_{ij}\,=\,\frac{1+\nu}{E}\,d\sigma_{ij}\,-\,\frac{\nu}{E}\,d \sigma_{kk}\delta_{ij}\,+\,\frac{3}{2}\,\frac{d\bar{\sigma}}{B_{{}_{SH}}\,\bar {\sigma}}\,S_{ij}. \tag{2.6}\]
Equation (2.6) can be inverted to give the deviatoric stress increment, or, equivalently, its rate
\[\dot{S}_{ij}\;=\;2\mu\,\dot{\varepsilon}_{ij}\;-\;\frac{2}{3}\,\mu\,\dot{\varepsilon}_{kk}\,\delta_{ij}\;-\;3\mu\,\frac{S_{kl}\,\dot{\varepsilon}_{kl}}{\left(\frac{B_{{}_{SH}}}{2\mu}+\frac{3}{2}\right)S_{mn}\,S_{mn}}\,S_{ij}, \tag{2.7}\]
where \(\mu\) is the shear modulus. The infinitesimal elastic-plastic constitutive equation (2.7) is generalized to the finite-deformation regime by (i) replacing the stress tensor \(\sigma_{ij}\) with its finite Eulerian analog \(T_{ij}\), the Cauchy stress, (ii) replacing the infinitesimal strain increment \(d\varepsilon_{ij}\) with the rate of deformation \(D_{ij}\), which is the work conjugate of the Cauchy stress, and (iii) employing an objective rate \(D/Dt\) of the stress. The objective rate ensures that the constitutive equation is invariant under an arbitrary superposed rigid body motion. The resulting constitutive equation is
\[\frac{D}{Dt}\,S_{ij}\;=\;2\mu D_{ij}\,-\,\frac{2}{3}\,\mu\,D_{kk}\,\delta_{ij} \,-\,\beta(s)S_{kl}\,D_{kl}S_{ij},\]
where \(s=S_{mn}S_{mn}\) and
\[\beta(s)\,=\,\left\{\begin{array}{ccc}0&\mbox{if}&F<0\\ 0&\mbox{if}&F=0&\mbox{and}&\frac{\partial F}{\partial S_{ij}}\,dS_{ij}<0\\ 0&\mbox{if}&F=0&\mbox{and}&\frac{\partial F}{\partial S_{ij}}\,dS_{ij}=0\\ \frac{6\mu^{2}}{(3\mu+B_{SH})s}&\mbox{if}&F=0&\mbox{and}&\frac{\partial F}{\partial S_{ij}}\,dS_{ij}>0\end{array}\right. \tag{2.8}\]
Note that the four rows of Eq. (2.8) correspond to elastic deformation, elastic unloading, neutral loading, and plastic loading, respectively. One common choice is the Jaumann rate:
\[\frac{DS_{ij}}{Dt}\;=\;\frac{\partial S_{ij}}{\partial t}\,+\,v_{k}\frac{ \partial S_{ij}}{\partial x_{k}}\,-\,W_{ik}S_{kj}\,+\,S_{ik}W_{kj}, \tag{2.9}\]
where \(W_{ij}=\frac{1}{2}(\partial v_{i}/\partial x_{j}-\partial v_{j}/\partial x_{i})\) is the skew-symmetric part of the velocity gradient. The resulting constitutive equation is
\[\frac{D}{Dt}S_{ij}=2\mu D_{ij}-\frac{2}{3}\mu D_{kk}\delta_{ij}\;-\;3\mu\, \frac{S_{kl}D_{kl}}{\left(\frac{B_{SH}}{2\mu}+\frac{3}{2}\right)S_{mn}S_{mn} }\,S_{ij}, \tag{2.10}\]
where \(D_{ij}=1/2\,(L_{ij}+L_{ji})\) is the symmetric part of the Eulerian velocity gradient \(L_{ij}=\partial v_{i}/\partial x_{j}\), and \(v_{i}\) is the velocity. The first two terms in Eq. (2.10) reflect elastic contributions to the deviatoric stress, while the final term represents the plastic contribution.
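To make the constitutive update concrete, the following minimal Python sketch evaluates the right-hand side of Eq. (2.10) for a given deviatoric stress state and rate of deformation; the function name, the material constants, and the plastic-loading flag are illustrative inputs of ours, not values prescribed in the paper, and the rotation terms of the Jaumann rate in Eq. (2.9) are not included.

```python
import numpy as np

def deviatoric_stress_rate(S, D, mu, B_SH, plastic_loading):
    """Right-hand side of Eq. (2.10): objective deviatoric stress rate.

    S : 3x3 deviatoric stress tensor
    D : 3x3 rate-of-deformation tensor (symmetric part of the velocity gradient)
    """
    I = np.eye(3)
    elastic = 2.0 * mu * D - (2.0 / 3.0) * mu * np.trace(D) * I
    if not plastic_loading:          # beta = 0 branches of Eq. (2.8)
        return elastic
    s = np.sum(S * S)                # S_mn S_mn
    beta = 6.0 * mu**2 / ((3.0 * mu + B_SH) * s)
    return elastic - beta * np.sum(S * D) * S
```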
## 3 Governing Equations
Based on the elastic-plastic constitutive relation, Eq. (2.10), the three-dimensional governing equations for elastic-plastic wave motion formulated in the Eulerian frame are
_Conservation of mass:_
\[\frac{\partial\rho}{\partial t}\,+\,\frac{\partial}{\partial x_{i}}(\rho v_{i}) \;=\;0, \tag{3.1}\]
_Conservation of linear momentum:_
\[\frac{\partial}{\partial t}\left(\rho v_{i}\right)\,+\,\frac{\partial}{\partial x_{j}}\left(\rho v_{i}v_{j}+p\,\delta_{ij}-S_{ij}\right)\;=\;0 \tag{3.2}\]
where the Cauchy stress components and the pressure are related by \(T_{ij}=-p\,\delta_{ij}+S_{ij}\) and \(p=-\sum_{i=1}^{3}T_{ii}/3\). _Elastic-plastic constitutive relation:_
\[\frac{D}{Dt}S_{ij}=2\mu D_{ij}-\frac{2}{3}\mu D_{kk}\delta_{ij}-\beta(s)S_{kl }D_{kl}S_{ij}, \tag{3.3}\]
where \(s=S_{mn}S_{mn}\) and
\[\beta(s)\,=\,\left\{\begin{array}{ccc}0&\mbox{if}&F<0\\ 0&\mbox{if}&F=0&\mbox{and}&\frac{\partial F}{\partial\sigma_{ij}}d\sigma_{ij}< 0\\ 0&\mbox{if}&F=0&\mbox{and}&\frac{\partial F}{\partial\sigma_{ij}}d\sigma_{ij}= 0\\ \frac{6\mu^{2}}{(3\mu+B_{SH})s}&\mbox{if}&F=0&\mbox{and}&\frac{\partial F}{ \partial\sigma_{ij}}d\sigma_{ij}>0\end{array}\right. \tag{3.4}\]
The four rows of Eq. (3.4) represent elastic deformation, elastic unloading, neutral loading, and plastic loading, respectively. In Eqs. (3.1)-(3.4), \(\rho\) is the density, \(v_{i}\) is the velocity, and \(D_{ij}=\left(\partial v_{i}/\partial x_{j}+\partial v_{j}/\partial x_{i}\right)/2\) is the symmetric part of the velocity gradient, all functions of time \(t\) and the position \({\bf x}=\{x_{1},x_{2},x_{3}\}\) of a typical material particle in the current configuration referred to a fixed Cartesian basis. The shear modulus \(\mu\) is a prescribed material constant. If \(J_{2}\) flow plastic theory is adopted, we have
\[\frac{\partial F}{\partial\sigma_{ij}}d\sigma_{ij}=d(\frac{1}{2}S_{ij}S_{ij}-k_{y}^{2}), \tag{3.5}\]
where \(k_{y}\) (not to be confused with the bulk modulus \(k\) used below) is a constant for perfectly plastic materials and in general is given by
\[k_{y}^{2}=\frac{1}{3}\big{(}\bar{\sigma}(\varepsilon_{ij}^{p})\big{)}^{2}.\]
Then Eq.(3.3) and Eq.(3.4) can be written as
\[\frac{D}{Dt}S_{ij}=2\mu D_{ij}-\frac{2}{3}\mu D_{kk}\delta_{ij}-\beta(s)S_{kl }D_{kl}S_{ij}, \tag{3.6}\]
where \(s=S_{mn}S_{mn}\) and
\[\beta(s)\,=\,\left\{\begin{array}{ccl}0&\mbox{if}&F<0\\ 0&\mbox{if}&F=0\quad\mbox{and}\quad d(\frac{1}{2}S_{ij}S_{ij}-k_{y}^{2})\leq 0\\ \frac{6\mu^{2}}{(3\mu+B_{SH})s}&\mbox{if}&F=0\quad\mbox{and}\quad d(\frac{1}{2}S_{ij}S_{ij}-k_{y}^{2})>0\end{array}\right. \tag{3.7}\]
#### 1-D Formulation
For one-dimensional cases (ignore \(\partial/\partial x_{2},\partial/\partial x_{3}\)), the mass and momentum equations are
\[\frac{\partial\rho}{\partial t}+\frac{\partial\rho v_{1}}{ \partial x_{1}}=0, \tag{3.8}\] \[\frac{\partial\rho v_{1}}{\partial t}+\frac{\partial}{\partial x _{1}}\left(\rho v_{1}v_{1}+p-S_{11}\right)=0, \tag{3.9}\]
where \(p\) is the pressure (the negative of the mean stress) and \(S_{11}\) is the deviatoric stress component. Moreover, for the one-dimensional strain problem, the three-dimensional elastic-plastic constitutive equation simplifies to
\[\frac{\partial\rho S_{11}}{\partial t}+\frac{\partial\rho v_{1}S_{11}}{ \partial x_{1}}=\frac{4}{3}\mu\rho\left(1-\frac{\gamma}{1+\frac{B_{SH}}{3\mu} }\right)\frac{\partial v_{1}}{\partial x_{1}}, \tag{3.10}\]
where
\[\gamma\,=\,\left\{\begin{array}{ccl}0&\mbox{if}&F<0\\ 0&\mbox{if}&F=0\quad\mbox{and}\quad d(\frac{1}{2}S_{ij}S_{ij}-k_{y}^{2})\leq 0\\ 1&\mbox{if}&F=0\quad\mbox{and}\quad d(\frac{1}{2}S_{ij}S_{ij}-k_{y}^{2})>0\end{array}\right. \tag{3.11}\]
As mentioned in the previous section, to close the system of equations we employ the following equation of state relating pressure to density:
\[p=k\ln\frac{\rho}{\rho_{o}}+p_{0}. \tag{3.12}\]
Eqs. (3.8)-(3.10) form a hyperbolic system, which can be written in vector form as:
\[\frac{\partial\mathbf{U}}{\partial t}+\frac{\partial\mathbf{E}}{\partial x_{1 }}=\mathbf{H} \tag{3.13}\]
where
\[\mathbf{U} =(\rho,\rho v_{1},\rho S_{11})^{T},\] \[\mathbf{E} =(\rho v_{1},\rho v_{1}v_{1}+p-S_{11},\rho S_{11}v_{1})^{T},\] \[\mathbf{H} =\left(0,0,\frac{4}{3}\mu\rho(1-\gamma/(1+B_{SH}/3\mu))\frac{\partial v_{1}}{\partial x_{1}}\right)^{T}.\]
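As a small illustration of this notation, the sketch below assembles the conserved vector \(\mathbf{U}\) and the flux vector \(\mathbf{E}\) of Eq. (3.13) from the primitive variables, using the equation of state (3.12); the function name and argument order are ours and are not part of the formulation.

```python
import numpy as np

def conserved_and_flux(rho, v1, S11, k, rho0, p0=0.0):
    """U and E of Eq. (3.13); the pressure follows the equation of state (3.12)."""
    p = k * np.log(rho / rho0) + p0
    U = np.array([rho, rho * v1, rho * S11])
    E = np.array([rho * v1, rho * v1 * v1 + p - S11, rho * S11 * v1])
    return U, E
```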
By analyzing the eigen-structure of this hyperbolic system, we directly calculate the speed of sound in the solid with plastic deformation. Eq. (3.13) can be recast as
\[\frac{\partial{\bf U}}{\partial t}+{\bf A}\frac{\partial{\bf U}}{\partial x_{1}}= {\bf H} \tag{3.14}\]
where
\[{\bf A}=\left(\begin{array}{ccc}0&1&0\\ -v_{1}^{2}+\frac{k}{\rho}+\frac{S_{11}}{\rho}&2v_{1}&-\frac{1}{\rho}\\ -v_{1}S_{11}&S_{11}&v_{1}\end{array}\right) \tag{3.15}\]
and
\[{\bf H}=\left(0,0,\frac{4}{3}\mu\rho(1-\gamma/(1+B_{SH}/3\mu))\frac{\partial v_{1}}{\partial x_{1}}\right)^{T} \tag{3.16}\]
Alternatively, we can rewrite the above hyperbolic system by using the non-conservative variables vector
\[\widetilde{\bf U}=(\rho,v_{1},S_{11})^{T}, \tag{3.17}\]
which leads to the non-conservative form:
\[\frac{\partial\widetilde{\bf U}}{\partial t}+\widetilde{\bf A}\frac{\partial \widetilde{\bf U}}{\partial x_{1}}=\widetilde{\bf H} \tag{3.18}\]
with
\[\widetilde{\bf A}=\left(\begin{array}{ccc}v_{1}&\rho&0\\ \frac{k}{\rho^{2}}&v_{1}&-\frac{1}{\rho}\\ 0&0&v_{1}\end{array}\right)\]
and
\[\widetilde{\bf H}=\left(0,0,\frac{4}{3}\mu\left(1-\frac{\gamma}{1+B_{SH}/3\mu}\right)\frac{\partial v_{1}}{\partial x_{1}}\right)^{T} \tag{3.19}\]
By moving the source term from right side to the left side, we could transform Eq.(3.18) into
\[\frac{\partial\widetilde{\bf U}}{\partial t}+\overline{\bf A}\frac{\partial \widetilde{\bf U}}{\partial x_{1}}=0 \tag{3.20}\]
where
\[\overline{\bf A}=\left(\begin{array}{ccc}v_{1}&\rho&0\\ \frac{k}{\rho^{2}}&v_{1}&-\frac{1}{\rho}\\ 0&-\frac{4}{3}\mu\left(1-\frac{\gamma}{1+B_{SH}/3\mu}\right)&v_{1}\end{array}\right)\]
which is suitable for assessing the eigenstructure of the system of equations. The eigenvalues of matrix \(\overline{A}\) can be readily derived and they are
\[\lambda_{1}=v_{1},\qquad\lambda_{2,3}=v_{1}\pm c=v_{1}\pm\sqrt{\frac{k+\frac{4}{3}\mu(1-\frac{\gamma}{1+B_{SH}/3\mu})}{\rho}}, \tag{3.21}\]
where \(k\) is the bulk modulus and \(\mu\) is the shear modulus. Setting \(\gamma=0\) recovers the elastic wave speed \(\sqrt{(k+\frac{4}{3}\mu)/\rho}\), and the plastic wave speed (\(\gamma=1\)) is slower than the elastic wave speed in the bulk material. In particular, for an elastic-perfectly plastic material, i.e., \(\gamma=1\) and \(B_{SH}=0\), the plastic wave speed is given by
\[c=\sqrt{\frac{k}{\rho}}. \tag{3.22}\]
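As a quick numerical check of Eq. (3.21), the sketch below builds \(\overline{\bf A}\) and compares its eigenvalues with the closed-form wave speeds; the copper-like property values are those used later in Section 7 (Table 1) and are illustrative only.

```python
import numpy as np

k, mu, rho, B_SH, v1 = 140e9, 45e9, 8930.0, 0.0, 0.0   # copper-like values, cf. Table 1

def wave_speed_check(gamma):
    a = 4.0 / 3.0 * mu * (1.0 - gamma / (1.0 + B_SH / (3.0 * mu)))
    A_bar = np.array([[v1,         rho,  0.0],
                      [k / rho**2, v1,  -1.0 / rho],
                      [0.0,       -a,    v1]])
    lam = np.sort(np.linalg.eigvals(A_bar).real)
    return lam, np.sqrt((k + a) / rho)      # numerical eigenvalues vs Eq. (3.21)

print(wave_speed_check(0.0))   # elastic: c is about 4.73e3 m/s
print(wave_speed_check(1.0))   # perfectly plastic (B_SH = 0): c = sqrt(k/rho), about 3.96e3 m/s
```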
In the rest of the present paper, the above formulation will be numerically solved by the CESE method. The computational conditions, numerical results, and remarks will be presented in the following sections.
## 4 Linearization of nonlinear plastic wave and wave speed illustration
For the purpose of comparison with the small-amplitude numerical solution of the nonlinear wave equations, we obtain the analytical solution of the linearized problem. We linearize the nonlinear governing equations (3.8), (3.9) and (3.10) as follows. We expand \(\rho(x_{1},t)\), \(v(x_{1},t)\) and \(S_{11}(x_{1},t)\) in \(\epsilon\) about the rest state \(\rho^{0}\), \(v^{0}\) and \(S_{11}^{0}\):
\[\rho(x_{1},t) =\rho^{0}+\epsilon\rho^{1}(x_{1},t)+\epsilon^{2}\rho^{2}(x_{1},t)+\dots, \tag{4.1}\] \[v(x_{1},t) =v^{0}+\epsilon v^{1}(x_{1},t)+\epsilon^{2}v^{2}(x_{1},t)+\dots,\] \[S_{11}(x_{1},t) =S_{11}^{0}+\epsilon S_{11}^{1}(x_{1},t)+\epsilon^{2}S_{11}^{2}(x _{1},t)+\dots,\]
where \(\rho^{i}(x_{1},t)\), \(v^{i}(x_{1},t)\) and \(S_{11}^{i}(x_{1},t)\), \(i=1,2,...\) are the \(i\)-th corrections of density, velocity and deviatoric stress, respectively, and \(\epsilon\) is a small, dimensionless, positive, scalar quantity. Inserting the expansions into Eq. (3.8), Eq. (3.9) and Eq. (3.10), selecting \(\rho^{0}=\rho_{0}\), \(v^{0}=0\) and \(S_{11}^{0}=0\) as the rest state, the following linear equations are obtained from the order-\(\epsilon\) problem:
\[\frac{\partial\rho^{1}}{\partial t}+\rho_{0}\frac{\partial v^{1}}{\partial x_{1}} =0, \tag{4.2}\] \[\rho_{0}\frac{\partial v^{1}}{\partial t}+\frac{k}{\rho_{0}}\frac{\partial\rho^{1}}{\partial x_{1}}-\frac{\partial S_{11}^{1}}{\partial x_{1}} =0,\] \[\frac{\partial S_{11}^{1}}{\partial t} =\frac{4}{3}\mu[1-\gamma/(1+B_{SH}/3\mu)]\frac{\partial v^{1}}{\partial x_{1}}.\]
Equations (4.2) are combined to recover a second-order linear wave equation for the velocity:
\[\frac{\partial^{2}v^{1}}{\partial t^{2}}=c^{2}\frac{\partial^{2}v^{1}}{\partial x ^{2}}. \tag{4.3}\]
where
\[c=\sqrt{\frac{k+\frac{4}{3}\mu[1-\gamma/(1+B_{SH}/3\mu)]}{\rho_{0}}}\]
is the plastic wave propagation speed. If we integrate Eq. (4.3) with respect to time and set the arbitrary function of \(x\) that arises to zero, we recover
\[\frac{\partial^{2}u}{\partial t^{2}}=c^{2}\frac{\partial^{2}u}{\partial x^{2}} \tag{4.4}\]
with \(u(x,t)\) the axial displacement component. It can be seen from Eq. (4.4) that the plastic wave speed depends on the plastic modulus \(B_{SH}\), which is constant for metals with linear hardening, a reasonable first approximation for strain hardening. Hence the variation of the plastic wave propagation velocity as a function of strain is governed by the slope of the plastic stress-strain curve. For instance, the stress-strain curve of some materials takes the form illustrated in Fig. 1(a), i.e., where \(\sigma d^{2}\sigma/d\epsilon^{2}<0\). In such a case, the plastic wave speed \(c(\epsilon)\) increases as the stress increases. In the impact case, since the stress increases at the impact end, the waves generated by the impact propagate with continually increasing velocities. The distance between the wave fronts then becomes shorter during propagation and there is a tendency to form shock waves. A detailed analysis of the infinitesimal plastic wave speed was given by Cristescu [3] (see Appendix). However, for some rubbers, soils, and certain metals, the constitutive relation takes the form shown in Fig. 1(b), for which the slope decreases continuously (\(\sigma d^{2}\sigma/d\epsilon^{2}>0\)) for any strain. The speed of wave propagation then decreases as the stress increases. This also means that the wave fronts spread out during propagation, and an expansion wave is formed. Several numerical examples will be given in the results section.
## 5 Numerical Solution by Radial Return Mapping
Based on the consistency condition, the effective stress must be constrained to always lie either within or on the yield surface. In contrast to infinitesimal plasticity, the typical numerical method for finite plasticity problems is the radial return algorithm [12]. This algorithm consists of two steps: (i) predict a trial stress using the elastic part of Eq. (3.10),
\[\frac{\partial\rho S_{11,tr}}{\partial t}+\frac{\partial\rho v_{1}S_{11,tr}}{\partial x_{1}}=\frac{4}{3}\mu\rho\frac{\partial v_{1}}{\partial x_{1}} \tag{5.1}\]
and (ii) correct the trial stress to the true elastic-plastic stress by pulling it back toward the yield surface:
\[S_{11}=S_{11,tr}-\frac{S_{11,tr}}{\mid S_{11,tr}\mid}\cdot\frac{\mid S_{11,tr}\mid-\mid S_{11,pre}\mid}{1+\frac{B_{SH}}{3\mu}} \tag{5.2}\]
where \(S_{11,tr}\) is the trial stress predicted by assuming purely elastic deformation and \(S_{11,pre}\) is the true elastic-plastic stress calculated at the previous time step.
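A minimal Python sketch of the corrector step, Eq. (5.2), is given below; the explicit check that the trial stress has actually overshot the previous stress level is an assumption we add so that purely elastic steps are left untouched.

```python
import numpy as np

def radial_return_1d(S11_tr, S11_pre, mu, B_SH):
    """Pull the elastically predicted trial stress back toward the yield surface, Eq. (5.2).

    S11_tr  : trial deviatoric stress from the elastic predictor, Eq. (5.1)
    S11_pre : converged elastic-plastic stress from the previous time step
    """
    overshoot = abs(S11_tr) - abs(S11_pre)
    if overshoot <= 0.0:
        return S11_tr        # assumed elastic step: no correction (our added check)
    return S11_tr - np.sign(S11_tr) * overshoot / (1.0 + B_SH / (3.0 * mu))
```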
## 6 The CESE Method
Conventional finite volume methods are formulated according to a flux balance over a fixed spatial domain. The conservation laws state that the rate of change of the total amount of a substance contained in a fixed spatial domain, i.e., the control volume \(V\), is equal to the flux of that substance across the boundary of \(V\), denoted as \(S(V)\). Consider the differential form of a conservation law as follows:
\[\frac{\partial u}{\partial t}+\bigtriangledown\cdot\mathbf{f}=0 \tag{6.1}\]
where \(u\) is the density of the conserved flow variable and \(\mathbf{f}\) is the spatial flux vector. By applying the Reynolds transport theorem to the above equation, one can obtain the integral form:
\[\frac{\partial}{\partial t}\int_{V}udV+\oint_{S(V)}\mathbf{f}\cdot d\mathbf{S }=0 \tag{6.2}\]
Figure 1: A schematic of one dimensional stress-strain curve. (a) A plastic stress-strain curve for a work-hardening material. (b) A plastic stress-strain curve concave towards the stress axis.
where \(dV\) is a spatial volume element in \(V\), \(d\mathbf{s}=d\sigma\mathbf{n}\) with \(d\sigma\) and \(\mathbf{n}\) being the area and the unit outward normal vector of a surface element on \(S(V)\) respectively. By integrating Eq.(6.2), one has
\[\left[\int_{V}udV\right]_{t=t_{f}}-\left[\int_{V}udV\right]_{t=t_{s}}+\int_{t_{ s}}^{t_{f}}\left(\oint_{S(V)}\mathbf{f}\cdot d\mathbf{S}\right)dt=0 \tag{6.3}\]
The discretization of Eq. (6.3) is the focus of conventional finite-volume methods. In particular, the calculation of the flux terms in Eq. (6.3) typically requires upwind methods due to the nonlinearity of the convection terms in the conservation laws.
In the CESE method, we do not use the above formulation based on the Reynolds transport theorem. Instead, the conservation law is formulated by treating space and time on an equal footing. This unified treatment of space and time allows a consistent integration in space-time and thus ensures local and global flux balance. This section briefly illustrates the CESE method in one spatial dimension.
### One-Dimensional CESE Method
To proceed, let space and time be the two orthogonal coordinates of a space-time system, i.e., \(x_{1}=x\) and \(x_{2}=t\). They constitute a two-dimensional Euclidean space \(E_{2}\). Define \(\mathbf{h}\equiv(f,u)\); then, by using the Gauss divergence theorem, Eq. (6.1) becomes
\[\int_{\partial\Omega}\mathbf{h}\cdot d\mathbf{s}=0 \tag{6.4}\]
Equation (6.4) states that the total space-time flux \(\mathbf{h}\) leaving the space-time volume through its surface vanishes. Refer to Figure 2 for a schematic of Eq.(6.4). To integrate Eq. (6.4) we employ the CESE method [43].
Figure 2: A schematic of the space-time integral of the CESE method
In the CESE method, separate definitions of the Solution Element (SE) and the Conservation Element (CE) are introduced. In each SE, variables are assumed continuous and a prescribed function is used to represent the profile. In the present calculation, a linear distribution is used. Over each CE, the space-time flux in the integral form, Eq. (6.4), is imposed. Figure 3 shows the space-time mesh and the associated SEs and CEs. Solutions of variables are stored at mesh nodes, which are denoted by filled circular dots. Since a staggered mesh is used, solution variables at neighboring SEs leapfrog each other in the time-marching calculation. The SE associated with each mesh node is a yellow rhombus. Inside the SE, the solution variables are assumed continuous. Across the interfaces of neighboring SEs, solution discontinuities are allowed. In this arrangement, solution information from one SE to another propagates only in one direction, i.e., toward the future through the oblique interface, as denoted by the red arrows. Through this arrangement of the space-time staggered mesh, the classical Riemann problem is avoided. Figure 3(b) illustrates a rectangular CE, over which space-time flux conservation is imposed. This flux balance provides a relation between the solutions at three mesh nodes: \((j,n)\), \((j-1/2,n-1/2)\), and \((j+1/2,n-1/2)\). If the solutions at time step \(n-1/2\) are known, the flux conservation condition determines the solution at \((j,n)\).
In the present research, many differential equations have source terms. Thus, we consider the one-dimensional equations with source terms:
\[\frac{\partial u_{m}}{\partial t}+\frac{\partial f_{m}}{\partial x}=s_{m}, \tag{6.5}\]
where \(m=1,2,3\) and the source term \(s_{m}\) are functions of the unknowns \(u_{m}\) and their spatial derivatives. For any \((x,t)\in\mathrm{SE}(j,n)\), \(u_{m}(x,t)\), \(f_{m}(x,t)\) and \(\mathbf{h}_{m}(x,t)\), are approximated by \(u^{*}(x,t;j,n)\), \(f^{*}(x,t;j,n)\), and \(\mathbf{h}^{*}(x,t;j,n)\). By assuming linear distribution inside an SE, we have
\[u^{*}_{m}(x,t;j,n)=\] \[\quad(u_{m})^{n}_{j}+(u_{mx})^{n}_{j}(x-x_{j})+(u_{mt})^{n}_{j}(t -t^{n}),\] \[f^{*}_{m}(x,t;j,n)=\] \[\quad(f_{m})^{n}_{j}+(f_{mx})^{n}_{j}(x-x_{j})+(f_{mt})^{n}_{j}( t-t^{n}),\] \[\mathbf{h}^{*}_{m}(x,t;j,n)=(f^{*}_{m}(x,t;j,n),u^{*}_{m}(x,t;j,n )),\]
where
\[(u_{mx})_{j}^{n} =\left(\frac{\partial u_{m}}{\partial x}\right)_{j}^{n},\] \[(f_{mx})_{j}^{n} =\left(\frac{\partial f_{m}}{\partial x}\right)_{j}^{n}=(f_{m,l})_ {j}^{n}(u_{lx})_{j}^{n},\] \[(u_{mt})_{j}^{n} =\left(\frac{\partial u_{m}}{\partial t}\right)_{j}^{n}=-(f_{mx})_ {j}^{n}=-(f_{m,l})_{j}^{n}(u_{lx})_{j}^{n},\] \[(f_{mt})_{j}^{n} =\left(\frac{\partial f_{m}}{\partial t}\right)_{j}^{n}\] \[=(f_{m,l})_{j}^{n}(u_{lt})_{j}^{n}=-(f_{m,l})_{j}^{n}(f_{l,p})(u _{px})_{j}^{n},\]
and \((f_{m,l})_{j}^{n}\equiv(\partial f_{m}/\partial u_{l})_{j}^{n}\) is the Jacobian matrix. Assume that, for any \((x,t)\in\mathrm{SE}(j,n)\), \(u_{m}=u_{m}^{*}(x,t;j,n)\) and \(f_{m}=f_{m}^{*}(x,t;j,n)\) satisfy Eq. (6.5), i.e.,
\[\frac{\partial u_{m}^{*}(x,t;j,n)}{\partial t}+\frac{\partial f_{m}^{*}(x,t;j, n)}{\partial x}=s_{m}^{*}(x,t;j,n), \tag{6.6}\]
where we assume that \(s_{m}^{*}\) is constant within \(\mathrm{SE}(j,n)\), i.e., \(s_{m}^{*}(x,t;j,n)=(s_{m})_{j}^{n}\). Eq. (6.6) becomes
\[(u_{mt})_{j}^{n}=-(f_{mx})_{j}^{n}+(s_{m})_{j}^{n}. \tag{6.7}\]
Since \((f_{mx})_{j}^{n}\) are functions of \((u_{m})_{j}^{n}\) and \((u_{mx})_{j}^{n}\), and \((s_{m})_{j}^{n}\) are also functions of \((u_{m})_{j}^{n}\), Eq. (6.7) implies that \((u_{mt})_{j}^{n}\) are also functions of \((u_{m})_{j}^{n}\) and \((u_{mx})_{j}^{n}\). Aided by the above equations, we determine that the only unknowns are \((u_{m})_{j}^{n}\) and \((u_{mx})_{j}^{n}\) at each mesh point \((j,n)\).
Next, we impose space-time flux conservation over \(\mathrm{CE}(j,n)\) to determine the unknowns \((u_{m})_{j}^{n}\). Refer to Fig. 3(b). Assume that \(u_{m}\) and \(u_{mx}\) at mesh points \((j-1/2,n-1/2)\) and \((j+1/2,n-1/2)\) are known and their values are used to calculate \((u_{m})_{j}^{n}\) and \((u_{mx})_{j}^{n}\) at the new time level \(n\). By enforcing the flux balance over \(\mathrm{CE}(j,n)\), i.e.,
\[\oint_{S(\mathrm{CE}(j,n))}\mathbf{h}_{m}^{*}\cdot d\mathbf{s}=\int_{\mathrm{ CE}(j,n)}s_{m}^{*}d\Omega,\]
one obtains
\[(u_{m})_{j}^{n}-\frac{\Delta t}{4}(s_{m})_{j}^{n}=\frac{1}{2} \Big{[}(u_{m})_{j-1/2}^{n-1/2}+(u_{m})_{j+1/2}^{n-1/2}\] \[\quad+\frac{\Delta t}{4}(s_{m})_{j-1/2}^{n-1/2}+\frac{\Delta t}{4 }(s_{m})_{j+1/2}^{n-1/2}\] \[\quad+(p_{m})_{j-1/2}^{n-1/2}-(p_{m})_{j+1/2}^{n-1/2}\Big{]}, \tag{6.8}\]
Figure 3: A schematic of the CESE method in one spatial dimension. (a) Zigzagging SEs. (b) Integration over a CE to solve \(u_{i}\) and \((u_{x})_{i}\) at the new time level.
where
\[(p_{m})_{j}^{n} =\frac{\Delta x}{4}(u_{mx})_{j}^{n}+\frac{\Delta t}{\Delta x}(f_{m}) _{j}^{n}+\frac{\Delta t^{2}}{4\Delta x}(f_{mt})_{j}^{n}.\]
Given the values of the marching variables at the mesh nodes \((j-1/2,n-1/2)\) and \((j+1/2,n-1/2)\), the right-hand side of Eq. (6.8) can be explicitly calculated. Since \((s_{m})_{j}^{n}\) on the left hand side of Eq. (6.8) is a function of \((u_{m})_{j}^{n}\), we use Newton's method to solve for \((u_{m})_{j}^{n}\). The initial guess of the Newton iterations is
\[(\bar{u}_{m})_{j}^{n} =\frac{1}{2}\Big{[}(u_{m})_{j-1/2}^{n-1/2}+(u_{m})_{j+1/2}^{n-1/2}\] \[\quad+\frac{\Delta t}{4}(s_{m})_{j-1/2}^{n-1/2}+\frac{\Delta t}{4 }(s_{m})_{j+1/2}^{n-1/2}\] \[\quad+(p_{m})_{j-1/2}^{n-1/2}-(p_{m})_{j+1/2}^{n-1/2}\Big{]},\]
i.e., the explicit part of the solution of \((u_{m})_{j}^{n}\).
The solution procedure for \((u_{mx})_{j}^{n}\) at node \((j,n)\) follows the standard \(a\)-\(\varepsilon\) scheme [43] with \(\varepsilon=0.5\). To proceed, we let
\[(u_{mx})_{j}^{n} =\frac{(u_{mx}^{+})_{j}^{n}+(u_{mx}^{-})_{j}^{n}}{2}, \tag{6.9}\]
where
\[(u_{mx}^{\pm})_{j}^{n} =\pm\frac{(u_{m})_{j\pm 1/2}^{n}-(u_{m})_{j}^{n}}{\Delta x/2},\] \[(u_{m})_{j\pm 1/2}^{n} =(u_{m})_{j\pm 1/2}^{n-1/2}+\frac{\Delta t}{2}(u_{mt})_{j\pm 1 /2}^{n-1/2}.\]
For solutions with discontinuities, Eq. (6.9) is replaced by a re-weighting procedure to add artificial damping at the jump
\[(u_{mx})_{j}^{n} =W\left((u_{mx}^{-})_{j}^{n},(u_{mx}^{+})_{j}^{n},\alpha\right),\]
where the re-weighting function \(W\) is defined as:
\[W(x_{-},x_{+},\alpha) =\frac{|x_{+}|^{\alpha}x_{-}+|x_{-}|^{\alpha}x_{+}}{|x_{+}|^{ \alpha}+|x_{-}|^{\alpha}},\]
and \(\alpha\) is an adjustable constant. The complete discussion of the one-dimensional CESE method can be found in [43, 59]. The above method with CE and SE defined as in Fig. 3 is useful for solving the hyperbolic PDEs with non-stiff source terms.
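To illustrate the marching structure described above, the following is a minimal Python sketch of one half time-step of the scheme for the linear advection equation \(u_{t}+a\,u_{x}=0\) with constant speed \(a\) and no source term; the re-weighting function \(W\) is used for the derivative (it reduces to the central average of Eq. (6.9) when \(\alpha=0\)), boundary treatment is omitted so each half step simply shortens the arrays by one node, and a CFL number \(|a|\Delta t/\Delta x\) below one is assumed.

```python
import numpy as np

def cese_half_step(u, ux, a, dx, dt, alpha=1.0):
    """One half time-step of the 1-D CESE scheme for u_t + a u_x = 0.

    u, ux : solution and spatial derivative at the current (staggered) nodes.
    Returns the solution and its limited derivative at the midpoints,
    one half time level later.
    """
    # space-time flux term p = dx/4*u_x + dt/dx*f + dt^2/(4 dx)*f_t,
    # with f = a*u and f_t = a*u_t = -a**2 * u_x
    p = 0.25 * dx * ux + (dt / dx) * (a * u) - 0.25 * dt**2 / dx * (a**2 * ux)
    # flux conservation over each CE (Eq. (6.8) with zero source term)
    u_new = 0.5 * (u[:-1] + u[1:] + p[:-1] - p[1:])
    # Taylor-extrapolated neighbour values at the new half time level (u_t = -a*u_x)
    u_l = u[:-1] - 0.5 * dt * a * ux[:-1]
    u_r = u[1:] - 0.5 * dt * a * ux[1:]
    # one-sided derivatives and the re-weighted average W
    uxm = (u_new - u_l) / (0.5 * dx)
    uxp = (u_r - u_new) / (0.5 * dx)
    w = (np.abs(uxp)**alpha * uxm + np.abs(uxm)**alpha * uxp) / \
        (np.abs(uxp)**alpha + np.abs(uxm)**alpha + 1e-60)
    return u_new, w
```

Two consecutive calls march the solution by one full time step on the staggered space-time mesh.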
## 7 Numerical Results
### Semi-infinite Domain Impact Analysis
We consider a one-dimensional copper bar with an initial speed \(u=40\) m/s hitting a stationary copper bar. Refer to Figure 4. The initial pressure \(p\) and deviatoric stress component \(S_{11}\) in both copper bars are zero. The material properties of copper are listed in Table 1.
We assume the material is elastic-perfectly plastic, i.e., the yield stress always equals the initial yield stress without hardening (\(B_{SH}=0\) in Eq. (4.2)). The boundary conditions at the left end of the initially moving copper bar and the right end of the initially static copper bar are set as non-reflective boundary conditions. The focus of the present impact problem is the interaction between the moving bar and the initially static bar. The non-reflective boundary conditions at the two far ends allow clear observation of the wave evolution initiated by the impact.
The computational domain is 2 meters, uniformly discretized into 400 numerical cells. The time step for the time-marching calculation is 0.6 \(\mu s\). Based on the spatial grid size, the time increment, and the longitudinal plane-wave speed in the copper bar, the CFL number in the computation is controlled to be about 0.6. The physical duration of wave propagation in the computation is 0.17 ms.
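As a back-of-the-envelope sanity check (ours, not part of the original computation), a few lines of Python reproduce the quoted CFL number from the copper properties of Table 1:

```python
import numpy as np

k, mu, rho = 140e9, 45e9, 8930.0                   # copper, Table 1
c_elastic = np.sqrt((k + 4.0 / 3.0 * mu) / rho)    # longitudinal elastic wave speed, about 4.73 km/s
dx, dt = 2.0 / 400, 0.6e-6                         # 5 mm cells, 0.6 microsecond time step
print(c_elastic * dt / dx)                         # about 0.57, i.e. a CFL number of roughly 0.6
```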
In Fig. 5(a) and Fig. 5(b), red lines with symbols represent the numerical solutions of density and pressure by the CESE method. The blue solid lines in these two figures represent the exact solutions by Udaykumar et al. ([12]), who used the Mie-Gruneisen equation as the equation of state to relate internal energy, pressure, and density. Fig. 5(a) and Fig. 5(b) show that the numerical solutions of our isothermal model equations compare well with the analytical solution ([12]) in terms of the wave locations and amplitudes for both the plastic wave and the precursor elastic wave. This agreement between the numerical and analytical solutions shows that, in the range of low impact velocities, the material response simulated by the simple equation of state asymptotically approaches that simulated by the Mie-Gruneisen equation.
Both the exact solution and the numerical solution show that the elastic wave is faster than the plastic wave. This is consistent with the elastic-plastic wave speeds shown in Eq. (3.21).
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(k\,(GPa)\) & \(\rho_{o}\,(kg/m^{3})\) & \(\mu\,(GPa)\) & \(E\,(GPa)\) & \(\sigma_{y}\,(MPa)\) \\ \hline
140 & 8930 & 45 & 122 & 90 \\ \hline \end{tabular}
\end{table}
Table 1: Material properties of copper
Figure 4: Initial condition of the one-dimensional impact problem
In a solid with purely elastic deformation, the wave speed is \(c=\sqrt{[k+(4/3)\mu]/\rho}\). When the deformation involves perfect plasticity, the wave speed is \(c=\sqrt{k/\rho}\), which is lower than the elastic wave speed.
To proceed, consider a one-dimensional copper bar with an initial speed \(u=30\) m/s hitting a stationary copper bar. The materials are assumed to be elastic and isotropic linear strain-hardening plastic. Three cases are considered in this example. Case 1: the material has one effective yield stress, e.g., 60 MPa. When the strain is small, the material is elastic, with Young's modulus 122 GPa, shear modulus 45 GPa, and bulk modulus 140 GPa; after the equivalent stress surpasses the effective yield stress, the material properties change once, as determined by the effective stress-strain curve. Case 2: the material has two effective yield stresses, e.g., 60 MPa and 80 MPa; if the stress exceeds these yield stresses, the material properties change twice. Case 3: the material has four effective yield stresses, e.g., 60 MPa, 80 MPa, 100 MPa, and 120 MPa. These multiple effective yield stresses can be viewed as discretized hardening stages of plastic deformation. With the material properties defined, we use the CESE method to calculate elastoplastic wave propagation. In the computation, \(\Delta x=0.66\) mm and \(\Delta t=1.3\times 10^{-7}\) s. The initial pressure \(p\) and deviatoric stress component \(S_{11}\) in both copper bars are zero. Fig. 6 shows the density and deviatoric stress profiles for the three cases; the elastoplastic waves propagate to the right. For case 1, there are two wave fronts, one elastic and one plastic, and from the earlier discussion the elastic wave front is faster than the plastic wave front. In case 2, there are three wave fronts, of which the fastest is the elastic wave; the other two are plastic waves whose speeds depend on the two plastic hardening slopes of the effective stress-strain curve. Similarly, in case 3, there are five wave fronts: four plastic wave fronts and one elastic wave front. As the number of effective yield stresses increases, the effective stress-strain curve eventually becomes smooth and the number of elastoplastic wave fronts tends to infinity; an expansion wave is then formed.
### Unloading of Plastic Wave
In this numerical example, we test the unloading of elastic and plastic waves. It is well known that materials can retain residual stress after unloading from the plastic regime under static conditions. Here, the unloading of a propagating plastic wave is demonstrated. The setup is the same as the previous impact analysis except that the impacting bar is short and of finite length. The left side of the bar is stress free, and the stress wave switches from tensile to compressive, and vice versa, upon reflection at the free end.
Fig. 7 shows a short copper bar impacting a semi-infinite copper bar. After contact, elastoplastic waves are generated at the contact area: a left-running wave and a right-running wave. The left-running wave is reflected from the left end and joins the right-running wave. In Fig. 7, all of the elastic and plastic waves propagate rightward. The elastic wave front is the fastest, followed by two plastic wave fronts, but there is only one elastic wave speed in the unloading zone, which demonstrates that the unloading path is parallel to the elastic branch of the constitutive relationship.
Figure 5: A snapshot of density and pressure at \(t=0.17\) ms in an initially static copper bar. The CESE numerical result using the isothermal model is compared to the exact solution by Udaykumar et al. ([12]). (a) A snapshot of density at \(t=0.17\) ms. (b) A snapshot of pressure at \(t=0.17\) ms.
Figure 6: Density and stress profiles of plastic wave propagation in bulk materials for three cases: (top) material with one isotropic hardening effective yield stress; (middle) material with two isotropic hardening effective yield stresses; (bottom) material with four isotropic hardening effective yield stresses.
### Ultrasonic Plastic Wave
In this setting, we consider a copper bar whose left end is forced by a sinusoidal ultrasonic force which is
\[F=F_{0}\sin(\omega t), \tag{7.1}\]
with \(F_{0}=200{,}000\) kN and \(\omega=10^{5}\) rad/s. For simplicity, we set the initial conditions to zero and the cross-sectional area of the bar to unity. Since the external force is high enough to cause yielding of the copper, elastic and plastic waves are generated in the bar and propagate toward the other end. Stress wave and velocity wave profiles are shown in Fig. 8 and Fig. 9. As the elastic and plastic waves propagate to the right, the wave shapes change. Elastic tensile and compressive waves and plastic tensile and compressive waves occur at the same time, and the difference between the elastic and plastic wave speeds causes the wave profile to change continually. It is also noticeable that the originally symmetric input wave profile becomes an asymmetric elasto-plastic wave profile.
This asymmetry could be caused by the asymmetry of the effective stress-strain curve; that is, in the copper constitutive model, the compressive yield stress differs from the tensile yield stress. Because of the nonlinearity of the governing equations, the density and stress profiles shown in Fig. 8 and Fig. 9 no longer remain sinusoidal. Oscillations at many other frequencies appear due to nonlinear effects, and the final wave shape approaches a square wave.
Figure 7: The density profile (left) and the stress profile (right) of stress wave propagation in bulk elastic-hardening plastic materials. The unloading elastic stress wave propagates to the right with the elastic wave speed. The two plastic stress wave fronts propagate to the right with two slower plastic wave speeds.
Figure 8: Density profiles of stress wave propagation in bulk elastic-hardening plastic materials. A sinusoidal force with a frequency of 10 kHz is applied at the left end. The amplitude of the applied force is larger than the yield stress of copper.
Figure 9: Stress profiles of stress wave propagation in bulk elastic-hardening plastic materials. A sinusoidal force with a frequency of 10 kHz is applied at the left end. The amplitude of the applied force is larger than the yield stress of copper.
## 8 Limitation and Conclusion
### Limitation
Understanding stress wave propagation in multiple media and multiple phases, such as across solid-fluid interfaces and in large-amplitude elastic-plastic flow, is a very challenging subject. The difficulties come from several aspects. Firstly, the governing equations in the Eulerian frame consist of conservation of mass, momentum, and energy, together with many transport equations. They are very complicated nonlinear partial differential equations that are all coupled together, which makes otherwise analytically solvable problems unsolvable. In vibration analysis or quantum mechanics, one tries hard to decouple multiple differential equations by either introducing eigenvectors and eigenvalues or assuming separation of variables. The original purpose of solving the conservation laws in the Eulerian frame on a fixed volume and fixed mesh is to handle large deformation and plastic flow, which may be challenging for Lagrangian methods, but it introduces many nonlinearities into the governing equations and makes the problems even harder. Another limitation of governing equations at the continuum mechanics scale is that no micro-structure evolution is considered. When we speak of conservation of momentum, we mean linear momentum. However, at the continuum scale, we do not know whose linear momentum this is. Does this linear momentum belong to dislocations, electrons, or molecules? If it belongs to a finite volume, what is the size of that finite volume? What if there exist many cracks or many interfaces inside the finite volume? Linear momentum may then not be conserved; it can be converted into surface motion on the cracks or into the angular momentum of molecules. In large deformable media, there is a chance that linear momentum is exchanged with angular momentum, which is not considered here. When Isaac Newton introduced the second law, \(F=ma\), he was working with particles; he never expected his theory to be extended to deformable materials. It was Cauchy, Lagrange, Navier, Stokes, and Euler who brought particle mechanics to deformable materials, which forms solid mechanics and fluid dynamics. When they built these theories, none of them considered micro-structures or the quantum energy levels of solid deformable materials. They told us that mass, momentum, and energy are conserved, but they did not tell us how fast, where, or when they are conserved. Many concepts and theories still need to be validated. Such validation will be understood through how atoms and molecules bind themselves together, and a path from micro-structure to macro-structure needs to be established [60]. Quantum theory and statistical mechanics will be the answer.
Secondly, building a large-amplitude elastic-plastic constitutive model is very difficult. Traditional elastic-plastic yield criteria such as the von Mises or Drucker-Prager models are rate-independent plastic models. Their use with conservation laws to model high-speed impact is questionable, and their mathematical structures can hardly be cast into hyperbolic differential equations. In reality, when plastic deformation occurs, dislocations are created or annihilated. The dislocation density in crystalline structures [61] or the defect density in amorphous structures [62] changes and accumulates. In the large-deformation elastic-plastic \(F_{e}F_{p}\) model, no micro-structures are considered, and writing the \(F_{e}F_{p}\) model as a transport equation coupled with the conservation laws is almost impossible. Thirdly, discretization of a large-deformation elastic-plastic material model in a numerical method such as the CESE method is very complex. Part of the constitutive model has to be treated as a source term while the other part is left in hyperbolic form. With this treatment, the time-marching speed in the numerical code will differ from the wave speed in the plastic zone. So even with a very good numerical method such as the CESE method, the mathematical treatment of plastic deformation is not perfect, and the numerical results can deviate.
### Conclusion
In this article, we focus on understanding plastic modeling rather than the numerical method. The isothermal hyperbolic model for stress waves in elastic-plastic solids does not include the energy conservation equation; the equation of state employed relates pressure to density without considering internal energy. We applied the isothermal model to simulate low-speed impact problems. The numerical results are validated by comparison to an analytical solution, which was derived using a more comprehensive equation of state including the thermal effect.
The above results show that the isothermal model developed in the present paper correctly predicts elastic-plastic wave propagation in low- and moderate-speed impacts. Thus, if temperature is not of concern in a low-impact-speed problem, a process that may be treated as isothermal owing to the slight temperature change and the low material particle speed, one may use the isothermal model to simulate the process instead of the complete model including the thermal effect.
## Acknowledgement
The help and generous support from Dr. John Sheng-tao Yu, Dr. Steven E Bechtel, and Dr. Minghao Tsai are greatly appreciated. The authors also wish to acknowledge the generous support of this work by National Science Foundation Grant DMI-0600060.
|
2309.00766
|
Generalized Continuous and Discrete Stick Fragmentation and Benford's
Law
|
Inspired by the basic stick fragmentation model proposed by Becker et al. in
arXiv:1309.5603v4, we consider three new versions of such fragmentation models,
namely, continuous with random number of parts, continuous with probabilistic
stopping, and discrete with congruence stopping conditions. In all of these
situations, we state and prove precise conditions for the ending stick lengths
to obey Benford's law when taking the appropriate limits. We introduce the
aggregated limit, necessary to guarantee convergence to Benford's law in the
latter two models. We also show that resulting stick lengths are non-Benford
when our conditions are not met. Moreover, we give a sufficient condition for a
distribution to satisfy the Mellin transform condition introduced in
arXiv:0805.4226v2, yielding a large family of examples.
|
Xinyu Fang, Steven J. Miller, Maxwell Sun, Amanda Verga
|
2023-09-02T00:05:57Z
|
http://arxiv.org/abs/2309.00766v1
|
# Generalized continuous and discrete stick fragmentation and Benford's law
###### Abstract.
Inspired by the basic stick fragmentation model proposed by Becker et al. in [B+], we consider three new versions of such fragmentation models, namely, continuous with random number of parts, continuous with probabilistic stopping, and discrete with congruence stopping conditions. In all of these situations, we state and prove precise conditions for the ending stick lengths to obey Benford's law when taking the appropriate limits. We introduce the _aggregated limit_, necessary to guarantee convergence to Benford's law in the latter two models. We also show that resulting stick lengths are non-Benford when our conditions are not met. Moreover, we give a sufficient condition for a distribution to satisfy the Mellin transform condition introduced in [JKKKM], yielding a large family of examples.
###### Contents
* 1 Introduction
* 1.1 Background
* 1.2 The basic model
* 1.3 Benford's law
* 1.4 Continuous fragmentation with random number of parts
* 1.5 Continuous fragmentation with probabilistic dying
* 1.6 Discrete fragmentation with congruence stopping condition
* 2 Preliminaries
* 2.1 Mellin transform condition
* 2.2 Proof of continuity criterion for the Mellin transform condition
* 3 Continuous fragmentation with random number of parts
* 3.1 Proof of Theorem 1.5
* 3.2 Proof of Theorem 1.6
* 4 Continuous fragmentation with probabilistic stopping
* 4.1 Proof of Theorem 1.7
* 4.2 Proof of Theorem 1.8
* 4.3 Proof of Theorem 1.10
* 4.4 Proof of Theorem 1.11
* 5 Discrete fragmentation with congruence stopping condition
* 5.1 Proof of Theorem 1.12
* 5.2 Proof of Theorem 1.13
* 5.2.1 Proof of First Item
* 5.2.2 Proof of Second Item
* 5.3 When \(|S|\neq n/2\)
* 5.3.1 Proof of Theorem 5.15
* 5.3.2 Proof of Theorem 5.16
* 5.4 General Number of Parts
* 6 Acknowledgements
## 1. Introduction
### Background
Benford's Law, named after the physicist and mathematician Frank Benford who observed it in 1938, describes the non-uniform distribution of first digits in many real-world datasets. According to this law (which we define precisely below), the digit 1 arises as the leading digit approximately 30% of the time, 2 approximately 17% of the time, and so on, with larger digits occurring less frequently. This counterintuitive pattern emerges due to the logarithmic nature of the distribution. It can be observed in a wide range of naturally occurring datasets, such as financial reports, census data, scientific constants, and even seemingly unrelated fields like social media statistics. Today, there are numerous applications of Benford's law including in voting fraud detection [Nig], economics [Tod, V-BFJ], geology [NM1], signal processing [PHA], and the physical sciences [BMP, Eli, MSPZ, NWR, PTTV, SM1, SM2]. See [BH2, Mil1] for more on the general theory and fields where it is observed.
Given its ubiquity and many applications, it is therefore of interest to study which mathematical processes lead to Benford behavior. In general, it is often true that arithmetic operations (such as sums or products) of random variables yield a random variable that is closer to satisfying Benford's Law [Adh, AS, Bha, JKKKM, Lev1, Lev2, MN1, Rob, Sak, Sch1, Sch2, Sch3, ST]. However, this is not always the case (see for example [BH1]). In certain cases, a central limit theorem law is attainable, where Benfordness follows from the convergence of the distribution of mantissas (see Section 1.3) to the uniform distribution.
In 2006, A. Kossovsky studied the distribution of leading digits of chained probability distributions, and conjectured that as the length of the chain increases, the behavior tends to Benford's law [Kos]. Inspired by this conjecture, Jang, Kang, Kruckman, Kudo and Miller proved in [JKKKM] that if \(X_{1},\ldots,X_{m}\) are independent continuous random variables with densities \(f_{1},\ldots,f_{m}\), then for any base \(B\) and for many choices of the densities (more precisely, those that satisfy a certain _Mellin transform condition_, which we describe in detail later), the distribution of the digits of the product \(X_{1}X_{2}\cdots X_{m}\) converges to Benford's law base \(B\) as \(m\to\infty\). We prove a more practical criterion for a distribution to satisfy the needed Mellin transform relation.
A nice way to translate such a result into a concrete physical process is by considering the stick fragmentation model, first proposed by Becker et al. [B+] based upon work by the physicist Lemons [Lem] on partitioning a conserved quantity, which we review in the next subsection. Basically, [B+] asked: if a stick of a given length is repeatedly cut in two at random proportions, does the final collection of stick lengths converge to Benford behavior as the number of levels of cutting goes to infinity? This formulation indicates why a sequence of products of random variables with increasing lengths is a natural object to consider. Such a process also has discrete variants that resemble the particle decay process in nuclear physics, and thus may be useful for modelling those processes. Some examples of other types of decomposition models include [AF, Bert, Car, CV, IMS, IV, Kak, Kol, Loo, Oll, PvZ, Slud, vZ].
We study several natural generalizations of the basic stick decomposition model; namely,
* continuous fragmentation with random number of parts,
* continuous fragmentation with probabilistic stopping, and
* discrete fragmentation with congruence stopping conditions,
which we will discuss in more detail in the subsequent sections. Indeed, we show that a much larger family of stick fragmentation processes results in strong Benford behavior (defined rigorously
in Section 1.3).1 In particular, we give an affirmative answer to [B+, Conjecture 8.1(i)] and prove a result that vastly generalizes their conjecture. Notably, we introduce the _aggregated limit_, which is necessary in order to talk about convergence to Benford's law in the latter two models. In order to show that the conditions we require to get convergence to Benford are in fact optimal, we prove non-Benfordness results when our precise conditions are not met.
Footnote 1: There are other generalizations one could study; see [BDMMM, DM] for Benfordness of \(d\)-dimensional frames of \(n\)-dimensional boxes.
### The basic model
We recall the following basic stick decomposition model studied in [B+]. Start with a stick of length \(L\) and fix a continuous probability distribution \(\mathcal{D}\) with density function supported on \([0,1]\). Choose \(p_{1}\in[0,1]\) according to \(\mathcal{D}\) and break \(L\) into \(p_{1}L\) and \((1-p_{1})L\). This is the first _level_. Now for each subsequent level, repeat the same process on every new stick obtained in the previous level, where each breaking involves sampling a new ratio \(p_{i}\in[0,1]\) according to \(\mathcal{D}\). Then at the end of the \(N\)-th level, each resulting stick has length of the form
\[X_{i}\ =\ L\prod_{n=1}^{N}p_{n}, \tag{1.2.1}\]
where \(p_{n}\) represents the proportion used to cut the ancestor of \(X_{i}\) in the \(n\)-th level.
For such a process and its variants, we are interested in whether the final collection of stick lengths \(\{X_{i}\}\) follows _Benford's law_ as defined in Section 1.3. Becker et al. [B+] gave a proof of the Benfordness of the basic process described above, given that the distribution \(\mathcal{D}\) satisfies a certain condition involving the convergence of a sum of products of its Mellin transform. This condition was proposed by Jang et al. in [JKKKM, Theorem 1.1]. We restate it precisely in Section 2.1. There, we also give a sufficient condition for a distribution to satisfy this property. Throughout, we adopt the convention that \(\log x\) stands for the natural logarithm of \(x\), although the base usually does not play a role unless we explicitly state it.
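As an informal illustration (not part of the proofs), the following Python sketch simulates the basic model with a uniform breaking ratio, used here purely for illustration, and compares the empirical significand distribution of the ending stick lengths with the Benford prediction \(\log_{10}s\) recalled in the next subsection; the number of levels and the seed are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def fragment(L=1.0, levels=12):
    """Basic model of Section 1.2: split every stick in two at a Uniform(0,1) ratio."""
    lengths = np.array([L])
    for _ in range(levels):
        p = rng.random(lengths.size)
        lengths = np.concatenate([p * lengths, (1.0 - p) * lengths])
    return lengths

def significand(x, B=10.0):
    return x / B ** np.floor(np.log(x) / np.log(B))   # S_B(x) in [1, B)

X = fragment(levels=12)                                # 2**12 ending stick lengths
for s in (2.0, 5.0, 8.0):
    print(s, (significand(X) <= s).mean(), np.log10(s))
```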
### Benford's law
Fix a base \(B>0\). Any \(x>0\) can be written as
\[x\ =\ S_{B}(x)\cdot B^{k_{B}(x)} \tag{1.3.1}\]
where \(S_{B}(x)\in[1,B)\) is the _significand_ of \(x\) base \(B\) and \(k_{B}(x)=\lfloor\log_{B}(x)\rfloor\) is the _exponent_. The _mantissa_ of \(x\) is defined to be
\[M_{B}(x)\ =\ \log_{B}(x)-k_{B}(x).\]
We have the following standard definition (see for example [MN1]).
**Definition 1.1** (Benford's law for a sequence).: _A sequence of positive numbers \((a_{i})\) is said to be Benford base \(B\) if_
\[\lim_{I\to\infty}\frac{\#\{i\leq I:1\leq S_{B}(a_{i})\leq s\}}{I}\ =\ \log_{B}s \tag{1.3.2}\]
_for all \(s\in[1,B]\)._
We can also define the notion of Benford behavior for a random variable supported on \((0,\infty)\).
**Definition 1.2**.: _A probability distribution \(\mathcal{D}\), supported on \((0,\infty)\), is said to be Benford base \(B\) if for \(X\sim\mathcal{D}\), \(M_{B}(X)\) follows the uniform distribution on \([0,1]\). This is equivalent to saying that_
\[\mathbb{P}(1\leq S_{B}(X)\leq s)\ =\ \log_{B}s \tag{1.3.3}\]
_for all \(s\in[1,B]\)._
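For instance, a random variable of the form \(X=B^{U}\) with \(U\) uniform on \([0,1)\) is Benford base \(B\), since \(\mathbb{P}(S_{B}(X)\leq s)=\mathbb{P}(U\leq\log_{B}s)=\log_{B}s\); a short numerical check (our own illustration, with base \(10\)) is:

```python
import numpy as np

rng = np.random.default_rng(0)
X = 10.0 ** rng.random(100_000)      # X = B**U with B = 10; here S_B(X) = X since X lies in [1, 10)
for s in (2.0, 5.0, 8.0):
    print(s, (X <= s).mean(), np.log10(s))
```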
This is also sometimes referred to as _strong Benfordness_, as opposed to _weak Benfordness_, which only concerns the leading digits of a sequence of numbers. Since we are interested in the limiting behavior of a sequence of finite sets of stick lengths, we give the following precise definition of "convergence to Benford".
**Definition 1.3**.: _A sequence of finite collections of positive numbers \(({\mathcal{A}}_{n}=\{a_{n,i}\})_{n}\) is said to converge to strong Benford behavior (base \(B\)) if_
\[\lim_{n\to\infty}\frac{\#\{i:1\leq S_{B}(a_{n,i})\leq s\}}{|{\mathcal{A}}_{n}| }\ =\ \log_{B}s \tag{1.3.4}\]
_for all \(s\in[1,B)\)._
Thus, in base 10 the probability of a first digit being \(d\) for a sequence that is strong Benford is \(\log_{10}(d+1)-\log_{10}(d)=\log_{10}(1+1/d)\); in particular, the probabilities decrease from about \(30.1\%\) for a leading digit of 1 down to approximately \(4.6\%\) for a 9. While there is thus a tremendous bias towards smaller leading digits, if instead we look at the distribution of the logarithm of the significands (the mantissas), we have the uniform distribution.
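The base-10 first-digit probabilities quoted above can be reproduced directly:

```python
import math

probs = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
print(probs[1], probs[9])   # approximately 0.3010 and 0.0458
```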
In our case, the collections of random variables representing ending stick lengths are indexed by either the total number of levels \(N\), the starting stick length \(L\), and/or the number of starting sticks \(R\), with these quantities going to infinity in the limit. With an abuse of notation, we always denote the collection of ending stick lengths by \(\{X_{i}\}\) and suppress the parameters \(N,L\) and \(R\), but the limits will be explicitly stated. Before stating the definition of Benfordness for a sequence of collections of random variables, recall the following notations from [B+].
For \(s\in[0,B)\), we define the indicator function of "significant at most \(s\)" by
\[\varphi_{s}(x)\ :=\ \begin{cases}1,&\text{ if the significand of $x$ is at most $s$}\\ 0,&\text{ otherwise.}\end{cases} \tag{1.3.5}\]
Denote the proportion of elements in a set \(\{X_{i}\}\) whose significand is at most \(s\) by
\[P(s)\ :=\ \frac{\sum_{i}\varphi_{s}(X_{i})}{\#\{X_{i}\}}. \tag{1.3.6}\]
**Definition 1.4** ([B+]).: _A sequence of (finite) collections of random variables \((\{X_{i}\})_{n}\), with \(P_{n}(s)\) denoting the quantity \(P(s)\) computed for the \(n\)-th collection, is said to converge to strong Benford behavior (base \(B\)) if_
1. \[\lim_{n\to\infty}\mathbb{E}[P_{n}(s)]\ =\ \log_{B}(s)\] (1.3.7) _and_
2. \[\lim_{n\to\infty}\operatorname{Var}[P_{n}(s)]\ =\ 0.\] (1.3.8)
If we break a stick into \(k\) pieces using some random process, this can be thought of as sampling from a distribution supported on \([0,1]^{k-1}\) (for the \(k-1\) breaking points). From now on, we assume all distributions from which these breaking points are sampled are _good_. This is defined precisely in Section 2.1, but can be thought of as requiring the distribution to be "sufficiently continuous".
### Continuous fragmentation with random number of parts
In the basic fragmentation process described in Section 1.2, the number of parts each stick breaks into each time is fixed at 2. It is natural to ask whether the same conclusion holds when we allow this number to be randomly chosen as well. Indeed, we prove strong Benfordness of final stick lengths in the following two scenarios:
1. the number of parts is chosen independently _at each level_, and is uniform for every stick in that level;
2. the number of parts is chosen for _each individual stick_ within every level.
We now state our results precisely as follows. Let \(G\) be a discrete distribution on \(\{1,2,\ldots,m\}\) such that \(\mathbb{P}(X=1)<1\) for \(X\sim G\). For each \(k\in\{1,2,\ldots,m\}\), let \(\mathcal{F}_{k}\) be a finite set of _good_ probability distributions with density functions supported on \([0,1]^{k-1}\).
**Theorem 1.5**.: _Start with a stick of length \(L\). At each level \(i\), independently choose \(k\in\{1,2,\ldots,m\}\) according to \(G\), and break up every stick into \(k\) parts by cutting it at the \(k-1\) coordinates of a random variable sampled from some distribution in \(\mathcal{F}_{k}\). Then, the distribution of the stick lengths at level \(N\) approaches a strong Benford distribution almost surely as \(N\to\infty\)._
**Theorem 1.6**.: _Start with a stick of length \(L\). At each level \(i\), for each stick in that level, independently choose \(k\in\{1,2,\ldots,m\}\) according to \(G\), and break the stick into \(k\) parts by cutting it at the \(k-1\) coordinates of a random variable sampled from some distribution in \(\mathcal{F}_{k}\). Then, the distribution of the stick lengths at level \(N\) approaches a strong Benford distribution almost surely as \(N\to\infty\)._
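A minimal simulation sketch of the process in Theorem 1.6 is given below (an illustration only; the particular distribution \(G\), taken uniform on \(\{1,2,3\}\), and the use of uniform cut points are assumptions made for the demo).

```python
# Illustrative only: the process of Theorem 1.6 with G uniform on {1,2,3}
# and uniform cut points; we inspect base-10 leading digits at level N.
import math
import random
from collections import Counter

random.seed(1)

def break_stick(length, k):
    """Cut a stick at k-1 uniform points and return the k piece lengths."""
    points = [0.0] + sorted(random.random() for _ in range(k - 1)) + [1.0]
    return [length * (b - a) for a, b in zip(points, points[1:])]

G = [1, 2, 3]                        # assumed support of G (chosen uniformly)
N = 18                               # number of levels (kept small for speed)
sticks = [1.0]
for _ in range(N):
    nxt = []
    for s in sticks:
        nxt.extend(break_stick(s, random.choice(G)))  # k drawn per stick
    sticks = nxt

counts = Counter(int(s / 10 ** math.floor(math.log10(s))) for s in sticks)
for d in range(1, 10):
    print(d, round(counts[d] / len(sticks), 3), round(math.log10(1 + 1 / d), 3))
```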
### Continuous fragmentation with probabilistic dying
We now investigate continuous processes in which the number of parts is fixed throughout, but each new stick _dies_ (i.e., stops breaking in subsequent levels) with a certain probability. More specifically, we consider the following fragmentation process.
Start from \(R\) sticks of length \(L>0\). Fix a positive integer \(k\geq 2\). We call a stick _alive_ if it continues to break in the next level and _dead_ otherwise. All initial sticks are assumed to be alive and each breaks into \(k\) pieces in the first level with the \((k-1)\) breaking points being the coordinates of a random variable chosen from some _good_ probability distribution on \([0,1]^{k-1}\). The breaking-point random variables of distinct living sticks are independent of one another. After each level, each new stick obtained continues to be _alive_ with probability \(r\) and _dead_ with probability \(1-r\). Then we have the following.
**Theorem 1.7**.: _When \(r=1/k\) and the alive/dead status of each stick is independent, the process ends in finitely many levels with probability 1, and the collection of ending stick lengths almost surely converges to strong Benford behavior as \(R\to\infty\)._
**Theorem 1.8**.: _When \(r>1/k\), there is positive probability that the process with \(R=1\) does not end in finitely many levels._
**Remark 1.9**.: _The only remaining case is when \(r<1/k\). It is not hard to see that the process ends in finitely many levels in this case. Our numerical simulations strongly suggest that the distribution is non-Benford, but it is an interesting open question as to what distributions result from such processes._
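The following rough simulation sketch of the dying process (illustration only; all parameters are assumptions made for the demo, and sticks still alive after a fixed number of levels are simply discarded) can be used to compare the critical case \(r=1/k\) with a subcritical case \(r<1/k\) as in Remark 1.9.

```python
# Illustrative only: break every living stick into k uniform pieces and keep
# each child alive independently with probability r; compare leading digits
# for r = 1/k (critical) and r < 1/k (cf. Remark 1.9).
import math
import random
from collections import Counter

def run(R, k, r, max_levels=40, seed=0):
    random.seed(seed)
    dead, alive = [], [1.0] * R
    for _ in range(max_levels):
        if not alive:
            break
        nxt = []
        for s in alive:
            points = [0.0] + sorted(random.random() for _ in range(k - 1)) + [1.0]
            for a, b in zip(points, points[1:]):
                (nxt if random.random() < r else dead).append(s * (b - a))
        alive = nxt
    return dead                       # sticks alive after max_levels are dropped

def digit_freqs(xs):
    c = Counter(int(x / 10 ** math.floor(math.log10(x))) for x in xs)
    return [round(c[d] / len(xs), 3) for d in range(1, 10)]

for r in (1 / 3, 1 / 6):              # k = 3: critical versus subcritical
    print(f"r = {r:.3f}:", digit_freqs(run(R=5000, k=3, r=r)))
print("Benford:", [round(math.log10(1 + 1 / d), 3) for d in range(1, 10)])
```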
We also have the following version of Theorem 1.7 with dependencies between the dead/alive status of different sticks. This serves as a continuous analogue to the discrete processes discussed in the next section in which the dead/alive status of sticks have number theoretic dependencies.
**Theorem 1.10** (Theorem 1.7 but with dependence).: _When \(r=1/k\) but the alive/dead status of the children of the same stick are possibly dependent on one another, the collection of stick lengths after \(N\geq\log R\) levels almost surely converges to strong Benford behavior as \(R\to\infty\)._
**Theorem 1.11**.: _When \(r<1/k\) and the alive/dead status of the children of the same stick are possibly dependent on one another, the collection of stick lengths does not converge to strong Benford behavior for sufficiently large bases \(B\)._
### Discrete fragmentation with congruence stopping condition
Now we turn to the setting of discrete stick fragmentation. For a subset \(\mathfrak{S}\subseteq\mathbb{Z}_{+}\) and a positive integer \(L\notin\mathfrak{S}\), define the discrete fragmentation process with starting length \(L\) and stopping set \(\mathfrak{S}\) as follows.
Start with a stick of integer length \(L\). In the first level, it breaks into two new sticks at an integer point chosen according to the uniform distribution on \(\{1,\dots,L-1\}\). After each level, a new stick becomes _dead_ if its length lies in the _stopping set_ \(\mathfrak{S}\), and remains _alive_ otherwise; the starting stick is assumed to be alive. A dead stick no longer breaks in subsequent levels, while in the next level each living stick again breaks into two pieces following the discrete uniform distribution. The process ends when all sticks at the end of a level are dead, i.e., when every piece satisfies the _stopping condition_ described by \(\mathfrak{S}\subseteq\mathbb{Z}_{+}\). We are interested in whether the final collection of stick lengths converges to the Benford distribution as we take the limit \(L\to\infty\).
In [B+], the authors showed the Benfordness of a discrete process in which the breaking only continues on one side of the stick, with stopping condition being length equal to \(1\). Here we study processes with more general stopping conditions defined by congruence classes. Our results are the following.
**Theorem 1.12**.: _Start with a stick of odd integer length \(L\). Let the stopping set be \(\mathfrak{S}=\{1\}\cup\{2m:m\in\mathbb{Z}_{+}\}\). Namely, a stick dies whenever its length is \(1\) or even. Then the distribution of lengths of all dead sticks at the end approaches strong Benfordness as \(L\to\infty\)._
**Theorem 1.13**.: _Fix an even modulus \(n\geq 2\) and a subset \(S\subset\{0,\dots,n-1\}\) of size \(n/2\) representing the residue classes. Let the stopping set be_
\[\mathfrak{S}\ :=\ \{1\}\cup\{m\in\mathbb{Z}_{+}:m=qn+r,\ r\in S,q\in\mathbb{Z}\}. \tag{1.6.1}\]
_If we start with \(R\) identical sticks of positive integer length \(L\notin\mathfrak{S}\), then the collection of ending stick lengths converges to strong Benford behavior given that \(R>(\log L)^{3}\) as \(L\to\infty\)._
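A small simulation sketch of the discrete setting is given below (illustration only; the parameters are assumptions for the demo). It uses \(n=2\) and \(S=\{0\}\), i.e. a piece dies when its length is \(1\) or even, which is the setting of Theorem 1.12, run from \(R>(\log L)^{3}\) starting sticks as in Theorem 1.13.

```python
# Illustrative only: R sticks of odd length L break at uniform integer points;
# a piece dies when its length is 1 or lies in the residue classes S mod n.
import math
import random
from collections import Counter

random.seed(3)
n, S = 2, {0}                        # stopping residues: even lengths die
R, L = 5000, 10**6 + 1               # L odd, so L is not in the stopping set
dead, alive = [], [L] * R
while alive:
    nxt = []
    for m in alive:
        cut = random.randint(1, m - 1)          # uniform integer cut point
        for piece in (cut, m - cut):
            if piece == 1 or piece % n in S:
                dead.append(piece)
            else:
                nxt.append(piece)
    alive = nxt

counts = Counter(int(str(x)[0]) for x in dead)
for d in range(1, 10):
    print(d, round(counts[d] / len(dead), 3), round(math.log10(1 + 1 / d), 3))
```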
We also prove non-Benfordness results when \(|S|\neq n/2\) and make more specific conjectures in Section 5.3. In Section 5.4, we discuss generalizing the process to one where the number of parts each stick is broken into is a chosen integer \(k\geq 2\).
**Remark 1.14**.: _Given that our results involve random decomposition of residue classes modulo a given integer, they could be of number theoretic interest. One can ask further whether similar results hold when \(\mathfrak{S}\) is given by other subsets of \(\mathbb{Z}_{+}\) that arise in number theory, for example, the set of quadratic residues modulo a given integer, the set of primes or practical numbers, etc._
We note that the only property of \(\mathfrak{S}\) that seems fundamentally necessary in our work on discrete breaking processes is the density of the set in the natural numbers. As a result, we have the following conjecture:
**Conjecture 1.1**.: _Let \(\mathfrak{S}\) be such that the limit below exists and let_
\[r=\lim_{n\to\infty}\frac{|\{[1,n]\cap\mathfrak{S}\}|}{n}. \tag{1.6.2}\]
_Moreover, assume that \(r>0\). Then the set of dead stick lengths approaches Benford behavior if and only if \(r=1/2\)._
In the next section, we briefly review the Mellin transform condition and prove a practical sufficient criterion (Theorem 2.2) for it to hold. A family of examples satisfying the Mellin transform condition that follow from that criterion is given in Example 2.3. Subsequent sections include the proofs of our main results and further discussions on our conjectures.
## 2. Preliminaries
### Mellin transform condition
Becker et al. [B+] gave a proof of the Benfordness of the basic process described in Section 1.2, given that the distribution \(\mathcal{D}\) satisfies a certain condition involving the convergence of a sum of products of its Mellin transform. This condition was proposed by Jang et al. in [JKKKM, Theorem 1.1]. We restate it precisely as follows.
For a continuous real-valued function \(f:[0,\infty)\to\mathbb{R}\), let \(\mathcal{M}f\) denote its _Mellin transform_ defined by
\[\mathcal{M}f(s)\ =\ \int_{0}^{\infty}f(x)x^{s}\frac{dx}{x}. \tag{2.1.1}\]
Let \(\mathcal{F}=\{\mathcal{D}_{j}\}_{j\in I}\) be a family of probability distributions with associated density functions \(f_{j}\) supported on \([0,\infty)\) and \(p:\mathbb{Z}_{+}\to I\). We say that \(\mathcal{F}\) satisfies the _Mellin transform condition_ if the following holds and the convergence is uniform over all choices of \(p\):
\[\lim_{n\to\infty}\sum_{\begin{subarray}{c}\ell=-\infty\\ \ell\neq 0\end{subarray}}^{\infty}\prod_{m=1}^{n}\mathcal{M}f_{\mathcal{D}_{p (m)}}\left(1-\frac{2\pi i\ell}{\log B}\right)\ =\ 0. \tag{2.1.2}\]
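As a quick worked example (not needed in what follows): if every \(\mathcal{D}_{j}\) is the uniform distribution on \([0,1]\), with density \(f\equiv 1\) there, then \(\mathcal{M}f(s)=\int_{0}^{1}x^{s-1}\,dx=1/s\), so

\[\left|\mathcal{M}f\left(1-\frac{2\pi i\ell}{\log B}\right)\right|\ =\ \left(1+\frac{4\pi^{2}\ell^{2}}{(\log B)^{2}}\right)^{-1/2}\ <\ 1\qquad(\ell\neq 0),\]

and the sum in (2.1.2) is bounded in absolute value by \(\sum_{\ell\neq 0}\left(1+4\pi^{2}\ell^{2}/(\log B)^{2}\right)^{-n/2}\), which is finite for \(n\geq 2\) and tends to \(0\) as \(n\to\infty\). Hence this family satisfies the Mellin transform condition.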
The following corollary of [JKKKM, Theorem 1.1 & Lemma 1.2], relating the Mellin transform property to Benford behavior, will be used repeatedly in our proofs of Benfordness results, so we restate it here for ease of reference.
**Theorem 2.1** ([JKKKM, Theorem 1.1]).: _Let \(\mathcal{F}=\{\mathcal{D}_{j}\}_{j\in I}\) be a family of probability distributions with associated density functions \(f_{j}\) supported on \([0,\infty)\) satisfying the Mellin transform property and \(p:\mathbb{Z}_{+}\to I\). Let \(X_{1}\sim\mathcal{D}_{p(1)}\). For all \(i\geq 2\), let \(X_{i}\) be a random variable with probability density function given by_
\[\theta^{-1}f_{\mathcal{D}_{p(i)}}(x/\theta) \tag{2.1.3}\]
_where \(\theta\) is the value of the previous random variable \(X_{i-1}\). Then if \(Y_{n}=\log_{B}X_{n}\), we have_
\[\begin{split}&|\mathbb{P}(Y_{n}\mod 1\in[a,b])-(b-a)|\\ &\leq\ (b-a)\cdot\left|\lim_{n\to\infty}\sum_{\begin{subarray}{c}\ell=- \infty\\ \ell\neq 0\end{subarray}}^{\infty}\prod_{m=1}^{n}\mathcal{M}f_{\mathcal{D}_{ p(m)}}\left(1-\frac{2\pi i\ell}{\log B}\right)\right|.\end{split} \tag{2.1.4}\]
_In particular, the limiting distribution as \(n\to\infty\) of \(X_{n}\) is Benford base \(B\)._
Note that from the way \(X_{n}\) is defined, it precisely models the product of \(n\) random variables, each distributed according to \(\mathcal{D}_{p(i)}\) for \(1\leq i\leq n\). The following gives a general condition on \(\mathcal{F}\) for it to satisfy the Mellin transform condition. A weaker version of the result is briefly discussed in [JKKKM].
**Theorem 2.2**.: \(\mathcal{F}\) _satisfies the Mellin transform condition if it is finite and all of the densities \(f_{j}\) are \(\alpha_{j}\)-Hölder continuous \((0<\alpha_{j}\leq 1)\) and supported only on \([0,1]\). In particular, for such an \(\mathcal{F}\), a sequence of products of random variables distributed according to some sequence of the \(f_{j}\) approaches Benford behavior, and the rate of this convergence is uniform over all such sequences._
Consider a probability distribution \(\mathcal{D}\) on \(\mathbb{R}^{m}\) that is supported on \([0,1]^{m}\) with cumulative distribution function \(F\). For \(X\sim\mathcal{D}\), let \(\mathrm{rk}_{i}(X)\) denote its \(i\)th smallest coordinate, where \(1\leq i\leq m\). Let \(\mathrm{rk}_{0}(X)=0\) and \(\mathrm{rk}_{m+1}(X)=1\). Then, we say that \(\mathcal{D}\) is _good_ if
\[Y_{i}\ =\ \mathrm{rk}_{i+1}(X)-\mathrm{rk}_{i}(X) \tag{2.1.5}\]
has Hölder continuous density for all \(0\leq i\leq m\). In other words, if \(X\) represents the cut points of a stick, then we require the distances between adjacent ones to have Hölder continuous densities. This definition is necessary for exploring stick breaking, in which we must choose multiple breaking points of a stick from a distribution and then consider distributions of ratios between the lengths of children and their parents. That is, if such a distribution is _good_, then Theorem 2.2 applies. In fact, many distributions of interest are _good_. For instance, we have the following family of examples.
**Example 2.3**.: _Suppose that \(\mathcal{D}\) is the product of \(m\) independent \(1\)-dimensional distributions \(\mathcal{D}_{i}\) with densities \(f_{i}\) and cumulative distribution functions \(F_{i}\). If the \(f_{i}\) are Hölder continuous, then \(\mathcal{D}\) is good._
Proof.: Let \(Y_{i}=\mathrm{rk}_{i+1}(X)-\mathrm{rk}_{i}(X)\) for some \(X\sim D\). Assume that \(1\leq i<m\). Then
\[1-F_{Y_{i}}(c)\ =\ \sum_{j=1}^{m}\sum_{\begin{subarray}{c}S\subseteq[m]\setminus\{j\}\\ |S|=i-1\end{subarray}}\int_{0}^{1}f_{j}(x)\prod_{l\in S}F_{l}(x)\prod_{l\not\in S,\ l\not=j}(1-F_{l}(x+c))\ dx \tag{2.1.6}\]
where we sum over the possible \(X_{j}\) that correspond to \(\mathrm{rk}_{i}(X)\) and the possible sets \(S\) of the other variables that are less than \(X_{j}\). By continuity, we can differentiate with respect to \(c\) and move the differentiation inside of the integral to obtain
\[f_{Y_{i}}(c)\ =\ \sum_{j=1}^{m}\sum_{\begin{subarray}{c}S\subseteq[m]\setminus\{j\}\\ |S|=i-1\end{subarray}}\int_{0}^{1}f_{j}(x)\prod_{l\in S}F_{l}(x)\sum_{l^{\prime}\not\in S,\ l^{\prime}\not=j}f_{l^{\prime}}(x+c)\prod_{l\not\in S,\ l\not=j,l^{\prime}}(1-F_{l}(x+c))\ dx. \tag{2.1.7}\]
Now, the \(F_{i}\) are continuously differentiable, so they are also Hölder continuous. We can then take the minimal exponent \(\alpha\) among the \(f_{i}\) to obtain that \(f_{Y_{i}}\) is \(\alpha\)-Hölder continuous, since sums and products of \(\alpha\)-Hölder continuous functions are \(\alpha\)-Hölder continuous. We can similarly show that \(f_{Y_{i}}\) is \(\alpha\)-Hölder continuous when \(i=0,m\).
From now on, we assume all distributions from which breaking points are sampled are _good_.
### Proof of continuity criterion for the Mellin transform condition
Proof of Theorem 2.2.: Note that, for fixed \(j\in I\),
\[{\mathcal{M}}f_{j}\left(1-\frac{2\pi i\ell}{\log B}\right) =\ \int_{0}^{\infty}f_{j}(x)x^{-\frac{2\pi i\ell}{\log B}}dx\] \[=\ \int_{0}^{\infty}f_{j}(e^{\log x})e^{\log x}e^{-\frac{2\pi i \ell}{\log B}\log x}\frac{dx}{x}\] \[=\ \int_{-\infty}^{\infty}g_{j}(y)e^{-\frac{2\pi i\ell}{\log B}y}dy\] \[=\ \widehat{g_{j}}\left(\frac{\ell}{\log B}\right), \tag{2.2.1}\]
where \(g_{j}(y)=f_{j}(e^{y})e^{y}\). Moreover, \(\|g_{j}\|_{1}=\|f_{j}\|_{1}=1\), so the Riemann-Lebesgue lemma applies and says that
\[{\mathcal{M}}f_{j}\left(1-\frac{2\pi i\ell}{\log B}\right)\ \to\ 0 \tag{2.2.2}\]
as \(\ell\to\infty\). Also, for \(\ell\neq 0\),
\[\left|\widehat{g_{j}}\left(\frac{\ell}{\log B}\right)\right|\ \leq\ \|g_{j}\|_{1}\ =\ 1. \tag{2.2.3}\]
The inequality above is strict: it follows from the triangle inequality, and since \(g_{j}\) is continuous the integrand does not have constant complex argument, so equality cannot hold.
\[h(\ell)\ =\ \max_{j}\left|{\mathcal{M}}f_{j}\left(1-\frac{2\pi i\ell}{\log B }\right)\right|, \tag{2.2.4}\]
we have that \(h(\ell)<1\) for \(\ell\neq 0\) and also \(h(\ell)\to 0\) as \(\ell\to\infty\). We now investigate the rate of this convergence. We begin by mimicking the proof of the Riemann-Lebesgue lemma. For any \(f:\mathbb{R}\to\mathbb{C}\) continuous and compactly supported, using the substitution \(x\mapsto x+\frac{\pi}{\xi}\) for \(\xi\neq 0\), we have
\[\hat{f}(\xi)\ =\ \int_{\mathbb{R}}f(x)e^{-ix\xi}dx\ =\ \int_{\mathbb{R}}f\left(x+\frac{\pi}{\xi} \right)e^{-ix\xi}e^{-i\pi}dx\ =\ -\int_{\mathbb{R}}f\left(x+\frac{\pi}{\xi}\right)e^{-ix\xi}dx. \tag{2.2.5}\]
Taking the average, we get
\[|\hat{f}(\xi)|\ \leq\ \frac{1}{2}\int_{\mathbb{R}}\left|f(x)-f\left(x+\frac{ \pi}{\xi}\right)\right|dx. \tag{2.2.6}\]
Apply this to \(f=g_{j}\) and \(\xi=\frac{\ell}{\log B}\),
\[\left|\widehat{g_{j}}\left(\frac{\ell}{\log B}\right)\right| \leq\ \frac{1}{2}\int_{\mathbb{R}}\left|g_{j}(x)-g_{j}\left(x+\frac{\pi\log B}{ \ell}\right)\right|dx\] \[\leq\ \frac{1}{2}\int_{\mathbb{R}}\left|f_{j}(e^{x})e^{x}-f_{j}(e^{x+ \frac{\pi\log B}{\ell}})e^{x+\frac{\pi\log B}{\ell}}\right|dx\] \[\leq\ \frac{1}{2}\int_{0}^{1}\left|f_{j}(u)-cf_{j}(cu)\right|du\] \[\leq\ \frac{1}{2}\sup_{[0,1]}(|f_{j}(u)-f_{j}(cu)|+|f_{j}(cu)-cf_{j}(cu )|), \tag{2.2.7}\]
where \(c=e^{\frac{\pi\log B}{\ell}}\) and we used the fact that \(f_{j}\) is supported only on \([0,1]\) (this can easily be changed to any compact interval, but all the distributions considered in this paper satisfy this condition). From the assumption that \(f_{j}\) is \(\alpha_{j}\)-Hölder continuous, there exists a constant \(\mu\geq 0\) such that
\[|f_{j}(u)-f_{j}(cu)|\ \leq\ \mu|(1-c)u|^{\alpha}\ \leq\ \mu|1-c|^{\alpha} \tag{2.2.8}\]
and
\[|f_{j}(cu)-cf_{j}(cu)|\ \leq\ (1-c)|f_{j}(cu)|\ \leq\ |1-c|M \tag{2.2.9}\]
for all \(u\in[0,1]\), where \(M>0\) is an upper bound for \(f_{j}\). Now
\[c\ =\ 1+\frac{\pi\log B}{\ell}+o(1/\ell), \tag{2.2.10}\]
so
\[|1-c|\ =\ \frac{\pi\log B}{\ell}+o(1/\ell). \tag{2.2.11}\]
We may assume \(0<\alpha\leq 1\), so that the \(|1-c|^{\alpha}\) term dominates; in particular, \(h(\ell)=O(\ell^{-\alpha})\), and for any fixed \(L\geq 2\) the tail sum \(\sum_{|\ell|\geq L}\ell^{-n\alpha}\) tends to \(0\) as \(n\to\infty\). By the pigeonhole principle, there exists a \(j\in I\) such that \(|p^{-1}(\{j\})|=\infty\). Then we have
\[\left|\sum_{\begin{subarray}{c}\ell=-\infty\\ \ell\neq 0\end{subarray}}^{\infty}\prod_{m=1}^{n}\mathcal{M}f_{\mathcal{D}_{p(m)}}\left(1-\frac{2\pi i\ell}{\log B}\right)\right|\ \leq\ \sum_{\begin{subarray}{c}\ell=-\infty\\ \ell\neq 0\end{subarray}}^{\infty}\prod_{\begin{subarray}{c}m=1\\ p(m)=j\end{subarray}}^{n}\left|\mathcal{M}f_{j}\left(1-\frac{2\pi i\ell}{\log B}\right)\right|\] \[=\ \sum_{\begin{subarray}{c}\ell=-\infty\\ \ell\neq 0\end{subarray}}^{\infty}\left|\mathcal{M}f_{j}\left(1-\frac{2\pi i\ell}{\log B}\right)\right|^{n_{j}}\ \to\ 0 \tag{2.2.12}\]
as \(n\to\infty\), where \(n_{j}=\#\{m\leq n:p(m)=j\}\to\infty\) because \(|p^{-1}(\{j\})|=\infty\), and where we used that every omitted factor has modulus at most \(1\) by (2.2.3). This proves (2.1.2). To see that the convergence is uniform over all \(p\), note that by the pigeonhole principle, for any choice of \(p\) and any positive integer \(N\) there exists a \(j\in I\) such that \(|p^{-1}(\{j\})\cap\{1,\ldots,N\}|\ \geq\ N/|I|\). Therefore, for any \(\epsilon>0\), it suffices to take the maximum over \(j\in I\) of the \(N\)'s needed so that
\[\sum_{\begin{subarray}{c}\ell=-\infty\\ \ell\neq 0\end{subarray}}^{\infty}\left|\mathcal{M}f_{j}\left(1-\frac{2\pi i\ell}{\log B}\right)\right|^{N/|I|}\ <\ \epsilon. \tag{2.2.13}\]
## 3. Continuous fragmentation with random number of parts
In this section, we present proofs of the results on the generalizations of the basic model by allowing the number of parts to be randomly chosen. For both Theorem 1.5 and Theorem 1.6, we show that
\[\lim_{N\to\infty}\mathbb{E}[P_{N}(s)]\ =\ \log_{B}(s) \tag{3.0.1}\]
and
\[\lim_{N\to\infty}\mathrm{Var}[P_{N}(s)]\ =\ 0 \tag{3.0.2}\]
where \(N\) is the number of levels.
### Proof of Theorem 1.5
We first assume that \(\mathbb{P}(k=1)=0\), i.e., that \(k\) is chosen from \(\{2,\ldots,m\}\). The problem is then handled similarly to Section 2 of [B+]. Let \(Y_{i}\) be the number of pieces that each stick is cut into at level \(i\). For simplicity, we assume \(i\) always runs through \(\{1,\ldots,N\}\) and omit the range in the equations below. Note that
\[\mathbb{E}(P_{N}(s))\ =\ \sum_{(y_{i})\in\{2,\ldots,m\}^{N}}\mathbb{E}(P_{N}(s)|Y_ {i}=y_{i}\ \forall i)\mathbb{P}(Y_{i}=y_{i}\ \forall i), \tag{3.1.1}\]
so it suffices to show that
\[\mathbb{E}(P_{N}(s)|Y_{i}=y_{i}\ \forall i)\ =\ \mathbb{E}(\varphi_{s}(p_{1}p_{2 }\cdots p_{N}))\ \to\ \log_{B}(s) \tag{3.1.2}\]
at a uniform rate for any choice of splittings \((Y_{i})=(y_{i})\) as \(N\to\infty\), where \(p_{i}\sim\mathcal{D}_{i}\) with \(\mathcal{D}_{i}\in\mathcal{F}_{y_{i}}\). This follows immediately from Theorem 2.2, given our assumption that the distributions in \(\mathcal{F}_{k}\) are _good_ and \(|\mathcal{F}_{k}|<\infty\) for all \(k\). The variance can also be bounded independently of the values of the variables \(Y_{i}\). Let \(M=y_{1}y_{2}\cdots y_{N}\) be the number of sticks after \(N\) levels. We have
\[P_{N}(s)\ =\ \frac{1}{M}\sum_{i=1}^{M}\varphi_{s}(X_{i}) \tag{3.1.3}\]
so that
\[\mathrm{Var}(P_{N}(s)) =\ \mathbb{E}[P_{N}(s)^{2}]-\mathbb{E}[P_{N}(s)]^{2}\] \[=\ \mathbb{E}\left[\frac{1}{M^{2}}\sum_{i=1}^{M}\varphi_{s}(X_{i})^{2}+\frac{1}{M^{2}}\sum_{\begin{subarray}{c}i\neq j\\ 1\leq i,j\leq M\end{subarray}}\varphi_{s}(X_{i})\varphi_{s}(X_{j})\right]-\mathbb{E}[P_{N}(s)]^{2}\] \[=\ \frac{1}{M}\mathbb{E}[P_{N}(s)]+\frac{1}{M^{2}}\sum_{\begin{subarray}{c}i\neq j\\ 1\leq i,j\leq M\end{subarray}}\mathbb{E}[\varphi_{s}(X_{i})\varphi_{s}(X_{j})]-\mathbb{E}[P_{N}(s)]^{2}. \tag{3.1.4}\]
where we used that \(\varphi_{s}(X_{i})^{2}=\varphi_{s}(X_{i})\). It therefore suffices to show that
\[\frac{1}{M^{2}}\sum_{\begin{subarray}{c}i\neq j\\ 1\leq i,j\leq M\end{subarray}}\mathbb{E}[\varphi_{s}(X_{i})\varphi_{s}(X_{j}) ]\ \to\ \log_{B}^{2}(s) \tag{3.1.5}\]
as \(N\to\infty\) (since the first term in (3.1.4) goes to \(0\) and \(\mathbb{E}[P_{N}(s)]^{2}\to\log_{B}^{2}(s)\) by the above). By the same reasoning as in the proof of [B+, Theorem 1.5], if
\[X_{i}\ =\ Lp_{1}p_{2}\ldots p_{N-n-1}p_{N-n}\ldots p_{N},\]
\[X_{j}\ =\ Lp_{1}p_{2}\ldots p_{N-n-1}q_{N-n}\ldots q_{N} \tag{3.1.6}\]
where exactly the last \(n\) breaks are independent, then
\[\left|\mathbb{E}[\varphi_{s}(X_{i})\varphi_{s}(X_{j})]-\log_{B}^{2}(s)\right| \ \leq\ f(n) \tag{3.1.7}\]
where \(f(n)\) depends only on \(n\) and goes to \(0\) as \(n\to\infty\). For a fixed \(X_{i}\), there exist at most \(\prod_{i=N-n}^{N}y_{i}\) (in fact, precisely \((y_{N-n}-1)\prod_{i=N-n+1}^{N}y_{i}\)) sticks \(X_{j}\) that involve \(n\) independent breaks from \(X_{i}\) (i.e., that have a shared ancestor \(n+1\) breaks ago). Thus,
\[\left|\frac{1}{M^{2}}\sum_{\begin{subarray}{c}i\neq j\\ 1\leq i,j\leq M\end{subarray}}\mathbb{E}[\varphi_{s}(X_{i})\varphi_{s}(X_{j})]- \log_{B}^{2}(s)\right| \leq\ \frac{1}{M^{2}}\sum_{i=1}^{M}\sum_{n=0}^{N-1}\prod_{i=N-n}^{N}y_{i}f(n)\] \[=\ \sum_{n=0}^{N-1}f(n)\prod_{i=1}^{N-n-1}\frac{1}{y_{i}}\] \[\leq\ \sum_{n=0}^{N-1}2^{n+1-N}f(n). \tag{3.1.8}\]
Since \(\sum_{n=0}^{N-1}2^{n+1-N}\leq 2\), the last sum is a weighted average of the \(f(n)\)'s in which the terms with larger \(n\) have higher weight. Therefore the sum goes to \(0\) as \(N\to\infty\), since \(f(n)\to 0\) as \(n\to\infty\).
To cover the case when \(Y_{i}=1\) is allowed with positive probability strictly less than \(1\), simply note that almost surely the number of levels in which the stick breaks into more than one piece increases without bound as \(N\to\infty\).
### Proof of Theorem 1.6
The proof of \(\lim_{N\to\infty}\mathbb{E}(P_{N}(s))=\log_{B}(s)\) is exactly the same as in the previous case. To show \(\lim_{N\to\infty}\mathrm{Var}(P_{N}(s))=0\), it suffices to show that
\[\frac{1}{M^{2}}\sum_{1\leq i,j\leq M}\mathbb{E}[\varphi_{s}(X_{i})\varphi_{s} (X_{j})]-\log_{B}^{2}(s)\ =\ \frac{1}{M^{2}}\sum_{1\leq i,j\leq M}\left(\mathbb{E}[\varphi_{s}(X_{i}) \varphi_{s}(X_{j})]-\log_{B}^{2}(s)\right) \tag{3.2.1}\]
goes to zero. Recall that if \(X_{i},X_{j}\) have exactly \(n\) independent breaks, then
\[\left|\mathbb{E}[\varphi_{s}(X_{i})\varphi_{s}(X_{j})]-\log_{B}^{2}(s)\right| \ \leq\ f(n) \tag{3.2.2}\]
where \(f(n)\) depends only on \(n\) and goes to \(0\) as \(n\to\infty\). So we have
\[\frac{1}{M^{2}}\sum_{1\leq i,j\leq M}\left(\mathbb{E}[\varphi_{s} (X_{i})\varphi_{s}(X_{j})]-\log_{B}^{2}(s)\right)\] \[\leq\ \sum_{n=0}^{N-1}f(n)\cdot\frac{\#\{(i,j):(X_{i},X_{j})\text{ have exactly $n$ independent breaks}\}}{M^{2}}. \tag{3.2.3}\]
Let
\[A_{n}\ =\ \#\{(i,j):(X_{i},X_{j})\text{ have exactly $n$ independent breaks}\}. \tag{3.2.4}\]
Then
\[\sum_{n=0}^{N-1}A_{n}\ =\ M^{2}. \tag{3.2.5}\]
So the RHS of (3.2.3) is a weighted average of the \(f(n)\)'s. We show that with probability tending to \(1\),
\[\frac{1}{M^{2}}\sum_{n=0}^{\log\log N}A_{n}\ \to\ 0 \tag{3.2.6}\]
as \(N\to\infty\). Intuitively, we would expect the number of pairs of \((X_{i},X_{j})\) having at most \(\log\log N\) independent breaks (i.e., having high dependence) to only make up a very small proportion of the \(M^{2}\) pairs in total. If so,
\[\sum_{n=0}^{\lfloor\log\log N\rfloor}f(n)\frac{A_{n}}{M^{2}}\ \to\ 0 \tag{3.2.7}\]
as \(N\to\infty\) since \(f(n)\) is bounded. On the other hand,
\[\sum_{n=\log\log N+1}^{N}f(n)\frac{A_{n}}{M^{2}}\ \leq\ f(\log\log N+1)\ \to\ 0 \tag{3.2.8}\]
as \(N\to\infty\), therefore we would have that the RHS of (3.2.3) tends to \(0\) as \(N\to\infty\), as desired.
Now we show (3.2.6). Let \(\alpha_{1},\alpha_{2},\ldots,\alpha_{r}\) be the sticks at the \((N-\lfloor\log\log N\rfloor)\)-th level. Let \(a_{i}\) be the number of sticks that come from \(\alpha_{i}\) at the end of all \(N\) levels. Then, any two \((X_{i},X_{j})\) having at most \(\log\log N\) independent breaks share some \(\alpha_{l}\) as an ancestor. Thus, the number of such pairs is
\[a_{1}^{2}+a_{2}^{2}+\cdots+a_{r}^{2}. \tag{3.2.9}\]
Let \(Z_{1}:=a_{1}^{2}+a_{2}^{2}+\cdots+a_{r}^{2}\) and \(Z_{2}:=(a_{1}+a_{2}+\cdots+a_{r})^{2}=M^{2}\). Then
\[\frac{1}{M^{2}}\sum_{n=0}^{\lfloor\log\log N\rfloor}A_{n}\ =\ \frac{a_{1}^{2}+a_{2}^{2}+\cdots+a_{r}^{2}}{M^{2}}\ =\ \frac{Z_{1}}{Z_{2}}. \tag{3.2.10}\]
We claim that \(r>\sqrt{N}/2\) with high probability. Write \(p_{1}=\mathbb{P}(X=1)<1\) for \(X\sim G\). Consider a sequence of \(\lceil\sqrt{N}\rceil\) levels and let the number of sticks in the highest level be \(\ell\). The probability that the number of sticks does not increase at all throughout these levels, i.e., that all of them have \(\ell\) sticks, is at most \(p_{1}^{\ell\sqrt{N}}\). Thus, we can look at the blocks of \(\lceil\sqrt{N}\rceil\) levels and deduce that the probability of increasing the number of sticks at least once in every block is at least
\[\prod_{\ell=1}^{\sqrt{N}}\Big{(}1-p_{1}^{\ell\sqrt{N}}\Big{)}\ \geq\ 1-\sum_{\ell=1}^{\infty}p_{1}^{\ell\sqrt{N}}\ =\ 1-\frac{p_{1}^{\sqrt{N}}}{1-p_{1}^{\sqrt{N}}}. \tag{3.2.11}\]
This probability thus approaches \(1\) as \(N\to\infty\). This shows that with probability approaching \(1\), the total number of sticks increases by at least \(1\) within each block, so there will be more than \(\sqrt{N}/2\) sticks at the \((N-\lfloor\log\log N\rfloor)\)-th level.
Now, fix \(r\), i.e., condition on the value of \(r\) and assume \(r>\sqrt{N}/2\). The \(a_{i}\) are independent and identically distributed. Let \(\mu_{j}=\mathbb{E}[a_{i}^{j}]\) and \(\sigma^{2}=\operatorname{Var}[a_{i}]\). We have
\[\mathbb{E}[Z_{1}]\ =\ r\mu_{2},\quad\operatorname{Var}[Z_{1}]\ =\ r\operatorname{Var}[a_{i}^{2}]\ =\ r\mu_{4}-r\mu_{2}^{2} \tag{3.2.12}\]
and
\[\mathbb{E}[Z_{2}] =\ r^{2}\mu^{2}+r\sigma^{2}\ =\ r\mu_{2}+r(r-1)\mu_{1}^{2}, \tag{3.2.13}\] \[\operatorname{Var}[Z_{2}] =\ \mathbb{E}[(a_{1}+a_{2}+\cdots+a_{r})^{4}]-\mathbb{E}[(a_{1}+a_{2} +\cdots+a_{r})^{2}]^{2}\] \[=\ r\mu_{4}+4r(r-1)\mu_{1}\mu_{3}+4r(r-1)\mu_{2}^{2}+6r(r-1)(r-2) \mu_{1}^{2}\mu_{2}\] \[\ \ \ \ +r(r-1)(r-2)(r-3)\mu_{1}^{4}-(r\mu_{2}+r(r-1)\mu_{1}^{2})^{2}\] \[=\ \mu_{1}^{4}r^{4}+(6\mu_{1}^{2}\mu_{2}-6\mu_{1}^{4})r^{3}+(4\mu_{ 1}\mu_{3}+4\mu_{2}^{2}-18\mu_{1}^{2}\mu_{2})r^{2}\] \[\ \ \ \ +(\mu_{4}-4\mu_{1}\mu_{3}-4\mu_{2}^{2}+12\mu_{1}^{2}\mu_{2}-6 \mu_{1}^{4})r\] \[\ \ \ -[\mu_{1}^{4}r^{4}+(\mu_{1}^{2}\mu_{2}-\mu_{1}^{4})r^{3}+(\mu_{ 2}^{2}-2\mu_{1}^{2}\mu_{2}+\mu_{1}^{4})r^{2}]\] \[\leq\ 5\mu_{1}^{2}\mu_{2}r^{3}+(4\mu_{1}\mu_{3}+3\mu_{2}^{2})r^{2}+( \mu_{4}+12\mu_{1}^{2}\mu_{2})r. \tag{3.2.14}\]
Now, we have that
\[\mu_{j}\ \leq\ \max(a_{i})^{j}\ \leq\ m^{j\log\log N}\ =\ (\log N)^{j\log m} \tag{3.2.15}\]
so that
\[\text{Var}[Z_{2}]\ \leq\ 5(\log N)^{C}r^{3}+7(\log N)^{C}r^{2}+13(\log N)^{C}r\ \leq\ 25(\log N)^{C}r^{3} \tag{3.2.16}\]
where \(C=4\log m\). Moreover, we may use (3.2.15) to obtain
\[\text{Var}[Z_{1}]\ \leq\ r\mu_{4}\ \leq\ (\log N)^{C}r. \tag{3.2.17}\]
We also have the following bounds (by trivially lower bounding moments by \(1\)).
\[\mathbb{E}[Z_{1}]\ \geq\ r,\quad\mathbb{E}[Z_{2}]\ \geq\ r^{2}. \tag{3.2.18}\]
Now apply Chebyshev's inequality to both \(Z_{1}\) and \(Z_{2}\) to get
\[\mathbb{P}(Z_{1}>\frac{3}{2}\mathbb{E}[Z_{1}])\ \leq\ \frac{\text{Var}[Z_{1}]}{ \frac{1}{4}\mathbb{E}[Z_{1}]^{2}}\ \leq\ \frac{4(\log N)^{C}r}{r^{2}\mu_{2}^{2}}\ \leq\ \frac{4(\log N)^{C}}{r} \tag{3.2.19}\]
and
\[\mathbb{P}(Z_{2}<\frac{1}{2}\mathbb{E}[Z_{2}])\ \leq\ \frac{\text{Var}[Z_{2}]}{ \frac{1}{4}\mathbb{E}[Z_{2}]^{2}}\ \leq\ \frac{100(\log N)^{C}r^{3}}{(r\mu_{2}+r(r-1)\mu_{1}^{2})^{2}}\ \leq\ \frac{100(\log N)^{C}r^{3}}{r^{4}}\ \leq\ \frac{100(\log N)^{C}}{r}. \tag{3.2.20}\]
Note that in simplifying the above two expressions we used the trivial bounds \(\mu_{j}\geq 1\) for the denominators. Recall that with probability going to \(1\), \(r\geq\sqrt{N}/2\). Under this assumption, as \(N\to\infty\),
\[\mathbb{P}\left(\frac{Z_{1}}{Z_{2}}<\frac{\frac{3}{2}\mathbb{E}[Z_ {1}]}{\frac{1}{2}\mathbb{E}[Z_{2}]}\right) \ \geq\ 1-\mathbb{P}\left(Z_{1}>\frac{3}{2}\mathbb{E}[Z_{1}] \right)-\mathbb{P}\left(Z_{2}<\frac{1}{2}\mathbb{E}[Z_{2}]\right)\] \[\ =\ 1-\frac{4(\log N)^{C}}{\sqrt{N}/2}-\frac{100(\log N)^{C}}{ \sqrt{N}/2}\ \to\ 1. \tag{3.2.21}\]
Since
\[\frac{\frac{3}{2}\mathbb{E}[Z_{1}]}{\frac{1}{2}\mathbb{E}[Z_{2}]}\ =\ \frac{3r\mu_{2}}{r\mu_{2}+r(r-1)\mu_{1}^{2}}\ \leq\ \frac{3(\log N)^{2\log m}}{r}\ \leq\ \frac{3(\log N)^{2\log m}}{\sqrt{N}/2}\ \to\ 0 \tag{3.2.22}\]
as \(N\to\infty\), we have \(Z_{1}/Z_{2}\ \to\ 0\) as \(N\to\infty\) with probability going to \(1\). By (3.2.10), this implies (3.2.6), so we are done.
## 4. Continuous fragmentation with probabilistic stopping
In this section, we consider the continuous breaking process in which the splitting number is fixed, but each new stick has a certain probability of becoming inactive. This is inspired by the conjecture on the discrete breaking process stopping at certain residue classes.
For simplicity, in the continuous breaking problem we always assume the initial length is \(1\), since scaling of stick lengths does not affect Benfordness. We can first consider the following simpler scenario. It can be seen as a generalization of the Restricted \(1\)-Dimensional Decomposition Model studied in [B+, Theorem 1.9] (where they have shown the case \(k=2\)). The proof is analogous.
**Theorem 4.1**.: _Start from a stick of length \(L=1\). Fix a positive integer \(k\geq 2\). From a good distribution on \((0,1)\), sample \(k-1\) values as cut points to break the stick into \(k\) pieces. Then have exactly one stick be alive (i.e., all the remaining \(k-1\) sticks become dead). Assume nothing about which stick is chosen to be alive. Repeat this \(N\) times. Then as \(N\to\infty\), the collection of resulting dead stick lengths converges to strong Benford behavior._
Proof.: Note that at the end of \(N\) levels there will be \((k-1)N+1\) sticks in total. Since \(\frac{(k-1)\log(N)}{(k-1)N+1}\to 0\) as \(N\to\infty\), we may remove the sticks that become dead in the first \(\log(N)\) levels. The remaining sticks \(X_{i}\) satisfy, uniformly,
\[\mathbb{E}[\varphi_{s}(X_{i})]\ \to\ \log_{B}(s) \tag{4.0.1}\]
as \(N\to\infty\) by Theorem 2.2. It follows that
\[\mathbb{E}[P_{N}(s)]\ \to\ \log_{B}(s) \tag{4.0.2}\]
as \(N\to\infty\). For the variance, we also adopt a similar strategy. Label the sticks that become dead at level \(n\) as \(X_{(n-1)(k-1)+i}\) for \(1\leq i\leq k-1\). We say a stick \(X_{i}\) belongs to level \(n\) if \(n=\lceil\frac{i}{k-1}\rceil\), i.e., the stick becomes dead at level \(n\). Consider the set of pairs of indices
\[\mathcal{A}\ :=\ \{(i,j):(k-1)\log(N)+1\leq i\leq j-(k-1)(\log(N)+1)\leq N-(k-1 )(\log(N)+1)\}. \tag{4.0.3}\]
In other words, this is the collection of pairs that both become inactive after at least \(\log(N)\) levels and are at least \(\log(N)\) levels apart. The same reasoning as in the proof of [B+, Theorem 1.9] shows that for all \((i,j)\in\mathcal{A}\), we have
\[\mathbb{E}[\varphi_{s}(X_{i})\varphi_{s}(X_{j})]\ \to\ \log_{B}^{2}(s) \tag{4.0.4}\]
as \(N\to\infty\) uniformly. Explicitly, we have that
\[X_{i} = \alpha_{1}\cdots\alpha_{t-1}\alpha_{t}\cdots\alpha_{u} \tag{4.0.5}\] \[X_{j} = \alpha_{1}\cdots\alpha_{t-1}\beta_{t}\cdots\beta_{v} \tag{4.0.6}\]
where \(v\geq t+\log(N)\) and \(\alpha_{t+1},\ldots,\alpha_{u},\beta_{t+1},\ldots,\beta_{v}\) are independent. Let \(c=\alpha_{1}\ldots\alpha_{t-1}\beta_{t}\). We have that
\[\mathbb{E}[\varphi_{s}(X_{i})\varphi_{s}(X_{j})|\alpha_{1},\ldots,\alpha_{u}, \beta_{t}]\ =\ \varphi_{s}(X_{i})\varphi_{s}(c\beta_{t+1}\ldots\beta_{v}) \tag{4.0.7}\]
which approaches \(\varphi_{s}(X_{i})\log_{B}(s)\) uniformly, with error a function of \(N\) only. Since \(\mathbb{E}[\varphi_{s}(X_{i})]\to\log_{B}(s)\) uniformly as well, (4.0.4) follows. Moreover, it is easy to check that
\[|\mathcal{A}|=\frac{N^{2}}{2}+O(N\log(N)). \tag{4.0.8}\]
Thus \(\operatorname{Var}(P_{N}(s))\to 0\) as \(N\to\infty\), as desired.
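A short simulation sketch of the process in Theorem 4.1 (illustration only; here the surviving piece is chosen uniformly at random, which is one admissible choice since the theorem assumes nothing about it):

```python
# Illustrative only: at each level the single living stick is cut at k-1
# uniform points, one piece (chosen uniformly here) stays alive, the rest die.
import math
import random
from collections import Counter

random.seed(6)
k, N, trials = 3, 60, 2000           # assumed demo parameters
dead = []
for _ in range(trials):
    live = 1.0
    for _ in range(N):
        points = [0.0] + sorted(random.random() for _ in range(k - 1)) + [1.0]
        pieces = [live * (b - a) for a, b in zip(points, points[1:])]
        keep = random.randrange(k)    # index of the piece that stays alive
        dead.extend(p for i, p in enumerate(pieces) if i != keep)
        live = pieces[keep]
    dead.append(live)                 # include the final living piece

counts = Counter(int(x / 10 ** math.floor(math.log10(x))) for x in dead)
for d in range(1, 10):
    print(d, round(counts[d] / len(dead), 3), round(math.log10(1 + 1 / d), 3))
```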
A special case of the process in Theorem 1.7 is the following.
**Theorem 4.2**.: _Start from \(R\) sticks, each of length \(L=1\). Fix a positive integer \(k\geq 2\). Initially all sticks are alive and each breaks into \(k\) pieces independently, resulting in \(kR\) new sticks. Then randomly choose \(R\) out of these new sticks to continue, while the remaining \(kR-R\) die. Repeat this for \(N\) levels. Then as \(N\to\infty\), the collection of resulting stick lengths converges to strong Benford behavior._
Proof.: This follows from essentially the same argument as in Theorem 4.1. For the expectation, again notice that out of all \((k-1)RN+R\) final pieces, at most \((k-1)R\log(N)\) pieces have lengths given by a product of fewer than \(\log(N)\) independent ratios. For the remaining ones, we still have \(\mathbb{E}[\varphi_{s}(X_{i})]\to\log_{B}(s)\) uniformly in \(i\) as \(N\to\infty\), and \(\mathbb{E}[P_{N}(s)]\to\log_{B}(s)\) follows since \(\frac{(k-1)R\log(N)}{(k-1)RN+R}\to 0\). For the variance, note that given \(X_{i}\) and \(X_{j}\), \(i<j\), belonging to levels \(n_{i}\) and \(n_{j}\) respectively, the only difference between the current scenario and the one in Theorem 4.1 is that now they could come from different parents at level \(n_{i}\). Namely, the number of independent levels they have could now be larger than \(n_{j}-n_{i}\). This makes the condition \(\mathbb{E}[\varphi_{s}(X_{i})\varphi_{s}(X_{j})]\to\log_{B}^{2}(s)\) even easier to satisfy. Therefore, using the same argument of throwing away highly dependent pairs, we can easily generalize the proof to the current case.
### Proof of Theorem 1.7
We first show that the process starting from a single stick terminates in finitely many levels.
Proof of finite termination.: Let \(p\) be the probability that it does not terminate. In such a case, one of the live children initiates a breaking that does not terminate. Thus, we have, if \(A\) is the number of live children of the original stick,
\[p = \sum_{a=1}^{k}\mathbb{P}(A=a)\mathbb{P}(\text{at least one of the $a$ live children initiates infinite breaking}) \tag{4.1.1}\] \[= \sum_{a=1}^{k}\binom{k}{a}\frac{1}{k^{a}}\left(1-\frac{1}{k} \right)^{k-a}(1-(1-p)^{a})\] \[= \sum_{a=1}^{k}\binom{k}{a}\frac{1}{k^{a}}\left(1-\frac{1}{k} \right)^{k-a}-\sum_{a=1}^{k}\binom{k}{a}\left(\frac{1-p}{k}\right)^{a}\left(1- \frac{1}{k}\right)^{k-a}\] \[= \left[1-\left(1-\frac{1}{k}\right)^{k}\right]-\left[\left(1- \frac{p}{k}\right)^{k}-\left(1-\frac{1}{k}\right)^{k}\right]\] \[= 1-\left(1-\frac{p}{k}\right)^{k}.\]
Now, we have that, by Bernoulli's inequality,
\[\left(1-\frac{p}{k}\right)^{k}\ \geq\ 1-p \tag{4.1.2}\]
with equality if and only if \(p=0\). But we do have equality, so \(p=0\), as desired.
Now, consider the process where all \(R\) sticks are being broken simultaneously. The above result implies that for any given \(R\), this process also ends in finitely many levels with probability \(1\). Now we show the second part of Theorem 1.7.
Let \(n_{i}\) be the number of live sticks present at the \(i^{th}\) level so that \(n_{0}=R\). Then, we have the following:
**Lemma 4.3**.: _For \(i\geq 0\),_
\[\mathbb{P}(|n_{i}-R|\leq t)\ \geq\ 1-\frac{2i^{3}R(k-1)}{t^{2}k} \tag{4.1.3}\]
_if \(t<R\)._
Proof.: The result is trivial for \(i=0\). We proceed with induction on \(i\). Assume the result for \(i\); we show it for \(i+1\). Fix \(n_{i}\). We have that
\[\mathbb{P}(|n_{i+1}-R|\leq t) \ \geq\ \mathbb{P}\left(|n_{i}-R|\leq\frac{i}{i+1}t,\ \ |n_{i+1}-n_{i}|\leq\frac{1}{i+1}t\right) \tag{4.1.4}\] \[\ \geq\ 1-\mathbb{P}\left(|n_{i}-R|>\frac{i}{i+1}t\right)- \mathbb{P}\left(|n_{i+1}-n_{i}|>\frac{1}{i+1}t,\ \ |n_{i}-R|\leq\frac{i}{i+1}t\right)\] \[\ \geq\ 1-\mathbb{P}\left(|n_{i}-R|>\frac{i}{i+1}t\right)- \mathbb{P}\left(|n_{i+1}-n_{i}|>\frac{1}{i+1}t,\ \ n_{i}<2R\right)\] \[\ \geq\ 1-\mathbb{P}\left(|n_{i}-R|>\frac{i}{i+1}t\right)- \mathbb{P}\left(|n_{i+1}-n_{i}|>\frac{1}{i+1}t\ \Big{|}\ n_{i}<2R\right).\]
Now, note that \(n_{i+1}\) is binomially distributed with parameters \(n_{i}k\) and \(1/k\). Thus, conditioning on \(n_{i}\), it has expectation \(n_{i}\) and variance \(n_{i}(1-1/k)\). So, by Chebyshev's inequality,
\[\mathbb{P}\left(|n_{i+1}-n_{i}|>\frac{1}{i+1}t\ \Big{|}\ n_{i}\right)\ <\ \frac{n_{i}(1-1/k)}{\frac{1}{(i+1)^{2}}t^{2}}\ \leq\ \frac{2(i+1)^{2}R(k-1)}{t^{2}k} \tag{4.1.5}\]
and we have that, from (4.1.4) and the inductive hypothesis,
\[\mathbb{P}(|n_{i+1}-R|\leq t) \ \geq\ 1-\frac{2i^{3}R(k-1)}{\frac{i^{2}}{(i+1)^{2}}t^{2}k}- \frac{2(i+1)^{2}R(k-1)}{t^{2}k}\] \[\ \geq\ 1-\frac{2(i+1)^{3}R(k-1)}{t^{2}k}. \tag{4.1.6}\]
\(\Box\)
For any \(R\) and \(N\), define
\[P_{R}(s)\ :=\ \frac{\sum_{i}\varphi_{s}(X_{i})}{\#\{X_{i}\}} \tag{4.1.7}\]
where the sum runs over the set of resulting sticks in a process starting with \(R\) sticks (which is finite with probability 1). We show \(\mathbb{E}[P_{R}(s)]\to\log_{B}(s)\) and \(\mathrm{Var}(P_{R}(s))\to 0\) as \(R\to\infty\).
For the expectation, we first show the existence of a function \(h(R)\to\infty\) as \(R\to\infty\) such that the average of \(\mathbb{E}[\varphi_{s}(X_{i})]\) for sticks \(X_{i}\) that die within the first \(h(R)\) levels goes to \(\log_{B}(s)\) as \(R\to\infty\). Define
\[P_{R}^{\prime}(s)\ :=\ \frac{\sum_{X_{i}\ \mathrm{in\ first}\ h(R)\ \mathrm{levels}}\varphi_{s}(X_{i})}{\#\{X_{i}|X_{i}\ \mathrm{in\ first}\ h(R)\ \mathrm{levels}\}}. \tag{4.1.8}\]
For \(X_{i}\) belonging to level \(n\),
\[|\mathbb{E}[\varphi_{s}(X_{i})]-\log_{B}(s)|\ \leq\ f(n) \tag{4.1.9}\]
where \(f\) satisfies \(f(n)\to 0\) as \(n\to\infty\) by Theorem 2.2. We now show that in each of the first \(h(R)\) levels, a roughly equal number of sticks become dead. We may take \(h(R)=R^{1/10}\) and \(t=R^{2/3}\) and apply Lemma 4.3. Then we obtain that when \(i\leq h(R)\),
\[\mathbb{P}(R-R^{2/3}<n_{i}<R+R^{2/3})\ \geq\ 1-\frac{2R^{3/10}R(k-1)}{R^{4/3}k} \ \geq\ 1-2R^{-1/30}\ \to\ 1 \tag{4.1.10}\]
as \(R\to\infty\). Let \(\overline{n_{i}}\) be the number of sticks that become inactive at level \(i\). Then we have for all \(i\leq h(R)\), with probability going to 1,
\[(k-1)R-(k+1)R^{2/3}\ <\ \overline{n_{i}}\ <\ (k-1)R+(k+1)R^{2/3}, \tag{4.1.11}\]
which implies, when \(R\) is sufficiently large,
\[\left(k-\frac{3}{2}\right)R\ <\ \overline{n_{i}}\ <\ \left(k-\frac{1}{2}\right)R. \tag{4.1.12}\]
Thus, conditioning on the above event,
\[|\mathbb{E}(P_{R}^{\prime}(s))-\log_{B}(s)| \leq\ \frac{1}{\sum_{i=1}^{h(R)}\overline{n_{i}}}\sum_{i=1}^{h(R)}f(i) \overline{n_{i}}\] \[\leq\ \frac{1}{(k-\frac{3}{2})Rh(R)}\sum_{i=1}^{h(R)}f(i)(k-\frac{1}{2})R\] \[\leq\ 3\frac{1}{h(R)}\sum_{i=1}^{h(R)}f(i)\ \to\ 0 \tag{4.1.13}\]
as \(R\to\infty\). This implies \(\mathbb{E}[P_{R}^{\prime}(s)]\to\log_{B}(s)\). Now, for the sticks that die after level \(h(R)\), simply note that
\[|\mathbb{E}[\varphi_{s}(X_{i})]-\log_{B}(s)|\ \leq\ f(n)\ \leq\ \sup_{n\geq h(R)}f(n) \tag{4.1.14}\]
which tends to \(0\) as \(R\to\infty\). \(P_{R}(s)\) is a weighted average of these \(\varphi_{s}(X_{i})\) and \(P_{R}^{\prime}(s)\), so \(|\mathbb{E}(P_{R}(s))-\log_{B}(s)|\to 0\), as desired.
Now we analyze the variance. For any pair of final sticks \((X_{i},X_{j})\), if both die after at least \(\log(R)\) levels and they die at least \(\log(\log(R))\) levels apart (i.e., they have enough independence), then we have
\[\mathbb{E}[\varphi_{s}(X_{i})\varphi_{s}(X_{j})]\ \to\ \log_{B}^{2}(s) \tag{4.1.15}\]
uniformly for all pairs satisfying these criteria by Theorem 2.2. Thus, it suffices to show that the proportion of pairs of the following two types among all pairs goes to \(0\) as \(R\to\infty\):
1. at least one of \(X_{i}\),\(X_{j}\) dies before \(\log(R)\) levels, or
2. \(X_{i}\), \(X_{j}\) have a common ancestor less than \(\log(\log(R))\) levels before they both die.
Let \(M\) be the total number of dead sticks ever.
To show (1), we first show that the number of sticks that die within the first \(\log(R)\) levels is small compared to \(M\) with probability going to \(1\). Keep our earlier choices of \(h(R)\) and \(t\). When \(R\) is sufficiently large, using the upper bound from (4.1.12), the number of sticks that die within the first \(\log(R)\) levels is bounded above by
\[\left(k-\frac{1}{2}\right)\log(R)R \tag{4.1.16}\]
with probability going to \(1\). Now again using (4.1.12), we can lower bound \(M\) by lower bounding the total number of sticks that die within the first \(h(R)\) levels. This gives
\[M\ \geq\ h(R)\left(k-\frac{3}{2}\right)R\ =\ \left(k-\frac{3}{2}\right)R^{11/10} \tag{4.1.17}\]
with probability going to \(1\). Since
\[\frac{(k-\frac{1}{2})\log(R)R}{(k-\frac{3}{2})R^{11/10}}\ \to\ 0 \tag{4.1.18}\]
as \(R\to\infty\), we have shown that, with probability going to \(1\), the proportion of sticks that die in the first \(\log(R)\) levels among all final sticks goes to \(0\) as \(R\to\infty\). This then implies that the pairs involving a stick of this type also make up a diminishing proportion of all pairs of final sticks as \(R\to\infty\).
Now we show (2), namely, that the number of pairs \(X_{i}\), \(X_{j}\) having a common ancestor at most \(\log(\log R)\) levels before they both die is \(o(M^{2})\) with high probability. Fix some \(X_{i}\). The number of sticks, dead or alive, that share the ancestor of \(X_{i}\) lying \(\alpha\) levels above it and that lie \(\beta\) levels below that ancestor is at most \(k^{\beta}\). Thus, the number of \(X_{j}\) that satisfy (2) when paired with \(X_{i}\) is bounded above by
\[\sum_{\alpha=1}^{\lfloor\log(\log R)\rfloor}\sum_{\beta=0}^{\lfloor\log(\log R )\rfloor}k^{\beta}\ \leq\ \log(\log R)\frac{k^{\log(\log R)}-1}{k-1}\ \leq\ \log(\log R)(\log R)^{\log k}. \tag{4.1.19}\]
Hence, the number of such pairs is bounded above by \(M(\log R)^{1+\log k}=o(M^{2})\) by (4.1.17).
### Proof of Theorem 1.8
Let \(A\) be some sufficiently large integer (its precise size will be determined later). There then exists some fixed \(j\) such that \(n_{j}>A\) with positive probability \(p^{*}\). Now, consider \(i\geq j\). Conditioning on \(n_{i}\), the variable \(n_{i+1}\) is binomially distributed with parameters \(n_{i}k\) and \(r\), so it has mean \(n_{i}rk\) and variance \(n_{i}kr(1-r)\). Thus, by Chebyshev's inequality, we have that
\[\mathbb{P}\left(n_{i+1}>n_{i}\left(1+\frac{rk-1}{2}\right)\right)\ \geq\ 1-\mathbb{P}\left(|n_{i+1}-n_{i}rk|\geq n_{i}\frac{rk-1}{2}\right)\ \geq\ 1-\frac{n_{i}kr(1-r)}{n_{i}^{2}\left(\frac{rk-1}{2}\right)^{2}}. \tag{4.2.1}\]
We can then let \(a=\frac{kr(1-r)}{A\left(\frac{rk-1}{2}\right)^{2}}\) and \(c=1+\frac{rk-1}{2}\). Then the above inequality can be written as
\[\mathbb{P}(n_{i+1}>cn_{i})\ \geq\ 1-\frac{aA}{n_{i}}. \tag{4.2.2}\]
It follows that
\[\mathbb{P}(n_{i+1}>Ac^{i-j+1}\ \big{|}\ n_{i}>Ac^{i-j}) \ \geq\ \mathbb{P}(n_{i+1}>cn_{i}\ \big{|}\ n_{i}>Ac^{i-j}) \tag{4.2.3}\] \[\ \geq\ \inf_{n_{i}>Ac^{i-j}}\left(1-\frac{aA}{n_{i}}\right)\ \geq\ 1-ac^{j-i}.\]
Hence, the probability that \(n_{i}>Ac^{i-j}\) for all \(i\geq j\) given that \(n_{j}>A\) is at least
\[p^{\prime}\ =\ (1-a)(1-ac^{-1})(1-ac^{-2})\cdots. \tag{4.2.4}\]
Now, since \(\lim_{x\to 0}\log(1-x)/x=-1\), we may set \(A\) large enough so that \(a\) is sufficiently small so that \(\log(1-ac^{t})>-2ac^{t}\) for \(t\leq 0\). We then have
\[\log(p^{\prime})\ =\ \sum_{t=0}^{\infty}\log(1-ac^{-t})\ >\ \sum_{t=0}^{\infty}-2ac^{-t}\ =\ -\frac{2ac}{c-1}. \tag{4.2.5}\]
In particular, \(p^{\prime}\geq e^{-2ac/(c-1)}>0\). Thus the probability that \(n_{i}>Ac^{i-j}\) for all \(i\geq j\) is at least \(p^{*}p^{\prime}\), which is positive. Hence, not only is the process infinite with positive probability, but the number of alive sticks at each level blows up with positive probability.
### Proof of Theorem 1.10
Without assuming independence on the alive/dead status of the sticks, we prove the following weaker version of Lemma 4.3.
**Lemma 4.4**.: _For \(i\geq 0\),_
\[\mathbb{P}(|n_{i}-R|\leq t)\ \geq\ 1-\frac{2i^{3}Rk^{2}}{t^{2}} \tag{4.3.1}\]
_if \(t<R\)._
Proof.: As in the proof of Lemma 4.3, we proceed by induction on \(i\), noting that the result is trivial for \(i=0\). By the same calculation, (4.1.4) holds. That is,
\[\mathbb{P}(|n_{i}-R|\leq t)\ \geq\ 1-\mathbb{P}\left(|n_{i}-R|>\frac{i}{i+1}t \right)-\mathbb{P}\left(|n_{i+1}-n_{i}|>\frac{1}{i+1}t\ \Big{|}\ n_{i}<2R\right). \tag{4.3.2}\]
We have that \(n_{i+1}\) is the sum of \(n_{i}\) independent random variables with mean \(1\) and variance bounded by \(k^{2}\). Thus, conditioning on \(n_{i}\), it has expectation \(n_{i}\) and variance at most \(n_{i}k^{2}\). Chebyshev's inequality implies
\[\mathbb{P}\left(|n_{i+1}-n_{i}|>\frac{1}{i+1}t\ \ \Big{|}\ \ n_{i}\right)\ <\ \frac{n_{i}k^{2}}{\frac{1}{(i+1)^{2}}t^{2}}\ \leq\ \frac{2(i+1)^{2}Rk^{2}}{t^{2}}. \tag{4.3.3}\]
We then have that
\[\mathbb{P}(|n_{i+1}-R|\leq t)\ \geq\ 1-\frac{2i^{3}Rk^{2}}{\frac{i^{2}}{(i+1)^{2}}t^{2}}-\frac{2(i+1)^{2}Rk^{2}}{t^{2}}\ \geq\ 1-\frac{2(i+1)^{3}Rk^{2}}{t^{2}} \tag{4.3.4}\]
which completes the induction.
Theorem 1.10 follows from essentially the same arguments as in the proof of Theorem 1.7, using Lemma 4.4 in place of Lemma 4.3. We highlight the necessary changes below.
For any \(R\) and \(N\), define
\[P_{R,N}(s)\ :=\ \frac{\sum_{i}\varphi_{s}(X_{i})}{\#\{X_{i}\}} \tag{4.3.5}\]
where the sum runs over the set of resulting sticks in the first \(N\) levels of a process starting with \(R\) sticks. We prove \(\mathbb{E}[P_{R,N}(s)]\to\log_{B}(s)\) and \(\mathrm{Var}(P_{R,N}(s))\to 0\) when \(N\geq\log(R)\) and \(R\to\infty\). Keep the choices of \(h(R)=R^{1/10}\) and \(t=R^{2/3}\) from the proof of Theorem 1.7. For sticks that die after \(h(R)\) levels, we know that
\[|\mathbb{E}[\varphi_{s}(X_{i})]-\log_{B}(s)|\ \leq\ f(h(R)) \tag{4.3.6}\]
where the right-hand-side goes to \(0\) in \(R\). Therefore it again suffices to estimate the errors
\[|\mathbb{E}[\varphi_{s}(X_{i})]-\log_{B}(s)| \tag{4.3.7}\]
for \(X_{i}\) that dies within the first \(h(R)\) levels. Now the exact same argument applies simply after replacing Lemma 4.3 with Lemma 4.4.
For variance, let \(M\) now denote the number of resulting sticks after \(N\) levels. By the same logic,
\[\mathbb{E}[\varphi_{s}(X_{i})\varphi_{s}(X_{j})]\ \to\ \log_{B}^{2}(s) \tag{4.3.8}\]
uniformly given that \(X_{i}\) and \(X_{j}\) have a most recent common ancestor more than \(\log(\log R)\) levels away from \(X_{i}\) and both \(X_{i},X_{j}\) die after at least \(\log(R)\) levels. It therefore suffices to show that such pairs \(X_{i},X_{j}\) make up a proportion of all pairs of dead sticks that tends to \(1\). This is done in the same way as in the proof of Theorem 1.7.
### Proof of Theorem 1.11
We now argue that a limiting distribution for the stick lengths exists. This implies that the mantissas also approach some limiting distribution.
**Lemma 4.5**.: _When \(r\leq 1/k\), the final collection of stick lengths converges to a unique limiting distribution as \(R\to\infty\)._
Proof.: Fix \(R\). Note that the distribution of the overall collection of sticks resulting from breaking \(R\) initial sticks is an average of the distributions of sticks resulting from each initial stick, weighted by the number of resulting sticks. For a single-stick breaking process, let \(p_{i}\) be the probability that it ends with \(i\) sticks. Assume for the moment that, given that the process ends in \(i\) sticks, the collection of stick lengths follows a distribution \(\mathcal{D}_{i}\) with cumulative distribution function \(F_{\mathcal{D}_{i}}\) that is continuous and compactly supported; namely, a random variable representing the length of a randomly chosen stick among the \(i\) ending sticks follows \(\mathcal{D}_{i}\). Then the weighted average has cumulative distribution function
\[F\ =\ \frac{\sum_{i=1}^{\infty}ip_{i}F_{\mathcal{D}_{i}}}{\sum_{i=1}^{\infty}ip_ {i}}.\]
This sum converges because \(\sum_{i=1}^{\infty}ip_{i}=\mathbb{E}[M_{1}]\), which is finite by Lemma 4.6. It now suffices to justify our claim about \(\mathcal{D}_{i}\). Fix \(i\). There are a finite number of ways in which sticks die that result in a collection of \(i\) sticks when the process ends. For each given configuration (i.e., sequence of dyings of the sticks), the probability density function of the length of a particular final stick is a certain integral of the product of the density functions of the corresponding components of the break-point distributions of its ancestors. Averaging this over all configurations resulting in \(i\) sticks, by symmetry every stick ends up having the same distribution, so \(\mathcal{D}_{i}\) is well-defined. Now take the limit as \(R\to\infty\). By _Borel's law of large numbers_2, for each \(i\) (and hence for any finite collection of \(i\)'s simultaneously), when \(R\) is large enough, with probability \(1\), the proportion of trials ending in \(i\) sticks is close to \(p_{i}\); moreover, when \(R\) is large enough, the empirical cumulative distribution function of the lengths within each group of trials resulting in \(i\) sticks is pointwise close to \(F_{\mathcal{D}_{i}}\) with probability \(1\). Pointwise convergence of these cumulative distribution functions is exactly what is needed, and it follows by applying the same law of large numbers at each fixed length threshold.
Footnote 2: See for example https://en.wikipedia.org/wiki/Law_of_large_numbers#Borel's_law_of_large_numbers.
**Lemma 4.6**.: _We have that_
\[\mathbb{E}[M_{R}]\ =\ R\frac{k-kr}{1-kr} \tag{4.4.1}\]
Proof.: Let \(p_{i}\) be the probability that exactly \(i\) of the children of the first stick are alive. Then,
\[\mathbb{E}[M_{1}] =\ \sum_{i=0}^{k}p_{i}(\mathbb{E}[M_{i}]+k-i)\ =\ \sum_{i=0}^{k}p_{i}(i\mathbb{E}[M_{1}]+k-i) \tag{4.4.2}\] \[=\ k+(\mathbb{E}[M_{1}]-1)\sum_{i=0}^{k}ip_{i}\ =\ k-kr+kr\mathbb{E}[M_{1}]\]
so that
\[\mathbb{E}[M_{1}]\ =\ \frac{k-kr}{1-kr}. \tag{4.4.3}\]
Linearity of expectation then implies the result.
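A quick Monte Carlo sanity check of Lemma 4.6 for a single starting stick (illustration only; \(k=3\) and \(r=0.2<1/k\) are assumed demo values):

```python
# Illustrative only: compare the empirical mean number of dead sticks from a
# single starting stick with the formula (k - kr)/(1 - kr) of Lemma 4.6.
import random

def total_dead(k, r):
    alive, dead = 1, 0
    while alive:
        children = alive * k
        survivors = sum(1 for _ in range(children) if random.random() < r)
        dead += children - survivors
        alive = survivors
    return dead

random.seed(4)
k, r, trials = 3, 0.2, 50_000
estimate = sum(total_dead(k, r) for _ in range(trials)) / trials
print(estimate, (k - k * r) / (1 - k * r))   # both should be close to 6
```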
**Corollary 4.7**.: _We have that_
\[M_{R}\ \leq\ 2R\frac{k-kr}{1-kr} \tag{4.4.4}\]
_with probability at least \(1/2\)._
Proof.: This follows directly from Lemma 4.6 and Markov's inequality.
**Lemma 4.8**.: _Let \(a>1\) be some real number and let \(b_{a}\) be the expected number of child sticks that are of length at least \(L/a\) starting from a stick of length \(L\). With probability at least_
\[1-\frac{4k^{2}}{b_{a}^{2}(1-r)^{2}R} \tag{4.4.5}\]
_the number of dead sticks in the first level of length at least \(L/a\) is at least \(b_{a}(1-r)R/2\)._
Proof.: Denote this quantity by \(M_{L,a}^{R}(1)\). Then, the probability of a child with length at least \(L/a\) being dead is \(1-r\). Thus,
\[\mathbb{E}[M_{L,a}^{R}(1)]\ =\ Rb_{a}(1-r). \tag{4.4.6}\]
Note that \(M_{L,a}^{R}(1)\) is a sum of independent random variables distributed identically to \(M_{L,a}^{1}(1)\), and \(\mathrm{Var}[M_{L,a}^{1}(1)]\leq k^{2}\), so
\[\mathrm{Var}[M_{L,a}^{R}(1)]\ \leq\ Rk^{2}. \tag{4.4.7}\]
Chebyshev's inequality then implies
\[\mathbb{P}\left(M_{L,a}^{R}(1)\leq\frac{b_{a}}{2}(1-r)R\right)\ \leq\ \frac{Rk^{2}}{(Rb_{a}(1-r)/2)^{2}}\ =\ \frac{4k^{2}}{b_{a}^{2}(1-r)^{2}R}. \tag{4.4.8}\]
By Lemma 4.8 and Corollary 4.7, the proportion of sticks with length at least \(L/a\) is at least
\[\frac{b_{a}}{2}(1-r)R\left(2R\frac{k-kr}{1-kr}\right)^{-1}\ =\ \frac{b_{a}}{4}(1-r)\frac{1-kr}{k-kr}\ =\ \frac{b_{a}(1-kr)}{4k} \tag{4.4.9}\]
with probability at least
\[\frac{1}{2}-\frac{4k^{2}}{b_{a}^{2}(1-r)^{2}R}. \tag{4.4.10}\]
Now, choose \(a=k\). Then, we have that, with some probability approaching \(1/2\), at least a proportion of \(b_{k}(1-kr)/(4k)\) of the sticks are of length at least \(L/k\). Moreover, \(b_{k}>0\) so this proportion is positive. Let
\[B\ >\ k^{\frac{4k}{b_{k}(1-kr)}}. \tag{4.4.11}\]
Then, these sticks occupy an interval of length
\[\log_{B}(k)\ <\ \frac{b_{k}(1-kr)}{4k} \tag{4.4.12}\]
in the distribution of the normalized mantissas. It follows that the mantissas of the stick lengths do not almost surely approach a uniform distribution as \(R\to\infty\). That is, the stick lengths do not approach Benford behavior.
## 5. Discrete fragmentation with congruence stopping condition
In this section, we consider the setting of discrete stick fragmentation, i.e., all stick lengths involved are positive integers. We present the proofs of Theorem 1.12, where we start with a stick of odd integer length, break it in two each time, and only continue breaking the new stick of odd length until we reach a stick of length \(1\), and Theorem 1.13, where we start with a collection of \(R\) sticks and break each of them in two until a piece falls into certain residue classes or reaches length \(1\). When we take the appropriate limit in both of these scenarios, the ending collection of stick lengths becomes Benford. The overall strategy, adopted from Section 3 of [B+], is to approximate the discrete process with an appropriate continuous analogue and, by showing that the two processes are "close" in a precise sense, deduce the desired result from the corresponding continuous result.
### Proof of Theorem 1.12
In order to carry out the approximation strategy outlined above, we define a continuous process and a discrete process based on the same sequence of random ratios, so that the latter is the process we are interested in and the former is known to be Benford. Our goal is to show that their end results are "close" enough that the Benfordness of the former implies that of the latter. The two processes are defined as follows (a small coupled simulation of the two is sketched after the list). Let \((c_{i})_{i\geq 0}\) be a sequence of random numbers chosen from \((0,1)\) with respect to the uniform distribution.
* Let \(\mathcal{Q}\) denote the continuous process. In this process, we start with a stick of length \(h_{0}=L\). For each \(i\geq 1\), break off a fragment of length \(Y_{i}=c_{i-1}h_{i-1}\) at the \(i\)-th level, which becomes _dead_, namely, stops breaking further. The other stick of length \(h_{i}=h_{i-1}-Y_{i}=(1-c_{i-1})h_{i-1}\) stays _alive_ and continues to break in the next step.
* Let \(\mathcal{P}\) denote the discrete process. In this process, we start with a stick of length \(\ell_{0}=L\). For each \(i\geq 1\), break off a fragment of length \(X_{i}=2\lceil\frac{c_{i-1}(\ell_{i-1}-1)}{2}\rceil\) at the \(i\)-th level, which becomes _dead_. Note that by construction, \(X_{i}\) is an even integer taking values in \([2,\ell_{i-1}-1]\). The remaining stick of length \(\ell_{i}=\ell_{i-1}-X_{i}\) (which is always an odd integer) stays _alive_ and continues to break in the next step.
* Moreover, a stick in \(\mathcal{P}\) also becomes dead if it has length \(1\). In that case, the corresponding stick in \(\mathcal{Q}\) also dies.
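The following small coupled simulation (illustration only; the starting length is an arbitrary large odd integer) drives \(\mathcal{P}\) and \(\mathcal{Q}\) with the same ratios \(c_{i}\) and prints the first few dead-piece lengths \(X_{k}\) and \(Y_{k}\), which stay multiplicatively close while the remaining discrete length is large, in the spirit of Lemma 5.1 below.

```python
# Illustrative only: couple the discrete process P and the continuous
# process Q through the same uniform ratios c_i and compare X_k with Y_k.
import math
import random

random.seed(5)
L = 10**12 + 1                       # a large odd starting length (assumed)
ell, h = L, float(L)                 # living lengths in P and in Q
k = 0
while ell > 1:
    k += 1
    c = random.random()
    Y = c * h                                        # dead piece in Q
    X = max(2, 2 * math.ceil(c * (ell - 1) / 2))     # dead (even) piece in P
    h -= Y
    ell -= X
    if k <= 10:
        print(f"level {k:2d}:  X = {X:14d}   Y = {Y:16.2f}   X/Y = {X / Y:.6f}")
print("levels until the discrete stick reaches length 1:", k)
```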
We first derive the following lemma that bounds the length of a stick \(X_{k}\) in \(\mathcal{P}\) with the length of the corresponding stick \(Y_{k}\) in \(\mathcal{Q}\).
**Lemma 5.1**.: _Given that \(\ell_{k},h_{k}>2\), we have,_
\[Y_{k}\prod_{i=1}^{k-1}\left(1-\frac{2}{\ell_{i}-2}\right)-2\ \leq\ X_{k}\ \leq\ Y_{k}\prod_{i=1}^{k-1}\left(1+\frac{2}{\ell_{i}-2}\right)+2\prod_{i=1}^{ k-1}\frac{\ell_{i}}{\ell_{i}-4}. \tag{5.1.1}\]
Proof.: Let \(d_{k}\) be the rounded version of \(c_{k}\) used in \(\mathcal{P}\), i.e., \(d_{k}=X_{k+1}/\ell_{k}\). Then, note that
\[(\ell_{k}-1)c_{k}\ \leq\ X_{k+1}\ \leq\ (\ell_{k}-1)c_{k}+2 \tag{5.1.2}\]
so that
\[\left(1-\frac{1}{\ell_{k}}\right)c_{k}\ \leq\ d_{k}\ \leq\ \left(1-\frac{1}{\ell_{k}} \right)c_{k}+\frac{2}{\ell_{k}}\quad\Longrightarrow\quad|d_{k}-c_{k}|\ \leq\ \frac{2}{\ell_{k}}. \tag{5.1.3}\]
It follows that
\[\ell_{k}\ =\ L\prod_{i=0}^{k-1}(1-d_{i})\ \leq\ L\prod_{i=0}^{k-1}\left(1-c_{i}+ \frac{2}{\ell_{i}}\right)\ \leq\ L\prod_{i=0}^{k-1}(1-c_{i})\prod_{i=0}^{k-1}\left(1+\frac{2}{\ell_{i}(1-c _{i})}\right). \tag{5.1.4}\]
We have that
\[\ell_{i}(1-c_{i})\ \geq\ \ell_{i}(1-d_{i})-2\ =\ \ell_{i+1}-2, \tag{5.1.5}\]
so
\[\ell_{k}\ \leq\ L\prod_{i=0}^{k-1}(1-c_{i})\prod_{i=0}^{k-1}\left(1+\frac{2}{ \ell_{i+1}-2}\right)\ \leq\ h_{k}\prod_{i=1}^{k}\left(1+\frac{2}{\ell_{i}-2}\right). \tag{5.1.6}\]
Equation (5.1.5) also implies that \(1-c_{i}-\frac{2}{\ell_{i}}\geq 1-c_{i}\left(1-\frac{2}{\ell_{i+1}-2}\right)\), so that
\[\ell_{k}\ \geq\ L\prod_{i=0}^{k-1}\left(1-c_{i}-\frac{2}{\ell_{i}}\right)\ \geq\ L\prod_{i=0}^{k-1}\left[(1-c_{i})\left(1-\frac{2}{\ell_{i+1}-2}\right) \right]\ \geq\ h_{k}\prod_{i=1}^{k}\left(1-\frac{2}{\ell_{i}-2}\right). \tag{5.1.7}\]
We can then multiply (5.1.6) by \(d_{k}\) to get
\[X_{k+1}\ \leq\ h_{k}d_{k}\prod_{i=1}^{k-1}\left(1+\frac{2}{\ell_{i+1}-2}\right)\ \leq\ \left(Y_{k+1}+\frac{2h_{k}}{\ell_{k}}\right)\prod_{i=1}^{k-1}\left(1+\frac{2}{ \ell_{i+1}-2}\right) \tag{5.1.8}\]
and then use (5.1.7) to obtain
\[X_{k+1} \leq\ Y_{k+1}\prod_{i=1}^{k}\left(1+\frac{2}{\ell_{i}-2}\right)+2 \prod_{i=1}^{k}\left(1+\frac{2}{\ell_{i}-2}\right)\left(1-\frac{2}{\ell_{i}-2} \right)^{-1}\] \[\leq\ Y_{k+1}\prod_{i=1}^{k}\left(1+\frac{2}{\ell_{i}-2}\right)+2 \prod_{i=1}^{k}\frac{\ell_{i}}{\ell_{i}-4}. \tag{5.1.9}\]
We can reason similarly by multiplying (5.1.7) with \(d_{k}\) to obtain
\[X_{k+1}\ \geq\ Y_{k+1}\prod_{i=1}^{k}\left(1-\frac{2}{\ell_{i}-2}\right)-2. \tag{5.1.10}\]
\(\Box\)
Let \(g(x)\) be a function that goes to infinity as \(x\to\infty\) with \(g(x)=o(\sqrt{\log(x)})\). Let \(h(x)\) be a function that goes to infinity as \(x\to\infty\). The following corollary of Lemma 5.1 essentially says that \(X_{k}\) and \(Y_{k}\) are very close given that \(k\) is not too large and \(\ell_{k-1}\), \(Y_{k}\) are large enough.
**Corollary 5.2**.: _For all \(k<g(L)\log L\) such that \(\ell_{k-1}>\log^{2}(L)+2\) and \(Y_{k}>h(L)\), we have_
\[Y_{k}(1-o(1))\ \leq\ X_{k}\ \leq\ Y_{k}(1+o(1)). \tag{5.1.11}\]
Proof.: By Lemma 5.1, we have
\[X_{k} \leq\ Y_{k}\prod_{i=1}^{k-1}\left(1+\frac{2}{\ell_{i}-2}\right)+2 \prod_{i=1}^{k-1}\frac{\ell_{i}}{\ell_{i}-4}\] \[\leq\ Y_{k}\left(1+\frac{2}{\log^{2}(L)}\right)^{k-1}+2\left(\frac {\log^{2}(L)}{\log^{2}(L)-4}\right)^{k-1}\] \[\leq\ Y_{k}\left(1+\frac{2}{\log^{2}(L)}\right)^{g(L)\log L}+2 \left(1+\frac{8}{\log^{2}(L)}\right)^{g(L)\log L}\] \[\leq\ Y_{k}\exp\left(\frac{2g(L)}{\log L}\right)+2\exp\left(\frac {8g(L)}{\log L}\right). \tag{5.1.12}\]
As \(L\to\infty\), \(\frac{g(L)}{\log L}\to 0\), so \(\exp\left(\frac{2g(L)}{\log L}\right)\to 1\) and \(2\exp\left(\frac{8g(L)}{\log L}\right)=O(1)\). Now by our assumption \(Y_{k}\to\infty\), we get asymptotically that
\[X_{k}\ \leq\ Y_{k}(1+o(1)). \tag{5.1.13}\]
For the other inequality, apply Lemma 5.1 again to get
\[X_{k} \geq\ Y_{k}\prod_{i=1}^{k-1}\left(1-\frac{2}{\ell_{i}-2}\right)-2\] \[\geq\ Y_{k}\left(1-\frac{2}{\log^{2}(L)}\right)^{k-1}-2\] \[\geq\ Y_{k}\left(1-\frac{2}{\log^{2}(L)}\right)^{g(L)\log L}-2\] \[\geq\ Y_{k}(2e)^{-\frac{2g(L)}{\log L}}-2. \tag{5.1.14}\]
Again \((2e)^{-\frac{2g(L)}{\log L}}\to 1\) as \(L\to\infty\), and since \(Y_{k}\to\infty\), we get asymptotically that
\[X_{k}\ \geq\ Y_{k}(1-o(1)). \tag{5.1.15}\]
The following lemma then helps us translate Benfordness of \(\{Y_{i}\}\) to that of \(\{X_{i}\}\) given that they are close enough in the sense above. This is essentially [B+, Lemma 3.3], but we give a different proof here. Let \(\{Z_{i}\}_{L}=\{Z_{1},\ldots,Z_{k_{L}}\}\) denote a finite sequence of random variables whose length \(k_{L}\) depends on \(L\).
**Lemma 5.3**.: _Suppose \(\{Y_{i}\}_{L}=\{Y_{1},Y_{2},\ldots,Y_{k_{L}}\}\) is strong Benford as \(L\to\infty\). Then if \(\{X_{i}\}_{L}=\{X_{1},X_{2},\ldots,X_{k_{L}}\}\) is such that_
\[Y_{i}(1-o(1))\ \leq\ X_{i}\ \leq\ Y_{i}(1+o(1)) \tag{5.1.16}\]
_as \(L\to\infty\), \(\{X_{i}\}_{L}\) is strong Benford as \(L\to\infty\)._
Proof.: We prove that \(\log(X_{i})\mod 1\) is equidistributed in \([0,1]\). For simplicity, define
\[\phi(x)\ :=\ \log(x)\pmod{1} \tag{5.1.17}\]
for any \(x>0\). By our assumption we have
\[\log(Y_{i})+\log(1-o(1))\ \leq\ \log(X_{i})\ \leq\ \log(Y_{i})+ \log(1+o(1))\] \[\implies\log(Y_{i})-o(1)\ \leq\ \log(X_{i})\ \leq\ \log(Y_{i})+o(1)\] \[\implies\phi(Y_{i})-o(1)\ \leq\ \phi(X_{i})\ \leq\ \phi(Y_{i})+o(1) \tag{5.1.18}\]
with probability going to \(1\). For any \(0\leq a<b\leq 1\),
\[\mathbb{P}\left(a+o(1)<\phi(Y_{i})<b-o(1)\right)\ \leq\ \mathbb{P}(a<\phi(X_{i})<b)\ \leq\ \mathbb{P}(a-o(1)<\phi(Y_{i})<b+o(1)) \tag{5.1.19}\]
with probability going to \(1\). But since \(Y_{i}\) is strong Benford, we have that
\[\mathbb{P}\left(a+o(1)<\phi(Y_{i})<b-o(1)\right)\ =\ b-a-o(1) \tag{5.1.20}\]
and
\[\mathbb{P}\left(a-o(1)<\phi(Y_{i})<b+o(1)\right)\ =\ b-a+o(1), \tag{5.1.21}\]
so
\[b-a-o(1)\ \leq\ \mathbb{P}(a<\phi(X_{i})<b)\ \leq\ b-a+o(1), \tag{5.1.22}\]
which implies that \(\mathbb{P}(a<\phi(X_{i})<b)\to b-a\) as \(L\to\infty\) with probability going to \(1\).
By [B+, Theorem 1.9], the process \({\mathcal{Q}}\) is Benford. Given the lemma above, it now suffices to show that the premises of Corollary 5.2 are satisfied for almost all \(k\). The following lemma shows that the process ends within \(g(L)\log L\) levels with probability going to \(1\), so the first condition that \(k\) is not too large is almost always true.
**Lemma 5.4**.: _Let \(F_{L}\) be the number of fragments generated by a stick of length \(L\). As \(L\to\infty\),_
\[{\mathbb{P}}[(\log\log L)^{2}<F_{L}<g(L)\log L]\ =\ 1-o(1). \tag{5.1.23}\]
Proof.: We first show the upper bound using Markov's inequality. We prove by induction that
\[{\mathbb{E}}[F_{\ell}]\ =\ 1+2\sum_{\begin{subarray}{c}0<j<\ell\\ j\text{ even}\end{subarray}}\frac{1}{j}. \tag{5.1.24}\]
It is clear that \({\mathbb{E}}[F_{1}]=1\). We have the recurrence
\[{\mathbb{E}}[F_{L}]\ =\ \frac{2}{L-1}\sum_{\begin{subarray}{c}\ell<L\\ \ell\text{ odd}\end{subarray}}(1+{\mathbb{E}}[F_{\ell}]) \tag{5.1.25}\]
since there is a \(\frac{2}{L-1}\) probability of breaking off a piece of length \(\ell\) in the first break for \(1\leq\ell\leq L-1\) and \(\ell\) odd. By the induction hypothesis, we have
\[{\mathbb{E}}[F_{L}] =\ \frac{2}{L-1}\sum_{\begin{subarray}{c}\ell<L\\ \ell\text{ odd}\end{subarray}}\left(1+\left(1+2\sum_{\begin{subarray}{c}0<j<\ell\\ j\text{ even}\end{subarray}}\frac{1}{j}\right)\right)\] \[=\ \frac{2}{L-1}\cdot\frac{L-1}{2}+\frac{2}{L-1}\sum_{\begin{subarray}{c}\ell<L\\ \ell\text{ odd}\end{subarray}}\left(1+2\sum_{\begin{subarray}{c}0<j<\ell\\ j\text{ even}\end{subarray}}\frac{1}{j}\right)\] \[=\ 1+\frac{2}{L-1}\left(\frac{L-1}{2}+2\sum_{\begin{subarray}{c}0<j<L-2\\ j\text{ even}\end{subarray}}\frac{\frac{L-j-1}{2}}{j}\right)\] \[=\ 1+\frac{2}{L-1}\left(1+\sum_{\begin{subarray}{c}0<j<L-2\\ j\text{ even}\end{subarray}}\left(1+\frac{L-j-1}{j}\right)\right)\] \[=\ 1+\frac{2}{L-1}+\frac{2}{L-1}\sum_{\begin{subarray}{c}0<j<L-2\\ j\text{ even}\end{subarray}}\frac{L-1}{j}\] \[=\ 1+2\sum_{\begin{subarray}{c}0<j<L\\ j\text{ even}\end{subarray}}\frac{1}{j}, \tag{5.1.26}\]
where (5.1.26) follows from the previous step by observing that each \(\frac{1}{j}\) is counted
\[\#\{\ell\text{ odd}:j<\ell<L\}\ =\ \frac{L-j-1}{2} \tag{5.1.27}\]
many times. This completes the induction step, so we have shown (5.1.24). Now since
\[\sum_{\begin{subarray}{c}0<j<L\\ j\ {\rm even}\end{subarray}}\frac{1}{j}\ \sim\ \frac{1}{2}\log(L/2), \tag{5.1.28}\]
we have
\[\mathbb{E}[F_{L}]\ \sim\ \log L+O(1). \tag{5.1.29}\]
By Markov's inequality,
\[\mathbb{P}(F_{L}>g(L)\log L)\ \leq\ \frac{\log L+O(1)}{g(L)\log L}\ =\ O\left(\frac{1}{g(L)}\right). \tag{5.1.30}\]
The proof of the lower bound follows the exact same reasoning as the proof of Lemma 3.4 in [B+].
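As a quick numerical sanity check (ours, not part of the argument), the closed form (5.1.24) can be compared with a Monte Carlo estimate of \(\mathbb{E}[F_{L}]\); both grow like \(\log L+O(1)\), in line with (5.1.29).

```python
import math
import random

def expected_fragments(L):
    """Closed form (5.1.24): E[F_L] = 1 + 2 * sum_{0 < j < L, j even} 1/j."""
    return 1 + 2 * sum(1 / j for j in range(2, L, 2))

def sampled_fragments(L, rng):
    """One run of the process on an odd stick of length L; returns the number
    of dead fragments it produces."""
    count, ell = 0, L
    while ell > 1:
        c = rng.random()
        ell -= 2 * math.ceil(c * (ell - 1) / 2) or 2   # break off an even piece
        count += 1                                     # that piece is dead
    return count + 1                                   # the final length-1 stick dies too

if __name__ == "__main__":
    rng, L, trials = random.Random(0), 100_001, 5_000
    mc = sum(sampled_fragments(L, rng) for _ in range(trials)) / trials
    print(expected_fragments(L), mc, math.log(L))
```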
**Corollary 5.5**.: _Let \(k_{L}\) be the total number of sticks when the process \(\mathcal{P}\) ends. Let_
\[k^{\prime}_{L}\ =\ |\{k:\ell_{k}\geq\log^{3}(L)\}|. \tag{5.1.31}\]
_Then with probability going to 1,_
\[\lim_{L\to\infty}\frac{k^{\prime}_{L}}{k_{L}}\ =\ 1. \tag{5.1.32}\]
_Moreover, for all \(k\) such that \(\ell_{k}\geq\log^{3}(L)\), we have \(Y_{k+1}\to\infty\) as \(L\to\infty\) uniformly with probability going to \(1\)._
Proof.: The following argument is essentially the same as the one given in the proof of [B+, Corollary 3.5]. We include it here for completeness. Note that \(k_{L}-k^{\prime}_{L}\) is the number of sticks generated after \(\ell_{k}\) first becomes smaller than \(\log^{3}(L)\), and is thus upper bounded by \(\log(\log^{3}(L))g(\log^{3}(L))\) with probability going to \(1\) by Lemma 5.4. On the other hand, \(k_{L}>(\log\log L)^{2}\) with probability going to \(1\). Therefore as \(g(L)=o(\sqrt{\log(L)})\),
\[\lim_{L\to\infty}\frac{k^{\prime}_{L}}{k_{L}}\ =\ 1-\lim_{L\to\infty}\frac{k_{L}-k^{\prime}_{L}}{k_{L}}\ \geq\ 1-\lim_{L\to\infty}\frac{\log(\log^{3}(L))g(\log^{3}(L))}{(\log\log(L))^{2}}\ =\ 1 \tag{5.1.33}\]
with probability going to \(1\). To prove the second part of the Corollary, note that, for \(k\) such that \(\ell_{k}\geq\log^{3}(L)\),
\[c_{k}\geq\frac{1}{g(L)\log^{2}(L)}\ \ \ \ \Longrightarrow\ \ \ X_{k+1}\geq\frac{\log L}{g(L)} \tag{5.1.34}\]
which approaches infinity. The probability of the former occurring for all such \(k\) is
\[\left(1-\frac{1}{g(L)\log^{2}(L)}\right)^{g(L)\log L}\ =\ (1-o(1))e^{-\frac{1}{\log L }}\ \to\ 1. \tag{5.1.35}\]
Thus we immediately deduce the same holds for \(Y_{k+1}\) in view of Lemma 5.1. This completes the proof.
We have verified that all conditions required in Corollary 5.2 are satisfied with probability going to \(1\), so we are done.
### Proof of Theorem 1.13
For any integer \(\ell>1\), \(r\in\{0,\ldots,n-1\}\), let
\[p_{r}(\ell)\ =\ \frac{|(n\mathbb{Z}+r)\cap[1,\ldots,\ell-1]|}{\ell-1}. \tag{5.2.1}\]
In other words, \(p_{r}(\ell)\) is the proportion of integers between \(1\) and \(\ell-1\) falling into the residue class \(r\) modulo \(n\). Note that
\[\frac{1}{n}-\frac{1}{\ell-1}\ \leq\ p_{r}(\ell)\ \leq\ \frac{1}{n}+\frac{1}{ \ell-1} \tag{5.2.2}\]
for all \(r\). Define a discrete distribution \(\mathcal{D}_{\ell}\) on \(\{0,\ldots,n-1\}\) by
\[\mathbb{P}(X_{\ell}=r)\ =\ p_{r}(\ell). \tag{5.2.3}\]
Fix starting stick length \(L\in\mathbb{Z}_{+}\backslash\mathfrak{S}\). We define a discrete process \(\mathcal{P}\), and a continuous process \(\mathcal{Q}\) that depends on \(\mathcal{P}\) as follows.
* In both processes, we start with a stick of the same integer length \(L>1\). Both starting sticks are assumed to be alive. (Since we are defining the process recursively, assume that at the start of each level, every living stick in \(\mathcal{Q}\) uniquely corresponds to a living stick in \(\mathcal{P}\) and vice versa. This is clearly true in the first level. We will see from our construction that this property is always preserved.)
* At each level, for each living stick in \(\mathcal{P}\) of length \(\ell\), choose a random ratio \(p\in(0,1)\) uniformly and a residue class \(r\in\{0,\ldots,n-1\}\) with respect to the distribution \(\mathcal{D}_{\ell}\). Suppose \(m=|(n\mathbb{Z}+r)\cap[1,\ldots,\ell-1]|\). Let \(X\) be the \((\lfloor mp\rfloor+1)\)-th smallest integer in \([1,\ldots,\ell-1]\) with residue \(r\) modulo \(n\).
* Cut the stick in \(\mathcal{P}\) into pieces of lengths \(X\) and \(\ell-X\), and cut the corresponding stick (of length \(h\)) in \(\mathcal{Q}\) into pieces of lengths \(ph\) and \((1-p)h\).
* Now, in process \(\mathcal{P}\), any new stick generated becomes dead if its length is in \(\mathfrak{S}\), and in this case the corresponding stick in \(\mathcal{Q}\) dies, too.
* Continue to the next level until all sticks die.
By choosing the ratio \(p\) and the residue class \(r\) of \(X\) independently, we ensure that dead/alive status of a new stick in either process is independent of the ratio \(p\) used to generate its length. In particular, in the continuous process, the probability that a new stick dies is always close to \(1/2\) with an error of at most \(\frac{n+4}{2(\ell-1)}\) (sum over \(n/2\) residues and then an error of \(\frac{2}{\ell-1}\) to account for stopping at length \(1\)).
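To illustrate the decoupled choice of the ratio \(p\) and the residue class \(r\), here is a small Python sketch (ours, with our own function names) of a single break step, together with an empirical check of the error bound established in Lemma 5.8 below.

```python
import random

def break_once(ell, n, rng, classes=None):
    """One break of process P: draw p ~ U(0,1) and a residue class r ~ D_ell,
    then cut at the (floor(m*p)+1)-th smallest integer in [1, ell-1] with
    residue r modulo n, where m is the size of that residue class."""
    if classes is None:
        classes = [[x for x in range(1, ell) if x % n == r] for r in range(n)]
    p = rng.random()
    r = rng.choices(range(n), weights=[len(c) for c in classes])[0]
    X = classes[r][int(p * len(classes[r]))]
    return X, p

if __name__ == "__main__":
    rng, n, ell = random.Random(1), 12, 10_001
    classes = [[x for x in range(1, ell) if x % n == r] for r in range(n)]
    errors = []
    for _ in range(5_000):
        X, p = break_once(ell, n, rng, classes)
        errors.append(abs(X / ell - p))
    print(max(errors), "<=", (n + 1) / ell)   # the bound of Lemma 5.8
```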
We want to argue the following:
1. The continuous process \(\mathcal{Q}\) thus constructed is "close" to the process in Theorem 1.10, and thus results in strong Benford behavior.
2. For almost all pairs of corresponding ending sticks \(X_{k}\), \(Y_{k}\) in \(\mathcal{P}\), \(\mathcal{Q}\) respectively, we have \[Y_{k}(1-o(1))\ \leq\ X_{k}\ \leq\ Y_{k}(1+o(1))\] (5.2.4) as \(L\to\infty\), so that we can apply Lemma 5.3 to argue that \(\mathcal{P}\) is Benford.
#### 5.2.1. Proof of First Item
**Lemma 5.6**.: _Let \(T_{i}\) be the number of living sticks at level \(i\) of length at least \(\frac{L}{(\log L)^{i}}\). Given that \(L>(n+5)(\log L)^{j}\), \(\log L>10j\), and \(R>2(\log L)^{2}\), we have that_
\[\mathbb{P}\left(T_{i}\geq R\left(1-\frac{5i}{\log L}\right)\ \forall\ 0\leq i\leq j \right)\ \geq\ \left(1-\frac{2(\log L)^{2}}{R}\right)^{j}\ \geq\ 1-\frac{2j(\log L)^{2}}{R}. \tag{5.2.5}\]
Proof.: We proceed with induction on \(j\). The result is clearly true for \(j=0\) since \(T_{0}=R\). We show the result for \(j\) implies that for \(j+1\). From now on we condition on the history up to the \(j\)-th level. Consider a stick at level \(j\) of length at least \(\frac{L}{(\log L)^{j}}\). For each of its children, the probability of being shorter than \(\frac{L}{(\log L)^{j+1}}\) is at most \(\frac{1}{\log L}\) and the probability of being alive is at least
\[\frac{1}{2}-\frac{n+4}{2\left(\frac{L}{(\log L)^{j}}-1\right)}\ \geq\ \frac{1}{2}-\frac{n+4}{2(n+4)\log L}\ =\ \frac{1}{2}-\frac{1}{2\log L}. \tag{5.2.6}\]
So the probability that the child is both of length at least \(\frac{L}{(\log L)^{j+1}}\) and active is bounded below by
\[\mathbb{P}(\text{alive})-\mathbb{P}\left(\text{length}\leq\frac{L}{(\log L)^ {j+1}}\right)\geq\frac{1}{2}-\frac{1}{2\log L}-\frac{1}{\log L}=\frac{1}{2}- \frac{3}{2\log L}. \tag{5.2.7}\]
Let \(T^{\prime}_{j+1}\) be the number of live sticks at level \(j+1\) of length at least \(\frac{L}{(\log L)^{j+1}}\) whose parent is of length at least \(\frac{L}{(\log L)^{j}}\). Since there are \(T_{j}\) such parents generating \(2T_{j}\) children in total, summing the above probability over each child, we have that
\[\mathbb{E}(T^{\prime}_{j+1}|T_{j})\ \geq\ 2T_{j}\left(\frac{1}{2}-\frac{3}{2 \log L}\right)\ \geq\ T_{j}\left(1-\frac{3}{\log L}\right). \tag{5.2.8}\]
Moreover, for each parent, the variance of the number of its active children that are of length at least \(\frac{L}{(\log L)^{j+1}}\) is at most \(2^{2}=4\) since it has at most \(2\) children in total. Also, each sub-process starting from one of these parents is independent from another, so the total variance \(\text{Var}(T^{\prime}_{j+1})\leq 4T_{j}\). Then, conditioning on the history of the process up to the \(j\)-th level, by Chebyshev's inequality,
\[\mathbb{P}\left(T_{j+1}<T_{j}\left(1-\frac{5}{\log L}\right)\right)\ \leq\ \mathbb{P}\left(|T^{\prime}_{j+1}-\mathbb{E}(T^{\prime}_{j+1})|>T_{j}\frac{2}{ \log L}\right)\ <\ \frac{4T_{j}}{\frac{4T_{j}^{2}}{(\log L)^{2}}}\ <\ \frac{2(\log L)^{2}}{R}, \tag{5.2.9}\]
where the last inequality is true with probability \(\left(1-\frac{2(\log L)^{2}}{R}\right)^{j}\) by the induction hypothesis. This implies
\[\mathbb{P}\left(T_{j+1}\ \geq\ T_{j}\left(1-\frac{5}{\log L}\right)\right)\ \geq\ 1-\frac{2(\log L)^{2}}{R}. \tag{5.2.10}\]
Notice that
\[\left(1-\frac{5j}{\log L}\right)\left(1-\frac{5}{\log L}\right)\ >\ 1-\frac{5(j+1)}{\log L}, \tag{5.2.11}\]
so that
\[T_{j+1}\ \geq\ T_{j}\left(1-\frac{5}{\log L}\right)\ \Longrightarrow\ T_{j+1}\ \geq\ R\left(1-\frac{5(j+1)}{\log L}\right) \tag{5.2.12}\]
given that \(T_{j}\geq R(1-\frac{5j}{\log L})\). Thus we have
\[\mathbb{P}\left(T_{j+1}\ \geq\ R\left(1-\frac{5(j+1)}{\log L}\right)\right)\ \geq\ 1-\frac{2(\log L)^{2}}{R} \tag{5.2.13}\]
given that \(T_{j}\geq R(1-\frac{5j}{\log L})\). This completes the induction step.
**Corollary 5.7**.: _For sufficiently large \(L\) and \(R>(\log L)^{3}\), the number of live sticks ever is bounded below by_
\[R\sqrt{\log L} \tag{5.2.14}\]
_with probability at least_
\[1-\frac{4(\log L)^{5/2}}{R}. \tag{5.2.15}\]
_This is also a lower bound for the number of dead sticks ever, with the same probability._
Proof.: Let \(n_{i}\) be the number of active sticks at level \(i\). First, note that the number of dead sticks generated at level \(i\) is \(2n_{i-1}-n_{i}\), and summing this from \(i=1\) to infinity yields \(2n_{0}+n_{1}+n_{2}+\cdots\) which is bounded below by the total number of live sticks in all levels. Now our lower bound follows from Lemma 5.6 by taking \(j=\lfloor 2\sqrt{\log L}\rfloor\) and \(L\) large enough so that all assumptions there hold and that
\[\frac{5\cdot\lfloor 2\sqrt{\log L}\rfloor}{\log L}\ <\ \frac{1}{2}. \tag{5.2.16}\]
Let \(M\) be the total number of dead sticks. We have that, with probability going to \(1\), \(M\geq R\sqrt{\log L}\). Now, we wish to show that \(\mathbb{E}[P_{R,L}(s)]\to\log_{B}(s)\). It suffices to show that \(\mathbb{E}[\varphi_{s}(X_{i})]\to\log_{B}(s)\) uniformly for a proportion of \(X_{i}\) going to \(1\). We first show that almost all sticks die after \(\frac{1}{2}\log\log L\) levels. First, note that there are at most \(R2^{i}\) alive sticks at level \(i\) and thus at most \(R2^{i-1}\cdot 2=R2^{i}\) new dead sticks are generated at level \(i\). Thus, the number of dead sticks generated at or before level \(j\) is \(\sum_{i=1}^{j}R2^{i}\leq R2^{j+1}\). Thus, the number of sticks before level \(\frac{1}{2}\log\log L\) is at most
\[2R\cdot 2^{(\log\log L)/2}\ =\ 2R(\log L)^{(\log 2)/2}\ =\ o(R\sqrt{\log L}). \tag{5.2.17}\]
That is, the proportion of sticks before level \(\frac{1}{2}\log\log L\) goes to \(0\). Thus, we may assume that \(X_{i}\) dies at a later level. Then \(X_{i}\) is a product of independent random variables, each chosen from some finite set. Moreover, the number of factors in this product grows with \(L\), so by Theorem 2.2, \(\mathbb{E}[\varphi_{s}(X_{i})]\) approaches \(\log_{B}(s)\) uniformly, and the conclusion follows.
We now wish to show that \(\operatorname{Var}[P_{R,L}(s)]\to 0\). The same strategy as in the proof of Theorem 1.7 works with slight modifications that we highlight below. Recall that the goal is to show that
\[\frac{1}{M^{2}}\sum_{i,j}\mathbb{E}[\varphi_{s}(X_{i}X_{j})]\ \to\ \log_{B}(s)^{2}, \tag{5.2.18}\]
where \(X_{i},X_{j}\) denote a pair of dead sticks. Based on our observation above, we may restrict our attention to the collection of pairs of sticks only involving those that die after at least \(\frac{1}{2}\log\log L\) levels. Note that running the exact same argument as in the proof of Theorem 1.7 with \(k=2\), we obtain that the number of pairs with high dependency (as described in (2) in that proof) is bounded above by \(M(\log R)^{1+\log 2}=o(M^{2})\), so we are done.
#### 5.2.2. Proof of Second Item
**Lemma 5.8**.: _At each level of \(\mathcal{P}\), given that a stick of length \(\ell\) breaks into sticks of lengths \(X\) and \(\ell-X\), with ratio \(p\) in process \(\mathcal{Q}\), we have_
\[\left|\frac{X}{\ell}-p\right|\ \leq\ \frac{n+1}{\ell}. \tag{5.2.19}\]
_This also implies that_
\[\left|\frac{\ell-X}{\ell}-(1-p)\right|\ \leq\ \frac{n+1}{\ell}, \tag{5.2.20}\]
_so we have the same bound for the error between the corresponding ratios in \(\mathcal{P}\) and \(\mathcal{Q}\) regardless of which child we look at._
Proof.: We prove this for \(r\neq 0\). Since \(m=\left\lfloor\frac{\ell-1-r}{n}\right\rfloor+1\), we have
\[\frac{\ell-1-r}{n}\ \leq\ m\ \leq\ \frac{\ell-1-r}{n}+1.\]
Now \(X=\left\lfloor pm\right\rfloor n+r\) whenever \(r\neq 0\). (Note that here if \(r=0\), we have \(m=\left\lfloor\frac{\ell-1}{n}\right\rfloor\) and \(X=\left\lfloor pm\right\rfloor n+n\) instead.) So
\[p(\ell-1-r)-n+r\ \leq\ X\ \leq\ p(\ell-1-r)+pn+r\]
\[\implies p-\frac{p+pr+n-r}{\ell}\ \leq\ \frac{X}{\ell}\ \leq\ p+\frac{pn+r-p-pr}{ \ell}. \tag{5.2.21}\]
Notice that
\[\left|p+pr+n-r\right|\ =\ \left|n+p-(1-p)r\right|\ \leq\ n+1 \tag{5.2.22}\]
and
\[\left|pn+r-p-pr\right|\ =\ \left|p(n-1)+(1-p)r\right|\ \leq\ n, \tag{5.2.23}\]
so we have the desired bound. One easily verifies the result for \(r=0\) by a similar calculation.
**Corollary 5.9**.: _Consider a pair of sticks \((\ell_{j},h_{j})\) at level \(j\geq 1\), where \(\ell_{j}\) is in process \(\mathcal{P}\) and \(h_{j}\) is the corresponding one in process \(\mathcal{Q}\). Denote their ancestors as \((\ell_{i},h_{i})\) for \(0\leq i\leq j-1\), with \(\ell_{0}=h_{0}=L\). Suppose \(h_{i+1}=p_{i}h_{i}\) for all \(0\leq i\leq j-1\). Then we have_
\[h_{j}\prod_{i=0}^{j-1}\left(1-\frac{n+1}{p_{i}\ell_{i}}\right)\ \leq\ \ell_{j}\ \leq\ h_{j}\prod_{i=0}^{j-1}\left(1+\frac{n+1}{p_{i}\ell_{i}}\right). \tag{5.2.24}\]
Proof.: By Lemma 5.8, we have for all \(1\leq i\leq j\),
\[p_{i-1}\left(1-\frac{n+1}{p_{i-1}\ell_{i-1}}\right)\ \leq\ \frac{\ell_{i}}{\ell_{i-1}}\ \leq\ p_{i-1}\left(1+\frac{n+1}{p_{i-1}\ell_{i-1}}\right), \tag{5.2.25}\]
and the corollary follows by taking the product over all such \(i\).
**Corollary 5.10**.: \[h_{j}\prod_{i=1}^{j}\left(1-\frac{n+1}{\ell_{i}-n-1}\right)\ \leq\ \ell_{j}\ \leq\ h_{j}\prod_{i=1}^{j}\left(1+\frac{n+1}{\ell_{i}-n-1}\right).\] (5.2.26)
Proof.: This follows from Corollary 5.9 using the lower bound
\[p_{i-1}\ell_{i-1}\ \geq\ \ell_{i}-n-1 \tag{5.2.27}\]
which follows from Lemma 5.8.
**Lemma 5.11**.: _Let \(f(L),g(L),h(L)\) be some functions in \(L\) that go to infinity as \(L\to\infty\) with \(g(L)=o(f(L))\). Then for any dead stick \(\ell_{j}\) with \(j<g(L)\), if \(\ell_{j}>f(L)+n+1\) and the corresponding sticks \(h_{j}>h(L)\), we have_
\[h_{j}(1-o(1))\ \leq\ \ell_{j}\ \leq\ h_{j}(1+o(1)). \tag{5.2.28}\]
Proof.: From Corollary 5.10, we have
\[\ell_{j}\ \geq\ h_{j}\left(1-\sum_{i=1}^{j}\frac{n+1}{\ell_{i}-n-1}\right)\ \geq\ h_{j}\left(1-g(L)\frac{n+1}{f(L)-n-1}\right)\ =\ h_{j}(1-o(1)). \tag{5.2.29}\]
For the upper bound, we have that
\[\ell_{j}\ \leq\ h_{j}\prod_{i=1}^{j}\left(1+\frac{n+1}{\ell_{i}-n-1}\right)\ \leq\ h_{j}\left(1+\frac{n+1}{f(L)-n-1}\right)^{g(L)}. \tag{5.2.30}\]
As \(L\to\infty\), the expression above multiplying \(h_{j}\) approaches
\[\lim_{L\to\infty}\exp\left(g(L)\frac{n+1}{f(L)-n-1}\right)\ =\ 1 \tag{5.2.31}\]
so \(\ell_{j}\leq h_{j}(1+o(1))\), as desired.
Now the goal is to determine \(f\) and \(g\) so that
\[\mathbb{P}(\text{A stick dies within $g(L)$ levels})\ =\ 1-o(1) \tag{5.2.32}\]
and
\[\lim_{L\to\infty}\frac{\#\text{dead sticks with length larger than $f(L)+n+1$}}{\#\text{all dead sticks}}\ =\ 1. \tag{5.2.33}\]
The intuition is that we want to show most sticks die within the first \(g(L)\) levels, and that most sticks that ever occur are long, i.e., larger than \(f(L)\).
**Lemma 5.12**.: _Let \(M\) be the number of dead sticks ever in a process starting with \(R\) sticks of length \(L\). Then_
\[\mathbb{P}(M<R(\log L)\nu(L))\ \to\ 1 \tag{5.2.34}\]
_as \(L\to\infty\), where \(\nu(L)\) is any function that goes to infinity as \(L\to\infty\)._
Proof.: Let \(M_{L}\) be the number of dead sticks resulting from the process of breaking a single stick of length \(L\). We have that \(M_{L}=1\) whenever \(L\in\mathfrak{S}\). We prove by induction on \(L\) that when \(L\notin\mathfrak{S}\)
\[\mathbb{E}[M_{L}]\ \leq\ 6n^{2}\log L. \tag{5.2.35}\]
(Here \(\log(x)\) is short-hand for \(\log_{\text{e}}(x)\).) When \(1<L\leq 3n^{2}\) this is clear since
\[6n^{2}\log L\ \geq\ 3n^{2}\cdot 2\log(2)\ \geq\ 3n^{2}\ \geq\ L \tag{5.2.36}\]
and \(M_{L}\leq L\) for all \(L\).
When \(L>3n^{2}\) and \(L\notin\mathfrak{S}\), we have
\[\mathbb{E}[M_{L}] = \frac{1}{L-1}\sum_{1\leq\ell\leq L-1}(\mathbb{E}[M_{\ell}]+\mathbb{E}[M_{L-\ell}]) \tag{5.2.37}\] \[= \frac{2}{L-1}\sum_{1\leq\ell\leq L-1}\mathbb{E}[M_{\ell}]\] \[\leq \left(\frac{2}{L-1}\sum_{\begin{subarray}{c}1\leq\ell\leq L-1\\ \ell\in\mathfrak{S}\end{subarray}}1\right)+\left(\frac{2}{L-1}\sum_{\begin{subarray}{c}1\leq\ell\leq L-1\\ \ell\notin\mathfrak{S}\end{subarray}}6n^{2}\log(\ell)\right)\] \[\leq \frac{2}{L-1}\left(\frac{L-1}{2}+\frac{n}{2}+1\right)+\frac{12n^{2}}{L-1}\log\left(\prod_{\begin{subarray}{c}1\leq\ell\leq L-1\\ \ell\notin\mathfrak{S}\end{subarray}}\ell\right)\] \[\leq 1+\frac{n+2}{L-1}+6n^{2}\log\left(\left(\prod_{\begin{subarray}{c}1\leq\ell\leq L-1\\ \ell\notin\mathfrak{S}\end{subarray}}\ell\right)^{\frac{2}{L-1}}\right)\] \[\leq 1+6n^{2}\log L.\]
Note that we used the fact that
\[|[1,L-1]\cap\mathfrak{S}|\ \leq\ \left(\left\lfloor\frac{L-1}{n}\right\rfloor+1 \right)\cdot\frac{n}{2}+1\ \leq\ \frac{L-1}{2}+\frac{n}{2}+1. \tag{5.2.38}\]
To see the last inequality,
\[1+\frac{n+2}{L-1}+6n^{2}\log\left(\left(\prod_{\begin{subarray} {c}1\leq\ell\leq L-1\\ \ell\notin\mathfrak{S}\end{subarray}}\ell\right)^{\frac{2}{L-1}}\right)\ \leq\ 1+6n^{2}\log L\] \[\iff \frac{n+2}{6n^{2}}+\log\left(\left(\prod_{\begin{subarray}{c}1 \leq\ell\leq L-1\\ \ell\notin\mathfrak{S}\end{subarray}}\ell\right)^{2}\right)\ \leq\ (L-1)\log L\] \[\iff e^{(n+2)/(6n^{2})}\ \leq\ \frac{L^{L-1}}{\left(\prod_{ \begin{subarray}{c}1\leq\ell\leq L-1\\ \ell\notin\mathfrak{S}\end{subarray}}\ell\right)^{2}}\] \[\iff e^{1/(3n)}n^{n}\ \leq\ \frac{L^{L-1}}{\left(\prod_{ \begin{subarray}{c}n+1\leq\ell\leq L-1\\ \ell\notin\mathfrak{S}\end{subarray}}\ell\right)^{2}}. \tag{5.2.39}\]
Note that on the RHS, the product has at most \(\frac{L-1}{2}\) terms and the first \(n/2\) terms are at most \(2n\), so we have
\[\frac{L^{L-1}}{\left(\prod_{n<\ell\leq L-1}\ell\right)^{2}}\ \geq\ \frac{L^{n}}{(2n)^{n}}\cdot\frac{L^{L-n-1}}{ \left(\prod_{2n<\ell\leq L-1}\ell\right)^{2}}\ \geq\ (3n)^{n}\ \geq\ e^{1/(3n)}n^{n}. \tag{5.2.40}\]
Thus the induction step is complete. By Markov's inequality,
\[\mathbb{P}(M>R(\log L)\nu(L))\ \leq\ \frac{R\mathbb{E}[M_{L}]}{R(\log L)\nu(L)} \ \leq\ \frac{6n^{2}R\log L}{R(\log L)\nu(L)}\ =\ O\left(\frac{1}{\nu(L)}\right)\ \to\ 0 \tag{5.2.41}\]
as \(L\to\infty\).
Since at each level before the process ends the number of sticks increases by at least \(1\), the total number of levels is at most \(R(\log L)\nu(L)\) with probability going to \(1\) as \(L\to\infty\). Thus we can take
\[g(L)\ =\ R(\log L)\nu(L). \tag{5.2.42}\]
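The bound of Lemma 5.12, which rests on the estimate (5.2.35), is easy to probe numerically. The sketch below (ours) exploits the fact that choosing the residue class \(r\) with the weights \(p_{r}(\ell)\) and then the rank uniformly amounts to a uniform integer cut point, and compares the average number of dead sticks with \(6n^{2}\log L\); the parameters are arbitrary with \(|S|=n/2\) and \(L\notin\mathfrak{S}\).

```python
import math
import random

def dead_sticks(L, n, stop_residues, rng):
    """Break one stick of length L with uniform integer cut points; a piece dies
    when its length is 1 or its residue mod n lies in stop_residues.
    Returns the total number of dead sticks produced."""
    stops = set(stop_residues)
    alive, dead = [L], 0
    while alive:
        nxt = []
        for ell in alive:
            x = rng.randint(1, ell - 1)
            for child in (x, ell - x):
                if child == 1 or child % n in stops:
                    dead += 1
                else:
                    nxt.append(child)
        alive = nxt
    return dead

if __name__ == "__main__":
    rng = random.Random(0)
    n, S = 6, {0, 2, 4}                    # |S| = n/2
    L = 10**9 + 1                          # L = 5 mod 6, so L is not in the stopping set
    trials = 1_000
    avg = sum(dead_sticks(L, n, S, rng) for _ in range(trials)) / trials
    print(avg, "<=", 6 * n**2 * math.log(L))   # compare with (5.2.35)
```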
**Lemma 5.13**.: _Let \(M_{\ell,k}\) denote the number of dead sticks with length smaller than \(k\) coming from a process starting with a stick of length \(\ell\). Let \(c=24n^{2}\). Then for any \(k\geq 2n,\ell>1\), we have_
\[\mathbb{E}[M_{\ell,k}]\ \leq\ c\log(k). \tag{5.2.43}\]
_In particular,_
\[\mathbb{E}[M_{L,\log^{2}(L)}]\ \leq\ 2c\log\log L. \tag{5.2.44}\]
Proof.: When \(\ell\leq k\), we have trivially
\[\mathbb{E}[M_{\ell,k}]\ =\ \mathbb{E}[M_{\ell}]\ \leq\ \frac{c}{4}\log(\ell)\ \leq\ \frac{c}{4}\log(k) \tag{5.2.45}\]
by (5.2.35). When \(\ell>k\) and \(\ell\in\mathfrak{S}\), we have
\[\mathbb{E}[M_{\ell,k}]\ =\ 0. \tag{5.2.46}\]
For a fixed \(k\), we prove the result by induction on \(\ell\).
\[\mathbb{E}[M_{\ell,k}] =\ \frac{1}{\ell-1}\sum_{1\leq x\leq\ell-1}(\mathbb{E}[M_{x,k}]+ \mathbb{E}[M_{\ell-x,k}])\] \[=\ \frac{2}{\ell-1}\sum_{1\leq x\leq\ell-1}\mathbb{E}[M_{x,k}]\] \[=\ \frac{2}{\ell-1}\left(\sum_{1\leq x\leq k}\mathbb{E}[M_{x,k}]+ \sum_{\begin{subarray}{c}x\not\in\mathfrak{S}\\ k<x\leq\ell-1\end{subarray}}\mathbb{E}[M_{x,k}]\right)\] \[\leq\ \frac{2}{\ell-1}\left(k\cdot\frac{c}{4}\log(k)+\frac{\ell-k-1+n}{2} \cdot c\log(k)\right)\] \[=\ c\log(k)\frac{\frac{1}{2}k+(\ell-k-1+n)}{\ell-1}\] \[\leq\ c\log(k), \tag{5.2.47}\]
where the last step uses \(\frac{k}{2}\geq n\).
**Corollary 5.14**.: _Let \(\ell_{i}\) denote a stick occurring at level \(i\) in process \(\mathcal{P}\)._
\[\frac{\#\{\ell_{i}>\log^{2}(L):i\leq g(L)\}}{\#\{\ell_{i}:i\leq g(L)\}}\ \to\ 1 \tag{5.2.48}\]
_as \(L\to\infty\) with probability going to 1._
Proof.: We have from Lemma 5.13 that
\[\mathbb{E}[\#\{\ell_{i}\leq\log^{2}(L):i\leq g(L)\}]\ \leq\ R\cdot 2c\log\log L. \tag{5.2.49}\]
By Markov's inequality,
\[\mathbb{P}(\#\{\ell_{i}\leq\log^{2}(L):i\leq g(L)\}>R(\log L)^{1/3})\ \leq\ \frac{R\cdot 2c\log\log L}{R(\log L)^{1/3}}\ \to\ 0 \tag{5.2.50}\]
as \(L\to\infty\). In other words,
\[\#\{\ell_{i}>\log^{2}(L):i\leq g(L)\}\ <\ R(\log L)^{1/3} \tag{5.2.51}\]
with probability going to 1. On the other hand, by Corollary 5.7, we have as long as \(R>(\log L)^{3}\),
\[\#\{\ell_{i}:i\leq g(L)\}\ \geq\ R(\log L)^{1/2} \tag{5.2.52}\]
with probability going to 1. Since
\[\frac{R(\log L)^{1/3}}{R(\log L)^{1/2}}\ \to\ 0 \tag{5.2.53}\]
as \(L\to\infty\), we have the desired.
Now to see that item (2) is true, it suffices to show that the premises of Lemma 5.11 are satisfied for most dead sticks. By Lemma 5.12, almost all sticks die within the first \(g(L)=R(\log L)\nu(L)\) levels, where \(\nu(L)\) is any function that blows up as \(L\to\infty\). By Corollary 5.14, almost all dead sticks are at least \(f(L)=\log^{2}(L)\) in length. We can choose \(\nu\) such that \(g(L)=o(f(L))\).
### When \(|S|\neq n/2\)
It is natural to ask what happens when \(|S|\) is not exactly \(n/2\). In this section, we present results showing that the final stick lengths are non-Benford when \(|S|<n/2\), and also when \(|S|>n/2\) provided the base \(B\) is sufficiently large. Moreover, we state a more specific conjecture on the behavior of the limiting distribution when \(|S|\neq n/2\). Simulation results are also presented to support our conjecture.
**Theorem 5.15**.: _If \(|S|<n/2\), then as \(R\to\infty\) and \(L\to\infty\), the collection of mantissas of ending stick lengths does not converge to any continuous distribution on \([0,1]\). In particular, it does not converge to strong Benford behavior._
**Theorem 5.16**.: _If \(|S|>n/2\), then as \(R\to\infty\) and \(L\to\infty\) with the condition \(R=\omega(L^{2})\), the collection of mantissas of ending stick lengths does not converge to the uniform distribution on \([0,1]\) provided that the base \(B\) is greater than \(3^{6n^{3}/|S|}\)._
The above theorem says that in the case \(|S|>n/2\), as long as the base \(B\) is large enough, the final distribution does not converge to Benford. We conjecture that this is in fact true regardless of the base.
**Conjecture 5.1** (Other Bases).: _The final collection of stick lengths does not converge to strong Benford behavior for any base \(B\) if the size of \(S\) is not \(n/2\). Specifically, if \(|S|>n/2\), then the limiting distribution depends on the mantissa of \(L\) base \(B\), and the density function of \(\log_{B}(X/L)\pmod{1}\) is skewed towards 1._
Note that the proof of Theorem 5.16 already strongly indicates that the above conjecture is true, although it remains an interesting open question to describe precisely what the distribution looks like.
We have also obtained strong empirical evidence for the conjecture. Figure 1 shows a clear skewness of the limiting distribution after normalizing by the starting length \(L\). In this simulation, we used inputs \(n=12\), \(\mathfrak{S}=\{1,2,3,4,5,6,9,10\}\), \(L=82\cdot 10^{12000}\) and \(R=1000\). It is not hard to see that Theorem 5.16 does not apply when \(B=10\). It is worth noting that even when we vary the stopping set and the significand of \(L\), the pattern persists, as long as \(n/2<|\mathfrak{S}|<n\).
A heuristic for this is that when \(|\mathfrak{S}|=n\), all processes end at the first level, and the resulting stick lengths follow the uniform distribution on \(\{1,\ldots,L-1\}\). It is not hard to show that for such a uniformly distributed random variable \(X\), smaller values of the mantissa of \(X\) occur with lower probability and larger values with higher probability, i.e., the density of \(\log_{B}(X)\pmod{1}\) is skewed towards \(1\).
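The experiment behind Figure 1 can be reproduced qualitatively with much smaller parameters. The sketch below (ours; the starting length is an arbitrary choice with \(L\notin\mathfrak{S}\)) histograms the normalized mantissas \(\log_{10}(X/L)\pmod{1}\) of the dead stick lengths; with \(|\mathfrak{S}|=8>n/2\) the histogram comes out visibly skewed towards \(1\) rather than uniform.

```python
import math
import random

def mantissa(x):
    """log10(x) mod 1 for a (possibly very large) positive integer x."""
    s = str(x)
    lead = int(s[:15]) / 10 ** (min(len(s), 15) - 1)   # significand in [1, 10)
    return math.log10(lead)

def fragment(L, R, n, stop_residues, seed=0):
    """R sticks of length L, uniform integer cut points; a piece dies when its
    length is 1 or its residue mod n lies in stop_residues."""
    rng = random.Random(seed)
    stops = set(stop_residues)
    alive, dead = [L] * R, []
    while alive:
        nxt = []
        for ell in alive:
            x = rng.randint(1, ell - 1)
            for child in (x, ell - x):
                if child == 1 or child % n in stops:
                    dead.append(child)
                else:
                    nxt.append(child)
        alive = nxt
    return dead

if __name__ == "__main__":
    n, S = 12, {1, 2, 3, 4, 5, 6, 9, 10}     # 8 stopping residues modulo 12
    L, R = 82 * 10**600 + 3, 1000            # L = 7 mod 12, so L is not in S
    dead = fragment(L, R, n, S)
    mL, hist = mantissa(L), [0] * 10
    for x in dead:
        hist[min(int(10 * ((mantissa(x) - mL) % 1.0)), 9)] += 1
    print([round(h / len(dead), 3) for h in hist])   # uniform would be ~0.1 per bin
```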
#### 5.3.1. Proof of Theorem 5.15
**Lemma 5.17**.: _Let_
\[M_{L,m}\ :=\ \#\text{dead sticks generated by a stick of length $L$ that are of length less than $m$.} \tag{5.3.1}\]
_Then for all \(L\notin\mathfrak{S}\), there exist constants \(m,c\) depending only on \(n\) such that_
\[\mathbb{E}(M_{L,m})\ \geq\ c\mathbb{E}(M_{L})+1. \tag{5.3.2}\]
Proof.: Let
\[c=\frac{1}{2n+1},\quad m=2n^{2}.\]
Figure 1. Stopping at 8 residues modulo 12 with \(L=82\cdot 10^{12000}\) and \(R=1000\)
For \(L\leq m\), the result is clear since \(\mathbb{E}(M_{L,m})=\mathbb{E}(M_{L})\geq 2\). We now proceed by induction: let \(L>m\) and assume the result is true for all positive integers less than \(L\). We have,
\[\mathbb{E}(M_{L,m}) = \frac{2}{L-1}\sum_{\ell=1}^{L-1}\mathbb{E}(M_{\ell,m}) \tag{5.3.3}\] \[\geq \frac{2}{L-1}\sum_{\begin{subarray}{c}1\leq\ell\leq L-1\\ \ell\not\in\mathfrak{S}\end{subarray}}\mathbb{E}(M_{\ell,m})\] \[\geq \frac{2}{L-1}\sum_{\begin{subarray}{c}1\leq\ell\leq L-1\\ \ell\not\in\mathfrak{S}\end{subarray}}c\mathbb{E}(M_{\ell})+\frac{2}{L-1}|[L-1]\setminus\mathfrak{S}|.\]
We now show that
\[|[L-1]\setminus\mathfrak{S}|\ \geq\ \frac{L-1}{2}+c|[L-1]\cap\mathfrak{S}|. \tag{5.3.4}\]
Note that
\[|[L-1]\setminus\mathfrak{S}|\ \geq\ (n-|S|)\left(\frac{L-n-1}{n}\right)\ \geq\ \frac{(n+1)(L-n-1)}{2n} \tag{5.3.5}\]
and
\[|[L-1]\cap\mathfrak{S}|\ \leq\ |S|\left(\frac{L+n-1}{n}\right)\ \leq\ \frac{(n-1)(L+n-1)}{2n} \tag{5.3.6}\]
so it suffices to show that
\[\frac{(n+1)(L-n-1)}{2n} \geq \frac{L-1}{2}+c\frac{(n-1)(L+n-1)}{2n} \tag{5.3.7}\] \[\Longleftrightarrow\ c\ \leq\ \frac{(n+1)(L-n-1)-n(L-1)}{(n-1)(L+n-1)}.\]
Since \(L>m=2n^{2}\) and the fraction on the right-hand side above is increasing in \(L\), we have
\[\frac{(n+1)(L-n-1)-n(L-1)}{(n-1)(L+n-1)}\ =\ \frac{L-1-n^{2}-n}{(n-1)(L-1+n)}\ \geq\ \frac{2n^{2}-n^{2}-n}{(n-1)(2n^{2}+n)}\ =\ \frac{n}{2n^{2}+n}\ =\ \frac{1}{2n+1}\ =\ c. \tag{5.3.8}\]
Hence, (5.3.4) is true, and can be plugged into (5.3.3) to obtain
\[\mathbb{E}(M_{L,m}) \geq \frac{2}{L-1}\sum_{\begin{subarray}{c}1\leq\ell\leq L-1\\ \ell\not\in\mathfrak{S}\end{subarray}}c\mathbb{E}(M_{\ell})+\frac{2c}{L-1}|[L-1]\cap\mathfrak{S}|+1 \tag{5.3.9}\] \[\geq 1+\frac{2}{L-1}\sum_{\ell=1}^{L-1}c\mathbb{E}(M_{\ell})\] \[\geq c\mathbb{E}(M_{L})+1.\]
The induction is complete.
**Lemma 5.18**.: _Let \(M_{L}^{R}\) be the total number of dead sticks coming from a process of breaking \(R\) identical sticks of length \(L\), and \(M_{L,m}^{R}\) be the number of those shorter than \(m\). Then for \(m\) and \(c\) satisfying the conclusion of Lemma 5.17, we have as \(R\to\infty\),_
\[\mathbb{P}\left(\frac{M_{L,m}^{R}}{M_{L}^{R}}\leq\frac{c}{3}\right)\ \to\ 0. \tag{5.3.10}\]
Proof.: By Chernoff's inequality, we have
\[\mathbb{P}\left(M_{L,m}^{R}\leq\frac{1}{2}R\mathbb{E}(M_{L,m})\right)\ \leq\ e^{-R\mathbb{E}(M_{L,m})/8} \tag{5.3.11}\]
and
\[\mathbb{P}\left(M_{L}^{R}\ \geq\frac{3}{2}R\mathbb{E}(M_{L})\right)\ \leq\ e^{-R\mathbb{E}(M_{L})/10}. \tag{5.3.12}\]
So
\[\mathbb{P}\left(M_{L,m}^{R}\ \geq\frac{1}{2}R\mathbb{E}(M_{L,m})\ \text{and}\ M_{L}^{R}\ \leq\frac{3}{2}R\mathbb{E}(M_{L})\right)\ \geq\ 1-e^{-R\mathbb{E}(M_{L,m})/8}-e^{-R\mathbb{E}(M_{L})/10}. \tag{5.3.13}\]
In that case,
\[\frac{M_{L,m}^{R}}{M_{L}^{R}}\ \geq\ \frac{\frac{1}{2}R\mathbb{E}(M_{L,m})}{ \frac{3}{2}R\mathbb{E}(M_{L})}=\frac{\mathbb{E}(M_{L,m})}{3\mathbb{E}(M_{L})} \ \geq\ c/3. \tag{5.3.14}\]
Therefore it suffices to show that
\[1-e^{-R\mathbb{E}(M_{L,m})/8}-e^{-R\mathbb{E}(M_{L})/10}\ \to\ 1 \tag{5.3.15}\]
as \(R\to\infty\). To do this, it again suffices to show that \(\mathbb{E}(M_{L,m})>0\) and \(\mathbb{E}(M_{L})>0\). Clearly, \(\mathbb{E}(M_{L})\geq 1\), so it follows from Lemma 5.17 that \(\mathbb{E}(M_{L,m})\geq c+1>0\).
Now to conclude the proof of Theorem 5.15, note that if the limiting distribution were Benford, then by Definition 1.2 the collection of mantissas \(M_{B}(X)\) of dead sticks converges to the uniform distribution on \([0,1]\), which is continuous. So we must have that
\[\frac{M_{L,m}}{M_{L}}\ \leq\ \frac{\#\{X:M_{B}(X)\in\{M_{B}(1),M_{B}(2),\ldots,M_{B}(m-1)\}\}}{M_{L}}\ \to\ 0 \tag{5.3.16}\]
as \(L\to\infty\) with probability going to \(1\) as \(R\to\infty\). In fact, this shows that the collection of mantissas of such a process does not converge to _any_ continuous distribution on \([0,1]\) as \(R\to\infty\) and \(L\to\infty\).
#### 5.3.2. Proof of Theorem 5.16
Let \(M_{L}^{R}\) be the total number of dead sticks obtained starting from \(R\) sticks of length \(L\).
**Lemma 5.19**.: _We have that_
\[\mathbb{E}[M_{L}^{R}]\ \leq\ 2n^{2}R. \tag{5.3.17}\]
Proof.: We show the result when \(R=1\) via induction on \(L\). Let \(M_{L}^{1}=M_{L}\). The result is clearly true for \(L\leq 2n^{2}\) since \(M_{L}\leq L\), so assume that \(L>2n^{2}\) and the result holds for all positive integers
smaller than \(L\). We have that,
\[\mathbb{E}[M_{L}] =\ \frac{1}{L-1}\sum_{\ell=1}^{L-1}(\mathbb{E}[M_{\ell}]+\mathbb{E}[M _{L-\ell}])\] \[=\ \frac{2}{L-1}\sum_{\ell=1}^{L-1}\mathbb{E}[M_{\ell}]\] \[\leq\ 2+\frac{2}{L-1}\sum_{\begin{subarray}{c}1\leq\ell\leq L-1 \\ \ell\not\in\mathfrak{S}\end{subarray}}2n^{2}\] \[\leq\ 2+\frac{2}{L-1}\left\lceil\frac{L-1}{n}\right\rceil(n-|S|) \cdot 2n^{2}\] \[\leq\ 2+\frac{2}{L-1}\left(\frac{L+n-1}{n}\right)\left(\frac{n-1}{ 2}\right)2n^{2}\] \[=\ 2+2n(n-1)\frac{L+n-1}{L-1}\] \[\leq\ 2+2n(n-1)\frac{2n^{2}+n}{2n^{2}}\] \[=\ 2n^{2}-n+1\ \leq\ 2n^{2}. \tag{5.3.18}\]
The induction is complete. The result for general \(R\) follows from linearity of expectation.
**Corollary 5.20**.: _With probability at least_
\[1-\frac{L^{2}}{n^{4}R} \tag{5.3.19}\]
_we have \(M_{L}^{R}\leq 3n^{2}R\)._
Proof.: First, note that, trivially, \(\mathrm{Var}[M_{L}]\leq L^{2}\) so that \(\mathrm{Var}[M_{L}^{R}]\leq RL^{2}\). Then, Chebyshev's inequality implies that
\[\mathbb{P}(M_{L}^{R}>3n^{2}R)\ \leq\ \mathbb{P}(|M_{L}^{R}-\mathbb{E}(M_{L}^{R} )|>n^{2}R)\ \leq\ \frac{RL^{2}}{(n^{2}R)^{2}}\ =\ \frac{L^{2}}{n^{4}R}. \tag{5.3.20}\]
**Lemma 5.21**.: _Let \(a>2\) be some real number and assume \(L>\frac{2an}{a-2}\). With probability at least_
\[1-\frac{16n^{2}}{|S|^{2}R} \tag{5.3.21}\]
_the number of dead sticks in the first level of length at least \(L/a\) is at least \(|S|R/(2n)\)._
Proof.: Denote this quantity by \(M_{L,a}^{R}(1)\). Then, given a stick of length \(L\), the number of ways the left child can die and be of length at least \(L/a\) is bounded below by
\[|S|\left\lfloor\frac{L-L/a}{n}\right\rfloor\ \geq\ |S|\left(\frac{L}{n}\left(1- \frac{1}{a}\right)-1\right)\ \geq\ \frac{|S|L}{2n}. \tag{5.3.22}\]
Thus, the probability of any arbitrary child being of length at least \(L/a\) and dead is at least \(|S|/(2n)\). It follows that
\[\mathbb{E}[M^{R}_{L,a}(1)]\ \geq\ \frac{|S|R}{n}. \tag{5.3.23}\]
Furthermore,
\[\operatorname{Var}[M^{R}_{L,a}(1)]\ \leq\ 4R \tag{5.3.24}\]
by independence. Thus, we have, by Chebyshev's inequality,
\[\mathbb{P}\left(M^{R}_{L,a}(1)\leq\frac{|S|R}{2n}\right)\ \leq\ \mathbb{P}\left(\left|M^{R}_{L,a}(1)-\mathbb{E}[M^{R}_{L,a}(1)]\right|\geq \frac{|S|R}{2n}\right)\ \leq\ \frac{4R}{\left(\frac{|S|R}{2n}\right)^{2}}\ =\ \frac{16n^{2}}{|S|^{2}R}. \tag{5.3.25}\]
Now, set \(a=3\) and let \(L>6n\). Note that by Lemma 5.21 and Corollary 5.20, with probability at least
\[1-\frac{L^{2}}{n^{4}R}-\frac{16n^{2}}{|S|^{2}R} \tag{5.3.26}\]
the proportion of dead sticks of length at least \(L/3\) is bounded below by
\[\frac{|S|R}{2n}(3n^{2}R)^{-1}\ =\ \frac{|S|}{6n^{3}}. \tag{5.3.27}\]
As \(L,R\to\infty\) in a manner such that \(R\) grows faster than \(L^{2}\), this probability approaches \(1\). Now let,
\[B\ >\ 3^{6n^{3}/|S|}. \tag{5.3.28}\]
We obtain that
\[\log_{B}(3)\ <\ \frac{|S|}{6n^{3}} \tag{5.3.29}\]
but at least \(\frac{|S|}{6n^{3}}\) of the dead sticks are in \([L/3,L]\) so that at least the same fraction of normalized mantissas of dead sticks are in \([1-\log_{B}(3),1]\). It follows that the distribution of mantissas of dead sticks cannot approach the uniform distribution as \(R\to\infty\) for any \(L>6n\), nor can such be the case as \(L\to\infty\).
### General Number of Parts
Given Theorem 1.10, it seems likely that a similar result would hold for the discrete analogue. Indeed, we make the following conjecture, which is supported by our simulation results (see, for example, Figure 2).
**Conjecture 5.2** (General number of parts).: _Fix some positive integer \(k\geq 2\), and consider the process where we break each stick into \(k\) pieces by choosing \(k-1\) cut points recursively following the uniform distribution3. Fix a modulus \(n=tk\) for some \(t\geq 1\) and a subset \(S\subset\{0,\ldots,n-1\}\) of size \((t-1)k\) representing the residue classes. Let the stopping set be_
Footnote 3: Namely, choose the first cut point according to the uniform distribution as usual, and then choose the next cut point on the second fragment according to the uniform distribution on that fragment, and so on. If at some point the second fragment has length \(1\), then the breaking stops, so when the stick is short, it is possible that it breaks into fewer than \(k\) pieces.
\[\mathfrak{S}\ :=\ \{1\}\cup\{m\in\mathbb{Z}_{+}:m=qn+r,\ r\in S,q\in\mathbb{Z}\}. \tag{5.4.1}\]
_If we start with \(R\) identical sticks of positive integer length \(L\notin\mathfrak{S}\), then the collection of ending stick lengths converges to strong Benford behavior given that \(R>f(L)\) as \(L\to\infty\), where \(f(L)\) is some function that goes to infinity as \(L\to\infty\). Moreover, if the number of residue classes constituting the stopping set is not equal to \((t-1)k\), then the resulting stick lengths do not converge to strong Benford behavior._
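A minimal simulation of the conjectured \(k\)-part process (ours; the parameters \(k=t=3\) below are one admissible choice with \(|S|=(t-1)k\)) is sketched here; the first-digit frequencies of the dead stick lengths can then be compared against Benford's law, \(\log_{10}(1+1/d)\).

```python
import random

def break_into_k(ell, k, rng):
    """Break a stick of integer length ell into up to k pieces: cut the
    remaining right fragment at a uniform integer point, k-1 times or until
    only a length-1 fragment remains (cf. footnote 3)."""
    pieces, rest = [], ell
    for _ in range(k - 1):
        if rest == 1:
            break
        x = rng.randint(1, rest - 1)
        pieces.append(x)
        rest -= x
    pieces.append(rest)
    return pieces

def run(L, R, k, n, stop_residues, seed=0):
    """Pieces die when their length is 1 or their residue mod n lies in S."""
    rng = random.Random(seed)
    stops = set(stop_residues)
    alive, dead = [L] * R, []
    while alive:
        nxt = []
        for ell in alive:
            for child in break_into_k(ell, k, rng):
                if child == 1 or child % n in stops:
                    dead.append(child)
                else:
                    nxt.append(child)
        alive = nxt
    return dead

if __name__ == "__main__":
    k, t = 3, 3
    n, S = k * t, set(range(1, (t - 1) * k + 1))   # n = 9, S = {1,...,6}
    dead = run(10**15 + 7, 300, k, n, S)           # 10**15 + 7 = 8 mod 9, not in S
    digits = [0] * 9
    for x in dead:
        digits[int(str(x)[0]) - 1] += 1
    print([round(d / len(dead), 3) for d in digits])
    # Benford's law predicts roughly [0.301, 0.176, 0.125, 0.097,
    # 0.079, 0.067, 0.058, 0.051, 0.046].
```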
## 6. Acknowledgements
The authors are supported by NSF Grant DMS2241623, NSF Grant DMS1947438, Williams College, and University of Michigan.
|
2310.02012
|
Towards Training Without Depth Limits: Batch Normalization Without
Gradient Explosion
|
Normalization layers are one of the key building blocks for deep neural
networks. Several theoretical studies have shown that batch normalization
improves the signal propagation, by avoiding the representations from becoming
collinear across the layers. However, results on mean-field theory of batch
normalization also conclude that this benefit comes at the expense of exploding
gradients in depth. Motivated by these two aspects of batch normalization, in
this study we pose the following question: "Can a batch-normalized network keep
the optimal signal propagation properties, but avoid exploding gradients?" We
answer this question in the affirmative by giving a particular construction of
an Multi-Layer Perceptron (MLP) with linear activations and batch-normalization
that provably has bounded gradients at any depth. Based on Weingarten calculus,
we develop a rigorous and non-asymptotic theory for this constructed MLP that
gives a precise characterization of forward signal propagation, while proving
that gradients remain bounded for linearly independent input samples, which
holds in most practical settings. Inspired by our theory, we also design an
activation shaping scheme that empirically achieves the same properties for
certain non-linear activations.
|
Alexandru Meterez, Amir Joudaki, Francesco Orabona, Alexander Immer, Gunnar Rätsch, Hadi Daneshmand
|
2023-10-03T12:35:02Z
|
http://arxiv.org/abs/2310.02012v1
|
# Towards Training Without Depth Limits: Batch Normalization Without Gradient Explosion
###### Abstract
Normalization layers are one of the key building blocks for deep neural networks. Several theoretical studies have shown that batch normalization improves the signal propagation, by avoiding the representations from becoming collinear across the layers. However, results on mean-field theory of batch normalization also conclude that this benefit comes at the expense of exploding gradients in depth. Motivated by these two aspects of batch normalization, in this study we pose the following question: "Can a batch-normalized network keep the optimal signal propagation properties, but _avoid_ exploding gradients?" We answer this question in the affirmative by giving a particular construction of an _Multi-Layer Perceptron (MLP) with linear activations_ and batch-normalization that provably has _bounded gradients_ at any depth. Based on Weingarten calculus, we develop a rigorous and non-asymptotic theory for this constructed MLP that gives a precise characterization of forward signal propagation, while proving that gradients remain bounded for linearly independent input samples, which holds in most practical settings. Inspired by our theory, we also design an activation shaping scheme that empirically achieves the same properties for certain non-linear activations.
+
Footnote †: Code is available at: github.com/alexandrumeterez/bngrad
## 1 Introduction
What if we could train even deeper neural networks? Increasing depth empowers neural networks, by turning them into powerful data processing machines. For example, increasing depth allows large language models (LLMs) to capture longer structural dependencies (Devlin et al., 2018; Liu et al., 2019; Brown et al., 2020; Goyal et al., 2021; Raffel et al., 2020). Also, sufficiently deep convolutional networks can outperform humans in image classification (Liu et al., 2022; Woo et al., 2023; Wu et al., 2021). Nevertheless, increasing depth imposes an inevitable computational challenge: deeper networks are harder to optimize. In fact, standard optimization methods exhibit a slower convergence when training deep neural networks. Hence, computation has become a barrier in deep learning, demanding extensive research and engineering.
A critical problem is that deeper networks suffer from the omnipresent issue of rank collapse at initialization: the outputs become collinear for different inputs as the network grows in depth (Saxe et al., 2013a). Rank collapse is not only present in MLPs and convolutional networks (Feng et al., 2022; Saxe et al., 2013a; Daneshmand et al., 2020), but also in transformer architectures (Dong et al., 2021; Noci et al., 2022). This issue significantly contributes to the training slowdown of deep neural networks. Hence, it has become the focus of theoretical and experimental studies (Saxe et al., 2013a; Feng et al., 2022; Daneshmand et al., 2021; Noci et al., 2022).
One of the most successful methods to avoid rank collapse is Batch Normalization (BN) (Ioffe and Szegedy, 2015), as proven in a number of theoretical studies (Yang et al., 2019; Daneshmand et al., 2020, 2021). Normalization imposes a particular bias across the layers of neural networks (Joudaki et al., 2023a). More precisely, the representations of a batch of inputs become more orthogonal after each normalization (Joudaki et al., 2023a). This orthogonalization effect precisely avoids the rank collapse of deep neural networks at initialization (Yang et al., 2019; Joudaki et al., 2023a; Daneshmand et al., 2021; Joudaki et al., 2023b).
While batch normalization effectively avoids rank collapse, it causes numerical issues. The existing literature proves that batch normalization layers cause exploding gradients in MLPs in an activation-independent manner (Yang et al., 2019). Gradient explosion limits increasing depth by causing numerical issues during backpropagation. For networks without batch normalization, there are effective approaches to avoid gradient explosion and vanishing, such as tuning the variance of the random weights based on the activation and the network width (He et al., 2015; Glorot and Bengio, 2010). However, such methods cannot avoid gradient explosion in the presence of batch normalization (Yang et al., 2019). Thus, the following important question remains unanswered:
Is there any network with batch normalization without gradient explosion and rank collapse issues?
Contributions. We answer the above question affirmatively by giving a specific MLP construction initialized with orthogonal random weight matrices, rather than Gaussian ones. To show that the MLP still has optimal signal propagation, we prove that the MLP output embeddings become isometric (equation 5), implying that the output representations become more orthogonal with depth. For a batch of linearly independent inputs, we prove
\[\mathbb{E}\big{[}\text{isometry gap}\big{]}=\mathcal{O}\left(e^{-\text{ depth}/C}\right), \tag{1}\]
where \(C\) is a constant depending only on the network width and the input, and the expectation is taken over the random weight matrices. Thus, for sufficiently deep networks, the representations rapidly approach an orthogonal matrix. While Daneshmand et al. (2021) prove that the outputs converge to within an \(\mathcal{O}(\text{width}^{-1/2})\)-ball close to orthogonality, we prove that the output representations become perfectly orthogonal in the infinite depth limit. This perfect orthogonalization turns out to be key in proving our result about avoiding gradient explosion. In fact, for MLPs initialized with Gaussian weights and BN, Yang et al. (2019, Theorem 3.9) prove that the gradients explode at an exponential rate in depth. In striking contrast, we prove that the gradients of an MLP with BN and orthogonal weights remain bounded as
\[\mathbb{E}\big{[}\log\left(\text{gradient norm for each layer}\right)\big{]}= \mathcal{O}\left(\text{width}^{5}\right). \tag{2}\]
Thus, the gradient is bounded by a constant that only depends on the network width where the expectation is taken over the random weight matrices. It is worth noting that both isometry and log-norm gradient bounds are derived _non-asymptotically_. Thus, in contrast to the previously studied mean-field or infinite width regime, our theoretical results hold in practical settings where the width is finite.
The limitation of our theory is that it holds under a simplification of the BN module and for linear activations. However, our results provide guidelines to avoid gradient explosion in MLPs with non-linear activations. We experimentally show that it is possible to avoid gradient explosion for certain non-linear activations with orthogonal random weights together with "activation shaping" (Martens et al., 2021). Finally, we experimentally demonstrate that avoiding gradient explosion stabilizes the training of deep MLPs with BN.
## 2 Related work
The challenge of depth in learning. Large depth poses challenges for the optimization of neural networks, which becomes slower as the number of layers increases. This depth-related slowdown is mainly attributed to: (i) gradient vanishing/explosion, and (ii) the rank collapse of hidden representations. (i) Gradient vanishing and explosion is a classic problem in neural networks (Hochreiter, 1998). For some neural architectures, this issue can be effectively solved. For example, He et al. (2015) propose a particular initialization scheme that avoids gradient vanishing/explosion for neural networks with rectifier non-linearities, while Glorot and Bengio (2010) study the effect of initialization on sigmoidal activations. However, such initializations cannot avoid gradient explosion for networks with batch normalization (Yang et al., 2019; Lubana et al., 2021). (ii) Saxe et al. (2013a) demonstrate that outputs become independent from inputs with growing depth, which is called the rank collapse issue (Daneshmand et al., 2020; Dong et al., 2021). Various techniques have been developed to avoid rank collapse, such as batch normalization (Ioffe and Szegedy, 2015),
residual connections (He et al., 2016), and self-normalizing activations (Klambauer et al., 2017). Here, we focus on batch normalization since our primary goal is to avoid the systemic issue of gradient explosion for batch normalization.
Initialization with orthogonal matrices. Saxe et al. (2013) propose initializing the weights with random orthogonal matrices for linear networks without normalization layers. Orthogonal matrices avoid the rank collapse issue in linear networks, thereby enabling depth-independent training convergence. Pennington et al. (2017) show that MLPs with sigmoidal activations achieve dynamical isometry when initialized with orthogonal weights. Similar benefits have been achieved by initializing CNNs with orthogonal or almost orthogonal kernels (Xiao et al., 2018; Mishkin and Matas, 2015), and by initializing RNN transition matrices with elements from the orthogonal and unitary ensembles (Arjovsky et al., 2016; Le et al., 2015; Henaff et al., 2016). Similarly, we use orthogonal random matrices to avoid gradient explosion. What sets our study apart from this literature is that our focus is on batch normalization and the issue of gradient explosion.
Networks with linear activation functions. Due to its analytical simplicity, the identity function has been widely used in theoretical studies for neural networks. Studies on identity activations date back at least two decades. Fukumizu (1998) studies batch gradient descent in linear neural networks and its effect on overfitting and generalization. Baldi and Hornik (1995) provide an overview of various theoretical manuscripts studying linear neural networks. Despite linearity, as Saxe et al. (2013, 2013) observe, the gradient dynamics in a linear MLP are highly nonlinear. In a line of work, Saxe et al. (2013, 2013) study the training dynamics of deep neural networks with identity activations and introduce the notion of dynamical isometry. Baldi and Hornik (1989) and Yun et al. (2017) study the mean squared error optimization landscape in linear MLPs. More recently, the optimum convergence rate of gradient descent in deep linear neural networks has been studied by Arora et al. (2018) and Shamir (2019). Du and Hu (2019) prove that under certain conditions on the model width and input degeneracy, linear MLPs with Xavier initialized weights (Glorot and Bengio, 2010) converge linearly to the global optimum. Akin to these studies, we also analyze networks with linear activations. However, batch normalization is a non-linear function, hence the network we study in this paper is a highly non-linear function of its inputs.
Mean field theory for random neural networks. The existing analyses for random networks often rely on mean-field regimes where the network width tends to infinity (Pennington et al., 2017; Yang et al., 2019; Li et al., 2022; Pennington and Worah, 2017). However, there is a discrepancy between mean-field regimes and the practical regime of finite width. While some analyses attempt to bridge this gap (Joudaki et al., 2023; Daneshmand et al., 2021), their results rely on technical assumptions that are hard to validate. In contrast, our non-asymptotic results hold for standard neural networks used in practice. Namely, our main assumption for avoiding rank collapse and gradient explosion is that samples in the input batch are not linearly dependent, which we will show is necessary. To go beyond mean-field regimes, we leverage recent theoretical advancements in Weingarten calculus (Weingarten, 1978; Collins, 2003; Banica et al., 2011; Collins and Sniady, 2006; Collins et al., 2022).
## 3 Main results
We will develop our theory by constructing networks that do not suffer from gradient explosion (Sec. 3.3) and still orthogonalize (Sec. 3.1). The construction is similar to the network studied by Daneshmand et al. (2021): an MLP with batch normalization and linear activations. Formally, let \(X_{\ell}\in\mathbb{R}^{d\times n}\) denote the representation of \(n\) samples in \(\mathbb{R}^{d}\) at layer \(\ell\), then
\[X_{\ell+1}=\textsc{Bn}(W_{\ell}X_{\ell}),\qquad\ell=0,\ldots,L, \tag{3}\]
where \(W_{\ell}\in\mathbb{R}^{d\times d}\) are random weights. Analogous to recent theoretical studies of batch normalization (Daneshmand et al., 2021, 2020), we define the BN operator \(\textsc{Bn}:\mathbb{R}^{d\times n}\to\mathbb{R}^{d\times n}\) as
\[\textsc{Bn}(X)=\text{diag}(XX^{\top})^{-\frac{1}{2}}X,\quad\textsc{Bn}(X)_{ij}=\frac{X_{ij}}{\sqrt{\sum_{k=1}^{n}X_{ik}^{2}}}. \tag{4}\]
Note that compared to the standard BN operator, mean reduction in equation 4 is omitted. Our motivation for this modification, similar to Daneshmand et al. (2021), is purely technical and to streamline our theory. We will experimentally show that using standard BN modules instead does not influence our results on gradient explosion and signal propagation (for more details see Figure G6). A second minor difference is
that in the denominator, we have omitted a \(\frac{1}{n}\) factor. However, this only amounts to a constant scaling of the representations and does not affect our results.
Compared to Daneshmand et al. (2021), we need two main modifications to avoid gradient explosion: (i) \(n=d\), and (ii) \(W_{\ell}\) are random _orthogonal_ matrices. More precisely, we assume the distribution of \(W_{\ell}\) is the Haar measure over the orthogonal group denoted by \(\mathbb{O}_{d}\)(Collins and Sniady, 2006). Such an initialization scheme is widely used in deep neural networks without batch normalization (Saxe et al., 2013; Xiao et al., 2018; Pennington et al., 2017). For MLP networks with BN, we prove such initialization avoids the issue of gradient explosion, while simultaneously orthogonalizing the inputs.
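As a concrete reference, here is a minimal NumPy sketch (ours, not the authors' released code) of the construction in equations 3 and 4 with \(n=d\) and Haar-distributed orthogonal weights.

```python
import numpy as np

def bn(X):
    """Equation 4: scale every row (feature) of X by its inverse norm
    (no mean subtraction, no 1/n factor)."""
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def haar_orthogonal(d, rng):
    """Haar-distributed orthogonal matrix via QR of a Gaussian matrix."""
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    return Q * np.sign(np.diag(R))   # fix column signs so that Q is Haar

def forward(X0, depth, rng):
    """Equation 3: X_{l+1} = BN(W_l X_l) with orthogonal W_l; returns all layers."""
    layers = [X0]
    for _ in range(depth):
        layers.append(bn(haar_orthogonal(X0.shape[0], rng) @ layers[-1]))
    return layers

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 32
    X0 = rng.standard_normal((d, d))        # a batch of n = d inputs
    Xs = forward(X0, depth=50, rng=rng)
    # after BN every row has unit norm, i.e. diag(X X^T) = 1
    print(Xs[-1].shape, np.allclose(np.diag(Xs[-1] @ Xs[-1].T), 1.0))
```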
### Tracking signal propagation via orthogonality
As discussed, batch normalization has an important orthogonalization bias that influences training. Without normalization layers, representations in many architectures face the issue of rank-collapse, which happens when network outputs become collinear for arbitrary inputs, hence their directions become insensitive to the changes in the input. In contrast, the outputs in networks with batch normalization become increasingly orthogonal through the layers, thereby enhancing the signal propagation in depth (Daneshmand et al., 2021). Thus, it is important to check whether the constructed network maintains the important property of orthogonalization.
Isometry gap.Our analysis relies on the notion of _isometry gap_, \(\phi:\mathbb{R}^{d\times d}\to\mathbb{R}\), introduced by Joudaki et al. (2023). Isometry gap is defined as
\[\phi(X)=-\log\left(\frac{\det(X^{\top}X)^{\frac{1}{d}}}{\frac{1}{d}\mathrm{Tr}(X^{\top}X)}\right). \tag{5}\]
One can readily check that \(\phi(X)\geq 0\) and it is zero when \(X\) is an orthogonal matrix, i.e., \(XX^{\top}=I_{d}\). The _isometry_ denoted by \(\mathcal{I}:\mathbb{R}^{d\times d}\to\mathbb{R}\) is defined as \(\mathcal{I}(X)=\exp(-\phi(X))\).
Geometric interpretation of isometry. While the formula for the isometry gap may seem enigmatic at first, it has a simple geometric interpretation. The determinant \(\det(X^{\top}X)=\det(X)^{2}\) is the squared volume of the parallelepiped spanned by the columns of \(X\), while \(\mathrm{Tr}(X^{\top}X)\) is the sum of the squared norms of the columns of \(X\). Thus, the ratio between the two provides a scale-invariant notion of volume and isometry. On the one hand, if there is any collinearity between the columns, the volume vanishes and the isometry gap is infinite, \(\phi(X)=\infty\). On the other hand, \(\phi(X)=0\) implies that \(X^{\top}X\) is a scaled identity matrix. We will prove that \(\phi\) serves as a Lyapunov function for the chain of hidden representations \(\{X_{\ell}\}_{\ell=0}^{\infty}\).
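A short sketch of the isometry gap in equation 5, using the scale-invariant \(\det(X^{\top}X)^{1/d}\) normalization; `slogdet` keeps the computation stable and returns \(-\infty\) for rank-deficient inputs, so \(\phi=\infty\) there:

```python
import numpy as np

def isometry_gap(X):
    """phi(X) = -log( det(X^T X)^(1/d) / ((1/d) tr(X^T X)) )."""
    d = X.shape[0]
    _, logdet = np.linalg.slogdet(X.T @ X)       # -inf for rank-deficient X
    return np.log(np.trace(X.T @ X) / d) - logdet / d

Q, _ = np.linalg.qr(np.random.randn(5, 5))
print(isometry_gap(Q))                # ~0 for an orthogonal matrix
print(isometry_gap(np.ones((5, 5))))  # inf for a rank-one matrix
```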
Theory for orthogonalization.The following theorem establishes the link between orthogonality of representations and depth.
**Theorem 1**.: _There is an absolute constant \(C\) such that for any layer \(\ell\leq L\) we have_
\[\mathbb{E}\,\phi(X_{\ell+1})\leq\phi(X_{0})e^{-\ell/k},\qquad\text{where}\qquad k:=Cd^{2}(1+d\phi(X_{0})). \tag{6}\]
Theorem 1 states that if the samples in the input batch are not linearly dependent, representations approach orthogonality at an exponential rate in depth. The orthogonalization in depth ensures the avoidance of the rank collapse of representations, which is a known barrier to training deep neural networks (Daneshmand et al., 2020; Saxe et al., 2013; Bajcuk et al., 2018).
Figure 1 compares the established theoretical decay rate of \(\phi\) with the practical rate. Interestingly, the plot confirms that the rate depends on width in practice, akin to the theoretical rate in Theorem 1.
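The qualitative behavior in Figure 1 can be reproduced with the pieces sketched above (the width, depth, and random input are illustrative choices):

```python
import numpy as np
from scipy.stats import ortho_group

bn = lambda X: X / np.linalg.norm(X, axis=1, keepdims=True)

def isometry_gap(X):
    d = X.shape[0]
    _, logdet = np.linalg.slogdet(X.T @ X)
    return np.log(np.trace(X.T @ X) / d) - logdet / d

d, depth = 16, 300
X = np.random.randn(d, d)
gaps = []
for _ in range(depth):
    X = bn(ortho_group.rvs(d) @ X)
    gaps.append(isometry_gap(X))
print(gaps[::60])   # the gap shrinks towards zero as depth grows
```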
It is worth mentioning that the condition that the input samples not be linearly dependent is necessary to establish this result. One can readily check that, starting from a rank-deficient input, neither matrix products nor batch-normalization operations can increase the rank of the representations. Since this assumption is quantitative, we can numerically verify it by randomly drawing many input mini-batches and checking whether they are linearly independent. For CIFAR10, CIFAR100, MNIST and FashionMNIST, we empirically verified that most batches across various batch sizes are full-rank (see Section D for details on the average rank of a batch in these datasets).
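The check is cheap in practice; a hedged sketch (the random array merely stands in for a flattened mini-batch of images):

```python
import numpy as np

def batch_is_full_rank(batch, tol=None):
    """Check that the n flattened samples (rows) are linearly independent."""
    B = batch.reshape(batch.shape[0], -1)
    return np.linalg.matrix_rank(B, tol=tol) == B.shape[0]

# Stand-in for a mini-batch of 100 flattened 3x32x32 images:
fake_batch = np.random.randn(100, 3 * 32 * 32)
print(batch_is_full_rank(fake_batch))   # True with probability 1
```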
Theorem 1 distinguishes itself from the existing orthogonalization results in the literature (Yang et al., 2019; Joudaki et al., 2023a) as it is non-asymptotic and holds for networks with finite width. Since practical networks have finite width and depth, non-asymptotic results are crucial for their applicability to real-world settings. While Daneshmand et al. (2021) provide a non-asymptotic bound for orthogonalization, the main result relies on an assumption that is hard to verify.
Proof idea of Theorem 1.: We leverage a recent result established by Joudaki et al. (2023a), proving that the isometry gap does not decrease with BN layers. For all non-degenerate matrices \(X\in\mathbb{R}^{d\times d}\), the following holds
\[\mathcal{I}(\text{Bn}(X))\geq\left(1+\frac{\text{variance}\{\|X_{j\cdot}\|\}_{j=1}^{d}}{(\text{mean}\{\|X_{j\cdot}\|\}_{j=1}^{d})^{2}}\right)\mathcal{I}(X),\]
where \(X_{j\cdot}\) denotes the \(j\)-th row of \(X\).
Using the above result, we can prove that matrix multiplication with orthogonal weights also does not decrease isometry as stated in the next lemma.
**Lemma 2** (Isometry after rotation).: _Let \(X\in\mathbb{R}^{d\times d}\) and \(W\in\mathbb{R}^{d\times d}\) be an orthogonal matrix and \(X^{\prime}=WX\); then,_
\[\mathcal{I}(\text{Bn}(X^{\prime}))\geq\left(1+\frac{\text{variance}\{\|X^{\prime}_{j\cdot}\|\}_{j=1}^{d}}{(\text{mean}\{\|X^{\prime}_{j\cdot}\|\}_{j=1}^{d})^{2}}\right)\mathcal{I}(X). \tag{7}\]
It is straightforward to check that there exists at least one orthogonal matrix \(W\) for which \(\mathcal{I}(\text{Bn}(WX))=1\) (see Corollary A.3). Thus, \(\mathcal{I}(\cdot)\) strictly increases for some weight matrices, as long as \(X\) is not orthogonal. When the distribution of \(W\) is the Haar measure over the orthogonal group, we can leverage recent developments in Weingarten calculus (Weingarten, 1978; Banica et al., 2011; Collins and Sniady, 2006; Collins et al., 2022) to calculate a rate for the isometry increase in expectation:
**Theorem 3**.: _Suppose \(W\sim\mathbb{O}_{d}\) is a matrix drawn from \(\mathbb{O}_{d}\) such that the distribution of \(W\) and \(UW\) are the same for all orthogonal matrices \(U\). Let \(\{\lambda_{i}\}_{i=1}^{d}\) be the eigenvalues of \(XX^{\top}\). Then,_
\[\mathbb{E}_{W}\left[\mathcal{I}(\text{Bn}(WX))\right]\geq\left(1-\frac{\sum_{ k=1}^{d}(\lambda_{k}-1)^{2}}{2d^{2}(d+2)}\right)^{-1}\mathcal{I}(X) \tag{8}\]
_holds for all \(X\) in the range of \(\text{Bn}(\cdot)\), with equality for orthogonal matrices._
The structure induced in \(X\) by BN ensures that its eigenvalues lie in the interval \((0,1]\), so that the multiplicative factor in the above inequality is always greater than one. In other words, \(\mathcal{I}(\cdot)\) increases in expectation by a constant factor that depends on how close \(X\) is to an orthogonal matrix.
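A quick Monte Carlo sanity check of Theorem 3 is straightforward (the width and sample count are illustrative; `bn` is the row-normalizing operator from equation 4):

```python
import numpy as np
from scipy.stats import ortho_group

bn = lambda X: X / np.linalg.norm(X, axis=1, keepdims=True)

def isometry(X):
    """I(X) = det(XX^T)^(1/d) / (trace(XX^T)/d)."""
    d = X.shape[0]
    _, logdet = np.linalg.slogdet(X @ X.T)
    return np.exp(logdet / d) / (np.trace(X @ X.T) / d)

d = 8
X = bn(np.random.randn(d, d))                  # a point in the range of Bn
lam = np.linalg.eigvalsh(X @ X.T)              # eigenvalues of XX^T
bound = isometry(X) / (1 - np.sum((lam - 1) ** 2) / (2 * d ** 2 * (d + 2)))
mc = np.mean([isometry(bn(ortho_group.rvs(d) @ X)) for _ in range(2000)])
print(f"Monte Carlo E[I(Bn(WX))] = {mc:.4f}, lower bound = {bound:.4f}")
```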
The connection between Theorem 3 and the main isometry gap bound stated in Theorem 1 is established in the following Corollary (recall \(\phi=-\log\mathcal{I}\)).
**Corollary 4** (Isometry gap bound).: _Suppose the same setup as in Theorem 3, where \(X^{\prime}=WX\). Then, we have:_
\[\mathbb{E}_{W}[\phi(X^{\prime})|X]\leq\phi(X)+\log\left(1-\frac{\sum_{k}( \lambda_{k}-1)^{2}}{2d^{2}(d+2)}\right). \tag{9}\]
Notice that the term \(\frac{\sum_{k=1}^{d}(\lambda_{k}-1)^{2}}{2d^{2}(d+2)}=\mathcal{O}(\frac{1}{d})\), which yields \(\log\left[1-\frac{\sum_{k=1}^{d}(\lambda_{k}-1)^{2}}{2d^{2}(d+2)}\right]\leq 0\). The rest of the proof is based on an induction over the layers, presented in Appendices A and B.
Figure 1: Isometry gap (y-axis, log-scale) in depth for an MLP with orthogonal weights, over randomly generated data. As predicted by Theorem 1, isometry gap of representations vanishes at an exponential rate. The solid traces are averaged over 10 independent runs, and the dashed traces show the theoretical prediction from Theorem 1.
### Orthogonalization and gradient explosion
There is a subtle connection between orthogonalization and gradient explosion. Suppose the input batch is rank-deficient, i.e., degenerate. As elaborated above, since all operations in our MLP can be formulated as matrix products, they cannot recover the rank of the representations, which thus remain degenerate. By perturbing the input such that it becomes full-rank, the output matrix becomes orthogonal, hence non-degenerate at an exponential rate in depth as proven in Theorem 1.
Thus, a slight change in the inputs leads to a significant change in the outputs, from degeneracy to orthogonality. Considering that the gradient measures changes in the loss for infinitesimal input changes, such large changes in the outputs can lead to gradient explosion. While this is only an intuitive argument, we observe that in practice the gradient does explode for degenerate inputs, as shown in Figure 2.
Nonetheless, in Figure 2 we observe that for non-degenerate inputs the gradient norm does not explode. In fact, we observe that inputs are often non-degenerate in practice (see Table D1 for details). Thus, an important question is whether the gradient norm remains bounded for non-degenerate input batches. Remarkably, we cannot empirically verify that the gradient norm remains bounded for _all_ non-degenerate inputs. Therefore, a theoretical guarantee is necessary to ensure that gradient explosion is avoided.
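A PyTorch sketch of the experiment behind Figure 2 follows: compare the first-layer gradient norm at initialization for a degenerate batch (one sample repeated) against a generic full-rank batch. The depth, width, and the stand-in loss are illustrative choices, not the paper's exact setup:

```python
import torch

def first_layer_grad_norm(X, depth=50):
    """Gradient norm of the first weight matrix in an MLP with BN (no mean
    reduction) and (approximately Haar) orthogonal weights from QR."""
    d = X.shape[0]
    Ws = [torch.nn.Parameter(torch.linalg.qr(torch.randn(d, d))[0]) for _ in range(depth)]
    H = X
    for W in Ws:
        H = W @ H
        H = H / H.pow(2).sum(dim=1, keepdim=True).sqrt()   # row-normalizing Bn
    loss = H.mean()                                        # stand-in O(1)-Lipschitz loss
    loss.backward()
    return Ws[0].grad.norm().item()

d = 32
x = torch.randn(d, 1)
degenerate = x.repeat(1, d)            # rank-one batch: the same sample repeated
nondegenerate = torch.randn(d, d)      # full-rank batch with probability 1
print(first_layer_grad_norm(degenerate))     # typically very large / overflows
print(first_layer_grad_norm(nondegenerate))  # stays moderate
```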
### Avoiding gradient explosion in depth
So far, we have proven that the constructed network maintains the orthogonalization property of BN. Now, we turn our focus to the gradient analysis. The next theorem proves that the constructed network does not suffer from gradient explosion in depth for non-degenerate input matrices.
**Theorem 5**.: _Let the loss function \(\mathcal{L}:\mathbb{R}^{d\times d}\to\mathbb{R}\) be \(\mathcal{O}(1)\)-Lipschitz, and input batch \(X_{0}\) be non-degenerate. Then, there exists an absolute constant \(C\) such that for all \(\ell\leq L\) it holds_
\[\mathbb{E}\left[\log\|\nabla_{W_{\ell}}\mathcal{L}(X_{\ell})\|\right]\leq Cd ^{5}(\phi(X_{0})^{3}+1) \tag{10}\]
_where the expectation is over the random orthogonal weight matrices._
**Remark 1**.: _For degenerate inputs, \(\phi(X_{0})=\infty\) holds, so the bound becomes vacuous._
**Remark 2**.: _The \(\mathcal{O}(1)\)-Lipschitz condition holds in many practical settings. For example, in a classification setting, MSE and cross entropy losses obey the \(\mathcal{O}(1)\)-Lipschitz condition (see Lemma C.2)._
Note that the bound is stated for the expected value of the log-norm of the gradients, which can be interpreted as the bits of precision needed to store the gradient matrices. Thus, the fact that depth does not appear in any form in the upper bound of Theorem 5 implies that training arbitrarily deep MLPs with orthogonal weights will not face the numerical issues that arise with Gaussian weights (Yang et al., 2019), as long as the inputs are non-degenerate. Such guarantees are necessary to ensure that backpropagation does not face numerical issues.
Theorem 5 states that as long as the input samples are not linearly dependent, the gradients remain bounded for any arbitrary depth \(L\). As discussed in the previous section and evidenced in Figure 2, this is necessary to avoid gradient explosion. Therefore, the upper bound provided in Theorem 5 is tight in terms of input constraints. Furthermore, as mentioned before, random batches sampled from commonly used benchmarks, such as CIFAR10, CIFAR100, MNIST, and FashionMNIST, are non-degenerate in most practical cases (see Section D for more details). Thus, the assumptions, and thereby the assertions, of the theorem are valid for all practical purposes.
To the best of our knowledge, Theorem 5 is the first non-asymptotic gradient analysis that holds for networks with batch normalization and finite width. Previous results heavily rely on mean field analyses in asymptotic regimes, where the network width tends to infinity (Yang et al., 2019). While mean-field analyses have
Figure 2: Logarithmic plot for the gradient norm of the first layer for networks with different number of layers evaluated on degenerate (orange) and non-degenerate (blue) inputs. The degenerate inputs contain repeated samples from CIFAR10 in the batch, measured at initialization for MLPs of various depths. While gradients explode for degenerate inputs, there is no explosion for non-degenerate inputs. Traces are averaged over 10 independent runs.
brought many insights about the rate of gradient explosion, they are often specific to Gaussian weights. Here, we show that non-Gaussian weights can avoid gradient explosion, which has previously been considered "unavoidable" (Yang et al., 2019). Figure 3 illustrates this pronounced discrepancy.
Proof idea of Theorem 5.: The first important observation is that, due to the chain rule, we can bound the log-norm of the gradient of a composition of functions by bounding the sum of the log-norms of the input-output Jacobians of each layer, plus two additional terms corresponding to the loss and to the gradient of the first layer in the chain. If we discount the effect of the first and last terms, the bulk of the analysis is dedicated to bounding the total sum of log-norms of the per-layer input-output Jacobians, i.e., of the fully connected and batch normalization layers. The second observation is that because the weights are rotations, their Jacobians have singular values equal to 1. Thus, the log-norms of the gradients corresponding to fully connected layers vanish. What remains is to show that for any arbitrary depth \(\ell\), the log-norm of the gradients of the batch normalization layers also remains bounded. The main technical novelty for proving this step is showing that the log-norm of the gradient of Bn layers is upper bounded by the isometry gap of the pre-normalization matrices. Thus, we can invoke the exponential decay of the isometry gap stated in Theorem 1 to establish a bound on the log-norm of the gradient of these layers. Finally, since the decay of the isometry gap is exponentially fast, the bound on the total sum of log-norms of the gradients amounts to a geometric sum that remains bounded for any arbitrary depth \(\ell\).
## 4 Implications on training
In this section, we experimentally validate the benefits of avoiding gradient explosion and rank collapse for training. Thus far, we have proved that our constructed neural network with BN does not suffer from gradient explosion in Theorem 5, and does not have the rank collapse issue in depth via the orthogonalization property established in Theorem 1. We find that the constructed MLP is therefore less prone to numerical issues that arise when training deep networks.
Figure 4: Contrasting the training accuracy of MLPs with BN and shaped sin, shaped tanh and identity activations, on the CIFAR10 dataset. The identity activation performs much worse than the nonlinearities, confirming that the sin and tanh networks are not operating in the linear regime. The networks are trained with vanilla SGD and the hyperparameters are width 100, batch size 100, learning rate 0.001.
Figure 3: Logarithmic plot for the gradient norm of the first layer for networks with different number of layers evaluated on CIFAR10. For Gaussian weights (orange) the gradient-norm grows at an exponential rate, as predicted by Yang et al. (2019, Theorem 3.9), while for orthogonal weights (blue) gradients remain bounded by a constant, validating Theorem 5. Traces are averaged over 10 runs and shaded regions denote the 95% confidence intervals.
We observe that the convergence of SGD is largely independent of depth for linear MLPs. In other words, the number of iterations needed to reach a certain accuracy does not vary widely between networks of different depths. Figure 4 (c) shows the convergence of SGD for CIFAR10, with learning rate 0.001, for MLPs with width \(d=100\) and batch size \(n=100.\) While the SGD trajectory strongly diverges from the initial conditions that we analyze theoretically, Figure 4 shows that the gradients remain stable during training, and that different depths exhibit largely similar accuracy curves.
While the empirical evidence for our MLP with linear activations is encouraging, non-linear activations are an essential part of feature learning (Nair and Hinton, 2010; Klambauer et al., 2017; Hendrycks and Gimpel, 2016; Maas et al., 2013). However, introducing non-linearity violates one of the key parts of our theory, in that it prevents representations from reaching perfect isometry (see Figure G5 in the Appendix for details on the connection between non-linearities and gradient explosion in depth). Intuitively, this is because non-linear layers, as opposed to rotations and batch normalization, perturb the isometry of representations and prevent them from reaching zero isometry gap in depth. This problem turns out to be not just a theoretical nuisance, but to play a direct role in the gradient explosion behavior. While the situation may seem hopeless at first, it turns out that activation shaping (Li et al., 2022; Zhang et al., 2022; Martens et al., 2021; He et al., 2023; Noci et al., 2023) can alleviate this problem, as discussed next. For the remainder of this section, we focus on the training of MLPs with non-linear activations, as well as standard batch normalization and fully connected layers.
## 5 Activation shaping based on the theoretical analysis
In recent years, several works have attempted to overcome the challenges of training very deep networks by parameterizing activation functions. In a seminal work, Martens et al. (2021) propose _deep kernel shaping_, which is aimed at facilitating the training of deep networks without relying on skip connections or normalization layers, and was later extended to LeakyReLU in _tailored activation transformations_ (Zhang et al., 2022). In a similar direction, Li et al. (2022) propose _activation shaping_ in order to avoid a degenerate output covariance: they shape the negative slope of LeakyReLU towards the identity to ensure that the output covariance matrix remains non-degenerate as the network becomes very deep.
Since kernel and activation shaping aim to replace normalization, they have not been used in conjunction with normalization layers. In fact, in networks with batch normalization, even linear activations have non-degenerate outputs (Daneshmand et al., 2021; Yang et al., 2019) and exploding gradients (Yang et al., 2019). Thus, shaping activations towards identity in the presence of normalization layers may seem fruitless. Remarkably, we empirically demonstrate that we can leverage activation shaping to avoid gradient explosion in depth by using a pre-activation gain at each layer.
Inspired by our theory, we develop a novel activation shaping scheme for networks with BN. The main strategy consists of shaping the activation function towards a linear function across the layers. Our activation shaping consists of tuning the gain of the activation, i.e., tuning \(\alpha\) for \(\sigma(\alpha x).\) We consider non-linear activations \(\sigma\in\{\tanh,\sin\}.\)
The special property that both \(\tanh\) and \(\sin\) activations have in common is that they are centered, \(\sigma(0)=0\), have unit slope at the origin, \(\sigma^{\prime}(0)=1\), and have bounded derivatives, \(|\sigma^{\prime}(x)|\leq 1\) for all \(x\). Therefore, by tuning the per-layer pre-activation gain \(\alpha_{\ell}\) towards \(0\), the non-linearities behave akin to the identity function. This
Figure 5: Logarithmic plot contrasting the effect of gain on the gradient at initialization of the first layer, for networks with different number of layers initialized with orthogonal weights, BN and different activations, evaluated on CIFAR10. The networks have hyperparameters width 100, batch size 100. Traces are averaged over 10 independent runs, with the shades showing the 95% confidence interval.
observation inspires us to study the per-layer rate of gradient explosion as a function of the gain parameter \(\alpha_{\ell}\). Formally, we consider an MLP with shaped activations using gain \(\alpha_{\ell}\) for the \(\ell\)th layer, with the update rule
\[X_{\ell+1}=\sigma(\alpha_{\ell}\text{Bn}(W_{\ell}X_{\ell})). \tag{11}\]
Since the gradient norm has an exponential growth in depth, as shown in Figure 5, we can compute the slope of the linear growth rate of log-norm of gradients in depth. We define the rate of explosion for a model of depth \(L\) and gain \(\alpha_{\ell}\) at layer \(\ell\) as the slope of the log norm of the gradients \(R(\ell,\alpha_{\ell})\). We show in Figure 5 that by tuning the gain properly, we are able to reduce the exponential rate of the log-norm of the gradients by diminishing the slope of the rate curve and achieve networks trainable at arbitrary depths, while still maintaining the benefits of the non-linear activation. The main idea for our activation shaping strategy is to have a bounded total sum of rates across layers, by ensuring faster decay than a harmonic series (see App. E for more details on activation shaping). Figure 5 illustrates that this activation shaping strategy effectively avoids gradient explosion while maintaining the signal propagation and orthogonality of the outputs in depth. Furthermore, Figure 4 shows that the training accuracy remains largely depth-independent. For further experiments using activation shaping, see Appendix G.
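A sketch of the shaped forward pass in equation 11 is shown below; the polynomial gain schedule is an illustrative choice that decays fast enough for the total sum of per-layer rates to stay bounded, not necessarily the exact schedule from Appendix E:

```python
import torch

def shaped_forward(X, depth, act=torch.tanh, p=0.6):
    """X_{l+1} = act(alpha_l * Bn(W_l X_l)) with orthogonal W_l.
    alpha_l = (l + 1) ** (-p): a heuristic schedule; if the per-layer explosion
    rate shrinks roughly like alpha_l^2, any p > 0.5 keeps the total sum bounded."""
    d = X.shape[0]
    H = X
    for l in range(depth):
        W = torch.linalg.qr(torch.randn(d, d))[0]          # approximately Haar orthogonal
        H = W @ H
        H = H / H.pow(2).sum(dim=1, keepdim=True).sqrt()   # row-normalizing Bn
        H = act((l + 1) ** (-p) * H)
    return H

out = shaped_forward(torch.randn(64, 64), depth=200)
```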
## 6 Discussion
Implicit bias of SGD towards orthogonality in optimization. Optimization over orthogonal matrices has been an effective approach for training deep neural networks. Enforcing orthogonality during training ensures that the spectrum of the weight matrices remains bounded, which prevents gradient vanishing and explosion in depth. Vorontsov et al. (2017) study how different orthogonality constraints affect training performance in RNNs. For example, Lezcano-Casado and Martinez-Rubio (2019) leverage the exponential map on the orthogonal group, Jose et al. (2018) decompose RNN transition matrices into Kronecker factors and impose soft constraints on each factor, and Mhammedi et al. (2017) introduce a constraint based on Householder matrices.
While these studies _enforce_ orthogonality constraints, one of our most striking empirical observations is that when our MLP grows very deep, the middle layers remain almost orthogonal even after many steps of SGD. As shown in Figure 6, for 1000-layer networks, the middle layers remain orthogonal during training. One could hypothesize that this is due to small gradients in these layers. However, in Figure 6, we observe that the gradients of these middle layers are not negligible. Thus, in our MLP construction, both with linear activation and with activation shaping, the gradient dynamics have an _implicit bias_ to optimize over the space of orthogonal matrices. The mechanisms underlying this implicit orthogonality bias are an ample direction for future research.
#### Author Contributions
Alexandru Meterez: proofs of Section A, driving experiments, and writing the paper. Amir Joudaki: proofs of Section B and Section C, designing the activation shaping scheme, and writing the paper. Francesco Orabona:
Figure 6: **Implicit orthogonality bias of SGD.** Training an MLP with width \(d=100\), batch size \(n=100\), and depth \(L=1000\), activation tanh, using SGD with lr = 0.001. (a) Isometry gap (y-axis; log-scale) of weight matrices across all layers throughout training. (b) Gradient norms at each layer during training.
proposing the idea of using orthogonal weights to achieve perfect isometry, reading the proofs, help with writing. Alexander Immer: reading the proofs, help with experiments and paper writing. Gunnar Ratsch: help with experimental designs for activation shaping and paper writing. Hadi Daneshmand: proposed using orthogonal weights to avoid gradient explosion, leading the proofs, help with paper writing.
#### Acknowledgments
Amir Joudaki is funded through Swiss National Science Foundation Project Grant #200550 to Andre Kahles. We acknowledge support from the NSF TRIPODS program (award DMS-2022448). Alexander Immer acknowledges support from the Max Planck ETH Center for Learning Systems. Amir Joudaki and Alexander Immer were partially funded by ETH Core funding (to G.R.).
|
2307.08696
|
A Note on Reproducing Kernels for Sobolev Spaces
|
In this note, we compute the reproducing kernel for the RKHS of functions on
$\mathbb{R}^n$ in a sufficiently high Sobolev norm.
|
Steven Rosenberg
|
2023-07-17T17:57:42Z
|
http://arxiv.org/abs/2307.08696v1
|
# A note on reproducing kernels for Sobolev spaces
###### Abstract.
In this note, we compute the reproducing kernel for the RKHS of functions on \(\mathbb{R}^{n}\) in a sufficiently high Sobolev norm.
## 1. Introduction
An RKHS is a Hilbert space of real-valued functions \(\mathcal{H}\subset\{f:M\to\mathbb{R}\}\) for some topological space \(M\) with the property that the evaluation/delta functions \(\delta_{x}(f)=f(x)\) are continuous for all \(x\in M.\) Thus there exists \(d_{x}\in\mathcal{H}\) such that \(\delta_{x}(f)=\langle d_{x},f\rangle_{\mathcal{H}}\). For \(M\) an \(n\)-dimensional closed manifold, the Sobolev space \(H_{s}(M)\) is an RKHS for integers \(s>\dim(M)/2.\) In this note, we explicitly compute \(d_{x}\) for \(M=\mathbb{R}^{n}\). This produces \(d_{x}\) on a general manifold via partition of unity, but the explicitness is lost.
## 2. The Computation
Let \(N(x,\sigma)(y)\) be the multivariate normal density with mean \(x\) and covariance \(\sigma\cdot\mathrm{Id}.\) For \(\sigma\approx 0\), \(N(x,\sigma)\) acts like a delta function: for \(f\in H_{s}(\mathbb{R}^{n}),\)
\[\lim_{\sigma\to 0}\int_{\mathbb{R}^{n}}N(x,\sigma)(y)f(y)\,dy=f(x).\]
This uses that \(f\) is continuous and that, by the Sobolev embedding theorem, the delta function is a continuous functional on \(H_{s}\). Hence \(\delta_{x}\in H_{-s}\), and \(\lim_{\sigma\to 0}N(x,\sigma)=\delta_{x}\) in \(H_{-s}(\mathbb{R}^{n}).\) Therefore
\[\lim_{\sigma\to 0}\int_{\mathbb{R}^{n}}N(x,\sigma)(y)f(y)\,dy =f(x)=\langle d_{x},f\rangle_{s}=\int_{\mathbb{R}^{n}}\widehat{d_{x}}(\xi)\hat{f}(\xi)(1+|\xi|^{2})^{s}d\xi\] \[=\int_{\mathbb{R}^{n}}f(y)\,\mathcal{F}^{-1}\left(\widehat{d_{x}}(1+|\xi|^{2})^{s}\right)(y)\,dy,\]
with \(\mathcal{F}^{-1}\) the inverse Fourier transform. (This uses the fact that the Fourier transform is a bijection on \(H_{-s}\).)
Thus \(\delta_{x}=\mathcal{F}^{-1}\left(\widehat{d_{x}}(1+|\xi|^{2})^{s}\right)\in H_ {-s}(\mathbb{R}^{n}).\) This implies \(\widehat{d_{x}}=\mathcal{F}(\delta_{x})(1+|\xi|^{2})^{-s}\). Using \(\mathcal{F}^{-1}(\mathcal{F}(f)g)=f*\mathcal{F}^{-1}(g)\), where \(*\) is convolution, we get
\[d_{x}(y) =\left(\delta_{x}*\mathcal{F}^{-1}\left((1+|\xi|^{2})^{-s}\right)\right)(y)=\lim_{\sigma\to 0}\int_{\mathbb{R}^{n}}N(x,\sigma)(z)\,\mathcal{F}^{-1}\left((1+|\xi|^{2})^{-s}\right)(y-z)\,dz\] \[=\mathcal{F}^{-1}\left((1+|\xi|^{2})^{-s}\right)(y-x)=\frac{1}{(2\pi)^{n}}\int_{\mathbb{R}^{n}}e^{i(y-x)\cdot\xi}(1+|\xi|^{2})^{-s}d\xi\] \[=\frac{1}{(2\pi)^{n}}\int_{\mathbb{R}^{n}}\cos((x-y)\cdot\xi)(1+|\xi|^{2})^{-s}d\xi,\]
since \(\sin\) is an odd function.
We begin with the case \(n=1\), so we are considering \(H_{s}(\mathbb{R}).\)
**Lemma 2.1**.: \[d_{x}(y)=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{i|x-y|\xi}(1+\xi^{2})^{-s}d \xi=\frac{e^{-|x-y|}|x-y|^{s-1}}{2^{s}(s-1)!}\sum_{k=0}^{s-1}\frac{(s+k-1)!}{k! (s-k-1)!}(2|x-y|)^{-k}.\]
In particular,
\[d_{x}(x)=\binom{2s-2}{s-1}2^{-2s+1}.\]
We can use \(|x-y|\) in place of \(x-y\) since \(\cos\) is an even function.
Proof.: Fix \(a\in\mathbb{R}\). We compute
\[\lim_{R\to\infty}\oint_{C_{R}}e^{iaz}(1+z^{2})^{-s},\]
where \(C_{R}\) is the contour given by going along the \(x\)-axis from \(-R\) to \(R\) and then the semicircle in the upper half plane centered at the origin and radius \(R\), traveled counterclockwise. By the Jordan Lemma, the integral over the semicircle goes to zero as \(R\to\infty\), so
\[\lim_{R\to\infty}\oint_{C_{R}}e^{iaz}(1+z^{2})^{-s}=\int_{-\infty}^{\infty}e^{ iax}(1+x^{2})^{-s}dx,\]
the integral we want.
The only pole of the integrand inside the contour is at \(z=i\), where \((1+z^{2})^{-s}=(i+z)^{-s}(-i+z)^{-s}\) contributes a pole of order \(s\). By the Cauchy residue formula and the standard formula for computing residues,
\[\oint_{C_{R}}e^{iaz}(1+z^{2})^{-s}\] \[=2\pi i\operatorname{Res}_{z=i}e^{iaz}(1+z^{2})^{-s}\] \[=2\pi i\frac{1}{(s-1)!}\frac{d^{s-1}}{dz^{s-1}}\bigg{|}_{z=i}(z-i)^{s}(1+z^{2})^{-s}e^{iaz}\] \[=2\pi i\frac{1}{(s-1)!}\frac{d^{s-1}}{dz^{s-1}}\bigg{|}_{z=i}(z+i)^{-s}e^{iaz}\] \[=2\pi i\frac{1}{(s-1)!}\sum_{k=0}^{s-1}\binom{s-1}{k}(-1)^{k}s(s+1)\cdots(s+k-1)(2i)^{-s-k}e^{-a}(ia)^{s-1-k}\] \[=2\pi ie^{-a}\frac{1}{(s-1)!}\sum_{k=0}^{s-1}(-1)^{k}\frac{(s-1)!}{k!(s-k-1)!}\frac{(s+k-1)!}{(s-1)!}i^{-1-2k}2^{-s-k}a^{s-1-k}\] \[=\frac{2\pi e^{-a}}{(s-1)!}\sum_{k=0}^{s-1}\frac{(s+k-1)!}{k!(s-k-1)!}2^{-s-k}a^{s-1-k}\] \[=\frac{\pi e^{-a}a^{s-1}}{2^{s-1}(s-1)!}\sum_{k=0}^{s-1}\frac{(s+k-1)!}{k!(s-k-1)!}2^{-k}a^{-k}.\]
Letting \(R\to\infty\), replacing \(a\) by \(|x-y|\) (since \(\cos\) is even), and remembering to divide by \(2\pi\) in the definition of the Fourier transform, we get
\[d_{x}(y)=\frac{e^{-|x-y|}|x-y|^{s-1}}{2^{s}(s-1)!}\sum_{k=0}^{s-1}\frac{(s+k- 1)!}{k!(s-k-1)!}(2|x-y|)^{-k}.\]
Note that for \(x\to y\), we get a nonzero contribution only for \(k=s-1\), so the right hand side equals
\[\binom{2s-2}{s-1}2^{-2s+1}.\]
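As a quick illustrative sanity check of Lemma 2.1 (not part of the note itself), one can compare the closed form against numerical quadrature of the Fourier integral:

```python
import math
from scipy.integrate import quad

def kernel_closed_form(a, s):
    """Right-hand side of Lemma 2.1 evaluated at a = |x - y| > 0."""
    pref = math.exp(-a) * a ** (s - 1) / (2 ** s * math.factorial(s - 1))
    series = sum(
        math.factorial(s + k - 1) / (math.factorial(k) * math.factorial(s - k - 1))
        * (2 * a) ** (-k)
        for k in range(s)
    )
    return pref * series

def kernel_quadrature(a, s):
    """(1/2pi) * integral over R of cos(a*xi) * (1 + xi^2)^(-s) d(xi)."""
    val, _ = quad(lambda xi: math.cos(a * xi) / (1 + xi ** 2) ** s, 0, math.inf)
    return val / math.pi   # even integrand: integral over R = 2 * integral over [0, inf)

for s in (2, 3, 4):
    print(s, kernel_closed_form(0.7, s), kernel_quadrature(0.7, s))  # should agree
```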
**Remark 2.1**.: The modified Bessel function of the second kind satisfies
\[K_{\nu}(z)=\frac{\Gamma(\nu+1/2)(2z)^{\nu}}{\sqrt{\pi}}\int_{0}^{\infty}\cos(at)a ^{-2\nu}\left(t^{2}+\left(\frac{z}{a}\right)^{2}\right)^{-\nu-1/2}dt,\]
for \(a>0\) and \(Re(\nu+1/2)>0.\) Combining this formula with the half-integer formula
\[K_{(s-1)+1/2}(\nu)=\sqrt{\pi/2\nu}\cdot e^{-\nu}\sum_{k=0}^{s-1}\frac{(s-1+k)!} {k!(s-1-k)!(2\nu)^{k}}\]
[1, p. 80, Eq. (12) and p. 172, Eq. (1)] gives another proof of Lemma 2.1.
Now we consider the case of \(H_{s}(\mathbb{R}^{n},\mathbb{R}).\) Here
\[d_{x}(y)=\frac{1}{(2\pi)^{n}}\int_{\mathbb{R}^{n}}e^{i(x-y)\cdot\xi}\left(1+| \xi|^{2}\right)^{-s}d\xi. \tag{1}\]
Assume \(x\neq y.\) Find \(B\in SO(n)\) with \(Be_{n}=\frac{x-y}{|x-y|}.\) Then
\[\begin{split}&\int_{\mathbb{R}^{n}}e^{i(x-y)\cdot\xi}\left(1+| \xi|^{2}\right)^{-s}d\xi\\ &=\int_{\mathbb{R}^{n}}e^{i(x-y)\cdot B(\xi)}\left(1+|B(\xi)|^{2 }\right)^{-s}\det(B)d\xi=\int_{\mathbb{R}^{n}}e^{i(B^{-1}(x-y))\cdot\xi}\left( 1+|\xi|^{2}\right)^{-s}d\xi\\ &=\int_{\mathbb{R}^{n}}e^{i|x-y|\xi_{n}}\left(1+|\xi|^{2}\right)^ {-s}d\xi\end{split} \tag{2}\]
\[\begin{split}&=\int_{\mathbb{R}}e^{i|x-y|\xi_{n}}\left(\int_{\mathbb{R}^{n-1}}\left(1+\xi_{n}^{2}+\xi_{1}^{2}+\ldots+\xi_{n-1}^{2}\right)^{-s}d\xi_{1}\ldots d\xi_{n-1}\right)d\xi_{n}\\ &=\int_{\mathbb{R}}e^{i|x-y|\xi_{n}}\left(\int_{\mathbb{R}^{n-1}}\left(1+\xi_{n}^{2}\right)^{-s}\left(1+\frac{\xi_{1}^{2}}{1+\xi_{n}^{2}}+\ldots+\frac{\xi_{n-1}^{2}}{1+\xi_{n}^{2}}\right)^{-s}d\xi_{1}\ldots d\xi_{n-1}\right)d\xi_{n}\\ &\overset{\xi_{i}\mapsto\xi_{i}(1+\xi_{n}^{2})^{-1/2}}{=}\int_{\mathbb{R}}e^{i|x-y|\xi_{n}}\left(1+\xi_{n}^{2}\right)^{-s+(n-1)/2}\left(\int_{\mathbb{R}^{n-1}}\left(1+\xi_{1}^{2}+\ldots+\xi_{n-1}^{2}\right)^{-s}d\xi_{1}\ldots d\xi_{n-1}\right)d\xi_{n}\\ &=\left(\int_{\mathbb{R}}e^{i|x-y|\xi_{n}}\left(1+\xi_{n}^{2}\right)^{-s+(n-1)/2}d\xi_{n}\right)\left(\int_{0}^{\infty}\int_{S^{n-2}}(1+r^{2})^{-s}r^{n-2}\,dr\,d\theta_{1}\ldots d\theta_{n-2}\right)\\ &=\left(\int_{\mathbb{R}}e^{i|x-y|\xi_{n}}\left(1+\xi_{n}^{2}\right)^{-s+(n-1)/2}d\xi_{n}\right)\left(\int_{0}^{\infty}(1+r^{2})^{-s}r^{n-2}dr\right)\operatorname{vol}(S^{n-2})\\ &=\left(\int_{\mathbb{R}}e^{i|x-y|\xi_{n}}\left(1+\xi_{n}^{2}\right)^{-s+(n-1)/2}d\xi_{n}\right)\left(\int_{0}^{\infty}(1+r^{2})^{-s}r^{n-2}dr\right)\frac{2\pi^{\frac{n-1}{2}}}{\Gamma\left(\frac{n-1}{2}\right)}.\end{split}\]
The first term on the last line is computed in Lemma 2.1, and is valid for \(-s+(n-1)/2<-1,\) i.e. for
\[s>(n+1)/2.\]
So we have to calculate \(\int_{0}^{\infty}(1+r^{2})^{-s}r^{n-2}dr.\)
**Case I:**\(n\) odd
We use integration by parts repeatedly:
\[\int_{0}^{\infty}(1+r^{2})^{-s}r^{n-2}dr\] \[=\int_{0}^{\infty}\left[(1+r^{2})^{-s}\cdot\frac{2r}{-s+1}\right]r^ {n-3}dr\left(\frac{-s+1}{2}\right)\] \[=-\int_{0}^{\infty}(1+r^{2})^{-s+1}r^{n-4}dr\cdot\left(\frac{-s+ 1}{2}\right)(n-3)\] (Step 1) \[=-\int_{0}^{\infty}\left[(1+r^{2})^{-s+1}\cdot\frac{2r}{-s+2} \right]r^{n-5}dr\cdot\frac{(-s+2)(-s+1)}{2^{2}}\cdot(n-3)\] \[=\int_{0}^{\infty}(1+r^{2})^{-s+2}r^{n-6}dr\cdot\frac{(-s+2)(-s+ 1)}{2^{2}}\cdot(n-3)(n-5)\] (Step 2).
To get the term \(r^{1}\) in the integrand, we need \(\frac{n-3}{2}\) steps. (After the \(b^{\rm th}\) step the exponent of \(r\) is \(n-2-2b\), so solve \(2b+2=n-1\) for \(b\).) At Step \(\frac{n-3}{2}\), the sign in front of the integral is \((-1)^{(n-3)/2}\), the exponent of \(1+r^{2}\) is \(-s+\frac{n-3}{2}=\frac{-2s+n-3}{2}\), and the constant after the integral is
\[\frac{\left(-s+\frac{n-3}{2}\right)\left(-s+\frac{n-5}{2}\right)\cdot\ldots \cdot(-s+1)(n-3)(n-5)\cdot\ldots\cdot 2}{2^{\frac{n-3}{2}}}.\]
(For the final \(2\) in the numerator, at the \(b^{\rm th}\) step we get \(n-(2b+1)=n-(n-3+1)=2\).) This equals
\[\frac{\left(-2s+n-3\right)(-2s+n-5)\cdot\ldots\cdot(-2s+4)(-2s+2) (n-3)!!}{2^{n-3}}\] \[=(-1)^{\frac{n-3}{2}}\frac{(2s-n+3)(2s-n+5)\cdot\ldots\cdot(2s-4 )(2s-2)(n-3)!!}{2^{n-3}}\] \[=(-1)^{\frac{n-3}{2}}\frac{(2s-2)!!(n-3)!!}{2^{n-3}(2s-n+1)!!}.\]
Thus
\[\int_{0}^{\infty}(1+r^{2})^{-s}r^{n-2}dr\] \[=(-1)^{\frac{n-3}{2}}\int_{0}^{\infty}(1+r^{2})^{(-2s+n-3)/2}rdr \cdot(-1)^{\frac{n-3}{2}}\frac{(2s-2)!!(n-3)!!}{2^{n-3}(2s-n+1)!!}\] \[=\frac{1}{2s-n+1}\cdot\frac{(2s-2)!!(n-3)!!}{2^{n-3}(2s-n+1)!!}.\]
Thus for \(A=s+\frac{-n+1}{2}\), with \(s>(n+1)/2\) an integer,
\[d_{x}(y)\] \[=\frac{1}{(2\pi)^{n}}\left(\int_{\mathbb{R}}e^{i|x-y|\xi}\left(1+ \xi^{2}\right)^{-s+(n-1)/2}d\xi_{n}\right)\left(\int_{0}^{\infty}(1+r^{2})^{-s }r^{n-2}dr\right)\frac{2\pi^{\frac{n-1}{2}}}{\Gamma\left(\frac{n-1}{2}\right)}\] \[=\frac{1}{(2\pi)^{n}}\left(\int_{\mathbb{R}}e^{i|x-y|\xi}\left(1+ \xi^{2}\right)^{-s+(n-1)/2}d\xi_{n}\right)\cdot\frac{1}{2s-n+1}\cdot\frac{(2s- 2)!!(n-3)!!}{2^{n-3}(2s-n+1)!!}\cdot\frac{2\pi^{\frac{n-1}{2}}}{\Gamma\left( \frac{n-1}{2}\right)}\] \[=\frac{1}{\pi^{\frac{n+1}{2}}}\left(\int_{\mathbb{R}}e^{i|x-y| \xi}\left(1+\xi^{2}\right)^{-s+(n-1)/2}d\xi\right)\cdot\frac{1}{2s-n+1}\cdot \frac{(2s-2)!!(n-3)!!}{2^{2n-4}(2s-n+1)!!}\cdot\frac{1}{\left(\frac{n-3}{2} \right)!}\] \[=\frac{1}{\pi^{\frac{n+1}{2}}}\frac{e^{-|x-y|}|x-y|^{A-1}}{2^{A} (A-1)!}\sum_{k=0}^{A-1}\frac{(A+k-1)!}{k!(A-k-1)!}(2|x-y|)^{-k}\] \[\qquad\cdot\frac{1}{2s-n+1}\cdot\frac{(2s-2)!!(n-3)!!}{2^{2n-4}( 2s-n+1)!!}\cdot\frac{1}{\left(\frac{n-3}{2}\right)!}. \tag{3}\]
To treat \(x=y\), we can use the fact that \(d_{x}(y)\) is highly differentiable, and let \(x\to y\) in the last formula. As in the one-dimensional case, we get
\[d_{x}(x)=\frac{1}{\pi^{\frac{n+1}{2}}}\binom{2s-n-1}{\frac{2s-n-1}{2}}2^{-2s+n}\frac{1}{2s-n+1}\cdot\frac{(2s-2)!!(n-3)!!}{2^{2n-4}(2s-n+1)!!}\cdot\frac{1}{\left(\frac{n-3}{2}\right)!}.\]
**Case II:**\(n\) even
Now it takes \(\frac{n}{2}-1\) steps to reduce \(\int_{0}^{\infty}(1+r^{2})^{-s}r^{n-2}dr\) to a constant times
\(\int_{0}^{\infty}(1+r^{2})^{-s+n/2-1}dr\). This gives
\[\int_{0}^{\infty}(1+r^{2})^{-s}r^{n-2}dr\] \[=(-1)^{\frac{n}{2}-1}\int_{0}^{\infty}(1+r^{2})^{-s+n/2-1}dr\] \[\qquad\cdot\frac{\left(-s+\frac{n}{2}-1\right)\left(-s+\frac{n}{2 }-2\right)\cdot\ldots\cdot(-s+1)(n-3)(n-5)\cdot\ldots\cdot 1}{2^{\frac{n}{2}-1}}\] \[=\int_{0}^{\infty}(1+r^{2})^{-s+n/2-1}dr\cdot\frac{(s-1)(s-3) \cdot\ldots\cdot(s-\frac{n}{2}+1)\left(n-3\right)!!}{2^{\frac{n}{2}-1}}\]
By Wolfram Alpha, we get
\[\int_{0}^{\infty}(1+r^{2})^{-k}=\frac{\sqrt{\pi}\;\Gamma(k-\frac{1}{2})}{2 \Gamma(k)},\]
so
\[\int_{0}^{\infty}(1+r^{2})^{-s/2}r^{n-2}dr=\frac{\sqrt{\pi}\Gamma(s-\frac{n+1 }{2})}{\Gamma(s-\frac{n}{2}+1)}\cdot\frac{(s-1)(s-3)\cdot\ldots\cdot(s-\frac{n }{2}+1)\left(n-3\right)!!}{2^{\frac{n}{2}-1}}.\]
Using \(\Gamma(n+(1/2))=\frac{(2n)!}{4^{n}n!}\sqrt{\pi}\), we obtain for \(k=(2s-n+3)/2\),
\[\int_{0}^{\infty}(1+r^{2})^{-s/2}r^{n-2}dr\] \[=\int_{0}^{\infty}(1+r^{2})^{(-s+n-2)/2}dr\cdot\frac{(s/2-1)(s/2-1 )\cdot\ldots\cdot(s/2-\frac{n}{2}+1)\left(\frac{n-3}{2}\right)!!}{2^{\frac{n}{ 2}-1}}\] \[=\frac{\sqrt{\pi}\;\Gamma\left(\frac{s-n+5}{2}\right)}{\Gamma \left(\frac{s-n+4}{2}\right)}\frac{(s/2-1)(s/2-1)\cdot\ldots\cdot(s/2-\frac{n} {2}+1)\left(\frac{n-3}{2}\right)!!}{2^{\frac{n}{2}-1}}\] \[=2^{s-n+3}\left[\binom{s-n+3}{s-\frac{n+3}{2}}\right]^{-1}\cdot \frac{(s/2-1)(s/2-1)\cdot\ldots\cdot(s/2-\frac{n}{2}+1)\left(\frac{n-3}{2} \right)!!}{2^{\frac{n}{2}-1}}.\]
By (2), we have for \(A=(s-n+1)/2\) and \(x\neq y\),
\[d_{x}(y)=\frac{1}{(2\pi)^{n}}\int_{\mathbb{R}^{n}}e^{i(x-y)\cdot\xi}\left(1+|\xi|^{2}\right)^{-s/2}d\xi\] \[=\frac{1}{(2\pi)^{n}}\left(\int_{\mathbb{R}}e^{i|x-y|\xi_{n}}\left(1+\xi_{n}^{2}\right)^{-s/2+(n-1)/2}d\xi_{n}\right)\left(\int_{0}^{\infty}(1+r^{2})^{-s/2}r^{n-2}dr\right)\operatorname{vol}(S^{n-2})\] \[=\frac{1}{(2\pi)^{n}}\left(\frac{e^{-|x-y|}|x-y|^{A-1}}{2^{A}(A-1)!}\sum_{k=0}^{A-1}\frac{(A+k-1)!}{k!(A-k-1)!}(2|x-y|)^{-k}\right)\] \[\qquad\cdot\left(2^{s-n+3}\left[\binom{s-n+3}{\frac{s-n+3}{2}}\right]^{-1}\cdot\frac{(s/2-1)(s/2-1)\cdot\ldots\cdot(s/2-\frac{n}{2}+1)\left(\frac{n-3}{2}\right)!!}{2^{\frac{n}{2}-1}}\right)\] \[\qquad\cdot\frac{2\pi^{\frac{n-1}{2}}}{\Gamma\left(\frac{n-1}{2}\right)}\] \[=\frac{2^{(s+5)/2}\left[\left(\frac{s-n+3}{2}\right)!\right]^{2}(s/2-1)(s/2-3)\cdot...\cdot(s/2-n/2+1)(n-1)!}{\pi^{(n/2)+1}(s-n+3)!\left(\frac{s-n-1}{2}\right)!(2n-2)!}\] \[\qquad\cdot e^{-|x-y|}|x-y|^{A-1}\sum_{k=0}^{A-1}\frac{(A+k-1)!}{k!(A-k-1)!}(2|x-y|)^{-k}\] \[:=C_{s,n}e^{-|x-y|}|x-y|^{A-1}\sum_{k=0}^{A-1}\frac{(A+k-1)!}{k!(A-k-1)!}(2|x-y|)^{-k}. \tag{4}\]
Again, to have \(n\) even, \(s\in 2\mathbb{Z}+1\), and \(s>n+1\), we need \(s\geq n+3\).
For \(x=y\), we get for the only nonzero term \(k=A-1=(s-n-1)/2\),
\[d_{x}(x)=C_{s,n}\frac{(s-n-1)!}{\left(\frac{s-n-1}{2}\right)!}2^{(-s+n+1)/2},\]
which can be somewhat simplified.
**Simplifying the results**
The basic fact is that \(H_{n+k+\epsilon}\subset C^{n/2+k}(\mathbb{R}^{n})\) for any \(\epsilon>0\). Recall that we can choose \(s=n+3\), in which case \(d_{x}(y)\in H_{n+3}\subset C^{[n/2]+1}(\mathbb{R}^{n})\).
**Case I:**\(n\) odd, \(s=n+3\), \(A=(s-n+1)/2=2\)
By (3), we get
\[d_{x}(y) =C_{n}e^{-|x-y|}|x-y|\left(1+|x-y|^{-1}\right)=C_{n}e^{-|x-y|}\left( 1+|x-y|\right);\] \[C_{n} =\frac{1}{\pi^{\frac{n+1}{2}}}\frac{1}{2^{2}}\frac{1}{4}\cdot \frac{(n+1)!!(n-3)!!}{2^{2n-4}(4)!!}\cdot\frac{1}{\left(\frac{n-3}{2}\right)!}. \tag{5}\]
The dimension constant \(C_{n}\) can be simplified, since
\[(n-3)!!=2^{(n-3)/2}\frac{n-3}{2}\cdot\frac{n-5}{2}\cdot...\cdot 1\] \[\Rightarrow C_{n}=\frac{1}{\pi^{\frac{n+1}{2}}}\frac{1}{2^{7}} \frac{2^{(n+1)/2}2^{(n-3)/2}(n-1)!}{2^{2n-4}}=\frac{1}{\pi^{\frac{n+1}{2}}} \frac{(n-1)!}{2^{n+4}}.\]
In any case, \(d_{x}(y)\) for \(H_{n+3}(\mathbb{R}^{n})\) is a simple expression times a dimension constant.
**Case II:**\(n\) even, \(s=n+3\), \(A=(s-n+1)/2=2\)
By (4), we get
\[d_{x}(y)=\frac{2^{(n/2)+5}(3!)^{2}\left(\frac{n+1}{2}\right)\left(\frac{n-3}{2 }\right)\left(\frac{n-7}{2}\right)\cdot...\cdot\left(\frac{5}{2}\right)(n-1)! }{\pi^{(n/2)+1}6!(2n-2)!}\left(e^{-|x-y|}\left(1+|x-y|\right)\right). \tag{6}\]
|
2304.11697
|
Informative Data Selection with Uncertainty for Multi-modal Object
Detection
|
Noise has always been nonnegligible trouble in object detection by creating
confusion in model reasoning, thereby reducing the informativeness of the data.
It can lead to inaccurate recognition due to the shift in the observed pattern,
that requires a robust generalization of the models. To implement a general
vision model, we need to develop deep learning models that can adaptively
select valid information from multi-modal data. This is mainly based on two
reasons. Multi-modal learning can break through the inherent defects of
single-modal data, and adaptive information selection can reduce chaos in
multi-modal data. To tackle this problem, we propose a universal
uncertainty-aware multi-modal fusion model. It adopts a multi-pipeline loosely
coupled architecture to combine the features and results from point clouds and
images. To quantify the correlation in multi-modal information, we model the
uncertainty, as the inverse of data information, in different modalities and
embed it in the bounding box generation. In this way, our model reduces the
randomness in fusion and generates reliable output. Moreover, we conducted a
completed investigation on the KITTI 2D object detection dataset and its
derived dirty data. Our fusion model is proven to resist severe noise
interference like Gaussian, motion blur, and frost, with only slight
degradation. The experiment results demonstrate the benefits of our adaptive
fusion. Our analysis on the robustness of multi-modal fusion will provide
further insights for future research.
|
Xinyu Zhang, Zhiwei Li, Zhenhong Zou, Xin Gao, Yijin Xiong, Dafeng Jin, Jun Li, Huaping Liu
|
2023-04-23T16:36:13Z
|
http://arxiv.org/abs/2304.11697v1
|
# Informative Data Selection with Uncertainty for Multi-modal Object Detection
###### Abstract
Noise has always been nonnegligible trouble in object detection by creating confusion in model reasoning, thereby reducing the informativeness of the data. It can lead to inaccurate recognition due to the shift in the observed pattern, that requires a robust generalization of the models. To implement a general vision model, we need to develop deep learning models that can adaptively select valid information from multi-modal data. This is mainly based on two reasons. Multi-modal learning can break through the inherent defects of single-modal data, and adaptive information selection can reduce chaos in multi-modal data. To tackle this problem, we propose a universal uncertainty-aware multi-modal fusion model. It adopts a multi-pipeline loosely coupled architecture to combine the features and results from point clouds and images. To quantify the correlation in multi-modal information, we model the uncertainty, as the inverse of data information, in different modalities and embed it in the bounding box generation. In this way, our model reduces the randomness in fusion and generates reliable output. Moreover, we conducted a completed investigation on the KITTI 2D object detection dataset and its derived dirty data. Our fusion model is proven to resist severe noise interference like Gaussian, motion blur, and frost, with only slight degradation. The experiment results demonstrate the benefits of our adaptive fusion. Our analysis on the robustness of multi-modal fusion will provide further insights for future research.
autonomous driving, multi-modal fusion, object detection, noise
## I Introduction
Recent success in deep learning has contributed much to computer vision, such as semantic segmentation, object detection, and object tracking. However, for some application areas like autonomous driving, there are additional requirements for vision models[1]. Though current models can perform well on most tasks, they have limitations on dirty data and fail to meet the practical standards of industrial application[2, 3, 4]. For instance, when a self-driving vehicle (SDV) runs on the road, complex traffic scenarios, unpredictable weather conditions, and potential sensor failures can lead to pattern shifts and inaccurate object recognition. Therefore, robustness and generalization are gradually brought into focus in model development. In the following, we explain why multi-modal learning and adaptive selection of informative data are important for improving model generalization.
Many methods have been proposed to solve this problem, ranging from the hardware side, such as using more elaborate sensors, to the software side, such as adaptive algorithms. Some algorithms estimate the correct features for different data, including data augmentation[5], domain adaptation[6, 7], feature enhancement[3], noise estimation[8], and so on. They avoid the effects of noise in the data by grabbing the invariant features or modifying the training data distribution. However, most of these methods are developed on single-modal models, which rely on the specific sensor measurement
Fig. 1: Illustration of the measurement noise in perception models. Tests on datasets take fixed data points as input and generate statistical conclusions. In practical applications, by contrast, various environmental and hardware conditions introduce noise into the measurements, resulting in a shift in the multi-modal features and errors in the final output.
essentially. As illustrated in Fig. 1, we generally train and evaluate our models on given datasets. Due to the limited amount of data, the statistical results on these datasets cannot reflect the true performance in reality, especially when sensors encounter errors beyond the cases covered by the datasets, namely the out-of-distribution (OOD) problem. Therefore, redundancy in the data is also a vital aspect of practical deployment.
For the reasons above, multi-modal fusion methods have gained attention in recent years[1, 9]. SDVs are equipped with different sensors for more complete and precise perception. On one hand, multi-modal setups can provide complementary measurements: cameras record colors and textures, LiDARs provide the 3D structure of objects, and Radars observe the velocity of moving targets. On the other hand, multi-modal fusion can provide redundant information for stable recognition. Different sensors have specific working conditions, which means they all can potentially fail in some environments. For example, in the dark, in foggy weather, or under sharp illumination changes, images from the camera may fail to capture objects clearly. As for LiDARs, particulate matter in the air can influence LiDAR imaging in rainy, snowy, and sandy weather. With data fusion, sensing systems can avoid severe failures. Therefore, we mainly discuss the adaptive strategy of multi-modal fusion for SDV perception in this paper, especially for the object detection task.
There has been much research in the multi-modal object detection of autonomous driving since the KITTI benchmark was released in 2012[10]. It is also recognized as a potential approach to realize robust detection. After that, various datasets and approaches have been proposed to accelerate the development of this community and achieve higher recognition accuracy as well[11, 12]. In the early stage, model-based methods used bagging methods for result fusion[13, 14]. They generally have individual pipelines to process different data and merge the bounding boxes to generate the final results. Latest data-driven methods mainly applied feature fusion (boosting or stacking) for a more profound information mixture that fuses multi-modal data in the feature extraction or Region of Interest (ROI) generation stage[15, 16, 3].
However, existing fusion methods focus on the quantified scores in standard vision benchmarks, while few contribute to the robustness or generalization of fusion[17]. Models that can adapt to different inference datasets are critical for real-world applications. They can aggregate useful information from the high-dimensional feature space, thus avoiding the effect of noise on the results. It has been pointed out that a multi-modal model seeks a balance between fitting the data and generalization[18]. Its performance will meet a bottleneck without proper information reduction in the fusion channels. For the same reason, it is more challenging to extend the training data via multi-modal data augmentation. In addition, when a multi-modal model has not been trained well, as in Fig. 1, it will generate greater variance with dirty multi-modal data. As a result, multi-modal fusion does not essentially guarantee an improvement in performance and robustness. In other words, multi-modal models are not always effective in identifying and exploiting information in diverse data.
Different from those learning-based feature fusion models, we claim that adaptive fusion models should select informative features from multi-source data and avoid the noise in them. This can be viewed as one form of data dimensionality reduction, which is expected to be explainable. The amount of information is measured by entropy, so it can be equated to a calculation over the feature distribution. Since it is difficult to directly calculate the amount of information or the entropy, we choose to characterize the amount of information through the uncertainty of the feature distribution. Driven by this idea, we propose an adaptive fusion method with a result-fusion architecture. Considering that different modal data require specific operators and parameter optimization in feature processing, we adopt a loosely coupled network architecture, which is general but practical. Multi-modal data are fed to individual pipelines that are connected in the box-filtering stage. Then we apply decision-level fusion by fusing the box proposals in an improved Non-Maximum Suppression (NMS). We will describe why this simple fusion strategy can filter the information in the data.
To select informative results and achieve reliable fusion, we introduce uncertainty quantification into our model. Proper uncertainty quantification indicates the prediction deviation of the model; therefore, it has been viewed as a potential approach towards interpretable neural networks and an emerging method in autonomous driving[20, 1]. In our model, we predict the uncertainty, as the inverse of the information amount, for each data point. Through joint training on multi-modal data, our model learns a universal uncertainty measurement that can be used as the box-filtering index in NMS. To demonstrate the benefits of our design, and to explore the effect of noise on fusion models as well, we evaluated the models on the KITTI 2D object detection dataset. Point clouds and RGB images were progressively perturbed to simulate multi-level dirty data. Then, we conducted experiments with both raw data and dirty data. For clean data, our fusion model achieved sub-optimal but competitive results, with only 0.36 mAP lower than the Depth-model, while 4.24 mAP higher than the RGB-model. As for dirty data, ours achieved 51.61 mAP higher than the RGB-model and 34.20 mAP higher than the Depth-model on average. Our main contributions can be concluded as:
* We explored the influence of multi-level noise on LiDAR
Fig. 2: Visualization of the simulated noise[19]. From top left to bottom right are eight common noises in nature.
point clouds and camera RGB images, and revealed the attenuation law for the object detection task;
* We proposed a universal fusion model with informative data selection, which can be implemented with different modal data and fuses their predictions adaptively;
* We conducted sufficient experiments on the KITTI dataset, which demonstrate that our model has strong robustness and generalizes to noisy data beyond the training set.
In the following sections, we first review recent progress in anti-noise object detection, multi-modal fusion, information in fusion, and uncertainty for computer vision. Then, we introduce our proposed model in terms of the baseline, uncertainty modeling, the fusion step, and implementation. After that, we detail the experimental process, including data pre-processing, noise simulation, results, and analysis.
## II Related Work
### _Anti-Noise Object Detection_
Noisy data can mislead detection models because their object features are outside the distribution of the model's fitting domain. Therefore, it is practical to extend the training set to provide more diverse data, or to create feature filters that regularize noisy features. Data augmentation is one of the most common approaches for the former. PointDrop[5] learns to drop some key points as features, thus generating more challenging point samples for training. Ofori-Oduro et al. used antibodies generated with Artificial Immune Systems in training[21]. Loh et al. proposed to collect data under different illumination conditions to enhance the model's robustness[22]. Michaelis et al.[19] provided a benchmark to simulate multiple noises in a natural environment, as presented in Fig. 2. Besides, domain adaptation is another practical method: transferring learned parameters (knowledge) among datasets can aggregate feature distributions across different data domains[6]. Khodabandeh et al. proposed to use noisy labels in training to enhance generalization[23]. Researchers are also concerned with whether we can directly extract and enhance the relevant features. Bijelic et al. proposed a multi-sensor model that mixes features at multiple levels for mutual activation[3]. Others try to estimate the noise from the opposite perspective. Yang et al. built a model to estimate the accuracy of laser measurements under foggy conditions[8], and Tian et al. quantified the uncertainty level of features for adaptive fusion[24]. However, most of these methods are developed on single-modal models, which indicates that they would fail when the sensor encounters a severe fault.
### _Multi-modal fusion for object detection_
#### Ii-B1 Multi-modal object detection
To date, several studies have investigated multi-modal fusion for 2D and 3D object detection. Frustum PointNets[16] extract the 3D bounding frustum of an object by extruding 2D bounding boxes from image detectors. PointFusion[14] combines a CNN and a PointNet[25] architecture to process images and raw point clouds, respectively, and then predicts 3D boxes. PointPainting[26] projects LiDAR points onto the output of an image-only semantic segmentation network and appends the class scores to each point. All these fusion methods of RGB and LiDAR achieve high average precision on the benchmarks; however, the coupling or interrelation of the two modalities makes the whole system fail easily once part of the sensors breaks down. Besides, the methods above only provide a deterministic prediction result, making them risky to deploy in real applications.
#### Ii-B2 Adaptive fusion
Several new studies have proposed self-adaptive techniques in computer vision, so the robustness of those tasks can be improved to some extent. AdapNet[27] uses a convoluted mixture of deep experts (CMoDE) fusion technique to learn features from complementary modalities and spectra. SSMA[28] further proposes a self-supervised model adaptation fusion mechanism and a segmentation architecture termed AdapNet++ to improve the robustness of the system. UNO[24] proposes a Noisy-or gate, so the model tends to accept the more reliable modal data. Such self-adaptive networks can recognize potential degradation of the input data such as rain, fog, or image blur; however, they only validate their schemes in simulation environments, making them less convincing for real-world application.
All of the methods above are designed for the semantic segmentation task, which is less complicated than object detection. Choosing Smartly[29] provides an adaptive fusion technique for multi-modal object detection in changing environments, which predicts weights for different modalities online based on CNN experts. Zhang et al.[30] propose the PMC method to adaptively generate and fuse different modal data even when some of them are lost, achieving cross-domain multi-modal learning. Zhao et al.[31] propose to apply a gate module for adaptive feature fusion in learning. Kim et al.[32] propose a gated information fusion network for robust deep multi-modal object detection, using separate CNNs and SSD (Single Shot Detector)[33] as the backbone. The network learns weights from the input feature map of each modality in the presence of modalities degraded in quality. Their follow-up work proposes a specific loss function and feature fusion module to avoid noisy data in training[34]. Different from those feature fusion models, Lee et al. proposed a late (result-level) fusion method named DBF[35], which applies fuzzy state estimation to the outputs of multiple detectors and fuses them accordingly for a single image. Inspired by their work, we consider late fusion for adaptive multi-modal fusion.
As presented in the papers[2, 18] that conducted several noisy-data experiments on the KITTI dataset, existing methods have problems when applied to real scenarios: the network is hard to train, so the learned weights tend to rely more on the easy-to-learn modalities than on the harder ones. Furthermore, in terms of back propagation, due to the lack of noisy data in the training set, it is difficult to completely cover the existing noise types, and their distribution is always unbalanced relative to that in the real world. As a result, the parameter optimization after back propagation is not suitable for the case where the data contain interference, namely the OOD problem. Therefore, the fusion must be done in a way that is independent of the data distribution. The information-driven data selection proposed in this paper is implemented through a non-parameterized NMS process to avoid the problem of data distribution.
### _Information in multi-modal fusion_
Information theory was proposed by C. Shannon based on models of information quantification[36]. It measures the amount of value transferred by signals. However, counting information in deep networks is difficult because of the high-dimensional data and models. MacKay et al. have discussed modeling a neuron or a network as a channel[37]. N. Tishby et al. proposed the information bottleneck theory (IB) with a variational principle to reflect on signal processing problems[38, 39]. But these studies mainly focus on simple networks. Belghazi et al.[40] proposed a mutual information measurement method that can be applied in simple informative data selection models[41]. A similar conclusion can be found in this paper[42], which solved the domain adaptation problem via feature selection. Zou et al.[18] tried to reveal the principles in deep multi-modal networks with information communication models, but they lack compelling modeling, inference, and validation. All in all, we still lack a simple, efficient, and interpretable information fusion method. There are other works like [43] that leverage self-defined information measurements in vision tasks, but they mainly focus on low-level tasks like image recovery or super-resolution, which may not fit high-level tasks.
### _Uncertainty estimation in computer vision_
Recently, more attention has been paid to providing interpretability, and uncertainty estimation is one of its most important components. General uncertainty estimation in computer vision covers both classification and regression. Gal et al.[44] propose a framework to combine aleatoric and epistemic uncertainty; they apply it to segmentation tasks and achieve new state-of-the-art results on depth regression and semantic segmentation benchmarks. UNO[24] presents an uncertainty-aware fusion scheme and an additional data-dependent spatial temperature scaling method to complement uncertainty estimation in semantic segmentation. There are also several techniques for uncertainty estimation in object detection. He et al.[45] introduce a bounding box regression with a KL loss and substitute softer-NMS for traditional NMS, achieving more accurate object localization. Choi et al.[46] model the box parameters of the YOLO network as Gaussian distributions and use loss attenuation to compute the uncertainty of each bounding box, also improving the mean average precision on the benchmark dataset. Similarly, Lee et al.[47] propose a method named Gaussian-FCOS, which uses an anchor-free backbone and achieves even better performance. Kowol et al.[48] present an uncertainty-based strategy for camera and radar with YOLOv3[49] as the baseline, using gradient boosting for decision making; they demonstrate that the strategy is effective in night scenes compared with single-sensor baselines. Feng et al. progressively present three works on 3D object detection based on uncertainty estimation, from pure point clouds to multi-modal fusion[50, 51, 17]. With precise estimation, they can suppress uncertain information from noisy samples during training and aid prediction on test data. Russell et al.[52] show that, for the visual tracking problem, accurate multivariate uncertainty quantification can have a great impact on performance for both in-domain and out-of-domain evaluation data.
Most of the models above only address specific types of noisy data, and most of these techniques consider neither multi-modal situations nor the robustness of the model under various noise attacks. For example, Gaussian YOLOv3[46] is developed for single-modal data, UNO[24] is evaluated on semantic segmentation tasks, and Feng[17] and Zhang[30] ignore many other types of noisy data from the environment. Hence, we intend to combine the advantages of these techniques in an uncertainty-aware multi-modal object detection model and evaluate it with common types of dirty data.
## III Method
In the following, we first formulate the noisy-data object detection problem and describe our baseline fusion model. To improve robustness and realize an informative data selection fusion strategy, we build an uncertainty-aware model that estimates the aleatoric uncertainty of each bounding box in each modality via loss attenuation; this uncertainty is then used to filter boxes in the NMS stage. To balance speed and accuracy, we choose a light-weight one-stage object detection model as our baseline, although our method can easily be extended to other classical object detection models.
### _Problem Statement_
Noisy-data object detection aims to locate and classify targets from data that is disturbed by natural noise. Generally, an object detection model \(\mathcal{D}(\cdot)\) takes an image, a point cloud, or a group of multi-modal data \(X=\{X_{1},X_{2},...\}\) as input, where the subscripts indicate the modality, and returns the expected coordinates \(\{x,y,w,h\}\) and categories \(\{c\}\) of the targets. For noisy-data detection, we assume that clean data are corrupted by a noise function \(\mathcal{F}(\cdot)\). Our goal is to minimize the recognition deviation while the noise exerts its maximal influence:
\[\min_{\mathcal{D}}\max_{\mathcal{F}}\mathcal{L}(\mathcal{D}(\mathcal{F}(X)), \{x,y,w,h,c\}) \tag{1}\]
Fig. 3: A simple model of informative data selection in fusion. When multi-source data or features are combined, the model should measure their information content as a reference. In the case of noisy-data detection, it is critical to pick out the relevant elements and filter out the noise.

As we approach the minimum, we can guarantee the generalization and robustness of our detection models under the most severe noise. Specifically, for multi-modal fusion models, to simplify the problem we assume that \(X\) contains at least two modalities, i.e., data from at least two types of sensors. In this paper, we focus on LiDAR point clouds and camera RGB images. In addition, in each experiment noise is added to only one modality, to avoid the worst-case scenario in which all sensors fail and nothing works. In the multi-modal case, Expr. 1 becomes:
\[\min_{\mathcal{D}}\max_{\mathcal{F}}\mathcal{L}(\mathcal{D}(\mathcal{F}(X_{l}),\mathcal{F}(X_{c})),\{x,y,w,h,c\}) \tag{2}\]
where \(X_{l}\) is LiDAR data and \(X_{c}\) is camera data. For our informative data selection sub-problem, it will be revised as,
\[\min_{\mathcal{D}}\min_{\mathcal{S}}\max_{\mathcal{F}}\mathcal{L}(\mathcal{S} (\mathcal{D}(\mathcal{F}(X_{l})),\mathcal{D}(\mathcal{F}(X_{c}))),\{x,y,w,h,c \}) \tag{3}\]
where \(\mathcal{S}(\cdot)\) is the selective fusion function as shown in Fig.3.
### _Detection Baseline_
YOLOv3[49] is an improved version of the YOLO series of models that is practical in terms of both accuracy and speed. Its structure is shown in Fig. 4, consisting mainly of the DarkNet-53 backbone network and a three-branch decoder. The backbone is composed of residual blocks and convolution blocks, while each decoder branch mainly contains convolution blocks. A convolution block comprises a 2D convolution layer, a batch normalization layer, and a Leaky ReLU activation. A residual block comprises several residual units, each of which has two convolution blocks and a skip connection. The decoders process the feature maps transmitted by the preceding blocks and the previous branch, which are up-sampled to realize multi-scale detection. These designs improve the detection robustness and speed of YOLOv3. To apply YOLOv3 to LiDAR point clouds, we project the points onto a 2D plane aligned with the camera image and process the result as a 2D depth image.
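For readers less familiar with these blocks, the sketch below shows how the two building blocks could look in PyTorch. This is an illustrative sketch, not the authors' implementation: the channel layout follows the common DarkNet-53 convention (1x1 bottleneck, then 3x3), and the skip connection is implemented here as an addition.

```
# Illustrative PyTorch sketch (not the paper's code) of the two building blocks:
# a convolution block (Conv2d + BatchNorm + LeakyReLU) and a residual unit made
# of two such blocks with a skip connection, as used in the DarkNet-53 backbone.
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ResidualUnit(nn.Module):
    """Two convolution blocks (1x1 bottleneck, then 3x3) plus a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.block1 = ConvBlock(channels, channels // 2, kernel_size=1)
        self.block2 = ConvBlock(channels // 2, channels, kernel_size=3)

    def forward(self, x):
        return x + self.block2(self.block1(x))
```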
### _Multi-modal Fusion_
As mentioned above, in most existing multi-modal fusion networks the modalities are heavily coupled, and safety-critical environments are not considered. To weaken this coupling so that the modalities do not harm each other, we use parallel pipelines and combine the final results through late fusion. The overall architecture is presented in Fig. 4. The RGB images and the projected depth images are first calibrated in space and size, and then fed to individual pipelines. Although we apply YOLOv3 to both modalities, the pipelines also accept different combinations of models, whether 3D or 2D, one-stage or two-stage. A potential alternative is to establish communication between the two models by fusing the two modal features in the middle of the networks[17, 18]; although this may yield better features, we do not do so in our uncertainty-aware detection model, in order to avoid non-convergence issues and to prevent the uncertainties of the two modalities from interfering with each other. For the basic fusion model, we can simply pool all the proposed bounding boxes and filter them in a joint NMS process, or apply NMS to each modality and conduct a second selection afterwards. In the final version, this process is replaced by the uncertainty-aware multi-source NMS. The fusion model outputs all boxes on the 2D image plane. Because of its loosely coupled design, the proposed fusion model is compatible with other modalities and tasks, even when the uncertainty fusion factors are added, but we take LiDAR point clouds and camera images as the example in our experiments.
### _Informative Data Selection with Uncertainty_
Since it is difficult to calculate the exact amount of information in the data, we use a related statistic, uncertainty, as an alternative. Both information and uncertainty describe the credibility of the data with respect to the target object category and location, with uncertainty representing the inverse of information or mutual information. In our fusion strategy, we mainly consider aleatoric uncertainty, since the noise in each modality caused by sensor failure or extreme weather is better explained by aleatoric uncertainty. Generally speaking, aleatoric uncertainty can be interpreted as noise or vagueness in the data, or a label that is hard to fit.
Though aleatoric uncertainty cannot be eliminated by adding more training data, it can be reduced with additional features or views. Therefore, our strategy can also be interpreted as a way to reduce aleatoric uncertainty using multiple modalities. To estimate the aleatoric uncertainty of each object in each modality, we apply loss attenuation, which integrates an uncertainty estimation function into the training loss and optimizes it together with the neural network. For object detection, we focus on coordinate regression and classification. The traditional loss function is:
\[\mathcal{L}_{NN}(\theta)=\frac{1}{N}\sum_{i=1}^{N}||y_{i}-f(x_{i})||^{2} \tag{4}\]
where \(NN\) is the neural network, \(N\) is the number of samples, and \(\{x_{i}\}\) and \(\{y_{i}\}\) are the input and target output for \((x,y,w,h)\) in Eq. 1. To recognize uncertain predictions in fusion, we expect the model to assign high uncertainty to inaccurate results and low uncertainty to the rest. Eq. 4 is therefore redesigned as[46]:
\[\mathcal{L}_{NN}(\theta)=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{2\sigma(x_{i})^{2}}||y_{i}-f(x_{i})||^{2}+\frac{1}{2}\log\sigma(x_{i})^{2} \tag{5}\]
where \(\sigma(\cdot)\) is the variance estimation function. In particular, for a single target, the model may output 0 to N prediction boxes. In the training stage, optimizing this loss function constrains the predictions to be as close to the ground truth as possible, while the variance between prediction boxes does not grow too large. In the inference stage, the quality of a prediction box can be judged from the predicted variance and error obtained with this formula, so that an effective fusion can be carried out. This method and its conclusions generalize easily to the case of multiple targets. With loss attenuation, the model is expected to predict a proper uncertainty for each box.
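A minimal sketch of how Eq. 5 could be implemented, assuming a PyTorch-style framework (this is not the released training code). Predicting the log-variance rather than the variance itself is a common choice for numerical stability.

```
# Minimal sketch of the loss attenuation in Eq. (5): the head predicts a mean and
# a log-variance per regression target, so inaccurate predictions can be
# "explained away" by a larger variance at the cost of the log-variance penalty.
import torch

def attenuated_regression_loss(mean, log_var, target):
    """mean, log_var, target: tensors of shape (N, 4) for (x, y, w, h)."""
    precision = torch.exp(-log_var)                 # 1 / sigma^2
    squared_error = (target - mean) ** 2
    loss = 0.5 * precision * squared_error + 0.5 * log_var
    return loss.mean()
```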
Following the setting of Gaussian-YOLOv3[46], we extend the prediction to \((\mu_{x},\Sigma_{x},\mu_{y},\Sigma_{y},\mu_{w},\Sigma_{w},\mu_{h},\Sigma_{h})\), where \(\mu,\Sigma\) indicate the mean and variance of the target elements. Then
the means should converge to the ground truth, while smaller variances are predicted for accurate boxes and larger variances for inaccurate ones. The uncertainty can thus be indicated by the variance: due to the penalty during training, samples with high uncertainty tend to produce high variance during optimization. Additionally, to fit the prediction mode of YOLOv3, we apply a sigmoid function to each value and scale it to the range (0, 1).
In our experiment, we examine the validity of the uncertainty estimation for the point cloud model and the RGB model on the KITTI 2D object detection dataset. The relationships among confidence (conf), uncertainty (\(\sigma^{2}\)), and localization accuracy (IoU) are visualized in Fig. 5. In these figures, the more linearly correlated the distributions of two variables are, the more related, or interpretable, they are. In Fig. 5, (a) and (b) present the joint distribution of \(\sigma^{2}\) and conf. Their distributions are far from linear, indicating that the uncertainty is only weakly related to the confidence. In comparison, the pairs (IoU, \(\sigma^{2}\)) and (IoU, conf) are more interpretable. The data points in the RGB sub-figures are more concentrated than in the point cloud sub-figures, which means our uncertainty estimation fits the RGB model better than the point cloud model. For the pairs (IoU, \(\sigma^{2}\)) and (IoU, conf), the figures also indicate that the correlation between confidence and localization accuracy is less significant than that of uncertainty. In summary, it is reasonable to fuse and optimize the candidate boxes using uncertainty.
In this way, we model every bounding box of each object with an uncertainty value indicating its quality:
\[Bbox_{i}\sim N(\mu_{i},\Sigma_{i}^{2}) \tag{6}\]
Eq. 6 above is applied to each of the four elements \(\{x,y,w,h\}\). As for our multi-modal model, we have individual pipelines for each input modality and correspondingly individual predicted boxes, which means we need to model the uncertainty in each pipeline. Because the uncertainty (variance) only reflects properties of the boxes themselves, we can regard these estimates as lying on the same scale with similar meanings.
For optimization, Eq.5 is revised as follows:
\[\mathcal{L}_{x}=-\sum_{i=1}^{W}\sum_{j=1}^{H}\sum_{k=1}^{K}\lambda_{ijk}\log(P(x_{ijk}^{GT}|\mu_{x}(x_{ijk}),\Sigma_{x}(x_{ijk}))+\epsilon) \tag{7}\]
\[\lambda_{ijk}=\frac{(2-w^{GT}\times h^{GT})\times\delta_{ijk}^{obj}}{2} \tag{8}\]
where \(K\) is the number of anchors and \(\lambda_{ijk}\) is a penalty coefficient that weights each anchor. \(P(\cdot)\) is the expected distribution probability function for boxes; we take a Gaussian distribution here. \(\delta\) represents a gate between predictions and anchors: in our experiments, \(\delta=1\) when the IoU is over 0.3, and 0 otherwise. Eq. 7 is applied to each of the four elements \(\{x,y,w,h\}\) with the corresponding loss term. In our result-level fusion setting, we predict bounding boxes from the two pipelines and fuse them in a single NMS module, although Eq. 7 is optimized individually for each pipeline's output. However, uncertainty estimation alone is not sufficient for adaptive fusion, so we further design an uncertainty-aware multi-source NMS algorithm to achieve this goal.

Fig. 4: Architecture of the proposed 2D object detection model. The top sub-figure presents an overview of the YOLOv3 network[49]; details of the network layers can be found in the original paper. The bottom sub-figure shows our loosely-coupled multi-modal detection model with a joint NMS for unified prediction in the 2D space. Our model can also be implemented with other backbones such as SSD[33].
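For concreteness, the sketch below shows how Eqs. 7-8 above could be evaluated for the x coordinate in a PyTorch-style framework. It is illustrative only (not the released code), and the tensor shapes and the eps value are assumptions.

```
# Sketch of the per-anchor penalty coefficient (Eq. 8) combined with the Gaussian
# negative log-likelihood of Eq. (7) for the x coordinate. delta is 1 where the
# anchor overlaps the ground truth with IoU > 0.3, and 0 otherwise.
import math
import torch

def gaussian_nll_x(mu_x, var_x, gt_x, gt_w, gt_h, delta, eps=1e-9):
    """All tensors share the grid/anchor shape (W, H, K); gt_w, gt_h are in (0, 1)."""
    lam = (2.0 - gt_w * gt_h) * delta / 2.0                                    # Eq. (8)
    pdf = torch.exp(-(gt_x - mu_x) ** 2 / (2 * var_x)) / torch.sqrt(2 * math.pi * var_x)
    return -(lam * torch.log(pdf + eps)).sum()                                 # Eq. (7)
```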
### _Data Selection in NMS_
We finally integrate the selection process into an improved NMS. NMS filters bounding boxes according to IoU and classification score, but the original algorithm only considers the classification score and ignores localization accuracy. Typically, when model uncertainty is taken into account, the classification score generated by the softmax layer is likely to be over-confident, and lower-scored boxes may have higher localization confidence. To integrate uncertainty and combine the full results of each modality in our model, softer-NMS[45] has been proposed as a substitute for NMS. It computes a weighted average over overlapping boxes based on box-level aleatoric uncertainty and updates the localization parameters of the prediction. For example, \(x_{1}\) is updated by:
\[x_{1,i}=\frac{\sum_{j}x_{1,j}/\sigma_{x_{1,j}}^{2}}{\sum_{j}1/\sigma_{x_{1,j}}^{ 2}}\qquad s.t.IoU(b_{i},b)>0.5 \tag{9}\]
where \(\sigma_{x,i}^{2}\) is the aleatoric uncertainty of the bounding box, and \(b\) denotes the predicted boxes. All eight parameters from the two pipelines are updated in the same way.
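The update in Eq. 9 can be read as an inverse-variance weighted average over the overlapping candidates. A NumPy sketch of that single step (illustrative only) is:

```
# Sketch of the variance-weighted coordinate update in Eq. (9): among candidates
# overlapping the selected box with IoU > 0.5, coordinates are averaged with
# weights inversely proportional to their aleatoric variance.
import numpy as np

def weighted_coordinate(coords, variances, ious, iou_thresh=0.5):
    """coords, variances, ious: 1-D arrays over candidate boxes, for one coordinate."""
    mask = ious > iou_thresh
    weights = 1.0 / variances[mask]
    return np.sum(coords[mask] * weights) / np.sum(weights)
```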
In the case of multi-source fusion, we have predictions from multiple pipelines. If we mixed the predictions of the modalities directly, we would ignore the pattern correlation across modalities and the consistency within each modality. Compared with general NMS and other fusion methods, whose behaviour varies with the data distribution of each dataset, our fusion strategy keeps a high consistency over different modalities and multi-source data, and maintains a significant relationship with localization and classification. Therefore, given two thresholds \(t_{1}\) and \(t_{2}\), we classify the relationship between the predictions of the two modalities \(A,B\) into three cases:
* Case 1: when \(IoU(A,B)\in[t_{2},1]\), the area is activated by both modalities with high confidence;
* Case 2: when \(IoU(A,B)\in[t_{1},t_{2})\), the area contains confusing patterns from different modalities;
* Case 3: when \(IoU(A,B)\in[0,t_{1})\), the modalities detect objects in different areas that are not correlated.
Optimal values for these two thresholds can be obtained through experiments; the reference values we provide are 0.3 and 0.5.
According to the definition above, we propose the extended softer-NMS in Algorithm 1, titled uncertainty-aware multi-source NMS, to replace the joint NMS. To adapt the confidence attenuation strategy of softer-NMS, we first merge and re-rank the multi-modal predictions \(A,B\). Because the predicted values from different modalities have a consistent range, both in terms of box attributes and uncertainty, fusing the predictions becomes simpler.
The algorithm first mixes all predictions in one pool and sequentially picks the box with the highest confidence. Confidence smoothing[53] is applied to all these boxes, and various smoothing methods can be chosen. For each box from a single modality, we compute its IoU with the boxes detected by the other modality and assign one of the three cases. The logic gate separates out boxes that are mutually verified across modalities, while the others are mixed into a voting process as in normal NMS. In Case 1, we focus on the highly similar boxes from all modalities. In Case 2, we apply the general softer-NMS to fuse the boxes adaptively. In Case 3, however, the boxes from the other modality are severely off position or even missing, which is more likely to degrade localization accuracy, so we ignore them to avoid their potential influence. The iteration follows the confidence scores and, being order-independent, does not affect the box adjustment; the uncertainty scores are applied in the subsequent merging step. This fusion method can easily be extended to more modalities or sensors with little change to the algorithm.
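The gating step can be summarized by the sketch below (illustrative Python, not the released implementation): given the selected box M, the maximum IoU with the other modality decides which candidates enter the weighted merge of Eq. 9.

```
# Sketch of the three-case gate: iou_all holds the IoU of the selected box M with
# every box in the mixed pool, and same_modality marks boxes from M's own modality.
import numpy as np

def select_candidates(iou_all, same_modality, t1=0.3, t2=0.5):
    other = iou_all[~same_modality]
    max_iou = other.max() if other.size else 0.0
    if max_iou >= t2:                       # Case 1: mutually verified, high overlap
        return iou_all >= t2
    elif max_iou >= t1:                     # Case 2: confusing patterns, fuse adaptively
        return iou_all >= t1
    else:                                   # Case 3: other modality off-position or empty
        return (iou_all >= t1) & same_modality
```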
Fig. 5: The validity of the uncertainty estimation. We visualize the relationships among uncertainty (variance \(\sigma^{2}\)), classification confidence, and localization accuracy (IoU), computed separately for the point-based and image-based models, in the six sub-figures. On a scatter plot, the degree to which the points converge towards a ray from the origin indicates how strongly the two variables are linearly correlated. We therefore believe these plots intuitively show the correlations among confidence, uncertainty, and IoU, three commonly used indicators.
## IV Experiment
### _Dataset_
#### IV-A1 KITTI
We select the KITTI 2D object detection dataset[10] for problem investigation and model evaluation; it includes 7481 pairs of camera images and LiDAR point clouds and provides over 80K 2D box annotations across seven classes: car, van, truck, pedestrian, person sitting, tram, and cyclist. To avoid the influence of an imbalanced class distribution (our focus is multi-modal robustness), we merge them into three classes: car, pedestrian, and cyclist. In our experiment, the dataset is randomly split into train, validation, and test sets with a ratio of 6:2:2.
#### IV-A2 Projection
Before training, we project the 3D point clouds onto the image plane to obtain 2D depth images. Given a point \(P_{v}=(x_{v},y_{v},z_{v})^{T}\), we calculate:
\[P_{v}^{\prime}=K_{v}[R_{v}|T_{v}]P_{v} \tag{10}\]
where \(K_{v},R_{v},T_{v}\) are the camera calibration matrix, rotation matrix, and translation matrix. The projected front-view point cloud depth map is cropped to the same size as the RGB images, which we set to \(128\times 512\) in our experiment. Afterwards, the values of both the depth map and the RGB images are normalized to the [0, 1] interval.
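A NumPy sketch of Eq. 10 is given below; it is illustrative, and the matrix conventions are assumptions rather than the exact KITTI calibration file format.

```
# Project homogeneous LiDAR points onto the image plane with the calibration
# matrix K, rotation R and translation T, then normalise by depth to get pixels.
import numpy as np

def project_points(points, K, R, T):
    """points: (N, 3) LiDAR points; K: (3, 3); R: (3, 3); T: (3,)."""
    extrinsic = np.hstack([R, T.reshape(3, 1)])                    # [R | T], (3, 4)
    homog = np.hstack([points, np.ones((points.shape[0], 1))])     # (N, 4)
    cam = (K @ extrinsic @ homog.T).T                              # (N, 3)
    depth = cam[:, 2]
    uv = cam[:, :2] / depth[:, None]                               # pixel coordinates
    return uv, depth
```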
```
Input:  B_r = {b_1, ..., b_n}: boxes from RGB images
        S_r = {s_1, ..., s_n}: confidence of B_r
        C_r = {sigma_1^2, ..., sigma_n^2}: variance of B_r
        B_p = {b_1, ..., b_m}: boxes from point clouds
        S_p = {s_1, ..., s_m}: confidence of B_p
        C_p = {sigma_1^2, ..., sigma_m^2}: variance of B_p
Output: D: the final set of detections

D ← {};  T = B ← B_r ∪ B_p;  S ← S_r ∪ S_p;  C ← C_r ∪ C_p
while T ≠ ∅ do
    m ← argmax S;  M ← b_m;  T ← T − M
    S ← S · f(IoU(M, T))                        /* confidence smoothing */
    if M ∈ B_r then
        if B_p ≠ ∅ then iou ← max IoU(M, B_p)
        if B_p ≠ ∅ and iou ≥ t_2 then           /* Case 1 */
            idx ← IoU(M, B) ≥ t_2
        else if B_p ≠ ∅ and iou ≥ t_1 then      /* Case 2 */
            idx ← IoU(M, B) ≥ t_1
        else                                    /* Case 3: iou < t_1 or B_p = ∅ */
            idx ← IoU(M, B_r) ≥ t_1
    else if M ∈ B_p then
        repeat the branch above with the roles of B_r and B_p swapped
    map idx to B
    M ← Σ B[idx]/C[idx] / Σ 1/C[idx]            /* variance-weighted merge, Eq. (9) */
    D ← D ∪ M
end while
return D, S
```
**Algorithm 1** Uncertainty-aware Multi-source NMS
#### IV-A3 Noise Simulation
To approach the upper bound of noise interference in Expr. 1, we leverage the noise benchmark[19] and select sufficient noise levels to modify the KITTI data. The benchmark contains 15 corruptions at 5 severity levels, including Gaussian noise, blur, extreme weather, etc. Because some types of noise, such as weather effects, cannot be applied to the point clouds, we choose three representative interference methods: Gaussian noise, motion blur, and frost noise, each with five noise intensity levels.
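As a concrete illustration of one corruption, the sketch below adds Gaussian noise at five severity levels. The severity scales here are our own placeholders, not the values used by the benchmark [19], which provides reference implementations of all 15 corruptions.

```
# Additive Gaussian noise at five severity levels (illustrative scales only).
import numpy as np

def gaussian_noise(image, severity=1):
    """image: float array in [0, 1]; severity: 1 (mild) to 5 (severe)."""
    scales = [0.04, 0.06, 0.08, 0.09, 0.10]        # placeholder values
    noisy = image + np.random.normal(0.0, scales[severity - 1], image.shape)
    return np.clip(noisy, 0.0, 1.0)
```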
### _Implementation_
With the proposed multi-source NMS, we replace the joint NMS in the fusion model of Fig. 4. Our fusion strategy can easily be built on other types of NMS, or even adapted to NMS-free models. For the fusion models, we first train the pipelines separately to provide stable convergence, and then concatenate the two pipelines with either the joint NMS or our proposed method for joint optimization. We set the IoU threshold in NMS to 0.45 for the two single-modal models, and set the two thresholds to 0.45 and 0.7 in our proposed model.
We have also implemented our method with SSD[33] in the experiment, which achieves even better performance and is reported for reference. For comparison, we use average box selection (AvgFusion) as the default joint NMS method: half of the boxes from the two pipelines are randomly dropped and the remainder are processed in one NMS step. In all experiments, we set the batch size to 16 and the initial learning rate to 0.0001. The experiments are conducted on an NVIDIA GTX 1080Ti with CUDA 8.0 and cuDNN v7.

Fig. 6: Performance on the normal KITTI dataset. (a) presents the comparison based on Gaussian-YOLOv3[46] and SSD[33]. (b) presents the details on three categories for Gaussian-YOLOv3.
### _Basic Results_
Preliminary experimental results on the original KITTI dataset are shown in Fig. 6. In Fig. 6(a), we compare two implementations, one based on Gaussian-YOLOv3[46] and one based on SSD[33] revised into a Gaussian-SSD. The results show that the SSD-based version achieves better performance in all models, which means our approach can generalize to other detection models and achieve better performance by incorporating a stronger baseline.
However, in the subsequent experiments we use the Gaussian-YOLOv3 model to highlight the effect of the fusion algorithm. We present the comparison on the three main categories in Fig. 6(b). Both AvgFusion and our model achieve performance between the RGB and depth models due to the lack of prior information in box selection/fusion, but our model has higher accuracy and approaches the performance of the best single modality on all objects. This demonstrates the benefit of our method on clean data and the accuracy of our informative data selection in reflecting the quality of the fused object boxes.
## V Overall Analysis
We further investigate the model's performance on noisy KITTI data for a comprehensive understanding of robustness in multi-modal fusion. First, we evaluate the performance of the single-modal models on noisy data to validate the simulated noise (Table I). Then, we conduct a similar test for our proposed fusion model (Table II), where NR-D means fusion with noisy RGB and clean depth data, and R-ND means fusion with clean RGB and noisy depth data. We finally provide supplementary conclusions based on the results. In all experiments, the models are trained with clean data and tested with noisy data.
### _Detection Model Degradation under Noise_
The experiments show that the single-modal models are severely affected by noisy data, and the effect grows with the degree of noise disturbance. The results are shown in Table I, where the numbers under each noise name indicate the interference level, and we list the mAP of the models under the different noise settings. The prediction accuracy of the RGB model decreases nonlinearly for the three types of noise, with mAP dropping from 76.34 to 0.28 (Gaussian), 6.30 (Motion), and 7.23 (Frost), respectively; compared with the detection results on clean data, the accuracy decreases in all 15 experiments. Similarly, the prediction accuracy of the Depth model also decreases nonlinearly with increasing noise intensity, with mAP dropping from 80.94 to 1.69 (Gaussian), 31.62 (Motion), and 24.86 (Frost), respectively; the accuracy decreases in 14 of the 15 experiments.
In addition, the sensitivity of both modal data to these noises is consistent. The severity of the noise impact, from greatest to least, is
\[Gaussian\ Noise>Frost\ Noise>Motion\ Blur \tag{11}\]
In most cases, for the same class and interference level of noise, the Depth model has higher detection accuracy. In all 15 sets of experiments, the Depth model outperformed the RGB model in terms of accuracy, consistent with the comparison on clean data. The average accuracy increments are +7.74 for Car, +20.57 for Pedestrian, +22.26 for Cyclist, and +16.86 for mAP.
### _Informative Data Selection is Generalized to Noise_
From Fig. 6(b) we know that, on clean data, our model obtains results close to the highest single-modal accuracy. We further test the performance of the fusion model when subjected to noise in a single modality, as shown in Table II.
The results show that the degradation in detection accuracy is smaller when our proposed RGB-Depth fusion model is affected by noise in only one modality. When the fusion model receives noisy RGB and clean Depth data (NR-D), only 5 of the 15 experiments show accuracy degradation. The average accuracy changes for the three RGB noise types are -3.40 (Gaussian), -4.05 (Motion), and -3.74 (Frost), with mAP decreasing from 80.58 to 77.18, 76.53, and 76.84, respectively. It can be seen that noise in the RGB data has very little effect on our proposed fusion model. In addition, the detection error for Car is positively correlated with the noise intensity, but the other categories are not, which leads to a slight increase of mAP from its lowest point as the noise intensity increases (in 10 of the 15 experiments). This issue needs to be explored further in subsequent work.
A similar situation exists for noisy Depth data. When the fusion model receives clean RGB and noisy Depth data (R-ND), the average accuracy decreases are -6.70 (Gaussian), -10.63 (Motion), and -5.66 (Frost), and the accuracy drops in 14 of the 15 experiments; mAP decreases from 80.58 to 73.88, 69.95, and 74.92. However, in this case mAP does not increase with increasing noise intensity.
Furthermore, in Table III we compare the results of the same models with and without our informative data selection in fusion. It shows that even simple fusion can improve performance compared with the single-modal models, but selective fusion exceeds it significantly on noisy data.
Overall, our proposed fusion model is robust to noise in a single modality, with no substantial change in detection accuracy. Next, we analyze the gain of the fusion model over the single-modal models.
### _Further Investigation on Noisy-data Fusion_
We further compare the fusion model with the single-modal models. Although the fusion model outperforms the single-modal models on noisy data, this gain is not unconditional, since it may come at a cost relative to a single-modal model operating on clean data. For example, the fusion model that received noisy RGB data outperformed the single-modal model on the same noisy RGB data in all tests (average mAP increase of 51.61), but did not reach the Depth model or the fusion model with clean data:
\[NRGB<NR-D<Depth<Fusion \tag{12}\]
The fusion model that received noisy Depth data was better than the single-modal model on the same noisy Depth data in all tests (average mAP increase of 34.20), but not as good as the RGB model and the fusion model with clean data. When there is less noise,
\[NDepth<RGB<R-ND<Fusion \tag{13}\]
when there is more noise,
\[NDepth<R-ND<RGB<Fusion \tag{14}\]
The results suggest that models with noisy data improve when they fuse clean data and, conversely, models with clean data may deteriorate when they fuse noisy data. However, there is also a small probability of an accuracy increase in specific experiments, although this is difficult to explain at present.
In addition, the sensitivity of fusion to RGB noise is:
\[Motion\;Blur>Frost>Gaussian \tag{15}\]
and to Depth noise is:
\[Motion\;Blur>Gaussian>Frost \tag{16}\]
, but overall they both have a relatively small effect. The average accuracy decreases are: -3.73(NR-D) and -7.66(R-ND). For specific detection targets, RGB noise in multi-modal data has a greater effect on Car, and NR-D has lower detection
\begin{table}
\begin{tabular}{c|c|c|c c c c c|c c c c|c c c c c} \hline \hline & \multicolumn{2}{c|}{Noise Type} & \multicolumn{4}{c|}{Gaussian Noise} & \multicolumn{4}{c|}{Motion Blur} & \multicolumn{4}{c}{Frost Noise} \\ \hline
**Modality** & **Category** & Clean & L1 & L2 & L3 & L4 & L5 & L1 & L2 & L3 & L4 & L5 & L1 & L2 & L3 & L4 & L5 \\ \hline \multirow{4}{*}{**NR-D**} & **Car** & 92.10 & 90.36 & 88.97 & 87.75 & 87.78 & 88.16 & 90.91 & 89.11 & 87.14 & 86.54 & 87.38 & 88.62 & 88.06 & 87.85 & 87.76 & 87.72 \\ & **Ped.** & 75.48 & 70.52 & 65.64 & 69.40 & 72.03 & 72.77 & 69.16 & 69.68 & 72.30 & 72.48 & 72.68 & 67.24 & 68.43 & 71.38 & 71.71 & 72.31 \\ & **Cyclist** & 74.17 & 69.34 & 68.81 & 68.59 & 69.77 & 70.62 & 68.90 & 67.38 & 67.21 & 69.07 & 69.53 & 68.39 & 68.95 & 69.31 & 69.95 & 70.50 \\ & **mAP** & 80.58 & 76.74 & 74.47 & 75.25 & 76.53 & 77.18 & 76.32 & 75.39 & 75.55 & 76.03 & 76.53 & 74.75 & 75.14 & 76.18 & 76.47 & 76.84 \\ \hline \multirow{4}{*}{**R-ND**} & **Car** & 92.10 & 91.40 & 90.60 & 90.62 & 90.73 & 90.87 & 91.97 & 91.44 & 89.79 & 88.17 & 87.40 & 90.31 & 90.41 & 90.62 & 90.72 & 90.70 \\ & **Ped.** & 75.48 & 74.03 & 73.11 & 68.80 & 65.07 & 63.81 & 73.25 & 69.91 & 65.29 & 62.59 & 60.07 & 72.42 & 68.23 & 66.26 & 66.85 & 67.04 \\ \cline{1-1} & **Cyclist** & 74.17 & 72.70 & 70.91 & 68.40 & 67.19 & 66.95 & 71.79 & 69.73 & 66.81 & 63.19 & 62.38 & 69.10 & 68.19 & 67.76 & 68.16 & 67.04 \\ \cline{1-1} & **mAP** & 80.58 & 79.38 & 78.21 & 75.94 & 74.33 & 73.88 & 79.00 & 77.03 & 73.96 & 71.32 & 69.95 & 77.28 & 75.61 & 74.88 & 75.24 & 74.92 \\ \hline \hline \end{tabular}
\end{table} TABLE II: AP for our fusion model under level 1-5 noisy-data.
\begin{table}
\begin{tabular}{c|c|c|c c c c c|c c c c|c c c c} \hline \hline & \multicolumn{2}{c|}{Noise Type} & \multicolumn{4}{c|}{Gaussian Noise} & \multicolumn{4}{c|}{Motion Blur} & \multicolumn{4}{c}{Frost Noise} \\ \hline
**Modality** & **Category** & Clean & L1 & L2 & L3 & L4 & L5 & L1 & L2 & L3 & L4 & L5 & L1 & L2 & L3 & L4 & L5 \\ \hline \multirow{4}{*}{**RGB**} & **Car** & 92.39 & 82.13 & 66.31 & 36.81 & 10.04 & 0.39 & 87.68 & 79.01 & 57.36 & 29.83 & 16.32 & 66.19 & 39.30 & 25.34 & 23.17 & 16.38 \\ & **Ped.** & 69.71 & 41.83 & 21.71 & 6.68 & 0.76 & 0.00 & 40.04 & 19.79 & 7.13 & 0.97 & 0.21 & 21.75 & 4.86 & 1.18 & 1.55 & 0.59 \\ & **Cyclist** & 66.92 & 46.70 & 29.85 & 13.45 & 3.49 & 0.47 & 54.24 & 40.84 & 20.43 & 6.64 & 2.37 & 32.83 & 15.88 & 9.80 & 8.46 & 4.72 \\ & **mAP** & 76.34 & 56.89 & 39.29 & 18.98 & 4.76 & 0.28 & 60.66 & 46.55 & 28.31 & 12.48 & 6.30 & 40.26 & 20.01 & 12.11 & 11.06 & 7.23 \\ \hline \multirow{4}{*}{**Depth**} & **Car** & 91.44 & 83.68 & 70.02 & 43.84 & 18.98 & 2.38 & 87.23 & 84.40 & 76.80 & 63.74 & 53.25 & 59.11 & 36.51 & 25.54 & 26.09 & 20.84 \\ & **Ped.** & 76.53 & 60.83 & 43.27 & 21.56 & 5.88 & 0.66 & 68.76 & 61.37 & 41.87 & 24.60 & 13.63 & 42.87 & 28.67 & 21.00 & 23.06 & 19.51 \\ \cline{1-1} & **Cyclist** & 74.85 & 64.71 & 53.84 & 36.32 & 18.43 & 2.04 & 67.03 & 60.81 & 48.68 & 36.76 & 27.99 & 53.24 & 44.58 & 37.50 & 37.94 & 34.24 \\ \cline{1-1} & **mAP** & 80.94 & 69.74 & 55.71 & 33.90 & 14.43 & 1.69 & 74.34 & 68.86 & 55.78 & 41.70 & 31.62 & 51.74 & 36.59 & 28.01 & 29.03 & 24.86 \\ \hline \hline \end{tabular}
\end{table} TABLE I: AP for RGB/Depth single-modal models under level 1-5 noisy-data.
\begin{table}
\begin{tabular}{c|c|c|c c c c c|c c c c c|c c c c c} \hline \hline & \multicolumn{2}{c|}{Noise Type} & \multicolumn{4}{c|}{Gaussian Noise} & \multicolumn{4}{c|}{Motion Blur} & \multicolumn{4}{c}{Frost Noise} \\ \hline
**Modality** & **Category** & Clean & L1 & L2 & L3 & L4 & L5 & L1 & L2 & L3 & L4 & L5 & L1 & L2 & L3 & L4 & L5 \\ \hline \multirow{4}{*}{**NR-ND**} & **car** & 91.92 & 87.41 & 80.04 & 66.12 & 53.21 & 46.65 & 89.69 & 86.81 & 79.50 & 69.35 & 63.35 & 77.28 & 64.91 & 58.68 & 58.27 & 55.26 \\ & **ped** & 73.12 & 62.23 & 52.81 & 43.62 & 38.22 & 36.73 & 63.76 & 56
accuracy than R-ND in all 15 experiments, with an average of 2.11 lower. In contrast, Depth noise has a greater effect on Pedestrian and Cyclist, and R-ND outperforms the NR-D model in most experiments. RGB noise has more influence when noise-level\(<\)3, and Depth noise has more influence when noise intensity rises.
These findings illustrate the complexity of the fusion model's behaviour in the face of noisy data. For different targets, recognition tasks, data modality combinations, and noise types, fusion models may exhibit different performance. They do not show a simple linear or nonlinear change with increasing noise intensity, and may even show an increase in accuracy, which requires further investigation.
### _Rethink Our Model and Experiments_
The ablation study is included in the comparative analysis of the single-modal and multi-modal models and is therefore not listed separately. In the following, we rethink our work in terms of data noise, data selection, and fusion models.
#### V-D1 Multi-modal Noise
We selected three general and suitable noise types from the image noise benchmark to add to the RGB and Depth data, because it is difficult to capture or simulate the corresponding noisy data in real scenes. Due to the gap between data modalities, such an approach still has difficulty matching real sensor errors and may introduce bias into the experimental data. However, since this paper mainly focuses on the robustness of the fusion model under noise interference of sufficient strength, such an error can be considered negligible.
#### V-D2 Data Selection Accuracy
In our setting, the selection accuracy mainly relies on uncertainty estimation. Uncertainty estimation is a prerequisite of the method proposed in this paper, so the accuracy of the estimation determines the interpretability of the fusion model. We therefore refer to the ECE method for uncertainty calibration[54], as shown in Fig. 7. We model the uncertainty as a Gaussian distribution over the bounding box parameters, so the magnitude of the variance corresponds to the potential range of values. The error between the true and estimated values can be portrayed by the distance between the curve and the \(y=x\) line. The RGB uncertainty estimates are more accurate, while the Depth estimates deviate, which can also lead to potential model bias.
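Our reading of this check can be sketched as a coverage curve: for each nominal probability level, count how often the ground-truth coordinate falls inside the central interval of the predicted Gaussian. The snippet below is an illustrative sketch, not the calibration code of [54].

```
# Coverage curve for Gaussian box-parameter uncertainty: a calibrated model gives
# empirical coverage close to the nominal level, i.e. a curve close to y = x.
import numpy as np
from scipy.stats import norm

def coverage_curve(mu, sigma, target, levels=np.linspace(0.05, 0.95, 19)):
    """mu, sigma, target: 1-D arrays of predicted means, std devs and ground truths."""
    empirical = []
    for p in levels:
        z = norm.ppf(0.5 + p / 2.0)                 # half-width in standard deviations
        inside = np.abs(target - mu) <= z * sigma
        empirical.append(inside.mean())
    return levels, np.array(empirical)
```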
#### V-D3 Fusion Network Design
We have further implemented the feature fusion model with typical methods[55]. However, such models easily develop a dependence on a single modality, i.e., there is a pronounced primary-secondary relationship between the branches. The test results are shown in Table IV. We apply weighted feature addition at seven layers of the two-pipeline network, and the \(2\times 7\) weight parameters are optimized during training. When trained without any constraint, the network tends to assign more weight to the Depth branch, which seriously affects the robustness of the model. We then use imbalanced data dropout (10% for RGB and 15% for Depth) and weight limitation (upper bound of 0.7 in the first five stages) to adjust the fusion weights during training. The adjustment slightly improves the performance, but it is far from enough, and adaptive balanced fusion remains an open problem.
## VI Conclusion
In this paper, we explore the robustness of single-modal and multi-modal models under noisy data, and argue that informative data selection is a simple but practical method for data fusion and anti-noise vision tasks. We present conclusions on the relationship between data information and the performance of deep learning models, especially multi-modal fusion models, and point out that an information-driven selection strategy can improve the robustness of the fusion model. In addition, uncertainty can be used as an effective approximation of the amount of information in the data. We analyze and reveal the robustness of multi-modal models on dirty data by comparing the accuracy decay of different models through experiments on the KITTI dataset. The experimental results show that the impact of noisy data on single-modal and multi-modal models is complex and involves many aspects, such as the recognition target, the recognition task, the data modality combination, and the noise type. We also propose a novel multi-modal fused 2D object detection model. It performs selective fusion of the bounding boxes generated by multiple independent sub-models based on informative data
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline
**Exp.** & **Modal** & **L1** & **L2** & **L3** & **L4** & **L5** & **L6** & **L7** & **NR** & **ND** \\ \hline \multirow{3}{*}{w/o} & RGB & 0.3 & 0.6 & 0.7 & 0.7 & 0.5 & 0.5 & 0.1 & \multirow{3}{*}{83.2} & \multirow{3}{*}{19.8} \\ \cline{2-2} \cline{6-10} & Depth & 0.7 & 0.4 & 0.7 & 0.7 & 0.7 & 0.9 & & \multirow{3}{*}{77.6} & \multirow{3}{*}{39.5} \\ \hline \multirow{3}{*}{w/} & RGB & 0.6 & 0.5 & 0.8 & 0.8 & 0.7 & 0.4 & 0.3 & \\ \cline{1-1} \cline{2-2} \cline{6-10} & Depth & 0.3 & 0.1 & 0.2 & 0.1 & 0.2 & 0.8 & 0.7 & \\ \hline \end{tabular}
\end{table} TABLE IV: Modality bias in feature fusion. L1-L7 indicate seven fusion layers from front to back in the Darknet in YOLOv3. NR and ND represent Noisy-RGB and Noisy-Depth. w/ and w/o indicate whether we limit the fusion weights in training.
Fig. 7: Uncertainty calibration for the proposed model.
selection with uncertainty and can be applied to different modal data. Experimental results show that our fusion model exhibits high detection accuracy when a clean data modality is present under severe noise interference. In addition, the robustness of the fusion model may differ for different fusion structures, such as feature fusion. The robustness of multi-modal fusion models still needs to be studied in depth, and in subsequent research we will also focus on other tasks, such as semantic segmentation and 3D object detection, to further investigate the phenomena observed in this paper.
As for another research trend, although there is no literature specifically addressing this issue, it can be inferred from the latest research that a large transformer-based model can hopefully handle it well after training with enough data. However, this article aims to present a simple and effective solution from a non-parametric perspective. Compared with fitting the rich and varied real world, our method of directly measuring the information content of the data provides researchers with a simple approach.
|
2304.14937
|
Contactless hand tremor amplitude measurement using smartphones:
development and pilot evaluation
|
Background: Physiological tremor is defined as an involuntary and rhythmic
shaking. Tremor of the hand is a key symptom of multiple neurological diseases,
and its frequency and amplitude differs according to both disease type and
disease progression. In routine clinical practice, tremor frequency and
amplitude are assessed by expert rating using a 0 to 4 integer scale. Such
ratings are subjective and have poor inter-rater reliability. There is thus a
clinical need for a practical and accurate method for objectively assessing
hand tremor.
Objective: to develop a proof of principle method to measure hand tremor
amplitude from smartphone videos.
Methods: We created a computer vision pipeline that automatically extracts
salient points on the hand and produces a 1-D time series of movement due to
tremor, in pixels. Using the smartphones' depth measurement, we convert this
measure into real distance units. We assessed the accuracy of the method using
60 videos of simulated tremor of different amplitudes from two healthy adults.
Videos were taken at distances of 50, 75 and 100 cm between hand and camera.
The participants had skin tone II and VI on the Fitzpatrick scale. We compared
our method to a gold-standard measurement from a slide rule. Bland-Altman
methods agreement analysis indicated a bias of 0.04 cm and 95% limits of
agreement from -1.27 to 1.20 cm. Furthermore, we qualitatively observed that
the method was robust to differences in skin tone and limited occlusion, such
as a band-aid affixed to the participant's hand.
Clinical relevance: We have demonstrated how tremor amplitude can be measured
from smartphone videos. In conjunction with tremor frequency, this approach
could be used to help diagnose and monitor neurological diseases
|
James Bungay, Osasenaga Emokpae, Samuel D. Relton, Jane Alty, Stefan Williams, Hui Fang, David C. Wong
|
2023-04-28T15:48:49Z
|
http://arxiv.org/abs/2304.14937v1
|
# Contactless hand tremor amplitude measurement using smartphones: development and pilot evaluation
###### Abstract
Background - Physiological tremor is defined as an involuntary and rhythmic shaking. Tremor of the hand is a key symptom of multiple neurological diseases, and its frequency and amplitude differs according to both disease type and disease progression. In routine clinical practice, tremor frequency and amplitude are assessed by expert rating using a 0 to 4 integer scale. Such ratings are subjective and have poor inter-rater reliability. There is thus a clinical need for a practical and accurate method for objectively assessing hand tremor.
Objective - to develop a proof-of-principle method to measure hand tremor amplitude from smartphone videos.
Methods - We created a computer vision pipeline that automatically extracts salient points on the hand and produces a 1-D time series of movement due to tremor, in pixels. Using the smartphones' depth measurement, we convert this measure into real distance units. We assessed the accuracy of the method using 60 videos of simulated tremor of different amplitudes from two healthy adults. Videos were taken at distances of 50, 75 and 100 cm between hand and camera. The participants had skin tone II and VI on the Fitzpatrick scale. We compared our method to a gold-standard measurement from a slide rule. Bland-Altman methods agreement analysis indicated a bias of 0.04 cm and 95% limits of agreement from -1.27 to 1.20 cm. Furthermore, we qualitatively observed that the method was robust to differences in skin tone and limited occlusion, such as a band-aid affixed to the participant's hand.
Clinical relevance - We have demonstrated how tremor amplitude can be measured from smartphone videos. In conjunction with tremor frequency, this approach could be used to help diagnose and monitor neurological diseases.
## I Introduction
Hand tremor is a common symptom of multiple diseases, including Parkinson's disease, essential tremor, and multiple sclerosis. Assessment of tremor activity is an important clinical task that can help in diagnosis of disease and evaluating response to treatment.
Tremor is assessed clinically by considering its frequency and amplitude. The standard clinical methods of measuring both tremor amplitude and frequency are subjective. A clinician visually observes a patient's tremor and makes an estimate of both measures, categorising it with a severity rating [1, 2]. Such visual estimates of movement disorders are usually performed in face-to-face consultations, and there is large inter-rater variability between expert clinicians such that tremor diagnoses are frequently incorrect [3, 4].
Objective measurement of tremor frequency is possible using an accelerometer strapped to the hand [5]. Tremor amplitude is rarely derived directly; instead, the amplitude of the acceleration signal is taken as a proxy for the amplitude of the displacement. The practical use of accelerometers is limited in two respects: it requires non-standard specialist equipment, and it adds weight to the hand in a way that may alter tremor characteristics.
Instead, it may be possible to derive tremor frequency and amplitude measurements directly from smartphone videos. Analysis of smartphones videos has been used to assess other biomarkers of neurological conditions [6, 7]. The ubiquity of smartphones means that such computer vision approaches have the potential to be used in multiple contexts. For example, they could be used for remote consultations or for monitoring of disease progression. Recently, we have developed a method for extracting tremor _frequency_ directly from smartphone video recordings of hand tremor [8].
Here, we propose and demonstrate proof-of-principle for a method that enables measurement of tremor _amplitude_ from smartphone videos.
## II Method
### _Technical Description_
The method consists of four main parts, shown in figure 1. We assume that data has been collected via a modern smartphone that can capture both video and a depth measurement at the centre of the camera frame. In our work, we used an iPhone XR to provide both measurements.
**1. Extract hand features** - Using the video data, we extracted salient points on the hand using the Mediapipe hand tracker [9]. This two-stage process identifies the palm region using a U-net and then fits salient points that are consistent with a pre-specified hand pose model. The hand tracker returns a tuple of \(\{x,y,t\}\) corresponding to the \(x\) and \(y\) pixels, and time. For robustness, we monitored movement over multiple points, corresponding to the base (metacarpal), middle (interphalangeal) and tip of the thumb and forefinger.
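A sketch of this step using the MediaPipe Hands solution and OpenCV is shown below. The landmark indices are our assumption of which MediaPipe points correspond to the thumb and forefinger joints described above; the rest is illustrative rather than the released pipeline.

```
# Extract per-frame horizontal positions (in pixels) of selected hand landmarks.
import cv2
import mediapipe as mp

TRACKED = [2, 3, 4, 5, 6, 8]   # assumed indices for thumb/index base, middle and tip

def extract_x_series(video_path):
    hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)
    cap = cv2.VideoCapture(video_path)
    series = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            landmarks = result.multi_hand_landmarks[0].landmark
            width = frame.shape[1]
            series.append([landmarks[i].x * width for i in TRACKED])
    cap.release()
    return series
```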
**2. Calculate tremor amplitude in pixels** - In our controlled scenarios, the primary direction of tremor was horizontal. We therefore used only \(\{x,t\}\) to represent the motion waveform over time. To filter out low frequency motion due
to gross arm movements, we processed the waveform by first extracting peaks and troughs. We used a simple forward-difference to estimate the gradient and located zero crossings. The difference in \(x\) between adjacent zero crossings was an instantaneous estimate of the amplitude - this was calculated for all adjacent pairs of zero crossings. From this set, we used the median value to be robust against artificial increase in tremor due to 'ramp-up' of the tremor motion from an at-rest state.
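A NumPy sketch of this amplitude estimate for a single landmark is given below; it is illustrative, and the repository linked further down contains the released code.

```
# Estimate tremor amplitude in pixels: find peaks/troughs as zero crossings of the
# forward-difference gradient, then take the median of adjacent peak-trough distances.
import numpy as np

def tremor_amplitude_pixels(x):
    """x: 1-D array of horizontal positions (pixels), one value per video frame."""
    grad = np.diff(x)                                            # forward difference
    crossings = np.where(np.diff(np.sign(grad)) != 0)[0] + 1     # peak/trough indices
    extrema = x[crossings]
    if extrema.size < 2:
        return 0.0
    amplitudes = np.abs(np.diff(extrema))                        # instantaneous amplitudes
    return float(np.median(amplitudes))
```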
**3. Convert amplitude from pixels to distance units:** Finally, we converted the pixel distance into true distance units. The conversion relies on the distance between the smartphone and the hand. We measured this distance using the Apple TrueDepth sensor, using the front-facing camera on an iPhone XR. In controlled experiments, we assessed the accuracy of the depth sensor by reading the TrueDepth sensor distance to an object at known depths. In these experiments, the camera was affixed to a tripod in good lighting conditions. At a known distance of 40cm, the root mean squared error over six sensor measurements was 0.12 cm. At 100 cm, this error increased slightly to 0.38 cm.
The conversion also requires knowledge of physical characteristics of the camera described in Table I. For a video shot in portrait, the horizontal sensor size of the camera, \(v_{w}\) is given by:
\[v_{w}=\frac{f_{e}a_{v}}{fa_{s}}\]
From this, we can calculate the width of the view in the scene, \(w\), at a given depth, \(d\):
\[w=v_{w}\frac{f}{d}\]
Finally, the conversion between pixels to distance is:
\[dist=pix\frac{w}{r_{h}}\]
Code for extracting the depth measurement and for computing the amplitude is available at [https://github.com/jamesbunagy/cv-tremor-amplitude](https://github.com/jamesbunagy/cv-tremor-amplitude). The output of the entire process, for two example waveforms, is shown in Figure 2. In Figure 2(a), the dominant frequency of the tremor is visible, and the median peak-trough distance is 5.77 cm. In contrast, Figure 2(b) shows an example in which there is a gross change in \(x\) over the 12-second video recording, but there is no high-frequency oscillation caused by tremor. In this case, the method correctly determines that there is no meaningful tremor (median tremor amplitude = 0.09 cm).
### _Method Validation_
We undertook a methods agreement analysis to assess the performance of the tremor amplitude algorithm. No participants were recruited; all data were fully anonymous self-recordings. Given this, the University of Manchester advised that local ethics approval was not required.
_Data Collection:_ We recorded a set of videos of two members of the study team (JB, OE). Videos were recorded using an iPhone XR smartphone at 1080p resolution and at 60 fps. The smartphone was attached to a tripod, and we ensured that the videoed area was well lit using a ring light. A ruler was placed directly behind the hand, which allowed us to retrospectively measure the tremor amplitude from the video recording.
We simulated two common types of tremor. Resting tremor was simulated by the subject resting their forearm on a chair arm and rotating their wrist to create side-to-side motion as shown in Figure 3. Postural tremor was simulated by the subject raising an outstretched arm parallel to the floor, and with the thumb closest to the camera. The subject made oscillatory hand movements up and down. The camera was oriented so that the principal direction of tremor was horizontal, with respect to a portrait video frame.
For both resting and postural tremor, we simulated tremor amplitudes according to five categories (no tremor, small (\(<\)1 cm), medium (\(\approx\)2 cm), large (\(\approx\)5 cm) and very large (\(>\)10 cm)), which correspond to the tremor categories used within both the Unified Parkinson's Disease Rating Scale and the Essential Tremor Rating Assessment Scale [1, 2]. Videos were recorded at three depths: 50 cm, 75 cm and 100 cm. The two team members had skin tones of II and VI on the Fitzpatrick scale. In total \(2\times 2\times 5\times 3=60\) videos were recorded.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Camera Property & Symbol & Value \\ \hline \hline Physical Lens Focal Length & \(f\) & 2.87 mm \\ \hline
35mm Equiv. Lens Focal Length & \(f_{e}\) & 32 mm \\ \hline Sensor Aspect Ratio & \(a_{s}\) & 0.75 (i.e. 3:4) \\ \hline Video Aspect Ratio & \(a_{v}\) & 0.5625 (i.e. 9:16) \\ \hline Horizontal Video Resolution & \(r_{h}\) & 1080 px \\ \hline \end{tabular}
\end{table} TABLE I: Specifications of the iPhone XR front facing camera.
Fig. 1: Overview of hand-tremor amplitude measurement using smartphone video analysis
We recorded an additional set of 8 videos to assess how well hand detection worked under various types of occlusion. For both subjects, we recorded videos of each wearing a ring, a plaster on the dorsum of the hand that simulated having an accelerometer strapped to the hand, and a hand with fewer than five fingers showing.
_Data Analysis:_ We compared the gold-standard ruler-measured amplitude with our computer vision approach using Bland-Altman agreement analysis [10]. The outputs of this analysis are the bias, which is the mean difference between the computer vision and gold-standard measurements, and the 95% limits of agreement (LoA), which may be regarded as the maximum difference between the two methods for 95% of future measurements. In a subgroup analysis, we assessed whether there were differences in the bias and LoA for skin tones II and VI using a t-test.
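The Bland-Altman quantities used here reduce to simple statistics of the paired differences; a NumPy sketch, assuming the conventional 1.96-standard-deviation limits, is given below.

```
# Bland-Altman bias and 95% limits of agreement between two measurement methods.
import numpy as np

def bland_altman(method_a, method_b):
    diff = np.asarray(method_a) - np.asarray(method_b)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```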
The additional videos with occlusion were analysed qualitatively in two stages. First, we assessed whether the hand tracker could correctly and reliably identify the salient points. Second, we compared the calculated tremor amplitude against the ruler-based amplitude measurement. Bland-Altman plots were not plotted in these cases, as the small number of videos would mean that the plots would be meaningless.
a proxy for the true tremor size [11]. In principle, the acceleration signal can be integrated to provide true distance, but the resulting signal is likely to be noisy. Electromagnetic position sensors have been used, and claim a fidelity of 0.45 mm [12]. Our approach differs by using common sensor modalities that are readily available on most modern smartphones.
While our results are slightly poorer than some existing methods, we believe that this is mainly due to limitations of our experimental setup. Our method contains three potential sources of error. First, rounding error due to the discretization of distance in pixels; for a modern smartphone camera with high resolution, we can assume this to be negligible. Second, errors in the depth measurement: in local tests, we showed an average depth error of 0.38 cm at a true distance of 100 cm. Using trigonometry, we calculate that this would correspond to a possible error of up to \(0.38/100=0.38\%\). These sources of error are limitations of the camera technology, and their sum can be considered a lower bound on the achievable error. In addition, a third error, caused by rounding the visual gold standard to the nearest centimeter, leads to an error of \(\pm 0.5\) cm.
While this pilot work provides proof of principle, it is limited in a few key respects. First, the tremor amplitude is only calculated in the plane of the camera image. While this is sufficient for some clinical scenarios, we know that tremor can be very heterogeneous, depending on clinical condition. For instance, tremor associated with Parkinson's disease is commonly described as a 'pill-rolling' tremor, which is characterised by rotation of the wrist. Second, the ruler used to provide a reference amplitude measurement was often occluded by the hand. This meant that we were unable to make accurate reference measurements, instead rounding to the nearest centimeter. This in turn led to unreliable estimates of the true agreement with the video-based method. Third, data collection was undertaken using an iPhone only. We note that most modern smartphones contain at least one method for measuring depth data, and that our approach should therefore be generalisable to other devices. An alternative, where there is no direct method to measure depth, would be to estimate depth via depth-from-motion methods [13].
To address these issues, we are currently conducting a larger validation study using data from patients with multiple types and acuity of tremor. In this study, we will also investigate whether full 3D depth map videos can improve estimation of tremor amplitude.
## Conclusion
We have demonstrated a smartphone video approach for measuring tremor amplitude. In conjunction with a method for measuring frequency (see [8]), this approach can objectively and contactlessly measure the key clinical components of tremor in near real-time. The method has potential uses for diagnosis and for remote monitoring of disease progression or drug response.
|
2307.12816
|
Non-equilibrium memory effects: granular fluids and beyond
|
In this perspective paper, we look into memory effects in out-of-equilibrium
systems. To be concrete, we exemplify memory effects with the paradigmatic case
of granular fluids, although extensions to other contexts such as molecular
fluids with non-linear drag are also considered. The focus is put on two
archetypal memory effects: the Kovacs and Mpemba effects. In brief, the first
is related to imperfectly reaching a steady state -- either equilibrium or
non-equilibrium, whereas the second is related to reaching a steady state
faster despite starting further. Connections to optimal control theory thus
naturally emerge and are briefly discussed
|
A Patrón, B. Sánchez-Rey, C. A. Plata, A. Prados
|
2023-07-24T14:13:04Z
|
http://arxiv.org/abs/2307.12816v2
|
# Non-equilibrium memory effects: granular fluids and beyond
###### Abstract
In this perspective paper, we look into memory effects in out-of-equilibrium systems. To be concrete, we exemplify memory effects with the paradigmatic case of granular fluids, although extensions to other contexts such as molecular fluids with non-linear drag are also considered. The focus is put on two archetypal memory effects: the Kovacs and Mpemba effects. In brief, the first is related to imperfectly reaching a steady state--either equilibrium or non-equilibrium, whereas the second is related to reaching a steady state faster despite starting further. Connections to optimal control theory thus naturally emerge and are briefly discussed.
## 1 Introduction
Under quite general conditions, many physical systems tend in the long time limit to a state in which all trace of initial conditions is lost. This state is often stationary, either an equilibrium state or a non-equilibrium steady state (NESS), but it also may be a time-dependent "hydrodynamic" state--in which a reduced description in terms of a few "thermodynamic" or "macroscopic" variables accounts for the complete characterisation of the time evolution of the system.
Memory effects are intimately related to aging [1, 2, 3]. A system displays aging when its relaxation or time correlations are not invariant under time translation after being aged for a long waiting time; instead, they explicitly depend on such a time. A memory effect emerges in a physical system when its time evolution depends on the previous history, i.e. on its initial preparation that, in turn, depends on how it has been previously "aged".
A classic example of memory effect is the so-called Kovacs hump, first reported by Kovacs when measuring the volume relaxation of polymeric glasses [4, 5]. Analogous behaviours have been repeatedly observed in many different contexts [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. Let us consider a quantity \(P\) of a physical system in contact with a thermal bath. Its equilibrium value is denoted by \(P_{\rm eq}(T)\), which is assumed to be a monotonic function. The Kovacs hump is the non-monotonic response of the system to the two-jump protocol described below.
Figure 1 shows a sketch of the Kovacs protocol and the associated Kovacs response. The system of interest is initially equilibrated at temperature \(T_{i}\), and therefrom it is aged at a lower temperature \(T_{1}<T_{i}\) in the time interval \(0<t<t_{w}\). At \(t=t_{w}\), the instantaneous value of \(P\) is \(P(t_{w})\), and at this point the temperature of the bath is abruptly changed to \(T_{f}\), such that \(P(t_{w})=P_{\rm eq}(T_{f})\)--thus, \(T_{i}>T_{f}>T_{1}\). The system displays the Kovacs effect when, for \(t>t_{w}\), \(P\) departs from its equilibrium value, which \(P\) already has as a consequence of the choice of \(T_{f}\), and presents a non-monotonic behaviour.
Figure 1: Qualitative picture of the Kovacs hump. The time evolution of a physical quantity \(P\) is depicted on the top panel, when the system is submitted to the two-jump protocol in the temperature shown on the bottom panel. The relaxation from \(T_{i}\) to \(T_{1}\) (dashed line) is interrupted at \(t=t_{w}\), when the quantity \(P\) has its equilibrium value at the final temperature \(T_{f}\), \(P(t_{w})=P_{\rm eq}(T_{f})\). Nevertheless, \(P(t)\) deviates from \(P_{\rm eq}(T_{f})\) and passes through a maximum before returning thereto—thus showing the need of additional physical quantities to completely characterise the state of the system.
The existence of this Kovacs hump entails that the pair \((T,P)\) does not suffice to completely characterise the state of the system: additional state variables are necessary.
Another example of memory effect is the Mpemba effect [31]. Originally, the Mpemba effect refers to "hot" water freezing faster than "cold" water [31, 32]. In this context, the very existence of the Mpemba effect is still controversial [33, 34]. Recently, the Mpemba effect has attracted the attention of the non-equilibrium physics community, understanding it in a generalised way as follows. The relaxation of two samples of the same system to a common final steady state is considered. Under certain conditions, the sample initially further from the steady state relaxes thereto faster than that initially closer, in contradiction with the usual Newton's law of cooling [35].
The Mpemba effect is qualitatively depicted in fig. 2. Both the Mpemba effect--the hotter cools sooner--and the inverse Mpemba effect--the colder heats sooner--have been observed in many different physical contexts [23, 29, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62]. In the theoretical studies of the Mpemba effect, two main frameworks have been employed: the stochastic thermodynamics (or entropic) approach [36], and the kinetic (or thermal) approach [37], which we will describe later in detail. In the former, distance to equilibrium is defined in probability space, e.g. with the Kullback-Leibler distance. In the latter, distance to equilibrium is monitored through the kinetic temperature, which is proportional to the average kinetic energy.1
Footnote 1: For equilibrium systems, due to the equipartition theorem, the kinetic temperature equals the thermodynamic temperature. This is no longer the case for out-of-equilibrium states.
In the Mpemba effect, the system that is further from the steady state somehow takes a shortcut and thus relaxes thereto faster than the closer one. Then, there appears a natural connection with the general field of shortcuts or, employing the terminology introduced in ref. [63], swift state to state transformations. In particular, a related problem is the optimisation of the relaxation route to equilibrium--or to a NESS. For given initial and final states, the minimisation of the connection time between them by engineering the time dependence of some physical quantities, like the temperature or the potential, is a well-defined mathematical problem in optimal control theory [64]. This is the classic brachistochrone problem, which very recently has been addressed for both quantum and non-equilibrium systems [65, 66, 67, 68, 69, 70, 71, 72].
## 2 Kovacs effect
For systems with a master equation dynamics, there are general results for the shape of the Kovacs hump in linear response. These results hold under quite general conditions, basically (i) a canonical form of the equilibrium probability distribution function (pdf), proportional to \(\exp(-\beta H)\), with \(\beta=(k_{B}T)^{-1}\) and \(H\) being the system's Hamiltonian, and (ii) detailed balance in the dynamics [14]. With these assumptions, the form of the Kovacs hump for the energy \(E(t)=\left<H\right>(t)\) is directly related to the form of its "direct" relaxation function \(\phi_{E}(t)\) from \(T_{i}\) to \(T_{f}\), with only one jump.
From the explicit expression of the Kovacs hump in linear response, eq. (43) of ref. [14], one deduces that: (i) the Kovacs hump is always positive, i.e. \(E(t)\geq E_{\rm eq}(T_{f})\), (ii) there is only one maximum of \(E(t)\). Interestingly, the explicit expression of the Kovacs hump derived in ref. [14] resembles the phenomenological expression written by Kovacs [5]. Although the majority of studies are done in the non-linear regime, i.e. with large values of the temperature jumps, the behaviour described by the linear response theory, i.e. (i) and (ii) above, a positive hump with only one maximum, is the one found in glassy and other complex systems [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 24, 25, 27, 29, 30]; thus the term "normal Kovacs hump" has been coined to describe it. The normal hump stems from the structure of the direct relaxation function in linear response, which is a sum of exponentially decreasing modes with positive coefficients.
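This structure is easy to reproduce numerically. The sketch below builds a toy two-mode linear system (arbitrary rates and amplitudes, not the expressions of ref. [14]), applies the two-jump protocol, and recovers a positive hump with a single maximum.

```python
import numpy as np
from scipy.optimize import brentq

# Toy two-mode linear system: E(t) = x1(t) + x2(t), each x_i relaxing to c_i * T_bath at rate k_i.
k = np.array([1.0, 0.2])        # arbitrary mode rates
c = np.array([0.5, 0.5])        # arbitrary (positive) mode amplitudes -> "normal" hump
T_i, T_1, T_f = 2.0, 0.5, 1.0   # two-jump Kovacs protocol temperatures, T_i > T_f > T_1

def relax(x0, T_bath, t):
    return c * T_bath + (x0 - c * T_bath) * np.exp(-k * t)

x_init = c * T_i                                    # equilibrated at T_i
E_eq_f = c.sum() * T_f
# Waiting time t_w chosen such that E(t_w) under the bath at T_1 equals E_eq(T_f)
t_w = brentq(lambda t: relax(x_init, T_1, t).sum() - E_eq_f, 1e-6, 100.0)

x_w = relax(x_init, T_1, t_w)                       # state at the second jump
t = np.linspace(0.0, 15.0, 400)
E = np.array([relax(x_w, T_f, s).sum() for s in t])
hump = E - E_eq_f                                   # positive, with a single maximum
print(f"t_w = {t_w:.3f}, max hump = {hump.max():.4f} at t = {t[hump.argmax()]:.2f}")
```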
In glassy systems, the emergence of the Kovacs effect is often explained as a consequence of the complex energy landscape typical thereof. Still, the Kovacs effect has also been observed in systems with a much simpler energy landscape. A paradigmatic case is that of granular gases, which are intrinsically non-equilibrium systems: energy is purely kinetic but it is continuously dissipated in collisions. Therefore, an external mechanism is needed to drive the system to a stationary state, which is always a NESS with a non-Maxwellian velocity distribution function (vdf) [73]. The simplest one is that of the uniformly heated granular gas, in which independent white noise forces with variance \(\chi\) act on all particles. The kinetic temperature--here also called granular temperature--at the NESS is then a certain function of \(\chi\)[74, 75].
Figure 2: Qualitative picture of the Mpemba memory effect. The “hot” sample \(A\), with initial kinetic temperature \(T_{i,A}\), is further from equilibrium at the common bath temperature \(T_{s}\) than the “cold” sample \(B\), with initial temperature \(T_{i,B}<T_{i,A}\). The thermal Mpemba effect emerges when the time evolution of the initially hotter sample (red solid line) overtakes that of the initially colder one (blue dashed). In the stochastic processes approach, the time evolution of the distance in probability space—e.g. the Kullback-Leibler divergence—is monitored instead of the kinetic temperature, and the entropic Mpemba effect arises when a similar crossing of the relaxation towards equilibrium of samples \(A\) and \(B\) is observed.
Despite the simple energy landscape, the Kovacs effect neatly appears when a system of smooth inelastic hard particles2 is submitted to the two-jump Kovacs protocol, with the intensity of the driving playing the role of the bath temperature, \(\chi_{i}\to\chi_{1}\to\chi_{f}\)[19, 76]. This neatly implies that the instantaneous value of the kinetic temperature \(T(t)\) does not suffice to completely describe granular fluids. It is the non-Gaussianities that are responsible for the emergence of the Kovacs effect, and it is thus essential to incorporate them to the physical picture. It suffices to do so in the simplest way by including only the excess kurtosis \(a_{2}(t)\)--the so-called first Sonine approximation.
Footnote 2: In smooth collisions, the tangential component of the relative velocity is conserved, whereas the normal component is reversed and shrunk with the restitution coefficient \(\alpha\); the energy loss is thus proportional to \(1-\alpha^{2}\) and \(\alpha=1\) corresponds to the elastic case.
More interestingly, the sign of the Kovacs hump depends on the inelasticity. Specifically, it depends on the sign of the excess kurtosis at the steady state, \(a_{2}^{\rm s}\), which is negative (positive) for small (large) inelasticity. The key point is the cooling rate being an increasing function of \(a_{2}\). By aging the system with a very low value of the driving \(\chi_{1}\), the system falls onto the homogeneous cooling state (HCS) [77], in which the granular fluid freely cools following Haff's law [78], \(T(t)\propto t^{-2}\), and the excess kurtosis becomes constant and equals \(a_{2}^{\rm HCS}\). One always has \({\rm sgn}(a_{2}^{\rm HCS})={\rm sgn}(a_{2}^{\rm s})\) and \(|a_{2}^{\rm HCS}|>|a_{2}^{\rm s}|\)--the white noise forcing diminishes the non-Gaussian character of the vdf, thus decreasing \(|a_{2}|\). Then, just after the second jump \(\chi_{1}\to\chi_{f}\) at \(t=t_{w}\), despite having the "correct" kinetic temperature \(T_{f}\), the system is cooling slower (faster) than at the steady state when \(a_{2}^{\rm HCS}-a_{2}^{\rm s}\), or simply \(a_{2}^{\rm s}\), is negative (positive), i.e. for small (large) inelasticity.
For \(t>t_{w}\), the discussion above entails the following. The kinetic temperature \(T(t)\) initially increases (decreases) and passes through a maximum (minimum) before going back to \(T_{f}\) when \(a_{2}^{\rm s}<0\) (\(a_{2}^{\rm s}>0\)), i.e. for small (large) inelasticity. Therefore, the Kovacs hump is _normal_, similar to that of molecular fluids, positive and with only one maximum for small inelasticities, whereas the Kovacs hump turns out to be _anomalous_, using the terminology introduced in ref. [19], for large inelasticities: negative with one minimum. Figure 3 shows two examples of the aforementioned behaviours. In the granular gas, both the normal and the anomalous Kovacs effect persist in the linear response regime [28].
The Kovacs effect has also been investigated in a granular fluid of rough particles. In addition to being inelastic, collisions have a certain degree of roughness, i.e. the tangential component of the relative velocity is not conserved in collisions. This induces a coupling between the translational and rotational degrees of freedom. More complex Kovacs responses emerge, which may involve several extrema [79].
The linear response theory for molecular systems [14] has been generalised to thermal systems [21, 22]. Specifically, the relation between the Kovacs hump and the direct relaxation function remains valid, but the latter is not necessarily a sum of positive modes. It is this fact that makes possible the emergence of the anomalous Kovacs effect, at least in linear response [28].
Finally, it is interesting to note that the Kovacs effect has also been recently investigated in a variety of systems, such as active matter [22], disordered mechanical systems [80], frictional interfaces [81], fluids with non-linear drag [29], or a levitated colloidal nanoparticle [26].
## 4 Mpemba effect
To start with, we discuss the entropic (stochastic) Mpemba effect, triggered by the seminal Lu and Raz's work [36]. A mesoscopic system is considered and its time evolution is analysed in terms of the pdf of the relevant variables, which obeys a Markovian evolution equation (master equation, Fokker-Planck equation, etc.) with detailed balance. Distance to equilibrium is defined in terms of a functional of the pdf, e.g. the Kullback-Leibler divergence or other norms like the \(\mathcal{L}^{1}\) or \(\mathcal{L}^{2}\) norms. By expanding the solution of the evolution equation in the eigenfunctions of the relevant operator, the entropic Mpemba effect is found when, under appropriate conditions, the amplitude of the slowest relaxation mode presents a non-monotonic dependence with the temperature [36, 38, 45, 48, 57, 58, 61, 82]. Also, a _strong_ Mpemba effect has been reported, which arises when, by adequately choosing the system parameters, the coefficient of the slowest relaxation mode vanishes and the relaxation to equilibrium becomes exponentially faster [36, 38, 58, 82].
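A minimal numerical illustration of this entropic route, in the spirit of ref. [36], is sketched below for a three-level system with Arrhenius rates satisfying detailed balance; the energies, barriers and bath temperature are arbitrary choices, and whether the amplitude of the slowest mode is actually non-monotonic in the initial temperature depends on those choices.

```python
import numpy as np

# Three-level system with Arrhenius rates obeying detailed balance at bath temperature T_b.
E = np.array([0.0, 0.4, 1.0])                     # state energies (illustrative)
B = np.array([[0.0, 1.6, 2.4],
              [1.6, 0.0, 1.2],
              [2.4, 1.2, 0.0]])                   # symmetric barriers B[i, j] = B[j, i]
T_b = 0.4                                         # bath temperature

def rate_matrix(T_bath):
    W = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            if i != j:
                W[i, j] = np.exp(-(B[i, j] - E[j]) / T_bath)   # rate j -> i
    W -= np.diag(W.sum(axis=0))                   # columns sum to zero (master equation)
    return W

def boltzmann(T):
    p = np.exp(-E / T)
    return p / p.sum()

# Symmetrise W with the stationary distribution and expand initial conditions in its eigenbasis.
pi = boltzmann(T_b)
W = rate_matrix(T_b)
S = np.diag(pi**-0.5) @ W @ np.diag(pi**0.5)      # symmetric thanks to detailed balance
lam, V = np.linalg.eigh(S)                        # lam[-1] ~ 0 (stationary), lam[-2] = slowest decay
v_slow = V[:, -2]

T_init = np.linspace(0.2, 5.0, 200)
a2 = np.array([v_slow @ (boltzmann(T) / np.sqrt(pi)) for T in T_init])  # slow-mode amplitude
print("slowest nonzero rate:", lam[-2])
print("a2(T_init) non-monotonic:", bool(np.any(np.diff(np.sign(np.diff(a2))) != 0)))
```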
In the thermal (kinetic) approach, which started with Lasanta _et al._'s analysis of a granular gas [37], a fluid is considered and its time evolution is analysed in terms of the one-particle vdf, which evolves following a kinetic equation--typically of Boltzmann-Fokker-Planck type.
Figure 3: Kovacs memory effect for the uniformly heated granular gas. The theoretical curves in the first Sonine approximation for both the normal positive hump (top panel) and the anomalous negative hump (bottom panel) are shown as a function of a dimensionless time \(\tau\). See refs. [19, 76] for more details and the comparison with numerical simulations.
The relaxation to the steady state is monitored by the kinetic temperature. The kinetic approach has been employed for both granular fluids [37, 39, 42, 47, 50, 51], in which collisions between particles are inelastic, and molecular fluids with elastic collisions but with a non-linear drag force [29, 43, 51, 56]. The former relaxes to a NESS that is characterised by the intensity of the driving applied to balance, on average, the energy dissipated in collisions; the latter relaxes to a true equilibrium state with a Maxwellian vdf.
There are some key differences between the stochastic and kinetic approaches to the Mpemba effect. On the one hand, the monitored quantity in the kinetic approach, the kinetic temperature, is much closer to an experimentally measurable quantity than the abstract distance between distributions employed in the stochastic approach. In addition, the thermal Mpemba effect typically takes place at short times, far away from the final state--which in principle makes it easier to observe. On the other hand, the initial conditions in the kinetic approach must be non-stationary and thus, in principle, non-trivial to implement--although for non-linear fluids the aging procedure to obtain these initial conditions, which correspond to a long-lived, metastable, non-equilibrium state, has been discussed [29]; whereas the initial conditions for the entropic Mpemba effect in the stochastic approach are equilibrium states.3
Footnote 3: Initial stationary conditions have also been considered in the granular case, but an unrealistic asymmetric driving mechanism has to be introduced to trigger the Mpemba effect [47, 54].
Now we focus on the kinetic approach to the Mpemba effect. Following Prados and Trizac's analysis of the Kovacs effect in the same system [19, 76], the first Sonine approximation was employed in ref. [37] to analyse the emergence of the Mpemba effect in a granular fluid by incorporating the non-Gaussianities to the physical picture. In fact, it is the non-Gaussian character of the vdf that makes the Mpemba effect possible. If the vdf were Gaussian, the kinetic temperature \(T(t)\) would obey a closed first-order differential equation, without additional variables, and neither the Mpemba effect nor any other memory effect would emerge.
In fig. 4, specific examples of both the Mpemba effect, for \(T_{i,A}>T_{i,B}>T_{s}\), and the inverse Mpemba effect, for \(T_{s}>T_{i,A}>T_{i,B}\), are shown. The hot sample A is prepared in an initial state with kinetic temperature \(T_{i,A}\) and excess kurtosis \(a_{2,i}^{A}\), and cools down to a NESS corresponding to a certain value of the driving \(\chi_{s}\) following the dynamical curve \(T_{A}(t)\) (red solid line). The cold sample is prepared in an initial state with kinetic temperature \(T_{i,B}<T_{i,A}\) and excess kurtosis \(a_{2,i}^{B}\), and also cools down to the same NESS following the dynamical curve \(T_{B}(t)\) (blue dashed). Again, the key point is the cooling rate increasing with \(a_{2}\): if \(a_{2,i}^{A}>a_{2,i}^{B}\), the difference of the initial cooling rates may become large enough to facilitate the crossing of the corresponding time evolutions \(T_{A}(t)\) and \(T_{B}(t)\)--at least for small enough kinetic temperature difference \(\Delta T_{i}\equiv T_{i,A}-T_{i,B}\). As the initial states are not stationary states, the initial values of the kurtosis \(a_{2,i}\) can be tuned to bring the Mpemba effect about.
As the difference of the initial kurtoses \(\Delta a_{2,i}\equiv a_{2,i}^{A}-a_{2,i}^{B}\) increases, the range of initial temperatures \(\Delta T_{i}\equiv T_{i,A}-T_{i,B}\) for which the Mpemba effect is observed increases. Since the cooling rate depends on the inelasticity \(\alpha\), the range of temperatures for which the Mpemba effect emerges depends on the inelasticity as well, vanishing in the elastic limit \(\alpha\to 1\).
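The mechanism can be caricatured with two coupled ordinary differential equations in which the cooling rate of the scaled temperature grows with the excess kurtosis; the sketch below is a deliberately simplified toy, not the first Sonine approximation of ref. [37], and all parameter values are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Caricature of the thermal Mpemba mechanism: the cooling rate of the scaled temperature theta
# grows with the excess kurtosis a2, which relaxes towards its (here zero) steady value.
lam, tau = 1.0, 0.3              # coupling strength and kurtosis relaxation time (arbitrary)

def rhs(t, y):
    theta, a2 = y
    return [-(1.0 + lam * a2) * (theta - 1.0), -a2 / tau]

t_eval = np.linspace(0.0, 4.0, 800)
sol_A = solve_ivp(rhs, (0, 4), [1.5, 2.0], t_eval=t_eval)   # hotter sample, large initial a2
sol_B = solve_ivp(rhs, (0, 4), [1.4, 0.0], t_eval=t_eval)   # colder sample, Gaussian-like start

gap = sol_A.y[0] - sol_B.y[0]
crossing = np.where(np.diff(np.sign(gap)) != 0)[0]
if crossing.size:
    print(f"Mpemba crossing at t ~ {t_eval[crossing[0]]:.2f}")
else:
    print("no crossing for these parameters")
```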
The thermal Mpemba effect has also been investigated for a gas of inelastic rough hard spheres [39]. Therein, the Mpemba effect is giant, much larger than in the smooth granular gas. The initially hotter sample may cool sooner, even when the initial temperatures differ by more than one order of magnitude. The largeness of the memory effect stems from the coupling between the translational and rotational temperatures, which are of the same order--in the smooth case, the Mpemba effect stemmed from the coupling with the (quite small) non-Gaussianities.
It is interesting to note that the Mpemba effect has also recently been found in a molecular fluid, in which the collisions between particles are elastic, with non-linear drag \(\zeta(v)=\zeta_{0}(1+\gamma\,mv^{2}/2k_{B}T_{s})\)[29, 43, 56]. The non-linearity is measured by a dimensionless parameter \(\gamma\), and the relevance of collisions by a dimensionless collision rate \(\xi^{-1}\) (\(\xi=\infty\) thus corresponds to the collisionless case.) The kinetic temperature is not constant due to the interactions with the thermal bath--modelled as a background fluid of particles with comparable mass [83, 84, 85]. The non-linearity of the drag implies that the evolution equation for the temperature is coupled to higher-order cumulants of the vdf, bringing about the possible emergence of memory effects.
One key question, unanswered in previous studies in the granular case [37, 39], is the aging procedure that may give rise to the specific initial non-equilibrium conditions
Figure 4: Mpemba memory effect for the uniformly heated granular gas. The dimensionless temperature \(\theta\equiv T/T_{s}\) is plotted as a function of a dimensionless time \(\tau\). Both the Mpemba effect and the inverse Mpemba effect are shown—similarly to fig. 3, only the theoretical curves in the first Sonine approximation. See ref. [37] for the comparison with numerical simulations.
chosen to make the Mpemba effect as large as possible. Interestingly, it is possible to give an answer for the non-linear fluid: the hot sample must be prepared by heating it from a much lower temperature, whereas the cold sample must be prepared by cooling it from a much higher temperature. The quench from a very high temperature employed for the cold sample makes it fall in a long-lived far-from-equilibrium state [86], over which the kinetic temperature follows a very slow, algebraic, decay to equilibrium--which (i) increases the magnitude of the Mpemba effect and (ii) makes it universal, in the sense that the curves corresponding to different initial temperatures, non-linearity \(\gamma\), and collision rate \(\xi\) collapse onto a unique master curve upon a suitable rescaling, see fig. 5 [29].
## 5 Optimal control
What is the fastest relaxation route between two given states, either equilibrium, NESSs, or arbitrary ones? In general, this is the problem of the brachistochrone, which has recently been addressed in different physical contexts [63, 65, 66, 67, 68, 69, 70, 71, 72]. It is tempting to relate this problem with the Mpemba effect, since the relaxation from the initially further from equilibrium state overtaking that of the initially closer may be interpreted as the former finding a shortcut to the common final state.
The thermal brachistochrone has been recently investigated in uniformly driven granular fluids [68, 69]. It refers to the minimum time connection by controlling the intensity of the stochastic forcing \(\chi\). The protocols minimising the connection time between the initial and final NESSs corresponding to initial and final kinetic temperatures \(T_{i}\) and \(T_{f}\) are of bang-bang type, i.e. they comprise different time intervals in which the thermostat alternates between its maximum and minimum available values [64].
In the granular fluid, the time over the brachistochrone \(t_{f}\) typically beats the experimental relaxation time \(t_{R}\) by at least one order of magnitude--see figure 6. Remarkably, in the usual relaxation experiment with a sudden step at \(t=0\), the relaxation is never complete in a finite time--the empirical relaxation time is defined by estimating that the system is close "enough" to the final state. On the contrary, over the brachistochrone, the system reaches exactly the final state in a finite time.
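The flavour of this comparison can be conveyed with a one-variable toy model in which the kinetic temperature relaxes linearly towards a target set by a bounded driving; saturating the driving (a single "bang") reaches the final temperature exactly in finite time, whereas the sudden-step protocol only approaches it asymptotically. This is not the granular-gas calculation of refs. [68, 69]; the rate, bounds and 1% threshold below are arbitrary.

```python
import numpy as np

# One-variable toy: dT/dt = -k (T - T_ss(chi)), with the driving chi bounded.
k = 1.0
T_ss = lambda chi: chi          # steady-state temperature set directly by the driving (toy choice)
chi_min, chi_max = 0.1, 10.0
T_i, T_f = 5.0, 1.0             # cool from T_i to T_f < T_i

# Bang protocol: saturate the driving at chi_min until T hits T_f exactly, then hold chi with T_ss = T_f.
T_min = T_ss(chi_min)
t_bang = np.log((T_i - T_min) / (T_f - T_min)) / k

# Step protocol: jump chi so that T_ss = T_f at t = 0; the relaxation is only asymptotic,
# so define an empirical relaxation time as reaching within 1% of T_f.
t_step = np.log((T_i - T_f) / (0.01 * T_f)) / k

print(f"bang time {t_bang:.2f}, step (1%) time {t_step:.2f}, speed-up x{t_step / t_bang:.1f}")
```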
A similar situation, with the thermal brachistochrone given by bang-bang protocols is found in Fokker-Planck systems [68, 70]. The case of coupled harmonic oscillators that are driven from an initial equilibrium state at temperature \(T_{i}\) to a final equilibrium state at temperature \(T_{f}\) has been analysed in detail, and an unexpected discontinuity of the minimum connection time with increasing dimension has been unveiled [70].
## 6 Discussion
We have reviewed the emergence of non-equilibrium memory effects, mainly in granular fluids and fluids with non-linear drag. Despite being quite different from a fundamental point of view--collisions in granular fluids are inelastic, so they are intrinsically out-of-equilibrium systems with non-Gaussian vdfs even in the stationary state--both types of systems display the Kovacs and the Mpemba memory effects. Still, one key difference between granular and non-linear fluids is the emergence of the anomalous Kovacs effect in the former. Even when a non-linear drag is present, the Kovacs effect is always normal when the stationary state corresponds to equilibrium and the dynamics verify detailed balance.
The existence of these memory effects in the relaxation of the kinetic temperature, proportional to the average kinetic energy, of granular and non-linear fluids stems from its evolution being coupled to additional variables, higher-order cumulants of the velocity that measure the deviation of the vdf from the Gaussian shape.
Figure 5: Universal Mpemba memory effect for the molecular fluid with non-linear drag. The dimensionless temperature is again \(\theta=T/T_{s}\). Several cold samples B for different values of \((\theta_{i,B},\gamma,\xi)\) are shown, together with the corresponding hot samples for a fixed ratio of initial temperatures \(\theta_{i,A}/\theta_{i,B}=1.1\). When plotted as a function of a dimensionless time \(s_{B}=\gamma\theta_{i,B}\zeta_{0}t\), all curves—both for the hot and cold samples—collapse onto a universal behavior.
Figure 6: Acceleration factor over the brachistochrone for the granular gas. Specifically, we plot the ratio \(t_{R}/t_{f}\) of the experimental relaxation time to the brachistochrone time as a function of the ratio \(T_{f}/T_{i}\) of the final temperature to the initial one, for \(\alpha=0.3\) (solid line) and \(\alpha=0.8\) (dashed line)—see reference [68] for details.
In other words, the kinetic temperature does not suffice to univocally determine the macroscopic state of the system. Hence, it is essential in general to keep track of the non-Gaussianities to understand the non-equilibrium behaviour.
The thermal and entropic approaches to the Mpemba effect have been scarcely compared [61, 87]. In ref. [87], it was shown that the thermal Mpemba effect may appear without its entropic counterpart--or vice versa--in a molecular fluid with non-linear drag. Therein, some situations appear in which the kinetic temperature overshoots the stationary value, which makes it necessary to revise the usual definition of the thermal Mpemba effect in this scenario. The authors of ref. [87] propose a separation of the Kullback-Leibler divergence into a "kinetic" contribution plus a "local-equilibrium" distribution that allows for defining a non-equilibrium temperature, not necessarily associated with the average kinetic temperature, for any system relaxing to equilibrium. It seems worth exploring if this line of thought could lead to a unique framework for the thermal and entropic Mpemba effects.
Much progress has been made in the understanding of these memory effects. However, there is still room for further work in this appealing line of research. One perspective is related to their optimal control, e.g. for optimising the "positive" consequences of a (tailored) preparation of the initial state--as in the Mpemba effect. Therein, it seems also worth investigating possible connections between the Mpemba effect and the optimisation of the relaxation route to equilibrium, which has attracted a lot of attention recently from different perspectives: e.g. the impact of a precooling strategy [88] or the possible asymmetry between heating and cooling [89, 90, 91].
###### Acknowledgements.
We acknowledge financial support from Grant PID2021-122588NB-I00 funded by MCIN/AEI/10.13039/ 501100011033/ and by "ERDF A way of making Europe", and also from Grant ProyExcel_00796 funded by Junta de Andalucia's PAIDI 2020 programme. A. Patron acknowledges support from the FPU programme through Grant FPU2019-4110. C. A. Plata acknowledges the funding received from EU Horizon Europe-Marie Sklodowska-Curie 2021 programme through the Postdoctoral Fellowship with Ref. 101065902 (ORION). We are indebted to all the people with whom we have collaborated in this exciting field of memory effects.
|
2302.06065
|
A Systematic Literature Review of Explainable AI for Software
Engineering
|
Context: In recent years, leveraging machine learning (ML) techniques has
become one of the main solutions to tackle many software engineering (SE)
tasks, in research studies (ML4SE). This has been achieved by utilizing
state-of-the-art models that tend to be more complex and black-box, which has
led to less explainable solutions that reduce trust and uptake of ML4SE
solutions by professionals in the industry.
Objective: One potential remedy is to offer explainable AI (XAI) methods to
provide the missing explainability. In this paper, we aim to explore to what
extent XAI has been studied in the SE community (XAI4SE) and provide a
comprehensive view of the current state-of-the-art as well as challenge and
roadmap for future work.
Method: We conduct a systematic literature review on 24 (out of 869 primary
studies that were selected by keyword search) most relevant published studies
in XAI4SE. We have three research questions that were answered by meta-analysis
of the collected data per paper.
Results: Our study reveals that among the identified studies, software
maintenance (68\%) and particularly defect prediction has the highest share of
the SE stages and tasks being studied. Additionally, we found that XAI methods
were mainly applied to classic ML models rather than more complex models. We
also noticed a clear lack of standard evaluation metrics for XAI methods in the
literature which has caused confusion among researchers and a lack of
benchmarks for comparisons.
Conclusions: XAI has been identified as a helpful tool by most studies, which
we cover in the systematic review. However, XAI4SE is a relatively new domain
with a lot of untouched potentials, including the SE tasks to help with, the
ML4SE methods to explain, and the types of explanations to offer. This study
encourages the researchers to work on the identified challenges and roadmap
reported in the paper.
|
Ahmad Haji Mohammadkhani, Nitin Sai Bommi, Mariem Daboussi, Onkar Sabnis, Chakkrit Tantithamthavorn, Hadi Hemmati
|
2023-02-13T02:59:41Z
|
http://arxiv.org/abs/2302.06065v1
|
# A Systematic Literature Review of Explainable AI for Software Engineering
###### Abstract
**Context:** In recent years, leveraging machine learning (ML) techniques has become one of the main solutions to tackle many software engineering (SE) tasks in research studies (ML4SE). This has been achieved by utilizing state-of-the-art models that tend to be more complex and black-box, which has led to less explainable solutions that reduce the trust and uptake of ML4SE solutions by professionals in industry.
**Objective:** One potential remedy is to offer explainable AI (XAI) methods to provide the missing explainability. In this paper, we aim to explore to what extent XAI has been studied in the SE community (XAI4SE) and provide a comprehensive view of the current state-of-the-art, as well as challenges and a roadmap for future work.
**Method:** We conduct a systematic literature review on the 24 most relevant published studies in XAI4SE (out of 869 primary studies that were selected by keyword search). We answer three research questions through a meta-analysis of the data collected per paper.
**Results:** Our study reveals that among the identified studies, software maintenance (68%), and particularly defect prediction, has the highest share of the SE stages and tasks being studied. Additionally, we found that XAI methods were mainly applied to classic ML models rather than more complex models. We also noticed a clear lack of standard evaluation metrics for XAI methods in the literature, which has caused confusion among researchers and a lack of benchmarks for comparisons.
**Conclusions:** XAI has been identified as a helpful tool by most studies, which we cover in the systematic review. However, XAI4SE is a relatively new domain with much untouched potential, including the SE tasks to support, the ML4SE methods to explain, and the types of explanations to offer. This study encourages researchers to work on the challenges and roadmap identified in the paper.
keywords: Explainable AI for Software Engineering (XAI4SE), Systematic Review, Machine Learning for Software Engineering (ML4SE), Explainable AI, Interpretable AI +
Footnote †: journal: Information and Software Technology
## 1 Introduction
In the past few decades, advancement in Machine Learning (ML) has increased exponentially with the combination of powerful machines, robust algorithms, and easier access to vast amounts of data. As a result, ML models have now been developed in many critical domains such as healthcare, banking, finance, and terrorism detection [1; 2; 3].
In the field of software engineering as well, ML has been dominating many studies and activities. AI-based solutions have been applied to many automation tasks such as source code [4], test case [5], patch [6], and specification generation [7]; prediction tasks such as defect or vulnerability prediction [8]; and recommendation tasks such as API recommendation [9].
Easy access to an abundance of software repositories (e.g., GitHub, StackOverflow, execution logs, etc.) has made SE an ideal field for data-driven ML algorithms to thrive. During the past decade, there have been many publications in sub-domains of SE research, such as Mining Software Repositories (Software Analytics) and Automated Software Engineering, that study applying ML techniques to various SE tasks such as API
recommendation [9], risk prediction [10], code comprehension [11], code generation [4], code retrieval [12], code summarization [13], bug-fixing processes [14], software testing [5], fault localization [15], effort estimation [16], documentation [17], clone detection [18], program synthesis [19], etc.
Often, models with higher complexity, such as SVMs [20] and deep neural networks [21], have achieved higher predictive accuracy in these tasks and are thus recommended for use by practitioners. However, similar to other fields, like NLP or image processing, as ML models become more complex and achieve better accuracies, understanding their decision-making process becomes much harder. State-of-the-art models have evolved from algorithms like decision trees or logistic regression, which are intrinsically explainable, to Deep Learning (DL) architectures, which basically offer no explanation for their decisions. Even though the final results may improve, more questions arise regarding their black-box nature.
This black-box nature of the solutions can cause issues with trusting these models at different levels. It provokes distrust in managers and customers, especially in more critical tasks, for instance, SE tasks where an automated solution (e.g., generated code or a patch) is suggested for deployment and wrong predictions/generations may cause expensive failures or extra manual overhead to check and fix, leading to a lack of adoption in practice [22]. Also, ethical challenges and the risk of immoral biases emerge [23], and researchers are left with fewer clues about the right direction for improving their models [24].
In other words, acquiring explainable output from the so-called black-box models is one of the key problems for existing machine learning models to be adopted in practice. Though the black-box models themselves learn the connection between the features and the final output, they do not clearly describe the actual relationship between the given set of features and the individual predictions made by the models. In many domains, including most SE tasks, it is extremely important to know why a model makes a specific prediction in order to retain the model's reliability. For that, the model should be able to offer practitioners explanations for an individual prediction or for the model's overall approach.
Software practitioners, as the main users of ML models in SE, would only be interested in adopting ML-based code recommendation, bug repair tools, etc., if they are persuaded that the models have learned what they "should have" and predict or generate what they are "meant to", from the perspective of human experts. Without establishing such trust, any solution is seen as unreliable, with potential unexpected consequences that waste more resources and time than automation saves. In addition, if the outputs of the models are not informative enough, they can only lead to more ambiguity rather than helping developers and users.
As another example, let's look at the defect prediction problem, one of the most studied tasks when it comes to applying ML in SE. It has taken advantage of ML/DL models for many years. The goal is to predict whether a piece of code or a file is defective or not. In this scenario, if the model finds a file defective but provides no further information about its decision, the developer will have no clue what to look for. If, instead, the model provides the reasons for the file being defective, or specifies some lines or tokens as the possibly defective parts, it becomes easy for software engineers to minimize the bugs based on the grounds provided and to validate the ML-based recommendation.
In short, model explainability facilitates the debugging process, bias detection (and recourse for individuals who are adversely affected by model predictions), assessing when to trust model predictions in decision-making, and determining whether models are suitable for deployment.
Thus, there is a compelling need to overcome the trade-off between accuracy and interpretability to provide trustworthy, fair, robust models for real-world applications. As a consequence, AI researchers have focused on the space of eXplainable Artificial Intelligence (XAI) [25], which provides an explainable and understandable interpretation of the behavior of AI systems.
Responding to the growing interest in industry and academia in using AI models in SE in recent years, and to the critical need for interpreting AI models that are becoming more complex, we conducted a systematic literature review (SLR). The SLR is based on relevant journal and conference papers that offer explainability methods for SE tasks or propose ML models that include any type of explanation. We conducted this research to provide practitioners and researchers with a better insight into the efforts made on this topic so far and the state-of-the-art of XAI in software engineering. In order to do so, we have defined three research questions.
* **RQ1: What are the main software engineering areas and tasks for which Explainable AI approaches have been used to date?**
RQ1 aims to understand to what extent XAI has been used already in the SE community, which can
potentially reveal which areas need further study. As our analysis shows, "software maintenance" is the most explored area and defect prediction is by far the most favored task for XAI researchers in SE, but areas such as testing and program repair have not used XAI much, even though they have used ML a lot.
* **RQ2: What are the Explainable AI approaches adopted for SE tasks?**
The goal of RQ2 is to understand what XAI methods the SE community is using, which in turn will help guide future work by identifying methods that have already shown good results and potential methods that have not been studied yet. Our analysis shows that most of the explanations so far come from self-explaining models and are in the form of model internals (e.g., variable coefficients). Also, we can see that LIME and ANOVA are two of the most utilized XAI methods. We also see that higher-level explanations, such as visualizations and explanations in natural language, are less frequently used in XAI4SE.
* **RQ3: How useful has XAI been for SE?**
In RQ3, we focus on the usefulness of XAI methods and how the offered explanations have helped users improve or better understand the models. The goal is to have a realistic view of the potential benefits that can steer future research in this domain. To answer this question, we collect the original authors' views on the usefulness, based on their results, rather than our subjective opinion. One of the most interesting improvements reported in the studies is the use of XAI to make defect prediction models more precise at finding bugs and defects. Also, we discuss the lack of standard evaluation metrics noted in the studies and how human-centered evaluations would be beneficial.
Finally, we have summarized the limitations and challenges of XAI4SE and provided a roadmap for future works.
To the best of our knowledge, this is the first SLR on XAI in software engineering. The key contributions of this paper are:
* We present a consolidated body of knowledge of research on different aspects of XAI in software engineering
* We conducted a comprehensive search on the most influential venues in the field and selected 24 related papers that match our criteria.
* We analyzed these 24 papers to get a better insight into different characteristics of the problem at hand (e.g., their objective, datasets, representation, etc.), the models that are used, and the varieties of explainability that they offer.
* Finally, we identified gaps and limitations and have suggested a roadmap for the future.
## 2 Explainable AI in a Nutshell
One of the common issues in the literature on Explainable AI is that there are several related concepts in this domain that are often used interchangeably in different articles. Therefore, in this section, we provide a brief description and the definitions of the terms that are widely used in Explainable AI jargon, which helps in understanding the related literature and concepts more precisely. This also forms the basis for the definition of XAI in SE that is used in this paper.
### Explainability and Interpretability
The most common issue hindering the establishment of concepts in XAI is the interchangeable use of interpretability and explainability in the literature. Although these terms are closely related, there are some works that identify and distinguish them.
Though there are no clear mathematical definitions of interpretability and explainability, a few works in this line attempt to clarify these terms. According to Miller [26], model interpretability means "the degree to which a human can understand the cause of a decision". Another popular definition came from Doshi-Velez and Kim [27], who define it as "the ability to explain or to present in understandable terms to a human". Based on these definitions, _Interpretability_ concerns the cause-and-effect relationship between the inputs and outputs of the model.
In contrast, _Explainability_ is related to the internal logic and mechanics inside a machine learning algorithm. If a model is explainable, a human can understand its behavior in the training and decision-making process. An interpretable model does not necessarily explain the internal logic of the ML model. This means that interpretability does not necessarily entail explainability, but an explainable model is always interpretable [28]. Therefore, it is necessary to address both the interpretability and explainability aspects to better understand the ML model's logical behavior based on its inputs and outputs.
Regardless of this debate, since many studies have used these terms interchangeably and this SLR wants to cover all the related work, in both literature collection and analysis phases in this paper, we have ignored the differences between these two terms.
### Objectives of XAI
Basically, explainability can be used to evaluate, justify, improve, or manage AI, or even learn from it. It is necessary to understand AI's risks and its failures by understanding the model's behavior.
In this section, we explain the main objectives of explainability in AI in general (not limited to SE), so that later in the paper we can assess which aspects of generic XAI have been used and identified as useful in the SE domain. It's worth mentioning that these objectives are not separate concepts and may overlap in many aspects or be given other names in different articles; here, we gathered the most frequently mentioned objectives in the literature.
An XAI method can aim to explain different aspects of an ML model. For instance, the practitioners may want to focus on the input data in an ML model to help them properly balance the training dataset. In another work, researchers may focus on the final output of the model to be able to provide a human-understandable explanation to the model's end user. In this section, we will go through some of these aspects of XAI and some suggested questions that can guide researchers' efforts, while focusing on these aspects.
#### 2.2.1 Justification
One of the main objectives of explaining a model is to justify the model's decision in order to increase its credibility. This objective's aim is to gain the trust of users or people who are affected by the AI model. For this purpose, it's not necessary to explain different components or algorithms of a model, but it's required to find the connection between inputs and outputs [28].
To answer this need, it's important to know why an instance gives a specific output. It would also be helpful to elaborate on the feature(s) of the instance that determine the prediction or, in some cases, to explore the reasons for getting the same output for two different inputs. Sometimes it's important to know how much change to an instance, and in what direction, is required or tolerable in order to see a different output. Being able to answer such questions makes it easier for users to trust the models.
#### 2.2.2 Improvement
Improvement is one of the most important and strongest inspirations behind many explainability efforts [29]. Working with black-box complex models, it is a common problem that the model starts to make wrong decisions but the reasoning behind it is unclear to developers or researchers. The problem can be due to imbalanced training data or an inadequate internal part or overfitting of the model on specific features. Providing an explanation can be helpful in these situations.
#### 2.2.3 Understanding
Understanding a model is another motivation mentioned in the literature. It's a very general term that usually pairs up with other objectives or makes them possible. Understanding a model may lead to improving it, testing its fairness, or increasing the level of user confidence.
#### 2.2.4 Fairness
Fairness is a widely used term in the space of AI and ML. Hence, understanding the term fairness is important before discussing how fairness relates to the explainability of ML models. Broadly speaking, fairness is the quality or state of being fair or having impartial judgment. Fairness has been discussed in many disciplines such as law, social science, quantitative fields, philosophy, etc. [30]
In machine learning, fairness can be pursued in order to prevent biases and discrimination that may arise in different parts of an ML pipeline. Bias can appear in the training data, where the dataset is imbalanced; in the algorithms, where, for instance, using specific optimization functions may lead to biased decision-making; or in the output representation, where wrong conclusions are drawn from an observation [31].
XAI methods can significantly help researchers to consciously detect and eliminate biases and achieve fairness.
#### 2.2.5 Transparency
Transparency is the opposite of black-boxness: it means that when an ML model makes a decision, its whole process can be comprehended by a human [32]. Full transparency is usually impossible to fulfill, but it can be achieved at three levels: simulatable, decomposable, and algorithmic transparency [33]. Simulatable transparency means the model can be readily interpreted or simulated by a human; that is, given a specific input, the user must be able to calculate the output using the model's parameters. Decomposable transparency is when the smaller components of the model can be explained separately. Finally, algorithmically transparent models are those whose training can be investigated mathematically.
### Modality of Explanation
Model interpretability can be achieved if the model is understandable without further explanation [34].
According to the interpretability literature, model understanding can be achieved by either considering inherently interpretable predictive models (e.g., linear regression, random forests, decision trees) or post-hoc interpretability approaches [35].
#### 2.3.1 Intrinsically Interpretable
Intrinsically interpretable models are also called inherently interpretable models or self-explaining models. These models usually provide some level of transparency in their design and structure, or they might generate additional outputs such as attention maps [36], disentangled representations, or textual or multi-modal explanations alongside their predictions. Decision trees are one of the most famous self-explaining ML models. However, these inherently interpretable models often suffer from the accuracy-interpretability trade-off.
#### 2.3.2 Post-hoc Interpretability
This approach approximates the black-box model using a transparent surrogate model without changing the base model. In this approach, the explained model is treated like a black box, and an explanation can be obtained even without any access to its internal components, which is ideal when the model is too complex. However, post-hoc methods can also be applied intrinsically and be model-specific [37]. Some of the most common post-hoc methods are LIME [38] and SHAP [39], which provide an explanation in terms of defined features.
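As a concrete illustration of the post-hoc, model-agnostic flavour, the sketch below trains a black-box classifier on synthetic tabular data framed as a toy defect-prediction task and asks LIME for a local explanation of one prediction; the feature names and data are invented for illustration, and the calls follow the lime package's tabular API.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Invented code-metric features for a toy defect-prediction task (illustration only).
feature_names = ["loc", "cyclomatic_complexity", "churn", "num_authors"]
X = rng.normal(size=(500, 4))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # synthetic labels

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)       # "black-box" model

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["clean", "defective"], mode="classification")
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())   # local feature contributions for this single prediction
```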
### Scope of Explanation: Local and Global Explainability
Portability is an important aspect of post-hoc interpretability. It explains how far the explainer method is dependent on access to the internals of the global training model or the explained model.
Explainability methods are called _model-specific_ (also decompositional or white-box) if the interpretability algorithm is restricted to explaining only one specific family of models. In contrast, the algorithms that explain the output of different global models by treating the global model as a black box are called _model-agnostic_ [40].
According to the present deployments in the space of interpretability, post-hoc interpretability can be further classified as global interpretability and local interpretability based on the scale of interpretation.
#### 2.4.1 Global Interpretability
In order to explain the model output globally in a comprehensive way, the knowledge about the trained algorithm, trained model, and input data would be sufficient. The basic idea behind the holistic global interpretability is understanding how the model makes decisions based on the entire feature set, parameters, and structures. The outcomes of holistic-level interpretability are to help identify: (a) the features' importance levels, and (b) the feature interactions with each other.
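A common model-agnostic way to obtain such a global view is permutation feature importance, sketched below on the same kind of synthetic tabular data as before; the feature names and labels are again invented, and the computation uses scikit-learn's permutation_importance.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
feature_names = ["loc", "cyclomatic_complexity", "churn", "num_authors"]   # invented metrics
X = rng.normal(size=(500, 4))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Global view: how much does shuffling each feature degrade held-out performance?
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name:>22}: {mean:.3f} +/- {std:.3f}")
```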
#### 2.4.2 Local Interpretability
Local interpretability considers a single instance and examines how the model's prediction for that particular instance corresponds to the instance's features. The prediction is then approximated by a linear or monotonic relationship of some features in the local context. For example, the condition of all patients might not depend linearly on their blood pressure. But if we look at one specific patient and examine that patient's condition while watching small changes in their blood pressure, there is a higher chance of finding a linear relationship in that sub-region of our data. This is a reason to expect more accurate explanations from local explanations compared to global ones.
## 3 Methodology
Our methodology for this review was mostly inspired by the guidelines proposed by Kitchenham for conducting an SLR [41]. The proposed method includes three main stages: planning, conducting the review, and reporting the results.
The planning phase comprises two steps: identifying the need for a review, which is discussed in Section 1, and developing a review protocol in order to remove the researchers' bias from the reviewing process. Our review protocol's main components include the research questions, search strategy, paper selection, inclusion and exclusion criteria, and data extraction. This review protocol was iteratively modified and evolved during the process. Finally, the third stage is reporting the results and the analysis.
The first two steps are discussed in the following section while the results are reported in Section 4 and analyzed in Section 5.
### Research Questions
Regarding the purpose of this SLR which is investigating the utilization of XAI methods for ML/DL models in software engineering, we formulated the following RQs:
1. **RQ1: What are the main software engineering areas and tasks for which Explainable AI approaches have been used to date?**
    1. What are the main software engineering areas and tasks for which XAI has been applied?
    2. What are the ML4SE works that offer explainability?
    3. What are the objectives of explainability in each research paper reviewed?
2. **RQ2: What are the Explainable AI approaches adopted for SE tasks?**
    1. What kind of XAI techniques have been used?
    2. What kind of explanations do they offer?
3. **RQ3: How useful has XAI been for SE?**
    1. What are the objectives of applying XAI to SE, and how have they been useful?
    2. What are the means of evaluating XAI in SE?
Each of these RQs is aimed at helping us shed some light on a different area of our review. RQ1 focuses on SE tasks that took advantage of any kind of XAI method to explain their models, while RQ2 concentrates on the XAI methods and how they have been utilized. Finally, RQ3 is concerned with how successful XAI has been at its specified task. Each of these RQs also has sub-questions that, together, form the answer to its respective main question.
### Search Strategy
The search strategy used in this work starts by choosing the digital libraries to be searched. To perform the primary search phase, five main scientific publication databases in software engineering and AI were selected, i.e., ACM Digital Library, IEEE, Springer, Wiley Online Library, and Elsevier.
We considered three main categories of "Explainable AI", "Software Engineering", and "Artificial Intelligence" to create our search strings. By integrating alternative terms and synonyms for each of them, we gathered a list of keywords in three categories as shown in Table 1.
Using the boolean operators AND and OR, we formulated a search string that is: ("Interpretable" OR "Interpretability" OR "Explainability" OR "local model" OR "global model" OR "model-agnostic" OR "explainer model" OR "explanations" OR "black-box models" OR "rule-based" OR "counterfactual" OR "causal") AND ("SQA" OR "Software Analytics" OR "defect prediction" OR "API recommendation" OR "risk prediction" OR "code comprehension" OR "code generation" OR "code retrieval" OR "code summarization" OR "bug-fixing processes" OR "software testing" OR "effort estimation" OR "documentation generation" OR "clone detection" OR "program synthesis" OR "image to code" OR "security" OR "program repair" OR "vulnerability detection" OR "intrusion detection system" OR "Malware detection") AND ("Machine learning" OR "classification" OR "neural networks" OR "neural networks" OR "image processing" OR "text mining" OR "text analysis" OR "classification" OR "clustering" OR "rule mining" OR "association rules" OR "NLP" OR "Natural Language Processing" OR "embedding" OR "transformer" OR "adversarial learning")
The OR operators check whether any of the search terms within one category appears in a paper, and the AND operator ensures that all three categories are matched.
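As a purely illustrative aside (not part of the original review protocol), such a string can be assembled mechanically from the category lists in Table 1; the sketch below uses abbreviated, hypothetical term lists.

```python
# Hypothetical helper that assembles the Boolean search string from the
# three keyword categories of Table 1 (term lists abbreviated for brevity).
explainable_ai = ["Interpretable", "Explainability", "model-agnostic", "counterfactual"]
software_engineering = ["defect prediction", "code summarization", "program repair"]
artificial_intelligence = ["Machine learning", "neural networks", "transformer"]

def or_clause(terms):
    # Join the alternative terms of one category with OR.
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

def build_search_string(*categories):
    # A paper must match at least one term from every category (AND of ORs).
    return " AND ".join(or_clause(c) for c in categories)

if __name__ == "__main__":
    print(build_search_string(explainable_ai, software_engineering, artificial_intelligence))
```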
### Study Selection/ Inclusion and Exclusion Criteria
First, we searched a smaller set of the alternative search terms (more generic terms) over the full text of papers in all of the specified databases, with no time limit on the papers, on 06/21/2022. This returned hundreds of irrelevant papers, so we decided to limit the search to the title, abstract, and keywords. Given that the terms used in the title, abstract, and keywords are more specific and less generic (e.g., a generic phrase such as "software engineering" might not appear in the title, abstract, or keywords, whereas a specific task such as "defect prediction" will), we augmented the search terms in the new search to make sure we did not miss any relevant paper. To ensure that our findings are not biased toward a limited set of specific terms, we augmented the list iteratively (4 iterations) by manually checking the citations and references of the collected papers and updating the list to cover all relevant papers. At this point, we found a total of 869 papers, many of which were unrelated to SE and belonged to health, physiology, medicine, or security. We used the search engine tools to filter out papers from unrelated fields, which left us with 73 papers.

Figure 1: An overview of the search strategy and its different stages with the number of studies in each step.
Many irrelevant papers still remained among the search results, so to find and select the relevant papers we manually reviewed the abstracts of all 73 papers (this task was divided among four authors, and one author then validated all decisions) and included or excluded papers based on the following criteria:
* Full-text papers published as journal or conference papers that comply with the interpretability and explainability definitions provided in Section 2.
* Papers that are written in English.
* Papers that are related to software engineering.
* Papers that are available in full text.
* Books, gray literature (theses, unpublished and incomplete work), posters, secondary or review studies (SLR or SMS), surveys, discussions, and keynotes are excluded.
* Short papers with fewer than 4 pages.
* Papers about software engineering that do not discuss interpretability and explainability.
* Papers about interpretability and explainability that do not apply it to software engineering tasks.
A paper was included in the final list if it met all of the defined inclusion criteria and did not meet any of the exclusion criteria. Applying all of these filters and criteria, we were able to extract 25 papers that fully matched our interests.
Table 1: Key search terms and their respective alternative search terms.

| Key Search Terms | Alternative Search Terms |
| --- | --- |
| Explainable AI | Interpretable, Interpretability, Explainability, local model, global model, model-agnostic, explainer model, explanations, interpretability algorithm, black-box models, rule-based, counterfactual, causal |
| Software Engineering | SQA, Software Analytics, defect prediction, security, vulnerability detection, intrusion detection system, API recommendation, risk prediction, code comprehension, code generation, code retrieval, code summarization, bug-fixing processes, software testing, effort estimation, documentation generation, clone detection, program synthesis, image to code, program repair |
| Artificial Intelligence | Machine learning, classification, neural networks, deep learning, image processing, text mining, text analysis, clustering, rule mining, association rules, NLP, Natural Language Processing, embedding, transformer, adversarial learning |
One paper was discarded because its full text could not be accessed in the digital libraries, leaving 24 papers, which can be seen in Table 6.
### Data Extraction
For the data extraction phase, we defined a checklist of items to help us extract the required information from each paper to answer the RQs. The checklist includes both quantitative and qualitative properties. The 24 papers were divided among four authors, and one author verified and consolidated the results. We defined the following main properties to help us answer the RQs (a sketch of how such a record could be structured is shown after the list):
1. Publication details (Title, Authors, Published Date, Venue, Authors' affiliation)
2. What is the aim/ motivation/ goal of the research?
3. What are the key research problems addressed?
4. What phases of the SE are considered in the study?
5. What SE tasks are under experiment in the study? (For papers that perform experiments and report results)
6. Is the study conducted based on open-source data or data from industry?
7. What are the ML/DL models and techniques considered in the study and how they have been evaluated?
8. What are the XAI methods and techniques used in the study and how they are evaluated or represented (or visualized)?
9. What is the scope and modality of the explanation offered in the study?
10. What is the replication status of the experiments in the study?
11. The number of participants in the study, where human evaluation was involved.
12. What are the strengths and limitations of the study?
13. What are the key research gaps/ future work identified by each study?
14. Is explainability among the main focuses of the study, or just a side benefit?
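As a rough, hypothetical illustration of how such a checklist could be captured per paper (the field names below paraphrase the properties listed above and are not taken from the review's actual tooling):

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical structure mirroring part of the data extraction checklist above;
# field names paraphrase the listed properties and are not the review's tooling.
@dataclass
class ExtractionRecord:
    title: str
    authors: List[str]
    venue: str
    year: int
    goal: str                      # aim / motivation of the research
    sdlc_phase: str                # SE phase considered (e.g., "QA & Maintenance")
    se_task: str                   # SE task under experiment (e.g., "defect prediction")
    data_source: str               # "open-source", "industry", or "both"
    ml_models: List[str] = field(default_factory=list)
    xai_methods: List[str] = field(default_factory=list)
    explanation_scope: str = ""    # "local", "global", or both
    replication_available: bool = False
    num_human_participants: Optional[int] = None
    explainability_is_main_focus: bool = True
```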
### Data Synthesis
Our data synthesis focused on summarizing and presenting the extracted data to convey meaningful information regarding the papers to address the RQs. In Section 4, we present all of our findings and results using tables, pie charts, bar charts, etc. in order to share insightful information from our analysis and studies.
## 4 Results
This section presents the results for the final selected papers after the search process, together with a meta-analysis of their publication venues and years.
### Selected Primary Studies
After multiple steps of filtering and augmenting our list of papers, we arrived at a final list of 24 papers that fully matched our search criteria. Table 6 summarizes the title, authors, year and venue of publication, and publication type (journal or conference) of these selected studies.
### Publication Statistics
As shown in Figure 2, the earliest research among our selected studies goes back to 2008, published in the Software Quality Journal. This study uses k-nearest neighbors (kNN) and classification and regression trees generated with the CART algorithm to perform software effort prediction. Even though it does not directly mention the XAI concept, it touches upon explainability implicitly by identifying the dominant predictor variables and assessing their statistical significance using the ANOVA method. After that, there is a small number of XAI works in SE each year, but from 2019 onward we can see a clear upward trend and growing popularity of related research.
A total of 75 authors contributed to the selected studies; the six listed in Table 2 authored more than one paper. In terms of conferences and journals, as shown in Table 3, only three journals and two conferences had more than one paper among the selected studies, which confirms that this field is still a new domain in the SE community.
Another interesting observation is that before 2018, most of the publications were in AI conferences or journals that were not specific to SE.
Table 2: Authors with more than one contribution in the selected studies.

| Author | # of papers |
| --- | --- |
| Tantithamthavorn, Chakkrit | 5 |
| Jiarpakdee, Jirayus | 3 |
| Dam, Hoa Khanh | 2 |
| Pornprasit, Chanathip | 2 |
| Thongtanunam, Patanamon | 2 |
| Matsumoto, Kenichi | 2 |
After 2018, all of the publications belong to SE-specific venues, which shows the rising interest of the SE community in the field.
## 5 Synthesis and Analysis
### RQ1 Results: What are the main software engineering areas and tasks for which Explainable AI approaches have been used to date?
In our selected studies, we surveyed the different SE activities and tasks on which XAI methods have been focused or utilized. We organized the tasks and activities based on the Software Development Life Cycle (SDLC) stages [42], and in this section we present and analyze the results to answer RQ1.
#### 5.1.1 RQ1.1 Results: What are the main software engineering areas and tasks for which XAI has been applied?
The most attractive activity for researchers was software maintenance, with 68% of the tasks used for interpretation, as shown in Figure 3. After that, software development has the next largest share with 16%, while software management and software requirements, each with two tasks across all of the studies, have 8% each. It is noteworthy that among the six stages of the SDLC, software design and testing have no presence in any of our selected studies.
A particularly noteworthy finding is the high ratio of the defect prediction task compared to all other tasks in the studies.
Table 3: Conferences [C] and journals [J] and the number of papers published in each venue. The full name of each venue can be found in Table 6.

| Conference/Journal | # of papers |
| --- | --- |
| RAISE [C] | 2 |
| MSR [C] | 2 |
| Transactions on Software Engineering [J] | 2 |
| Software Quality Journal [J] | 2 |
| Empirical Software Engineering [J] | 2 |
| Symposium on Cloud Computing [C] | 1 |
| TOSEM [J] | 1 |
| IUI [C] | 1 |
| ICSE-NIER [C] | 1 |
| Symposium on Applied Computing [C] | 1 |
| ICSE-Companion [C] | 1 |
| ESEC/FSE [C] | 1 |
| RE [C] | 1 |
| ICMLA [C] | 1 |
| ASE [C] | 1 |
| ICECS [C] | 1 |
| ISSRE [C] | 1 |
| DSA [C] | 1 |
| ICTAI [C] | 1 |
Figure 2: Distribution of selected studies over years and types of venues.
This task alone leads with 44% of the total, with a meaningful gap to the next task, effort prediction, which appears in only two studies, while every other task is used only once. In general, QA and software maintenance have the highest diversity, with seven different tasks. Table 4 shows the different tasks in each activity domain, broken down.
Defect prediction is essentially a classification problem: tagging a code snippet, class, etc. as faulty or not faulty. This makes it an ideal task for machine learning algorithms and also for XAI. It is also a very important task in which ML models have shown promising results.
We also examined a survey [42] on deep learning models for software engineering tasks and, interestingly, found 27 different tasks that already take advantage of black-box DL models and yet have had no explanation method applied to them, such as "code generation", "code comment generation", "test case generation", and "bug classification". This shows the large gap between ML applications in SE and the utilization of XAI in the field.
Furthermore, we observed that among the reviewed studies that include an empirical evaluation, 16 papers used only open-source data, three papers used only publicly unavailable industrial data [43; 44; 45], and two papers used both [46; 47]. The data in all the studies are either textual (source code or natural language) or tabular (a set of extracted features).
#### 5.1.2 RQ1.2 Results: What are the ML4SE works that offer explainability?
To answer this question, we analyzed the different ML models and the number of times they have been used in the selected studies. As the sorted results in Figure 4 show, while many models are rare and have only one use case, models like random forest and decision tree have been very popular, with nine and seven uses, respectively. This is probably because these classic models are usually used as baselines in many papers and because they offer a level of self-explainability, so it is expected to see them more often in our selected studies.
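To make the notion of self-explainability concrete, the following minimal sketch (using scikit-learn on synthetic, made-up code metrics; not reproducing any reviewed study's setup) shows the kind of built-in explanation these classic models expose.

```python
# Minimal sketch of the self-explainability of tree-based models, using
# scikit-learn on synthetic "code metric" features (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["lines_of_code", "cyclomatic_complexity", "num_authors", "churn"]
X = rng.random((200, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 3] > 0.9).astype(int)   # toy "defective" label

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# Global explanation: impurity-based feature importances.
for name, score in sorted(zip(feature_names, forest.feature_importances_),
                          key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
# Rule-like explanation: the learned decision paths printed as text.
print(export_text(tree, feature_names=feature_names))
```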
The next popular models are regression models that are used for both regression problems (such as effort prediction) and classification problems (such as defect prediction). Regression models are used six times in total, while four of them are logistic regression, and linear and robust regression each have only one use-case.
Neural networks are also favored, having been experimented with a total of 15 times, with adequate diversity among them.
Table 4: The break-down of SDLC stages into tasks and the number of times each appears in the selected studies.

| Activity | Task | # |
| --- | --- | --- |
| Requirement | Requirements engineering | 1 |
| Requirement | Requirements quality | 1 |
| Development | Code translation | 1 |
| Development | Code autocompletion | 1 |
| Development | Natural language to code | 1 |
| Development | Code summarization | 1 |
| QA & Maintenance | Quality assessment | 1 |
| QA & Maintenance | Defect prediction | 11 |
| QA & Maintenance | Code reviews | 1 |
| QA & Maintenance | Reliability prediction | 1 |
| QA & Maintenance | Technical debt detection | 1 |
| QA & Maintenance | Valid bug report detection | 1 |
| QA & Maintenance | Variable misuse detection | 1 |
| Management | Effort prediction | 2 |
Figure 3: Distribution of studies per SDLC stage in the selected studies. Reported by #Papers (% proportions over all).
More complex code models such as Code2Vec, Code2Seq, TransCode, and CodeBERT can be found in the studies, and simpler networks such as feed-forward neural networks, CNNs, RNNs, and basic Transformer models are also among the explained models.
Table 5 shows the ML models alongside the tasks they were used to solve. Models like random forest and regression models are mostly utilized for defect prediction. The popularity of random forest for defect prediction is probably due to it being a self-explaining model with high accuracy, which makes it a good fit for this specific task. It is also interesting to see the diversity of tasks that take advantage of the decision tree as a simple yet effective model.
DNN code models like Code2Vec, Code2Seq, TransCode, and CodeBERT are always used for generation-based tasks such as code summarization or code translation, which is understandable given the complexity of these tasks and traditional models' incompatibility with them.
Given the large number of defect prediction studies, we also examined those papers more closely. Among the 11 works focused on this task, there is considerable diversity in the ML models used, and some studies cover several models for the purposes of their research. As mentioned, random forest is the most popular method, used in eight different papers either as the main model or as a benchmark for comparison. Logistic regression and deep neural networks are the second most popular models, used in five and three papers, respectively. Other models, such as Support Vector Machine (SVM), k-Nearest Neighbors, and C5.0 Boosting, have been used in only one or two studies. In these defect prediction tasks, standard evaluation metrics for ML models such as precision, recall, F1 score, and AUC are commonly used.
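For reference, these are the standard classification metrics as provided, for example, by scikit-learn; the toy labels and scores below are purely illustrative.

```python
# Standard classification metrics commonly reported for defect prediction models
# (toy labels and scores for illustration only).
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

y_true  = [0, 0, 1, 1, 1, 0, 1, 0]                   # ground truth: 1 = defective
y_pred  = [0, 1, 1, 1, 0, 0, 1, 0]                   # hard predictions from a classifier
y_score = [0.2, 0.6, 0.9, 0.8, 0.4, 0.1, 0.7, 0.3]   # predicted probabilities

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
print("AUC:      ", roc_auc_score(y_true, y_score))
```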
We also analyzed the SE tasks to which these ML models have been applied in terms of their ML problem type. As shown in Figure 5, 18 tasks (more than 65% of the tasks experimented with in the selected studies) are classification problems, while only three tasks are regression problems (reliability prediction, requirements quality, and effort prediction) and four can be called generation problems (code translation, code autocompletion, natural language to code, and code summarization).
Classification tasks are among the simplest to explain because the fixed number of possible outputs leads to a more straightforward evaluation, so it makes sense for researchers to focus on them. Generation tasks, on the other hand, are among the hardest, since the output is usually more difficult to evaluate objectively. This becomes more interesting when we note that three of those generation tasks actually come from one paper [48] that performs a human evaluation of XAI methods for SE tasks. We discuss the challenges of evaluation metrics in Section 5.3.2.
#### 5.1.3 RQ1.3 Results: What are the objectives of explainability in each research paper reviewed?
In Section 2.2, we described the different objectives of XAI. In our analysis of the selected studies, we were interested in extracting the objectives proclaimed by each study. We found six main objectives explicitly or implicitly mentioned in the selected studies: accountability, fairness, credibility, understanding, transparency, and improvement. Accountability and credibility have been used interchangeably in different works, meaning the reliability of the model for its users. Note that these objectives are not mutually exclusive and some papers pursue multiple goals, so the numbers add up to more than the number of papers.
As shown in Figure 6, 20 papers, the vast majority of studies, claim improvement as (at least) one of their purposes; accountability is mentioned in six papers and understanding in three. Transparency and fairness are less common objectives, each being the aim of two papers, and credibility is mentioned only once. Accountability and fairness usually appear together in reviews and surveys [49; 48; 50], while the other objectives are coupled with improvement in different research.
It is worth mentioning that while most of the objectives are used with the same meaning as discussed in Section 2.2, improvement is an ambiguous term used with multiple intentions. In the selected studies, some researchers consider the explanation itself to be the improvement [46]. In other words, they believe that the fact that their model offers an explanation is an improvement compared to other models. Tree-based models that extract rules for their decision-making and present them fall into this category.
Meanwhile, some studies provide model-agnostic XAI tools and methods to help other researchers understand and improve their models in the future [51], or even use XAI methods to directly improve the results of models, especially in defect prediction. We discuss them later in Section 5.3.1, where we address the helpfulness of XAI for SE models.
Table 5: Different ML models, the tasks they have been used for, and the number of uses.

| ML model | Task (# of uses) | Total # of uses |
| --- | --- | --- |
| Random Forest | Defect prediction (8), Code reviews | 9 |
| Decision Tree | Effort prediction, Defect prediction (2), Quality assessment, Requirements quality, Requirements engineering, Code reviews | 7 |
| Regression Model | Defect prediction (5), Requirements engineering | 6 |
| TransCode | Code translation, Code autocompletion, Natural language to code | 3 |
| CodeBERT | Code translation, Code autocompletion, Natural language to code | 3 |
| Feed Forward Network | Defect prediction (2), Quality assessment | 3 |
| Naïve Bayes | Defect prediction (2) | 2 |
| kNN | Defect prediction, Requirements engineering | 2 |
| SVM | Quality assessment, Defect prediction | 2 |
| CNN | Self-admitted technical debt detection, Valid bug report detection | 2 |
| Bayesian Network | Quality assessment, Defect prediction | 2 |
| RNN | Variable misuse detection | 1 |
| Fuzzy Logic | Software reliability prediction | 1 |
| Transformer | Variable misuse detection, Defect prediction | 1 |
| Code2Vec | Code summarization | 1 |
| Code2Seq | Code summarization | 1 |
| Fuzzy C Means | Defect prediction | 1 |
| Kurtanovic and Maalej's Classification | Requirements classification | 1 |
| Gradient Boosting | Defect prediction | 1 |
| Genetic Algorithms | Software reliability prediction | 1 |
| Rule-based | Quality assessment | 1 |
| Case-based Reasoning | Quality assessment | 1 |
Figure 4: Number of experiments in which each ML model has been used in the selected studies.
Figure 5: Distribution of SE task types in selected studies. Reported by #Papers (% proportions over all).

Figure 6: The XAI objectives stated (explicitly or implicitly) in the selected studies along with the number of papers that have mentioned them. Reported by #Papers (% proportions over all).
_Answer to RQ1:_ Among the different stages of the SDLC, QA and maintenance have been the most popular among XAI researchers, and this popularity is hugely focused on the defect prediction task. This is likely due to 1) the popularity of this task among researchers and 2) the adequacy of self-explaining traditional models such as decision trees for these tasks.
_Our analysis also shows that, due to their inherent interpretability, classic models, like random forest, decision tree, regression models, and naive Bayes models, are among the most common ML models in the selected studies. DNN models are also another center of attention in terms of ML models, however generation-based tasks that are one of the main strengths of these models are still unexplored._
### RQ2 Results: What are the Explainable AI approaches adopted for SE tasks?
In this section, we focus on the XAI methods that have been used to explain the SE models that we discussed in previous sections. This will include post-hoc XAI methods and self-explaining models.
#### 5.2.1 RQ2.1 Results: What kind of XAI techniques have been used?
As discussed in Section 2.2.5, there are two types of XAI approaches: (a) models that offer some level of explanation alongside the output, and (b) post-hoc XAI techniques that are applied to the model afterwards. In this section, we discuss both cases among our selected studies. According to our analysis, 15 self-explaining models, which need little or no effort to interpret, were used in the studies, while only 9 models used post-hoc methods.
Tree-based models are the most popular self-explaining models among these studies, using a diverse set of algorithms. Decision trees based on the C4.5 algorithm or its extended version, the PART algorithm, which uses fuzzy logic to handle the classification task [45; 52; 53; 44; 54], as well as random forests, are widely used in our primary selected studies [43; 55; 56; 57; 58].
The explanation offered by tree-based models is usually a set of rules or highlighted features that are quite self-explanatory for the user and usually need no further processing. Thus, the explanations in these models take the form of either feature summary statistics or feature summary visualizations using plots and graphs.
Deep neural networks are another observed type of ML model; even though they are known as black-box models, some researchers believe they have internal features that can be used for interpretation after proper processing. For instance, [59] uses back-tracking of a trained CNN model to extract the key phrases of the input and discover meaningful patterns in code comments. As an explanation, they are able to find 1-gram, 2-gram, and 3-gram patterns in natural-language format that are compared to human-extracted patterns. Their model covers most of the benchmark and also offers further comprehensive patterns with a greater diversity of vocabulary. In another study [60], the authors also use back-tracking of a CNN model, but for valid bug report determination. As the input of this task is in natural-language format, it is very similar to classification tasks in NLP, and it generates a natural-language explanation.
Transformer models are also among the promising DNN models that have outperformed state-of-the-art models in some SE tasks by leveraging the attention mechanism. In recent years, there has been a controversial debate about the validity of the Transformer's attention mechanism as an explanation method. Among the selected studies, one has used the self-attention mechanism of a Transformer as an explanation [61]. The authors claim to find the importance of the input tokens by constructing the attention matrices. The generated matrix, as well as a respective saliency map, is the explanation they present, but no evaluation is discussed.
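For illustration only, attention matrices of this kind can be extracted from a pretrained Transformer as in the generic sketch below (using the Hugging Face transformers library; the checkpoint name is just an example, and this is not the cited study's implementation).

```python
# Generic sketch: extracting self-attention matrices from a pretrained Transformer
# with Hugging Face transformers. The checkpoint is an arbitrary example and this
# is not the setup of the cited study.
import torch
from transformers import AutoModel, AutoTokenizer

checkpoint = "microsoft/codebert-base"   # example checkpoint only
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint, output_attentions=True)

code = "def add(a, b): return a + b"
inputs = tokenizer(code, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, shape (batch, heads, tokens, tokens).
attention = torch.stack(outputs.attentions)                    # (layers, batch, heads, T, T)
token_importance = attention.mean(dim=(0, 2))[0].sum(dim=0)    # crude per-token score
for tok, score in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]),
                      token_importance.tolist()):
    print(f"{tok:>12s}  {score:.3f}")
```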
Looking at the post-hoc XAI methods, most techniques used in SE are adopted from the NLP domain. LIME (Local Interpretable Model-agnostic Explanations) [38] is one of the widely used XAI methods in NLP and has also proven very useful in SE. In the reviewed papers, four different articles have used LIME directly [55; 56; 57; 58]. The method works by feeding a black-box model with perturbed input data and observing the changes in the output weights. This straightforward mechanism makes it ideal for NL and PL data. Perturbation can be defined at multiple levels, from files in a commit to tokens in a code snippet, and the results are scores for the defined features (lines, tokens, etc.).
As mentioned, LIME works based on input perturbation, generating \(n\) synthetic instances similar to a test instance \(x\) using a random perturbation method. The level and granularity of this perturbation depend on the task and objectives. In two of the studies that use LIME [55; 58], code tokens are the features to be perturbed, while another study [57] uses tabular features of a code commit made by developers (e.g., number of files modified, number of added or deleted lines, number of developers who have modified the files, etc.). After the synthetic instances are generated, a defect prediction model is used to label them, and then, based on the predictions, a score between -1 and 1 is assigned to each feature. A positive score means a positive impact on the prediction and vice versa.
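A minimal sketch of this perturbation-and-scoring mechanism with the reference LIME implementation is shown below (the commit-level feature names and data are invented for illustration and do not reproduce any specific study).

```python
# Sketch of LIME on tabular commit features for a toy defect prediction model.
# Feature names and data are invented; this is not any specific study's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["files_modified", "lines_added", "lines_deleted", "num_developers"]
X_train = rng.random((500, len(feature_names)))
y_train = (X_train[:, 0] + X_train[:, 3] > 1.0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["clean", "defective"],
    mode="classification",
)
# Explain one commit: LIME perturbs it, queries the model, and fits a local surrogate.
explanation = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)
print(explanation.as_list())   # (feature condition, signed contribution) pairs
```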
This model-agnostic and easy-to-use mechanism has led to the popularity of LIME among users; another study, focused on practitioners' perceptions of XAI for defect prediction models [56], reports that LIME has the highest appeal among different XAI methods in terms of information usefulness, information insightfulness, and information quality, with more than a 66% agreement rate.
The second favorite method is ANOVA (ANalysis Of VAriance), a statistical method that can be used to measure the importance of factors in a model. It does this by calculating the improvement in the Residual Sum of Squares (RSS) when a feature is added to the model. In our selected primary studies, this model-agnostic technique is used three times [56; 62; 63]. In addition, LIME and ANOVA were selected as the first and second most preferred XAI methods, respectively, in a human evaluation with 50 software practitioners.
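As a rough sketch of this idea (using statsmodels' ANOVA on a fitted regression; the feature names and data are invented):

```python
# Sketch: ANOVA (Type II) to rank feature importance in a regression-style
# defect model with statsmodels. Feature names and data are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "lines_of_code": rng.random(300),
    "churn": rng.random(300),
    "num_devs": rng.random(300),
})
df["defects"] = 2.0 * df["churn"] + 0.5 * df["lines_of_code"] + rng.normal(0, 0.1, 300)

model = smf.ols("defects ~ lines_of_code + churn + num_devs", data=df).fit()
# Each row reports the sum of squares attributable to a feature, i.e. how much
# it reduces the residual sum of squares, plus its F statistic and p-value.
print(sm.stats.anova_lm(model, typ=2))
```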
SIVAND is another model-agnostic XAI method that works based on delta debugging, reducing the input size without changing the output [64]. The big picture is very similar to LIME, but instead of just finding the most important atomic unit (token or character), it removes redundant units so the input becomes smaller while the prediction remains the same. The authors tested their method on Code2Vec, Code2Seq, RNN, and Transformer models on the two tasks of method name prediction and variable misuse detection, and performed a qualitative, example-based evaluation of their models. For the Transformer model, they validate the performance by comparing the similarity of SIVAND and attention scores.
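The following toy sketch conveys the general prediction-preserving reduction idea only; SIVAND itself relies on delta debugging (ddmin), so this greedy loop is an illustration, not its actual algorithm.

```python
# Simplified illustration of prediction-preserving input reduction:
# greedily drop tokens as long as the model's prediction does not change.
# SIVAND itself uses delta debugging (ddmin); this greedy loop only conveys
# the general idea and is not the actual algorithm.
def reduce_input(tokens, predict):
    original = predict(tokens)
    reduced = list(tokens)
    changed = True
    while changed:
        changed = False
        for i in range(len(reduced)):
            candidate = reduced[:i] + reduced[i + 1:]
            if candidate and predict(candidate) == original:
                reduced = candidate
                changed = True
                break
    return reduced

# Toy "model": predicts a method name from whether certain tokens are present.
def toy_predict(tokens):
    return "add" if "+" in tokens else "unknown"

print(reduce_input(["def", "add", "(", "a", ",", "b", ")", ":", "return", "a", "+", "b"],
                   toy_predict))
```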
PyExplainer is another XAI method inspired by LIME and focused on Just-In-Time (JIT) defect prediction. It is a model-agnostic, local, rule-based technique that addresses LIME's random perturbation mechanism by generating better synthetic neighbors and more unique and consistent explanations. According to these criteria, PyExplainer outperforms LIME on random forest and logistic regression models for defect prediction.
Besides these model-agnostic methods, there are other XAI techniques used for interpreting specific ML models. For instance, deconvolution is a method that exploits the structure of CNNs to extract the importance of features [59]. The Flexible Code Counter/Classifier (CCFlex) is also an interesting model that was originally designed to classify lines of code but is used for code review and finding guideline violations [65].

Some other XAI methods are also mentioned in a study asking about software practitioners' perceptions of the goals of XAI methods for defect prediction models [56]. As mentioned earlier, LIME and ANOVA were the most preferred methods, while other techniques such as Variable Importance, Partial Dependence, Decision Tree, BreakDown, SHAP, and Anchor were less favored.
#### 5.2.2 RQ2.2 Results: What kind of explanations do they offer?
We analyzed the selected studies in terms of the type of explanation they offer. Following the previous discussion about self-explaining models, many models deliver "Model internals" as the explanation. Extracted rules and structures of tree-based models or the attention matrices of Transformer models fall into this category. However, some works go further and, beyond local explanations, present a "Feature summary statistic" or "Feature summary visualization" of their explanations using different visualization methods. The distribution of each type is shown in Figure 7. As the plot shows, some models offer one type of explanation, while others offer several.
In the category of "Feature summary visualization", multiple XAI methods are presented in the studies. While some works are satisfied with "Raw declarative representations" of their interpretations, many XAI methods implement more informative forms. "Natural language explanation" is used five times, while "Saliency maps" and "Plots and charts" are each used three times. Two methods also present their results to users in interactive tools. The distribution of the different visualization techniques in the studies is illustrated in Figure 8.
### RQ3 Results: How useful has XAI been for SE?
In this section, we are going to focus on the helpfulness of XAI methods that are used for SE tasks.
#### 5.3.1 RQ3.1 Results: What are the objectives of applying XAI to SE, and how have they been useful?
In Section 5.1.3, we discussed different objectives of research in our selected studies. Some of them have used explanation as an additional output for their users, while a few others used the XAI methods to improve the performance of the models.
One interesting representation of these improvements can be seen in defect prediction. While many defect prediction models only classify if a file or a class is faulty, some researchers have taken advantage of XAI methods to specifically find the buggy line or tokens of the code [55; 56; 58]. This is interesting since it improves defect prediction's practicality and its potential for adoption by industry.
SIVAND is another successful XAI method, whose findings improve the performance of the code models it is applied to and also offer valuable knowledge about them [64]. The authors are able to considerably reduce the size of the inputs and yet achieve the same accuracy, which means a great reduction in the time the models need to make a prediction. Furthermore, they offer interesting knowledge about code models by identifying apparent patterns in them. For instance, they suggest that while Transformer models can capture more "global" information in code, RNN models rely more on local information. They also find that these models are quite vulnerable to perturbations, as they rely on too few features.
Another noteworthy and exemplary use of XAI can be seen in another work [60] where, by back-tracking a trained CNN network and manually inspecting the explanation, the authors are able to find valid bug report patterns. Through further analysis of the identified bug reports, they verify the importance and effect of XAI on their model and the problem.
#### 5.3.2 RQ3.2 Results: What are the means of evaluating XAI in SE?
Looking at the evaluation of the XAI methods (not the ML models) across the selected papers, the struggle is apparent: there is little consistency among the selected papers in terms of evaluation. Many studies ignore the evaluation phase entirely and only offer some visualization of their XAI, with no evidence of its quality or reliability.
Most existing XAI evaluation methods [66; 59; 65] are qualitative and use human subjects to assess the explanations. The problem with this approach, especially with small-scale subject pools, is that the results are heavily biased and may not generalize to other users. In an attempt to find out how to thoroughly evaluate XAI in SE, one work [48] asked 43 software engineers in 9 workshops to express their explainability needs for some specific tasks.
Figure 7: Different types of explanation offered in the selected studies.

With respect to quantitative assessment of XAI methods in SE, some works are able, based on their task (e.g., defect prediction), to define more common metrics such as the Mean Squared Root (MSR) or the Wald test [62]. In a few other studies, researchers defined innovative metrics or used less-known metrics from the generic XAI domain. For instance, in one work [51] the authors measure the percentage of unique explanations generated by the XAI method. Another work [52] uses the average length of the rules that its XAI method extracts, and another study [57] uses the Variable Stability Index (VSI) and Coefficient Stability Index (CSI) as metrics.
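For concreteness, two of these ad-hoc metrics are simple enough to state in a few lines (a toy sketch with invented explanation strings, not the original papers' code):

```python
# Toy sketch of two ad-hoc XAI evaluation metrics mentioned above:
# the percentage of unique explanations and the average rule length.
explanations = [
    "churn > 0.8 and num_devs > 3",
    "churn > 0.8 and num_devs > 3",
    "lines_of_code > 500",
    "churn > 0.8",
]

uniqueness = 100.0 * len(set(explanations)) / len(explanations)
avg_rule_length = sum(len(e.split(" and ")) for e in explanations) / len(explanations)

print(f"unique explanations: {uniqueness:.1f}%")       # e.g. 75.0%
print(f"average rule length: {avg_rule_length:.2f}")   # clauses per rule
```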
**Answer to RQ3:** Our findings show that some XAI methods were quite helpful and were applied either to provide proper explanations to researchers, which led to finding interesting patterns in the models' decision-making, or to increase the granularity of the models' results, leading to higher precision for the respective SE task.
We also noticed some confusion and a lack of standard routines among researchers in terms of evaluating XAI methods, which is usually compensated for by defining innovative, task-specific metrics or by using qualitative evaluation by humans.
## 6 Challenges and Road Ahead
In general, there are valuable works that have been published in the field of XAI for SE. Interesting ideas have been suggested, and an encouraging growing trend has started. However, despite all of these efforts, there is a long way ahead. ML models are improving their performance and growing more complex gradually and yet there is a noticeable lack of interest from software practitioners in using them in the real world, partly due to the lack of transparency and trust issues.
As we mentioned in Section 5.1.1 there are still unexplored areas such as software design and testing, and tasks in SE that already have taken advantage of ML models and yet have not been studied in terms of explainability. Code comment generation, code generation, vulnerability detection, test case generation, bug classification, and program repair are some of the most important tasks that require more attention.
Additionally, if we consider the mentioned unexplored tasks, we can see a pattern of overlooking generation-based tasks (e.g. code generation, test case generation, and program repair). In our analysis, we discussed how classification problems are preferable for XAI due to their limited output space, but looking at recent advancements of AI models in generation-based tasks, we believe there is an undeniable need for XAI methods for those tasks.
A further note on these unexplored tasks, and on generation-based tasks in general, is that their advancement relies heavily on deep neural networks. DNNs have been a great asset for researchers when it comes to searching in a huge space (e.g., a vocabulary of words), so they are ideal for this type of task. Code models and their success in code document generation, code translation, and code repair are good examples. However, they are incomparably more complex than classical ML models and thus harder to explain. Nonetheless, this complexity is the exact reason they require explanation.
Furthermore, by analyzing the XAI techniques used in SE models, we see that the generated explanations are mostly meant for developers and programmers and not for other, less technical end users.
Figure 8: Distribution of different visualizations used as explanation methods in the selected studies.
Such users include managers, executives, and other stakeholders, for whom fewer high-level explanations (such as visualizations or natural-language rules) are offered. In other areas, such as image processing or NLP, the explanations are more user-friendly. This is partly due to the type of data and the nature of SE tasks, but a lack of effort in providing such visualizations is still noticeable.
Finally, looking at our selected studies, we can see a clear lack of evaluation consistency among them. Some studies explain the ML models merely for the sake of explaining and show no further interest in analysis, evaluation, or justification of the provided explanations. Considering that explanations are supposed to help the models' users, we believe that when objective evaluation is not possible or sufficient, human evaluation by experts is a satisfactory solution.
## 7 Related Works
There have been lots of surveys and literature reviews addressing the XAI topic in general or for specific fields. For instance, a study surveys some traditional ML models such as SVM, linear, and tree-based models and notes a trade-off between their performance and reliability [67]. They also mention the ambiguity and confusion among works addressing XAI in terms of taxonomy and definitions.
In an attempt to standardize efforts in XAI, some studies have tried to clarify definitions and terminology and insisted on distinguishing "interpretability" and "explainability" as different terms, with the former being only one of the features of the latter [28]. They also claim that different approaches to solving the problem of black-box models are usually unilateral and fail to address the various aspects of explainability. A similar approach is taken by others who offer a more updated insight and also specify challenges and opportunities ahead [68].
Besides works that survey or review the literature on XAI in general, there are valuable attempts to review the state of XAI in specific fields or for specific models. As expected, medical applications, which have taken great advantage of ML models in recent years, are also among the most critical and sensitive domains. In one study, more than 200 papers that use XAI in deep learning models for medical image processing are analyzed, which in itself shows the high interest of researchers in the field [69]. The authors also provide suggestions for future work, including the need for medical experts to actively participate in designing interpretation methods.
Natural language processing is also one of the areas that has benefited from surveys and reviews examining the state of XAI in the literature. For instance, a survey examines 50 papers related to XAI in NLP and categorizes and analyzes them according to different aspects, such as locality versus globality, or being self-explaining versus post-hoc [35]. Furthermore, the authors note a debate among researchers about the very existence of a trade-off between explainability and performance. Similar to many other surveys in other fields, they also mention the lack of clear terminology and understanding of XAI methods and provide insightful guidelines.
There are also reviews and surveys that, instead of tasks or fields of study, focus on different types of data or models. Among these works, some studies focus specifically on time-series data [70] or analyze the advances in XAI for tabular data [71]. Other research studies different XAI methods for deep neural networks [72] or specifically reviews XAI in deep reinforcement learning models [73].
Software engineering is also a field that has benefited from ML models for a long time, and SLRs have been conducted to analyze the literature on the application of ML or DL techniques in SE [42; 74]. However, to the best of our knowledge, this is the first work offering a systematic literature review of XAI in software engineering.
## 8 Conclusion
To the best of our knowledge, this is the first study to systematically review XAI in the domain of SE. By reviewing 24 relevant papers (out of over 800 automatically selected ones) and analyzing the collected data, we report the current state of research in this area and identify challenges and the road ahead for future research.
In particular, the review shows that among different stages of SDLC, QA and maintenance have been the most popular among the XAI researchers, and this popularity is hugely focused on the defect prediction task. On the other hand generation-based tasks (e.g. code generation, test case generation, and program repair) which are popular in the ML4SE community are rarely studied in the XAI4SE literature.
The data also shows that the number one impact among all targeted objectives of applying XAI to ML4SE models has been improving the original ML model.
We also observe that self-explainable ML models, such as the classic random forest, decision tree, regression models, and naive Bayes, are among the most used XAI techniques, while post-hoc techniques are less used in the community so far.
In terms of the type of explanations, XAI4SE has been mainly focused on ML developers as the end user and provides low-level debugging type explanations such as "feature summary statistic" (e.g., Tables showing some extracted key features and their distribution [59]) and "model internals" (e.g., self-attention scores of Transformers [61] and backtracking CNN layers of a NN [60]), and less on higher-level natural language-based explanations or visualization.
Finally, a lack of standard ways for evaluating XAI methods was clear among the studies, which has led to many different project-specific evaluation metrics, as well as human subject assessments.
|
2308.13355
|
WorldSmith: Iterative and Expressive Prompting for World Building with a
Generative AI
|
Crafting a rich and unique environment is crucial for fictional
world-building, but can be difficult to achieve since illustrating a world from
scratch requires time and significant skill. We investigate the use of recent
multi-modal image generation systems to enable users iteratively visualize and
modify elements of their fictional world using a combination of text input,
sketching, and region-based filling. WorldSmith enables novice world builders
to quickly visualize a fictional world with layered edits and hierarchical
compositions. Through a formative study (4 participants) and first-use study
(13 participants) we demonstrate that WorldSmith offers more expressive
interactions with prompt-based models. With this work, we explore how creatives
can be empowered to leverage prompt-based generative AI as a tool in their
creative process, beyond current "click-once" prompting UI paradigms.
|
Hai Dang, Frederik Brudy, George Fitzmaurice, Fraser Anderson
|
2023-08-25T13:03:52Z
|
http://arxiv.org/abs/2308.13355v1
|
# WorldSmith: Iterative and Expressive Prompting for World Building with a Generative AI
###### Abstract
Crafting a rich and unique environment is crucial for fictional worldbuilding, but can be difficult to achieve since illustrating a world from scratch requires time and significant skill. We investigate the
use of recent multi-modal image generation systems to enable users iteratively visualize and modify elements of their fictional world using a combination of text input, sketching, and region-based filling. WorldSmith enables novice world builders to quickly visualize a fictional world with layered edits and hierarchical compositions. Through a formative study (4 participants) and first-use study (13 participants) we demonstrate that WorldSmith offers more expressive interactions with prompt-based models. With this work, we explore how creatives can be empowered to leverage prompt-based generative AI as a tool in their creative process, beyond current "click-once" prompting UI paradigms.
## CCS Concepts
* **Human-centered computing \(\rightarrow\) Human computer interaction (HCI).**
## Keywords
Multi-modal image generation, Fictional world-building, AI-assisted creativity
### ACM Reference Format
Hai Dang, Frederik Brudy, George Fitzmaurice, and Fraser Anderson. 2023. WorldSmith: Iterative and Expressive Prompting for World Building with a Generative AI. In _The 36th Annual ACM Symposium on User Interface Software and Technology (UIST '23), October 29-November 1, 2023, San Francisco, CA, USA_. ACM, New York, NY, USA, 17 pages. [https://doi.org/10.1145/3586183.3606772](https://doi.org/10.1145/3586183.3606772)
## 1. Introduction
Fictional world-building is the process of constructing a fictional universe with its unique history, geography, culture, and rules (Hirsch, 2017). In his seminal essay, _On Fairy Stories_, Tolkien emphasizes the crucial role of an _imagined place_ for an engaging fantasy plot. This observation extends into the present, where interesting and intricate worlds are essential for the success of games, movies, and other forms of entertainment.
There are many reasons why people consume fiction with imaginary worlds. While some just enjoy the creative challenge of conceiving an interesting world itself, others appreciate the enormous vibrant community (Zhu et al., 2018). Forms of engagement include writing fanfiction (Zhu et al., 2018), developing indie games (Hirsch, 2018), or running the popular table-top game Dungeon and Dragons (DnD)(Dungeon and Dragons (DnD)).
Fictional worlds can be conveyed through various media, and creating visual artwork that depicts part of the imagined world is one of them. Visually materializing the world helps others find common ground when discussing and collaborating (Zhu et al., 2018). However, illustrating these worlds is time-consuming and particularly difficult for novice world builders, who often lack the artistic skill or the experience to create coherent worlds. But even experienced world builders must invest significant time and effort into materializing their ideas.
With current illustration tools such as Adobe Photoshop or Adobe Illustrator, users often have to make many fine-grained edits to create their world. This is time-consuming and requires a high level of expertise. To relieve users from designing on the micro-level, many domain-specific tools have been developed, such as procedural terrain generation tools (Bradley et al., 2016; Dang et al., 2017; Dang et al., 2018), which algorithmically generate varied terrains based on a fixed set of rules or grammar (Dang et al., 2017; Dang et al., 2018). Other tools have been developed to support the creation of 3D worlds (Dang et al., 2017; Dang et al., 2018) and game levels (Dang et al., 2018). However, these tools still operate procedurally, preventing users from defining their world using a high-level semantic description.
Recent image generation models such as Dall-E (Dang et al., 2018), Stable Diffusion (Zhu et al., 2018), Imagen (Zhu et al., 2018), and Midjourney (Dang et al., 2018) are now capable of generating high-quality images based on simple natural language text prompts to guide the image generation process semantically. Thus prompting has evolved as a new interaction paradigm with the advent of large pre-trained text (Dang et al., 2019) and image generation models.
Prompt-based image generation models have become increasingly popular, but the prevailing interaction paradigm with these models is limited to a "click-once" text prompt interface. This approach assumes that users can provide a complete and accurate description of the desired visual imagery upfront. However, world-building is an iterative process, and this simplistic approach may not be adequate (Dang et al., 2018). More expressive techniques are needed to interact with these models to address this challenge. One promising approach is incorporating additional input modalities, such as sketching or other graphical interfaces, to allow users to convey their design. However, the impact of such multi-modal systems on the world-building process and the behavior of users when defining prompts for generative AI models remains an open question.
To this end, we designed and built WorldSmith (Figure 1), a tool to support world-builders in generating an image of their envisioned fictional worlds through multi-modal inputs, including text input, sketching, and region painting. To accommodate their iterative and piece-by-piece workflow, WorldSmith was designed to reinforce two key concepts: 1) hierarchical generation of multiple image tiles and 2) layered editing of individual image tiles. To evaluate the utility and to observe users' prompting behavior with WorldSmith, we conducted a first-use study with 13 participants.
In summary, we contribute the following:
* WorldSmith, a multi-modal tool that enables users to iteratively design and refine complex fictional worlds using layered editing and hierarchical compositions with a prompt-based model that uses text, sketches, and region masks as inputs.
* Insights from a formative and first-use study, demonstrating how WorldSmith facilitates interactive prompting with text input and additionally with non-textual interaction such as sketching and region painting, to disambiguate text prompts for generative AI.
## 2. Related Work
This work draws upon the domains of world-building, scene generation and human-AI co-creativity.
### Image Synthesis Techniques
To facilitate working with multiple generated images, WorldSmith uses various image composition techniques, including inpainting, which involves filling in missing or damaged areas of an image by generating plausible content based on the surrounding context (Dang et al., 2018). Inpainting has been applied to automatically colorize rough
sketches (Wang et al., 2017) and remove objects from photographs (Krizhevsky et al., 2017). Another related technique is outpainting, which generates new content beyond the boundaries of an image (Wang et al., 2017). Pre-trained image models have also been investigated for their ability to perform visual conceptual blending (Wang et al., 2017), which involves blending visual concepts to generate new content, such as an "amphibious vehicle" resulting from blending "a boat" and "a bus. Other research has explored blending for creating symbols (Chen et al., 2018) and for sketches (Chen et al., 2018; Wang et al., 2018). Multiple inputs, including fully segmented images with corresponding annotations (Chen et al., 2018; Wang et al., 2018), can be used to synthesize images. Motivated by the world-building workflow, we investigate how various image synthesis techniques work together to support users in their world-building process. Specifically, we blend multiple image tiles to create a larger composition, generating new content beyond the boundaries of individual images while staying within the boundaries of the target set as a whole.
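For reference, the inpainting primitive discussed here is exposed by open-source diffusion toolkits; the snippet below is a generic sketch with the Hugging Face diffusers library (the checkpoint, file names, and GPU assumption are illustrative, and this is not WorldSmith's actual implementation).

```python
# Generic inpainting sketch with Hugging Face diffusers: regions marked white in
# the mask are regenerated from the text prompt, the rest of the tile is kept.
# Checkpoint and file names are placeholders; this is not WorldSmith's code.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",        # example checkpoint only
    torch_dtype=torch.float16,
).to("cuda")                                       # assumes a CUDA-capable GPU

tile = Image.open("world_tile.png").convert("RGB").resize((512, 512))
mask = Image.open("region_mask.png").convert("RGB").resize((512, 512))  # white = repaint

result = pipe(
    prompt="a ruined watchtower on a cliff, fantasy map illustration",
    image=tile,
    mask_image=mask,
    num_inference_steps=50,
).images[0]
result.save("world_tile_edited.png")
```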
### Prompting Pretrained Generative Models
Recent developments in natural language processing have shown that Pre-trained Large Language Models (LLMs) can solve multiple tasks without the need for specific training for each task. This can be achieved by using text prompts in natural language, as demonstrated by Brown et al. (2018). Generating effective text prompts is a challenging task, not just for generating text (Wang et al., 2017), but also for generating images. Although the internet community has developed several prompting strategies to create more targeted images, such as incorporating resolution-related terms like _4k_ or _Unreal Engine_, recent research has proposed techniques to automatically refine these prompts through prompt engineering (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) or interactive methods (Wang et al., 2017; Wang et al., 2017). While, many interactive prompt-based tools only support uni-modal input (Chen et al., 2018; Wang et al., 2017; Wang et al., 2017), two recent surveys (Wang et al., 2017; Wang et al., 2017) independently called for also exploring multi-modal affordances of prompt-based models. To this end, Liu et al. (Liu et al., 2017) designed 3DALL-E to support image prompts in addition to text inputs, by taking snapshots of model-objects workspace in their workspace and generating variations of that input. Zhang et al. (Zhang et al., 2017) introduced StoryDrawer a co-creative drawing system to support children in creative storytelling through interacting with an AI through a conversational dialog and drawing. In the current literature, investigations towards more expressive prompting are scarce. However, enabling users to better express themselves when interacting with prompt-based models is crucial to support more complex workflows such as world building. Therefore, we add two new dimensions of expressive prompting to the current literature, namely hierarchical prompting and spatial prompting.
### Scene Generation
Although our focus is on the visual representation of 2D fictional worlds, previous research has already examined text-based worlds (Chen et al., 2018; Wang et al., 2017) and virtual 3D worlds (Zhang et al., 2017). Anticipating the emergence of language-based 3D scene generation systems, Coyne and Sproat (Chen et al., 2018) presented _WordsEye_, a tool that automatically converts text into 3D scenes. However, this tool relied on a vast database of pre-existing 3D models and poses. In contrast, fictional world-building typically involves the creation of new artwork. Consequently, this method may face limitations when dealing with unstructured 2D fictional worlds or unconventional layouts. According to a recent survey (Wang et al., 2017), interactive text-to-scene systems are relatively scarce, while most related work has concentrated on automated approaches (Zhang et al., 2017; Wang et al., 2017; Wang et al., 2017), neglecting the role of humans in the creative process. Nonetheless, there are some examples of systems that adopt a more human-centric perspective (Chen et al., 2018; Wang et al., 2017). Our system, WorldSmith, is intended to assist users in creating rather than substituting for them.
There has been growing interest in the development of interactive scene-generation systems. One approach is to use a scene graph to generate images, as explored in (Wang et al., 2017). Another interesting direction in interactive image generation is the use of chat interfaces (Wang et al., 2017; Wang et al., 2017). However, fictional worlds often have a spatial component, but it has been found that human language for expressing spatial relations is often ambiguous and subjective (Wang et al., 2017; Wang et al., 2017). We designed WorldSmith to allow users to draw their spatial knowledge in addition to text input.
### Human-AI Co-Creativity
Co-creative systems that involve both humans and Artificial Intelligence (AI) entail collaboration where each party contributes their capabilities to the creative process. AI systems can assist with generating ideas (Wang et al., 2017; Wang et al., 2017), provide inspiration (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), and support internal reflection on the content (Wang et al., 2017; Wang et al., 2017), while humans can provide subjective judgment and critical thinking. Recently developed co-creative tools include FashionQ (Wang et al., 2017) for ideation in fashion design, and WeToon (Wang et al., 2017) which enables users to generate sketches through direct manipulation of a graphical user interface. Other tools employ prompt-based AI models to support users in their design process (Wang et al., 2017; Wang et al., 2017). However, Jakesch et al. (Jakesch et al., 2017) found that generative model biases can influence users' behavior and lead them to choose the most convenient option, often the first generated content item. This insight further highlights the need for the careful evaluation and design of co-creative systems. To this end, various frameworks have been developed to support the design of a creative partner (Chen et al., 2018; Wang et al., 2017; Wang et al., 2017), given the challenges of developing effective AI. Moreover, several works have suggested to also logging users' interactions (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) to evaluate their behavior with co-creative AI systems. With WorldSmith, we contribute a creativity tool to support users' world-building workflow through iterative and expressive interactions.
## 3. Formative Study
A formative study was carried out to gain insight into the approaches and perceptions creators have in constructing and defining their fictional worlds.
### Participants and Procedure
A total of N=4 individuals participated in remote interviews, with ages ranging from 25 to 52 years. Participants were recruited through internal email lists, and one external professional was recommended by personal contacts. All participants reported prior experience in world-building, either as a hobby or within a professional context designing landscapes or animating game assets. Two participants spent less than an hour per week on world-building tasks, one spent 1-5 hours per week, and one spent at least 5 hours per week constructing worlds.
During the study, semi-structured interviews were conducted to investigate how participants plan their world-building process. On average, each interview took 45 minutes. The guiding questions were focused on how the study participants characterized world-building and the methods they employed throughout their world-building process.
### World Building Process
From the results of the interviews, we found that participants had different motivations for building worlds and used different tools, though many followed similar processes. While some participants (\(P_{1}\), \(P_{3}\)) found joy in developing interesting and fun worlds to share with their friends, other participants created intricate worlds as part of their profession. For example, \(P_{2}\) teaches a landscape design class at a university and builds terrain maps to investigate how architectural structures evolve over time based on the surrounding environment. \(P_{4}\) is a professional game animator who has animated multiple 3D fictional worlds.
#### 3.2.1. Finding Inspiration and Re-using Assets
When it comes to finding inspiration for their creative work, many commented that they often turn to the internet. For example, they reported using search engines or browsing related blogs and online communities like Reddit in search of visual imagery that sparks their imagination (\(P_{1}\), \(P_{2}\), \(P_{3}\)). In a Dungeons and Dragons campaign, \(P_{1}\) had the primary responsibility of constructing the game's world. However, they found creating visual assets tedious and time-consuming. As a result, \(P_{1}\) frequently resorted to using pre-existing assets found online and noted that there may be copyright issues with re-using these assets.
#### 3.2.2. Refining Ideas
The initial image or inspiration is often vague and requires many iterations before it becomes something concrete. From their initial spark, participants began asking themselves questions about the fictional world they wanted to build, using an image sketch or list of notes as a starting point to generate further ideas and details. As one participant explained, _"Sometimes a piece of visual artworks strikes me and I find myself asking: What is happening in that image? What is the character doing in that scene?"_ (\(P_{4}\)) This iterative process of questioning and building upon ideas aids the development of richer and more detailed fictional worlds.
#### 3.2.3. Inductive Rather Than Divergent World-Building Process
On a macro level, rather than exploring many divergent creations, it is more common to fix key characteristics such as the time and age, or the style of the world (fantasy world, science fiction world), once at the beginning of the world-building process. Fine details of the world, such as individual fauna, flora, and their spatial composition, are subject to more frequent changes. However, the further a creator is in the design process, i.e., the more complex the fictional world is, the less likely major changes to the image composition become, because such changes would introduce too much re-work.
#### 3.2.4. Throwaway Prototypes
After the initial idea-finding phase, a key activity is quickly creating disposable prototypes. There are many tools that support world builders in accomplishing their task, but these tools are highly specialized and often have a steep learning curve. Participants in our study reported that they use multiple tools during their world-building process. Nevertheless, all participants often started with a rough outline of the world they wanted to build using pen and paper only, because it allowed them to quickly note and depict ideas. These outlines may only include a list of notes (\(P_{3}\)), but more often also include graphical elements such as a mind map (\(P_{4}\)) or a layout sketch with text annotations that describe which elements should appear in their artwork later on (\(P_{1}\), \(P_{2}\)).
#### 3.2.5. High-Fidelity Artwork
After an initial stage of ideation, the desired level of professionalism and the complexity of the fictional world determine which tools they use. As a hobby Dungeons and Dragons game master, \(P_{1}\) uses specialized online world-building tools for generating graphical game elements such as maps (Dang et al., 2018; Dang et al., 2019), assets (Dang et al., 2020), and characters (Dang et al., 2020). Furthermore, \(P_{1}\) felt overwhelmed with learning all the tools required to create various elements for the DnD campaign. \(P_{3}\) is an experienced software developer and built a computational agent that procedurally created a DnD map. During the DnD campaign, \(P_{3}\) would use a printed version of the previously generated map and let players spontaneously draw additional game elements on it. Although this saves time, \(P_{3}\) wanted to have an image for the players to set the "mood" of the current DnD campaign. For landscape design, \(P_{2}\) uses a range of terrain editing software (Dang et al., 2018; Dang et al., 2020; Dang et al., 2020) to model photo-realistic landscapes. Here, \(P_{2}\) noted that, although these tools produce highly realistic 3D terrains, this comes at the cost of a high learning curve.
## 4. Design Goals
We formulated the following design objectives, based on the processes identified during our formative user study (Section 3.2), to assist users in constructing their fictional worlds.
**D1 - Support Multi-Modal Input** World-building includes multiple steps that require prototypes of different fidelity. Early prototypes are usually coarse and mainly text-driven, sometimes also including simple sketches. Our system needs to support multiple input modes to allow users to express their design intent.
**D2 - Support Iterative Refinement** The system needs to allow users to continually incorporate new details into their world. Therefore, the prototype should facilitate layered revisions to images that users have previously created.
**D3 - Support Visual Asset Generation** In order to facilitate the visualization of intricate worlds, the prototype should empower users to create new visual assets to populate their world.
**D4 - Enable Hierarchical Composition** The formative study indicated that world builders typically work on various levels of detail. At the macro level, they establish the general layout of the world, while at the micro level, they determine the specific components present within the world. Hence, our prototype should allow users to engage hierarchically in the design process.
## 5. WorldSmith
We designed our prototype (Figure 1) to support users' world-building workflow by allowing them to focus on different sub-components of their world and perform layered edits. Through multiple generated images users iteratively refine their initially
vague ideas. An interactive _Tree View_ allows them to introspect their past actions and branch out to create new visual assets all in one application.
### Global Tile View
Inspired by the game Carcassonne, WorldSmith lets users create a world image with multiple _Image Tiles_ (**D4**, Figure 2 (B)). WorldSmith includes four image tiles, which gave participants in our user study (Section 6) enough time to work on each tile in detail. However, our concept also allows for more image tiles. Tiles are initially aligned in a grid but can be resized and moved on the canvas.
WorldSmith supports multi-level editing with the _Global Tile View_ which allows users to blend multiple tiles together to form a cohesive world (**D4**). We decided to separate the image tile composition from the image tile creation, as they conceptually represent different layers of abstractions. In addition, this division allows users to concentrate on broader objectives, such as combining various tiles that depict the creative vision of the world, while deferring the intricate editing process for each image tile. Typically, these components are guided by a narrative framework; for instance, a user may wish to construct a map featuring a forest, a lake, and mountains. Once blending is complete, the result is shown next to the global tile view, and users can return to individual tiles for further editing if necessary.
_Blending Tiles_ Users can blend tiles by providing a text prompt (Figure 2) and adjusting the empty space between them. The system fills the space between tiles based on the created tiles and a text prompt.
_Resizing and Repositioning Tiles_ The _Grid Size_ slider controls the amount of empty space, with more space providing more blending space. Users can also resize and reposition tiles on the canvas for added flexibility.
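As an illustration of the composition step, the sketch below shows one plausible way (our own simplification, not the authors' implementation; names such as `compose_canvas` are invented) to paste tiles onto a global canvas and derive the empty-space mask that the blending step later fills in. Python is used here to match the prototype's backend.

```python
# Minimal sketch, assuming tiles are RGB numpy arrays and positions come from
# the Global Tile View; the "grid size" margin is reflected in the positions.
import numpy as np

def compose_canvas(tiles, positions, canvas_size):
    """Paste tiles onto a canvas and return (canvas, empty_space_mask).

    tiles:       list of HxWx3 uint8 arrays (user-created image tiles)
    positions:   list of (x, y) top-left coordinates for each tile
    canvas_size: (height, width) of the global composition
    """
    h, w = canvas_size
    canvas = np.zeros((h, w, 3), dtype=np.uint8)
    # White (255) marks empty space to be filled during blending,
    # black (0) marks pixels already covered by a tile.
    empty_space_mask = np.full((h, w), 255, dtype=np.uint8)
    for tile, (x, y) in zip(tiles, positions):
        th, tw = tile.shape[:2]
        canvas[y:y + th, x:x + tw] = tile
        empty_space_mask[y:y + th, x:x + tw] = 0
    return canvas, empty_space_mask
```

Increasing the grid-size margin simply leaves more white pixels between tiles, giving the blending step more room to work with.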
### Detail Tile Editor
The _Detail Editor View_(Figure 1B) allows users to focus on a specific part of the world within a larger composition. It provides several tools that allow users to generate content using text, sketching, and masking (**D1**).
#### 5.2.1. Text Prompt Editor
The text editor of WorldSmith comprises two sections: a global scene description area and a specific region description area for users to provide more focused descriptions.
_Overall Scene Description_ The overall scene description provides a plain-text entry box for users to generate image content quickly. This is a common UI pattern already found in many recent generative image tools (Sang et al., 2018; Wang et al., 2019).
_Region Description_ The region description permits users to spatially specify where content should be generated on the canvas. Adding a region inserts an empty text segment with a newly assigned color (see Figure 1B). Users can modify this segment by typing a description that affects only the outlined area. Drawn regions are visually linked to their corresponding text segment by color.
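The coupling between colored text segments and drawn regions can be summarized with a small data model; the following is an illustrative sketch under assumed names, not WorldSmith's actual implementation.

```python
# Illustrative sketch: pair each region's color with its description and mask,
# so editing the text segment only affects the outlined area.
from dataclasses import dataclass, field

@dataclass
class RegionSegment:
    color: str                      # hex color shared by text segment and brush
    description: str = ""           # region-specific prompt text
    mask_png: bytes | None = None   # binary mask drawn on the canvas

@dataclass
class TileInputs:
    scene_description: str = ""     # overall scene prompt
    regions: list[RegionSegment] = field(default_factory=list)

    def add_region(self, color: str) -> RegionSegment:
        # Adding a region inserts an empty text segment with a new color.
        segment = RegionSegment(color=color)
        self.regions.append(segment)
        return segment
```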
#### 5.2.2. Large Canvas
A large multi-purpose canvas allows users to draw sketches and iterate on their generated images (Figure 4). Users can choose between a sketch mode and a region mode when iterating on an image (**D2**). Both modes support a textual description (see Section 5.2) that instructs the system how to interpret users' spatial inputs.
_Sketch Mode_ The sketch mode lets users draw sketches using a pen tool (see Figure 3 c). WorldSmith takes the sketch input and a textual description to generate images that are similar to the user's drawing but add more detail (Figure 3). This allows users to coarsely sketch their image while the system generates a higher-fidelity version. They can also drag
Figure 3. (A) An example of a sketch from \(P_{3}\) with the overall scene text prompt: “A skyline view of a city in the Caribbean, it has a chain of mountains in the left, and next to them there is a big city, it has a lot of skyserapers, a path is coming down from the mountain and integrating into the city, Anime style”. (B) The generated results are shown at the bottom and (C) enlarged when hovered over.
Figure 2. The figure depicts the _Global Tile View_, where all the tiles have already been created by \(P_{11}\) (B). The tool panel on the left-hand side facilitates control over the space between individual tiles (A) and allows for a description of how the tiles should be blended together (C). The text prompt reads: "a rainy city at night".
existing or previously generated images into the sketch canvas to create variations of an image and quickly explore alternative generations.
_Region Mode_ In the region mode, users can draw a region on the canvas using one of several region brushes: 1) the _Pencil Brush_ allows users to draw simple strokes, where each stroke corresponds to one of the text segments previously defined when creating a new region (see Figure 1B); 2) the _Hull Brush_ computes the convex hull of all brush strokes drawn since the hull brush was selected, allowing users to quickly select large regions; 3) the _Lasso Brush_ creates a closed path and lets users quickly draw closed shapes. The masks are only visible when the region mode is active (see Figure 4).
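To make the Hull Brush concrete, here is a hedged sketch of how brush-stroke points could be turned into a filled region mask with a convex hull; it uses OpenCV for the hull computation and is not taken from the prototype's code.

```python
# Minimal sketch, assuming stroke points are collected in canvas pixel
# coordinates since the Hull Brush was selected.
import numpy as np
import cv2

def hull_brush_mask(stroke_points, canvas_size):
    """Return a binary mask (255 = selected) covering the convex hull of all strokes."""
    h, w = canvas_size
    mask = np.zeros((h, w), dtype=np.uint8)
    if len(stroke_points) < 3:
        return mask  # a hull needs at least three points
    pts = np.array(stroke_points, dtype=np.int32)
    hull = cv2.convexHull(pts)           # outline enclosing all stroke points
    cv2.fillConvexPoly(mask, hull, 255)  # fill the enclosed region
    return mask
```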
#### 5.2.3. Generating Images
After users are done defining their inputs, which may include a scene description, a sketch, and a few region descriptions, they can generate an image by clicking the _Run Diffusion_ button (Figure 1B).
#### 5.2.4. Results View
The _Results View_ is a collection of all images that users have generated, displayed as small thumbnails in a scrollable grid. Hovering over a thumbnail displays a larger preview of the image next to the canvas, allowing users to directly compare the differences between the two. When new images are fetched, the _Results View_ temporarily greys out to indicate that new content is currently being generated. Once generated, these images are displayed in the _Results View_ so users can reuse them in later edits.
To insert a selected image into the canvas, users can double-click on the corresponding thumbnail or simply drag the image onto the canvas.
### Tree View
The tree view records all user actions for each tile (see Figure 6). Each node in the tree view includes an image preview and a detailed text description that contains both the scene and region descriptions, representing a snapshot of a tile.
The tree view supports reflective thinking by displaying previous inputs and allowing users to explore the evolution of their design throughout the session (Dang et al., 2018; Wang et al., 2018). Moreover, it encourages divergent thinking by enabling users to create alternate iterations (**D2**) and explore new visual assets (**D3**). In the realm of text-based world simulation systems, knowledge graphs have been employed to represent the state of the world by mining existing storylines (Ammanabrolu et al., 2018). WorldSmith automatically creates such a state tree and additionally enables users to manually extend this state tree.
Our description below outlines the user interactions that facilitated exploring and refining image tiles with the tree view.
_Automatic node insertion_ Whenever users execute the diffusion process, the system automatically updates the tree view. If the inputs to the system have been altered since the last image generation, a new node will be added to the currently selected tree node. This automated insertion upon update enables users to reference their previous interactions.
Furthermore, users have the option to manually insert nodes by selecting a node and clicking on the _Add Node_ button. This interaction pattern is useful for experimenting with new generation ideas while preserving the previous generation. When inserting a new node, users can choose to iterate on an exact copy of the previous inputs, or they can start from scratch to create new visual assets and then blend them into their previously generated world (see Figure 6).
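A minimal sketch of the underlying state tree is shown below (assumed structure and names; not the prototype's code): each node snapshots a tile's inputs, and manual insertion either copies the selected node's inputs or starts from scratch.

```python
# Illustrative sketch of Tree View nodes and manual node insertion.
from dataclasses import dataclass, field

@dataclass
class TreeNode:
    inputs: dict                      # scene/region descriptions, sketch, etc.
    preview_url: str | None = None    # thumbnail of a generated image
    children: list["TreeNode"] = field(default_factory=list)

def insert_node(selected: TreeNode, new_inputs: dict, copy_previous: bool) -> TreeNode:
    """Add a child under the selected node.

    copy_previous=True iterates on an exact copy of the previous inputs;
    False starts from scratch, e.g. to create a new visual asset.
    """
    base = dict(selected.inputs) if copy_previous else {}
    base.update(new_inputs)
    child = TreeNode(inputs=base)
    selected.children.append(child)
    return child
```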
_Selecting a Node_. Instead of adding a new node, users can also double-click on an existing node in the tree view to load the inputs linked to that particular node into the detail editor view, enabling them to continue iterating on it (**D2**).
_Pan and Zoom_ To accommodate the expanding size of the tree view, we added a pan and zoom function to allow users to zoom in on specific nodes and examine their input. Users can zoom into the canvas using the mouse wheel and pan by clicking and dragging the canvas while holding the left mouse button. Upon generating a
Figure 4. An example of a region segmentation (A) from \(P_{6}\) with the corresponding scene (B) and region (C) description in the prompt editor. The regions and corresponding text segments are shown with the same color.
Figure 5. An example of adding sketch information to refine a generated image (B2). \(P_{11}\) used a green pen to sketch on the image (A1) and a region tool to highlight parts of the image (A2), thereby providing extra information to guide the generation of a green nebula and Godzilla. The scene description reads: “a lake at night in cartoon style” (B1) and the regions include _Godzilla_ and a _nebula_ (C). (D) The system redrew the user’s input, adding a nebula and Godzilla that matches the style of the image.
new node (see Section 5.3), the system automatically centers the viewport to present the node at the center of the screen, aiding users in maintaining focus and navigating the tree view more efficiently.
_Preview generated images_ WorldSmith automatically displays a preview of all images generated for a particular node and its inputs when users hover over an image. This feature allows users to refer to the text prompt beneath the node while also previewing the content they have created.
### Usage Scenario
The following describes one possible scenario of how WorldSmith can be used. This scenario compactly describes how \(P_{11}\) in our user study (Section 6) worked with WorldSmith. It demonstrates how the different components of WorldSmith interplay to enable a user to iteratively generate a visual representation of a fictional world, by using text based prompting, iterative workflows, and detailed editing using sketching and regions.
\(P_{11}\), an avid reader of fictional novels and a creative writer, struggles to convey the mood of her imaginative world to her collaborators. To illustrate her world and build a shared understanding with them, she uses WorldSmith to iteratively create a visual representation (e.g., a map view) of her world from scratch.
To start, she selects one of the four empty image tiles in the Global Tile View (Figure 1 A) and enters "_a lake at night in a cartoon style_" as the scene description (Figure 5 B1). After generating and selecting her favorite initial scene, it appears as an input preview (Figure 5 B2). However, it is missing the all-important Godzilla. She switches to the Region Mode (Figure 5 A2, red) and enters the text description "godzilla" as a region descriptor (Figure 5 C). She then adds a nebula using the Sketch Mode + Region Mode to draw a green mist on the canvas (Figure 5 A1+A2), which the system repaints, blending it in with the rest of the scene.
For the second tile, she starts by generating a scene with overall description "_a cyberpunk city at night_". She then wants to add a reusable neon billboard sign asset. To do that, she adds a new node in the Tree View (Figure 6), clears the canvas, and creates a "_neon billboard_". Once happy with the result, she navigates back to the previous city scene using the Tree View and drags and drops the neon sign into the scene. She masks out the borders of the new image asset, provides a short description of how the asset fits the scene, and the system blends the images.
Using the same techniques, she creates the remaining tiles, showing a _neon bridge at night_ and a _cyberpunk car at night_. \(P_{11}\) merges all four image tiles into one coherent image using the Global Tile View (Figure 2 A), adjusting the grid space, and provides a short description that relates all tiles (Figure 2C). She positions and resizes (Figure 2B) the tiles to her liking and clicks Blend Tiles. The system contextually fills in the space between the tiles (Figure 2D, Figure 12), and after a few iterations she arrives at a blended world she is happy with.
Figure 6. (A) The tree view shows all user inputs and interactions in a tree diagram. This allows users to retrace their creation process, branch out, and create alternative versions of any tile or entirely new assets. (B) During their creation of a cyberpunk city, \(P_{11}\) found the neon sign to be lacking. They used the tree view feature to create a neon sign (C) and merged that newly created asset into the previously generated image scene (D).
### Technical Details
Our prototype uses a client-server architecture. The client frontend is built with SvelteKit (SvelteKit, 2017) and the Skeleton UI toolkit (Skeleton UI toolkit, 2018), while the backend is built with FastAPI and runs on the Python web server uvicorn. We utilized the Stable Diffusion algorithm via HuggingFace's model hub (512x512 pixels), which was run on a machine with 16 GB of GPU VRAM. To enable interactive painting, we used fabric.js (HuggingFace, 2018) and created two separate canvases for sketching and region masks. When users initiated the diffusion process, we extracted inputs from the detail editor view. For scene descriptions without additional input (e.g., sketches or regions), we generated an image from random noise based on the provided text input. If users provided an RGB sketch, we used it with added Gaussian noise to generate images that match users' drawings, following Rombach et al. (Rombach et al., 2018). Regions were transmitted as an array, with each entry containing a binary mask image and a corresponding description. In our region-based painting feature (Figure 4), we extract multiple binary masks from a user-provided region segmentation, where white pixels correspond to the unique region color and the remaining area is black. Our region-based painting feature is inspired by Balaji et al. (Balaji et al., 2018), who proposed an approach allowing users to specify where elements should appear in the generated image. They combined a separate binary image mask, with dimensions matching the output image, with each word in an input text prompt. Notably, words in the user-provided text input exert variable influence on different parts of the image, with white pixels serving as indicators of a higher probability of an element appearing in the assigned segment. We used an open-source implementation of this concept applied to Stable Diffusion1. For blending image tiles, we obtained a binary mask with black pixels for image tiles and white pixels for _empty_ space. A Gaussian blur was applied to the mask, softening black tile edges for a smoother blend. The final image was generated by inputting this mask and the user-created image tiles into the diffusion model.
Footnote 1: [https://github.com/cloneofsimo/paint-with-words-sd](https://github.com/cloneofsimo/paint-with-words-sd)
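The tile-blending step can be approximated with off-the-shelf components; the sketch below is a simplified stand-in (not the authors' pipeline) that blurs the empty-space mask and fills it with a standard Stable Diffusion inpainting pipeline from the `diffusers` library. The checkpoint name and parameters are assumptions.

```python
# Hedged sketch: blend image tiles by inpainting the Gaussian-blurred
# empty-space mask, guided by the global blending prompt.
import torch
from PIL import Image, ImageFilter
from diffusers import StableDiffusionInpaintPipeline

def blend_tiles(canvas: Image.Image, empty_space_mask: Image.Image, prompt: str) -> Image.Image:
    # Soften tile edges so the generated fill blends smoothly into the tiles.
    soft_mask = empty_space_mask.filter(ImageFilter.GaussianBlur(radius=8))

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting",  # assumed checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    result = pipe(
        prompt=prompt,
        image=canvas.resize((512, 512)),
        mask_image=soft_mask.convert("L").resize((512, 512)),
        num_inference_steps=30,
    ).images[0]
    return result
```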
To track user interactions (such as typing a scene description, drawing a sketch or a region, moving the tiles), we implemented a logging server on FastAPI, which utilized a Postgres database.
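As a rough illustration of such logging (the event schema is assumed, not the study's actual database layout), a FastAPI endpoint could record timestamped interaction events like this:

```python
# Illustrative sketch of an interaction-logging endpoint.
import datetime
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
EVENTS: list[dict] = []  # in-memory stand-in for the Postgres table

class InteractionEvent(BaseModel):
    participant_id: str
    action: str        # e.g. "edit_scene_prompt", "draw_region", "move_tile"
    payload: dict = {}

@app.post("/log")
def log_event(event: InteractionEvent) -> dict:
    record = {
        "participant_id": event.participant_id,
        "action": event.action,
        "payload": event.payload,
        "timestamp": datetime.datetime.utcnow().isoformat(),
    }
    EVENTS.append(record)
    return {"logged": True}
```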
## 6. Evaluation
To evaluate the utility and use of WorldSmith, we conducted a user study with 13 participants, which provides first insights into the following research questions:
* (_RQ1_) How do users engage with generative AI for world building?
* (_RQ2_) To what extent does WorldSmith support the world building process?
### World Building Task
A set of prompts were created to inspire participants to build their fictional world. The prompts were intended to cover various types of visual world-building, such as fantasy maps or fictional landscapes. The prompts were open-ended and designed to allow participants to think creatively and explore a breadth of concepts (Table A.1 in the Appendix).
We further encouraged them to explore not only maps but also other types of visual imagery depicting fictional worlds. While maps typically have specific characteristics such as a top-down view and a specific scale, fictional worlds can be composed of any objects that create a scene that does not exist in reality. We instructed participants to read the previous design prompts and focus on the composition of elements and the setting of the world (e.g., a fantasy world and everything such worlds entail).
### Methods
We conducted a first-use study online via Zoom and asked participants to think aloud during the user study to learn about their motivations and understand their potential challenges when interacting with the prototype. Our study involved a world-building task where participants used WorldSmith to create a fictional world (Section 5). In addition to logging users' interactions, we conducted semi-structured interviews at the beginning and end of the world-building task to collect their open feedback about their prior experience and motivations for world-building and their overall experience working with WorldSmith. Finally, participants completed two questionnaires at the end of the task where they rated their experience with the different features of WorldSmith. For qualitative analysis, two researchers independently assigned inductive codes for a subset of the transcribed interviews. Then one researcher used these codes and notes taken during the interviews and thoroughly reviewed the full transcripts of all interviews to find further evidence for the thematic clusters identified in the previous step.
### Participants and Procedure
We recruited the participants for the study over e-mail lists and personal contacts. All participants had experience building worlds (between 2 and 10 years). This included experience crafting DnD worlds, video game levels, creative writing, and landscape design. This study was approved through our institutional review process.
The study consisted of four phases which we briefly outline below. Before the study, all participants completed a consent form and demographic questionnaire.
_Pre-Interview (10 minutes)_ The first phase consisted of a short semi-structured interview with each participant. During this interview, we asked participants about their motivations and experience in building fictional worlds.
_Tutorial (10 minutes)_ During the tutorial phase, participants were introduced to the prototype and given an overview of its features and functionalities. One researcher explained the process of creating an example image tile using text, sketching or region painting. Meanwhile, the participants engaged with the tool by following the researcher's guiding instructions to become acquainted with the different ways of providing input.
_World Building Task (60 minutes)_ In the third phase, participants interacted with the software prototype via screen sharing and remote control. They were given a fictional world-building task to complete (Section 6.1). We also asked them to briefly describe what they wanted to create, and we recorded their interactions with the software and the resulting worlds.
_Post-Interview and Questionnaire (10 minutes)_ After completing the world-building task, we conducted an interview with each
participant to collect their feedback and impressions of the software prototype. In the questionnaire, we asked participants to rate the features of the prototype on a Likert scale. This questionnaire included questions about the various input modalities (text-only, text+region, text+sketching) and whether they thought WorldSmith could speed up their regular approach to world-building. Finally, we administered a _System Usability Scale_ (SUS) questionnaire to evaluate the overall usability of WorldSmith, which included questions about the complexity of the program and how easy it was to learn and interact with it.
### Quantitative Findings
Overall, we found participants leveraged all forms of interactions. In total, participants triggered the image generation process 229 times to generate scenes, maps, and assets for the individual image tiles, resulting in 2748 generated images. Additionally, users created 86 world compositions using the blending features.
In our first-use study, 13 participants interacted with the prototype. However, some participants finished their world-building task before the official time ended, so they started a second session within the remaining time. Therefore, for the remainder of this section, we denoted each of the resulting 16 sessions with a participant id and the corresponding trial number (\(P_{X}\ Trial\ Y\)).
#### 6.4.1. Relational vs. Quantifying Keywords
We counted the number of words in 1) the _text of each scene description_ and 2) the _text of each region description_. We found that participants wrote differently for scene and region descriptions. Each participant created on average 2.05 regions with corresponding region descriptions per tile (Median=2, inter-quartile range 2). Region descriptions were typically shorter (Avg=4.2 words; Md=3; iqr=6) than scene descriptions (Avg=12.4 words; Md=11; iqr=7). Scene descriptions were primarily used to describe multiple objects and the overall style, while region descriptions were used to quantify or describe a specific object or element and to provide more localized instructions.
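The descriptive statistics above can be reproduced from the logged prompts with a few lines of standard tooling; the sketch below (illustrative, with made-up example prompts rather than study data) computes the mean, median, and inter-quartile range of word counts.

```python
# Minimal sketch of the word-count statistics for scene vs. region descriptions.
import numpy as np

def word_count_stats(descriptions):
    counts = np.array([len(d.split()) for d in descriptions])
    q1, q3 = np.percentile(counts, [25, 75])
    return {"avg": counts.mean(), "median": np.median(counts), "iqr": q3 - q1}

# Example with made-up prompts (not study data):
scene = ["a lake at night in cartoon style", "a cyberpunk city at night"]
region = ["godzilla", "a green nebula"]
print(word_count_stats(scene), word_count_stats(region))
```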
Position - We analyzed scene and region descriptions using a coding routine similar to that in Section 6.2, using text prompts from the user interaction logging data. The full list of codes can be found in Table A.2 in the Appendix. Our analysis found that scene descriptions contained more _positional_ keywords (_e.g., surrounded by_, _above_, _north_, _south_) than region descriptions, which is consistent with the behavior observed in the user study, where participants used the region drawing tool to specify spatial relations. This is shown in Figure 7.
Style - Scene descriptions also more frequently included style keywords (e.g., cartoon style, fantasy) and perspective keywords (top-down view, isometric view). Participants used the overall scene description to define style and perspective keywords rather than repeating them for every region description. More generally, participants wanted to define these keywords once for the entire session and for all image tiles.
Action - Scene descriptions frequently included action keywords, which relate one object to another (e.g. "Mountain range running north to south"). In contrast, region descriptions typically described only one object without trying to relate it to other elements in the world, which is done implicitly via drawing the regions.
Quantifier - Participants in our study used indefinite quantifiers (e.g. "a few trees", "many dirt roads") and size keywords (e.g. "a large", "a gigantic") more frequently in region descriptions to convey the number and size of elements in the world, instead of drawing separate regions for each.
#### 6.4.2. Composing Tiles
The final iteration with WorldSmith often involved rearranging and resizing the created image tiles (Figure A.1). The prompts at the global level were more generalized compared to the individual tile-level prompts. They included keywords to remove visual artifacts such as borders around the tiles during blending ("_no [straight] edges, smooth collage"_ (\(P_{2}\)), "_seamless map_" (\(P_{1}\))) or to introduce new objects that connect the tiles ("[...] _islands connected by bridges_" (\(P_{3}\)), "[...] _an ocean in the center_" (\(P_{8}\))). Although keywords related to style and perspective were also used, global keywords did not affect already created image tiles, since WorldSmith does not support perspective and style matching across tiles after image tile creation.
#### 6.4.3. Interaction traces
Users' editing behavior was analyzed by filtering four representative actions from the interaction logging data (Figure 8). These actions include modifying tiles, sketches, regions, and text prompts. Examples of modifying tiles include repositioning and scaling them in the _Global Tile View_ (Figure 1 (A)) while modifying sketches involves drawing on the Sketch Canvas. Modifying regions involves adding, drawing, and describing new regions, and modifying text prompts only involves editing the scene description in the _Detail Editor View_.
Bootstrapping the World with Text - Participants in 12 out of 16 sessions began creating their world with a text-only description, likely due to the fast and lightweight nature of the method, as reported by participants in the study: "_I do like just having the general big description, and just seeing what it comes up with."_ (\(P_{1}\)). Participants in the other 4 sessions started
Figure 7. An overview of the codes extracted from the prompts for region descriptions and scene descriptions. See Table A.2 in the Appendix for an explanation of the codes. The blue and orange bars add up to one, respectively. Note that scene descriptions frequently contained style and perspective-related keywords, while region descriptions contained more size and quantifier keywords.
their worlds with a sketch instead. These participants already had an image with a rough structure in mind. Using the sketching tools directly, they could sketch out their mental image.
_Moving from Coarse to Detail_ - We computed and aggregated the transition matrices between creation operations over all participants and trials to analyze editing behavior (Figure 9). Overall, participants transitioned between all available edit operations. However, modifying regions frequently preceded the blending process, the last action in the world-building process. This observation, together with our previous observations (Section 6.4.3), suggests that participants transitioned from coarse actions, such as text prompts, to fine-grained edits, such as sketching and region painting, to add details to their image tiles. This is consistent with the previous observation that participants used text prompts to quickly bootstrap the world-building process.
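For illustration, transition ratios such as those in Figure 9 can be derived from a logged action sequence as in the following sketch (assumed log format and action names, not the actual analysis script).

```python
# Minimal sketch: row-normalized transition matrix over creation operations
# (rows = from-action, columns = to-action).
import numpy as np

ACTIONS = ["modify_text", "modify_region", "modify_sketch", "modify_tiles"]

def transition_matrix(action_sequence):
    idx = {a: i for i, a in enumerate(ACTIONS)}
    counts = np.zeros((len(ACTIONS), len(ACTIONS)))
    for prev, nxt in zip(action_sequence, action_sequence[1:]):
        counts[idx[prev], idx[nxt]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Example with a made-up session trace:
trace = ["modify_text", "modify_region", "modify_sketch", "modify_region", "modify_tiles"]
print(transition_matrix(trace))
```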
_Summary_ - The quantitative results indicate that users engaged with WorldSmith in different ways (\(RQ1\)). They generally moved from coarse to detail, i.e. using text and regions to populate the individual tiles before sketching in the details. On the global composition level, participants explored alternative worlds by revising both their prompts as well as tile compositions. Differences in language used in their prompts imply that users perceive scene and region prompts differently and convey input information implicitly through both text and sketching.
### Design Insights and Improvements
Overall, the participants felt that the tool was easy to use and helped them perform the world-building task (Figure 10). Our observations and participants' comments during the interview revealed that some system features required further iteration.
_Related Keywords_ - As noted by Liu and Chilton (Liu et al., 2018) and Liu et al. (Liu et al., 2018), developing relevant keywords for text-to-image generation is challenging. While participants in the study were primarily concerned with building their fictional world, they felt burdened by having to think of keywords that are relevant to their world, such as stylistic keywords and perspective. They wished the system would automatically insert those keywords and match the style across all tiles.
_Direct Manipulation_ - There was a desire to more directly edit the resulting image, with one participant explaining: _"It would have been nice to be able to manipulate elements in the generated image directly. For example, after I created that tile with a crossroad, I wanted to select that road as a spline object that could be directly manipulated."_ (\(P_{2}\)). This would allow for greater control and precision in the image creation process, though it comes with a large set of technical challenges.
### Qualitative Findings
#### 6.6.1. Working with Region-Based Descriptions
We observed two distinct strategies when participants interacted with the region-based descriptions.
_Narrative first, then drawing_ - When defining regions, some participants created the text segments first and typed the full description of each region before deciding how these regions are composed on the canvas. This is in line with findings from the
Figure 8. The graph shows the distribution of participants’ interaction traces while they worked on the interactive prototype. Three participants completed the task early and started a second world-building session, resulting in a total of 16 sessions instead of 13. Note that most participants started with a text prompt when first interacting with WorldSmith.
Figure 10. An overview of the responses for the System Usability Scale questionnaire.
Figure 9. An overview of the action transition frequencies aggregates across all sessions. The number in each matrix cell represents the relative transition ratio at which a participant transitioned from action Y (rows) to action X (columns).
formative study (Section 3.2) on starting with a concept of the fictional world first before committing to a specific instantiation of that world.
_Drawing first, then narrative_ - On the other hand, some participants first thought about the elements' overall composition before writing down a detailed description for each element. When asked whether they had a concrete image in mind, they replied with: _'I don't have anything specific in mind, but rather a rough idea of the elements I want in that picture."_ (\(P_{13}\))
#### 6.6.2. Users preferred Multi-Modal Input
While all participants agreed that writing text prompts to generate images was helpful, the majority (11 out of 13) preferred multi-modal input over using text prompts only (Figure 11). Deriving suitable text prompts for image generation was not considered difficult; however, we observed during the study that participants valued content that described the core elements of their fictional world (e.g., a magic forest, a magic rabbit) more than keywords related to overall rendering features such as style and perspective. Participants also responded that they preferred adding sketches and painting in addition to their text input. From our observations, we note that some participants (\(P_{2}\), \(P_{3}\)) found it challenging to sketch in details on the remotely controlled computer using the mouse, while other participants saw this technical constraint as a strong motivator for generating images from rough sketches without the need for precise 2D input (\(P_{6}\)) using WorldSmith.
#### 6.6.3. Building World by Parts
During the formative study, we found that not all parts of an image hold equal importance for the creator. This is particularly evident in Dungeons and Dragons (DnD) games, where certain areas of interest contain more detail and are the sites of important events. Observations from our first-use study underpinned the previous finding by showing that participants invested significant effort in creating highly detailed individual tiles, and appreciated the ability to focus on these tiles while leaving the empty spaces between them to be filled in automatically by WorldSmith to smoothly combine the tiles. Additionally, one participant noted that _"[t]he tiles have lots of detail which naturally draws the viewers eye to those points of interest."_ (\(P_{1}\)). Another participant found blending to be useful to explore different world compositions quickly: _"You know, sometimes [...] I know that I want there to be like a lake over here in a city over here. But I don't really care what else is in there [...]. Let's throw together some interesting stuff and see how it blends together, and then start adding on from here."_ (\(P_{4}\)).
#### 6.6.4. Materialise Stream of Consciousness
Participants commented that they sometimes struggled to track their thoughts because _"ideas flash up and vanish"_ (\(P_{13}\)) before they had the chance to fully develop them. However, using image generation aided them in quickly capturing their train of thought. One participant further noted: _"I'm one of those people who has a very blind inner eye, so I can't visualize things in my head. I have to put it in front of me in order to make any sense of it visually."_ (\(P_{11}\)). Using the multi-modal image generation system enabled them to refine their initially _vague ideas_ by allowing them to continuously add details to their creations. Here, one participant noted that _"seeing how all seamlessly blend, I now had a clearer vision of how I want to compose the elements in the world"_ (\(P_{3}\)).
#### 6.6.5. Perception of control over the AI
Participants were generally conscious that they were interacting with an AI system. During the world-building task, we frequently observed participants questioning whether the system understood their commands or intentions. As a result, participants appeared to be more understanding when WorldSmith failed to generate exactly what they had in mind. In such instances, participants began to consider alternative ways of expressing their intent. They sometimes reformulated their text input to the system or included supplementary information, such as drawing a sketch. For example, one participant wanted to create a nebula in a night sky using the region-based painting tool, but the system did not produce the desired result. In response, the participant decided to add more information to the input by also adding a sketch in addition to the region-based painting (see Figure 5). However, some participants were discouraged by their initial failed attempts to author their envisioned image tile, feeling that the system was inconsistent (Figure 10) when generating new images (\(P_{4}\), \(P_{7}\), \(P_{8}\)). For example, while \(P_{4}\) could use WorldSmith to generate _"ducks in a pond"_ or _"a house surrounded by a forest"_, he struggled to generate a complex scene such as _"a top-down view of a mountain rift running North to South"_ and wanted to insert _"a fantasy art of a mouth of a mine with mine carts arranged around"_ and a _"top down view of a yard surrounded by tents and roman soldiers"_. Creating such a scene would have required more time to create relevant assets such as the mine or the war camp.
#### 6.6.6. Feedback on the Tree View
Overall, we found that those participants who used the tree view explored this concept to create new image assets and blend them back into their scenes. One participant mainly used this feature to introspect his past interactions. The tree view enabled him to be more confident in exploring alternative generations because it offered a way to revert to the original image. _"I really liked the tree view, because I use a lot of [...] platforms and I'm often hesitant to continue iterating on an idea [...] because it's hard to get back to the original image. Sometimes it kind of gets lost, even if it's somewhere in the history it's up to you to find the original image that you originated from [...]"_ (\(P_{10}\)). Another participant commented that she wanted to create _"unlimited nodes and add them directly as
Figure 11. An overview of the responses from the final questionnaire.
image tiles to the global canvas"_ (\(P_{12}\)). However, the responses in the final questionnaire (Figure 11) indicated that participants were generally neutral about the _Tree View_ in this study. We reflect on this observation in the discussion.
#### 6.6.7. Feedback on the Generated Worlds
Participants in our study found that the generated worlds made sense, although they did not always match their prior imagination. This included the generated individual tiles as well as the overall blended tiles (Figure 12).
During the interview 11 out of 13 participants commented positively about the blended results, highlighting that the system was capable of sensibly filling in the gaps between the created image tiles. The other two participants generally liked the idea of blending different parts of their image but found it challenging to create a seamless blend of all their tiles.
#### 6.6.8. Comparisons against world-building without Generative AI
When participants reflected on their experience with WorldSmith, two participants commended the fast generation of an initial map draft by focusing only on a few _key areas_ (\(P_{1}\), \(P_{9}\)). Nevertheless, the full utility of the generated maps required better style and perspective alignment across all tiles (\(P_{6}\), \(P_{9}\), \(P_{8}\)). For example, \(P_{8}\) found it difficult to align a tile with _"a cartographic rendering of an arctic tundra"_ with a _"cartoon style desert island with war camps"_.
Aside from DnD maps, one participant particularly liked the ability to customize tiles. He compared the tile generation process using WorldSmith and his prior experience with in-game map builders noting: _"In the past, while working with tile-based map editors, I frequently found myself searching for compatible tiles, for instance, connecting streets. However, with this system, I would simply provide sufficient space between the tiles and let the system determine the best way to merge them"_ (\(P_{6}\)). \(P_{11}\) in particular found that WorldSmith's proposed workflow closely matches her own world-building process: _"I do the more fine details and then go. Oh, wait! I should do something [...] a little bit more broad. So for me [the workflow] wasn't anything new. It was kind of a more natural flow for me."_
_Summary_ - In relation to (_RQ2_), our findings demonstrate that the suggested workflow of WorldSmith effectively complemented participants' existing world building process, enabling them to swiftly produce a rendition of their envisioned worlds. During a 45-minute user study, each participant successfully generated a version of their world. While WorldSmith facilitated the rapid creation of an initial draft, participants also acknowledged the need to address appropriate scaling, stylization, and perspective coherence among the already generated image tiles in order to accurately represent their envisioned world.
## 7. Discussion
### Speed vs. Quality Trade-Off
We observed that the quality of the blended results varied among participants. Some participants could seamlessly blend multiple tiles, while others required multiple iterations. We observed three main factors that influenced the quality of the blended result in our study: 1) Tiles with similar styles were easier to blend. 2) Participants had to consider the logical structure of tile composition, such as creating a single horizon across multiple tiles to achieve a seamless blend. Ambiguous positioning of the horizon could result in a less seamless blend. 3) Tile complexity, including the length of the text prompt, number of regions, and style parameters, also affected blending quality. More detailed tiles were harder to blend seamlessly.
We experimented with MultiDiffusion (Bordes et al., 2017) and found it produced better results than stable diffusion but was significantly slower (over 30 seconds for a 768x768 pixel image). Therefore, we prioritized speed and used stable diffusion for image generation to allow users to iterate quickly. Our main objective was to analyze how users developed their fictional world. Some participants were satisfied if the approximate content of their initially created tiles was preserved for the final blending. They were also willing to wait for the final rendering once they were satisfied with the overall composition. To address this, future work could include a slower, quality-preserving mechanism like MultiDiffusion to be used as a post-processing step for the final rendering.
### Creative Use of Model Bias
We have found that Stable Diffusion (Srivastava et al., 2017) excels in generating images where the elements fit naturally into the scene, such as a spider in a forest, but often faces challenges when generating images of concepts that do not belong together using only text prompts, such as a snow-covered mountain ridge on a tropical island. It has been found that LLMs sometimes "hallucinate" facts and struggle to generate coherent storylines (Srivastava et al., 2017). However, our users' feedback indicated that visual hallucinations (Bordes et al., 2017) can be a desirable attribute that allows users to create unconventional worlds. Therefore, our prototype offers two possible interactions to combine different concepts. The first solution involves merging tiles that portray
Figure 12. Figure shows multiple worlds that users have built using WorldSmith. Some show a fictional map, while others depict a scene in a fictional world.
disparate concepts, while the second allows users to blend arbitrary images into the scene using a region mask.
Related work has improved the synthesis of images with disparate concepts (Beng et al., 2015; Wang et al., 2016). With WorldSmith we complement these systems by providing a tool that enables users to interactively define blending on three levels: 1) blending using text-only descriptions, 2) blending by also providing sketching and region-masks for disparate concepts, and 3) blending of multiple image tiles.
### Limitations and Reflections on Methodology
Our study was conducted with a pool of participants who shared a common interest and proficiency in technical systems. Many had prior exposure to generative image systems, which allowed them to engage easily with the system during the study. We note that participants without prior experience in image editing may require additional time to familiarize themselves with the system. This learning curve may vary depending on the individual's technical knowledge and experience level. The user study suggested that participants mainly required help finding domain-specific language related to world-building, e.g., perspective and style. Nonetheless, we ensured that all participants received guidance and suggested relevant keywords based on verbal descriptions of what they wanted to create. Future systems could include an LLM to suggest relevant keywords and lower the threshold for using the system (Wang et al., 2017).
During the study, we found that participants were focusing more on building the individual image tiles, which occupied most of their time. Given the time limit of the user study, few participants interacted extensively with the _Tree View_. However, those who did comment positively about it. We believe that such a tool is best explored over a longer period and over distinct sessions to enable users to introspect their past behavior and explore a wider variety of scenes as they create their world.
In this study, we focused on multi-modal input techniques for leveraging generative AI for world-building. However, we did not consider lore, musical score, or character building, which are also essential components of successful worlds.
Our study identified certain limitations with respect to the quality and speed of WorldSmith. We found that the generated images sometimes did not accurately capture all the concepts mentioned in users' text descriptions. Additionally, users had to wait for 6-8 seconds before the first batch of images was generated. While we expect that the development of pre-trained generative AI models will continue to improve the accuracy and speed of image generation, it is important to acknowledge that human language remains inherently ambiguous. As such, we need to explore additional methods to help users express their creative vision when working with generative AI models.
### Expressive Prompting
Current user interfaces for prompt-based models (Wang et al., 2017; Wang et al., 2018) tend to promote a one-shot image generation approach, wherein users can only modify the text to influence the generation process. However, our user study highlighted a key insight: participants tend to create their world models in parts, using sketching and text inputs to communicate positional information. To support this design process, WorldSmith offers users the ability to sketch and paint regions in addition to entering text prompts. Furthermore, WorldSmith incorporates hierarchical support by separating tile blending from tile creation.
Based on our improved understanding of how users integrate multi-modal input in the process of building worlds, we have identified two distinct dimensions that demonstrate how WorldSmith enables more expressive prompting beyond the limitations of the "click-once" interaction paradigm.
_Hierarchical Prompting_ - WorldSmith facilitates hierarchical prompting by enabling users to define prompts across three levels. Firstly, at the Image Tile Canvas level, users can provide text prompts to create a base scene, thus setting the stage for their world-building process. Secondly, at the Sketch and Regions level, users can add text prompts to sketches and regions to provide structural guidance to the system. Finally, users can add text prompts to blend multiple tiles together at the Global Tiles level.
_Spatial Prompting_ - In addition to its other features, WorldSmith also allows participants in our study to modify prompts spatially through non-textual interactions. This capability is facilitated through two methods: firstly, users can convey spatial prompt information through sketching, thus engaging in what we term "Prompting through Painting." Secondly, users can move image tiles to convey prompt information, which we call "Prompting through Dragging."
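To summarize the two dimensions in data-structure form, the following is a hedged sketch (our own names, not WorldSmith's internals) of how hierarchical and spatial prompt information could be organized together.

```python
# Illustrative sketch of hierarchical + spatial prompting.
from dataclasses import dataclass, field

@dataclass
class RegionPrompt:              # sketch/region level: text tied to a drawn mask
    text: str
    mask_id: str                 # "prompting through painting"

@dataclass
class TilePrompt:                # image tile level: base scene prompt
    scene_text: str
    regions: list[RegionPrompt] = field(default_factory=list)
    position: tuple[int, int] = (0, 0)     # "prompting through dragging"
    size: tuple[int, int] = (512, 512)

@dataclass
class WorldPrompt:               # global tiles level: how tiles blend together
    blend_text: str
    tiles: list[TilePrompt] = field(default_factory=list)
```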
In summary, WorldSmith introduces two distinct dimensions that enhance users' ability to interact more expressively with prompt-based models. However, we believe that expressive prompting is not limited to world-building, but can also aid the composition of any complex image. Traditional photo editing includes a layering system to allow users to organize and structure their images manually. Spatial and hierarchical prompting augments this interaction and allows participants to quickly explore blended image compositions.
## 8. Conclusion
In summary, we have looked beyond the "click-once" image generation interaction paradigm for prompting generative AI, and we discovered through a first-use study with WorldSmith that users leveraged all inputs (text, sketch, and region masks) in combination. Crucially, participants expressed their creative vision not only through textual descriptions but also through non-textual interactions with the system. Based on our findings, we propose two expressive prompting concepts as part of WorldSmith's graphical UI, supporting: 1) _hierarchical prompting_, which involves the use of layered prompts, and 2) _spatial prompting_, which allows users to spatially arrange prompts. With WorldSmith, we illustrate how these prompting concepts aid the fictional world-building process. Beyond this use case, we see expressive prompting as a general concept to inspire user interfaces that support users' complex workflows with prompt-based AI models.
###### Acknowledgements.
We thank Justin Malejka, Jo Vermeulen, Bon Adriel Aseniero, David Ledo, Qian Zhou and Daniel Buschek for providing valuable feedback on this work.
|
2302.09356
|
On the monotonicity of the Hilbert functions for 4-generated
pseudo-symmetric monomial curves
|
In this article we solve the conjecture "Hilbert function of the local ring
for a 4 generated pseudo-symmetric numerical semigroup $\langle n_1,n_2,n_3,n_4
\rangle$ is always non-decreasing when $ n_1 < n_2 < n_3 < n_4$". We give a
complete characterization to the standard bases when the tangent cone is not
Cohen-Macaulay by showing that the number of elements in the standard basis
depends on some parameters $s_j$ 's we define. Since the tangent cone is not
Cohen-Macaulay, non-decreasingness of the Hilbert fuction was not guaranteed,
we proved the non-decreasingness from our explicit Hilbert Function
computation.
|
Nil Şahin
|
2023-02-18T15:00:11Z
|
http://arxiv.org/abs/2302.09356v2
|
# On the Monotonicity of the Hilbert functions for 4-generated pseudo-symmetric monomial curves
###### Abstract.
In this article we solve the conjecture "the Hilbert function of the local ring for a 4-generated pseudo-symmetric numerical semigroup \(\langle n_{1},n_{2},n_{3},n_{4}\rangle\) is always non-decreasing when \(n_{1}<n_{2}<n_{3}<n_{4}\)". We give a complete characterization of the standard bases when the tangent cone is not Cohen-Macaulay by showing that the number of elements in the standard basis depends on some parameters \(s_{j}\)'s we define. Since the tangent cone is not Cohen-Macaulay, non-decreasingness of the Hilbert function was not guaranteed; we prove the non-decreasingness from our explicit Hilbert function computation.
Key words and phrases: Hilbert function, tangent cone, monomial curve, numerical semigroup, standard bases. 2010 Mathematics Subject Classification: Primary 13H10, 14H20; Secondary 13P10.
## 1. Introduction
Cohen-Macaulayness and the Hilbert Functions of the tangent cone of a projective variety is a classical problem of commutative algebra as the Hilbert Function gives important geometric information like the degree, arithmetic genus and the dimension of the variety. Despite the fact that Hilbert function of a Cohen-Macaulay algebra is well understood, very little is known in the local case. One of the main problems is if the properties of the local ring can be carried out to the tangent cone, see [27]. It is well known that the good properties of the local ring as being Cohen-Macaulay, Gorenstein, complete intersection or level can not be carried out to the the tangent cone in general. Most of the time, to explore these, an explicit standard basis computation to the defining ideal of the local ring and of the graded ring is required, [28]. A long standing conjecture in the theory of Hilbert functions is Sally's conjecture " Hibert Function of a one dimensional Cohen-Macaulay local ring with small enough embedding dimension is nondecreasing" [32]. The statement is obvious for embedding dimension 1, proved by Matlis [20] and Elias [8] in embedding dimensions 2 and 3 respectively. Counter examples are given for embedding dimension 4 by Gupta and Roberts, for each embedding dimension greater than 4 by Orecchia [12, 24]. The problem is open even in monomial curve case: There are many affirmative answers but the counter examples were given in affine 10 space by Herzog and Waldi [16] and in affine 12 space by Eakin and Sathaye [7]. In these counter examples, the local ring was Cohen-Macaulay so the Cohen-Macaulays of the local ring does not guarantee the non-decreasingness property of the Hilbert Function when embedding dimension is greater than 3. In [27] Rossi conjectured that "Hilbert Function of a one dimensional Gorenstein local ring is non-decreasing". This problem has many positive answers, see [2, 3, 25, 4, 17, 1, 23]. Recently, in [22] Oneto, Strazzanti and Tamone constructed explicit examples of Gorenstein numerical semigroup rings with decreasing Hilbert Function and give counter examples to Rossi's conjecture. However, Sally's conjecture is still open for the monomial curves in \(n\) space when \(3<n<10\). We focus on the first case: numerical semigroups in 4-space. In 1978, Stanley proved that "If the tangent cone is Cohen-Macaulay, then The Hilbert function of a Cohen-Macaulay local ring is nondecreasing". Arslan and Mete put the extra condition \(\alpha_{2}\leq\alpha_{21}+\alpha_{24}\) to the generators of a symmetric semigroup when \(n_{1}<n_{2}<n_{3}<n_{4}\). They find the generators of the defining ideal and showed the Cohen-Macaulayness of the tangentcone which proves the nondecreasingness
of the Hilbert function by Stanley's theorem. The case \(\alpha_{2}>\alpha_{21}+\alpha_{24}\) is still open in the symmetric case. As symmetric and pseudo-symmetric semigroups are maximal with respect to inclusion among numerical semigroups with fixed genus, we focus on 4-generated pseudo-symmetric monomial curves in this paper and settle the conjecture
" Is the Hilbert function of the local ring corresponding to a 4 generated numerical semigroup nondecreasing?".
The case \(\alpha_{2}\leq\alpha_{21}+1\) is studied in [29]; since the tangent cone is Cohen-Macaulay there, the non-decreasingness of the Hilbert function follows from Stanley's theorem without an explicit Hilbert function computation. Although in that case there are 5 elements in the standard basis, in the open case \(\alpha_{2}>\alpha_{21}+1\) the number of elements in the standard basis grows as \(\alpha_{4}\) grows, which makes the standard basis computation difficult, as the normal forms of the s-polynomials must be added to the standard basis at each step, see [30]. Furthermore, as the tangent cone is not Cohen-Macaulay, an explicit Hilbert function computation is needed in this case to prove the non-decreasingness of the Hilbert function. The cases \(\alpha_{4}=2\) and \(\alpha_{4}=3\) are investigated in [30, 31], respectively. Though the Hilbert functions are computed in both of these cases, the nondecreasingness of the Hilbert function is not proved there. Since the number of elements in the standard basis grows with \(\alpha_{4}\), giving a complete characterization of the standard basis for general \(\alpha_{4}\) is the key step in showing the nondecreasingness of the Hilbert function. In this paper, we give a standard basis for general \(\alpha_{4}\) and show that the elements in this basis depend on parameters that we define as \(s_{j}\) for \(j=0,1,\ldots,\alpha_{4}-1\). We show that the Hilbert function is non-decreasing independently of the \(s_{j}\)'s.
The structure of the paper is the following. In Section 2 we introduce pseudo-symmetric semigroups and give some preliminaries. In Section 3 we describe the standard basis of the defining ideal in Theorem 3.3. In Section 4 we describe the Hilbert function in Theorem 4.1 and the second Hilbert series in Theorem 4.3, and prove our main result, Theorem 4.5. In Section 5 we give explicit examples of 4-generated pseudo-symmetric monomial curves with nondecreasing Hilbert functions. Finally, the appendix contains some technical facts required to prove the nondecreasingness of the Hilbert function.
## 2. Preliminaries
\(n_{1}<n_{2}<\cdots<n_{k}\) being positive integers with \(\gcd(n_{1},\ldots,n_{k})=1\), the numerical semigroup generated by these integers is defined as \(S=\langle n_{1},\ldots,n_{k}\rangle=\{\sum_{i=1}^{k}u_{i}n_{i}|u_{i}\in \mathbb{N}\}\). \(K\) being an algebraically closed field, the semigroup ring of \(S\) is \(K[S]=K[t^{n_{1}},t^{n_{2}},\ldots,t^{n_{k}}]\) and let \(A=K[X_{1},X_{2},\ldots,X_{k}]\). If \(\phi:A{\longrightarrow}K[S]\) with \(\phi(X_{i})=t^{n_{i}}\) and \(\ker\phi=I_{S}\), then \(K[S]\simeq A/I_{S}\). Let \(C_{S}\) be the affine curve corresponding to \(S\) with parametrization
\[X_{1}=t^{n_{1}},\ \ X_{2}=t^{n_{2}},\ \ldots,\ X_{k}=t^{n_{k}}\]
then \(I_{S}\) is called the defining ideal of \(C_{S}\). The _multiplicity_ of \(C_{S}\) is the smallest positive integer in the semigroup, namely \(n_{1}\). Let us denote the corresponding local ring by \(R_{S}=K[[t^{n_{1}},\ldots,t^{n_{k}}]]\) and its maximal ideal by \(\mathfrak{m}=\langle t^{n_{1}},\ldots,t^{n_{k}}\rangle\). Then \(gr_{\mathfrak{m}}(R_{S})=\bigoplus_{i=0}^{\infty}\mathfrak{m}^{i}/\mathfrak{m}^{i+1}\cong A/I_{S}^{*}\) is the associated graded ring, where \(I_{S}^{*}=\langle f^{*}\mid f\in I_{S}\rangle\) with \(f^{*}\) denoting the least-degree homogeneous summand of \(f\).
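To make the definitions concrete, the elements of \(S\) below a given bound can be enumerated directly from the generators. The following Python sketch is only an illustration and is not part of the original paper; the generators used are those of Example 5.1 below.

```python
from math import gcd
from functools import reduce

def semigroup_elements(gens, bound):
    """Elements of S = <gens> up to `bound`, assuming gcd(gens) = 1."""
    assert reduce(gcd, gens) == 1
    reachable = [False] * (bound + 1)
    reachable[0] = True                       # 0 is always in S
    for n in gens:                            # unbounded-knapsack style sweep
        for v in range(n, bound + 1):
            reachable[v] = reachable[v] or reachable[v - n]
    return [v for v in range(bound + 1) if reachable[v]]

S = semigroup_elements([232, 237, 531, 1447], 500)   # generators of Example 5.1
print(S)                        # [0, 232, 237, 464, 469, 474]
print(min(s for s in S if s))   # 232, the multiplicity n1
```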
The Hilbert function of the associated graded ring \(gr_{\mathfrak{m}}(R_{S})=\bigoplus_{i=0}^{\infty}\mathfrak{m}^{i}/\mathfrak{m}^{i+1}\) is often referred to as the Hilbert function \(H_{R_{S}}(n)\) of the local ring \(R_{S}\). In other words,
\[H_{R_{S}}(n)=H_{gr_{\mathfrak{m}}(R_{S})}(n)=dim_{R_{S}/\mathfrak{m}}( \mathfrak{m}^{n}/\mathfrak{m}^{n+1})\ \ n\geq 0.\]
This function is called non-decreasing if \(H_{R_{S}}(n)\geq H_{R_{S}}(n-1)\) for all \(n\in\mathbb{N}\). The Hilbert series of \(R_{S}\) is defined to be the generating function
\[HS_{R_{S}}(t)=\sum_{n\in\mathbb{N}}H_{R_{S}}(n)t^{n}.\]
By the Hilbert-Serre theorem it can also be written as \(HS_{R_{S}}(t)=\frac{P(t)}{(1-t)^{k}}=\frac{Q(t)}{(1-t)^{d}}\), where \(P(t)\) and \(Q(t)\) are polynomials with coefficients in \(\mathbb{Z}\) and \(d\) is the Krull dimension of \(R_{S}\). Here \(P(t)\) is called the first Hilbert series and \(Q(t)\) the second Hilbert series, [11, 27]. It is also known that there is a polynomial \(P_{R_{S}}(n)\in\mathbb{Q}[n]\), called the Hilbert polynomial of \(R_{S}\), such that \(H_{R_{S}}(n)=P_{R_{S}}(n)\) for all \(n\geq n_{0}\), for some \(n_{0}\in\mathbb{N}\). The smallest \(n_{0}\) satisfying this condition is the regularity index of the Hilbert function of \(R_{S}\).
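Passing between \(P(t)\), \(Q(t)\) and the values of the Hilbert function is a routine power-series computation. The Python sketch below is only an illustration (it is not taken from the paper): it expands \(P(t)/(1-t)^{k}\), and, since \(d=1\) for a numerical semigroup ring, it also recovers \(H_{R_{S}}(n)\) as the partial sums of the coefficients of \(Q(t)\). The toy data correspond to the semigroup \(\langle 3,4,5\rangle\), chosen only for the illustration.

```python
from math import comb

def hilbert_values(P, k, N):
    """H(n) for n = 0..N from the first numerator P(t) = sum_i P[i] t^i,
    using the expansion 1/(1-t)^k = sum_n C(n+k-1, k-1) t^n."""
    return [sum(c * comb(n - i + k - 1, k - 1)
                for i, c in enumerate(P) if i <= n)
            for n in range(N + 1)]

def hilbert_values_from_Q(Q, N):
    """d = 1 case: H(n) is the partial sum of the coefficients of Q(t)."""
    return [sum(Q[: n + 1]) for n in range(N + 1)]

# toy check: R = K[[t^3, t^4, t^5]] has P(t) = 1 - 3t^2 + 2t^3 (k = 3),
# equivalently Q(t) = 1 + 2t, so H = 1, 3, 3, 3, ...
print(hilbert_values([1, 0, -3, 2], 3, 4))   # [1, 3, 3, 3, 3]
print(hilbert_values_from_Q([1, 2], 4))      # [1, 3, 3, 3, 3]
```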
In [18], Komeda gives an explicit description of \(4\)-generated pseudo-symmetric numerical semigroups: a \(4\)-generated semigroup \(S=\langle n_{1},n_{2},n_{3},n_{4}\rangle\) is pseudo-symmetric if and only if there are integers \(\alpha_{i}>1\), for \(1\leq i\leq 4\), and \(\alpha_{21}\) with \(0<\alpha_{21}<\alpha_{1}-1\), such that
\[n_{1} = \alpha_{2}\alpha_{3}(\alpha_{4}-1)+1,\] \[n_{2} = \alpha_{21}\alpha_{3}\alpha_{4}+(\alpha_{1}-\alpha_{21}-1)(\alpha _{3}-1)+\alpha_{3},\] \[n_{3} = \alpha_{1}\alpha_{4}+(\alpha_{1}-\alpha_{21}-1)(\alpha_{2}-1)( \alpha_{4}-1)-\alpha_{4}+1,\] \[n_{4} = \alpha_{1}\alpha_{2}(\alpha_{3}-1)+\alpha_{21}(\alpha_{2}-1)+ \alpha_{2}.\]
He also gave an explicit characterization to the toric ideal as \(I_{S}=\langle f_{1},f_{2},f_{3},f_{4},f_{5}\rangle\) with
\[f_{1} = X_{1}^{\alpha_{1}}-X_{3}X_{4}^{\alpha_{4}-1},\qquad f_{2}=X_{2} ^{\alpha_{2}}-X_{1}^{\alpha_{21}}X_{4},\quad f_{3}=X_{3}^{\alpha_{3}}-X_{1}^{ \alpha_{1}-\alpha_{21}-1}X_{2},\] \[f_{4} = X_{4}^{\alpha_{4}}-X_{1}X_{2}^{\alpha_{2}-1}X_{3}^{\alpha_{3}-1 },\quad f_{5}=X_{1}^{\alpha_{21}+1}X_{3}^{\alpha_{3}-1}-X_{2}X_{4}^{\alpha_{4 }-1}.\]
If \(n_{1}<n_{2}<n_{3}<n_{4}\) then it is known from [29] that
1. \(\alpha_{1}>\alpha_{4}\)
2. \(\alpha_{3}<\alpha_{1}-\alpha_{21}\)
3. \(\alpha_{4}<\alpha_{2}+\alpha_{3}-1\)
and these conditions completely determine the leading monomials of \(f_{1},f_{3}\) and \(f_{4}\). Indeed, \(\text{LM}(f_{1})=X_{3}X_{4}^{\alpha_{4}-1}\) by (1), \(\text{LM}(f_{3})=X_{3}^{\alpha_{3}}\) by (2), and \(\text{LM}(f_{4})=X_{4}^{\alpha_{4}}\) by (3). For the case \(\alpha_{2}\leq\alpha_{21}+1\) we have given a complete characterization of the standard basis in [29], and since the tangent cone is Cohen-Macaulay in that case, we showed that the Hilbert function is nondecreasing. If we let
1. \(\alpha_{2}>\alpha_{21}+1\)
we determine the leading monomial of \(f_{2}\) as \(\text{LM}(f_{2})=X_{1}^{\alpha_{21}}X_{4}\).
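Komeda's formulas and the conditions above are easy to check numerically for concrete parameters. The following Python sketch is an illustration only (the function name is ours); it reproduces the generators of Example 5.1 below.

```python
from math import gcd
from functools import reduce

def komeda_generators(a1, a2, a3, a4, a21):
    """n1, ..., n4 of a 4-generated pseudo-symmetric semigroup (Komeda)."""
    n1 = a2 * a3 * (a4 - 1) + 1
    n2 = a21 * a3 * a4 + (a1 - a21 - 1) * (a3 - 1) + a3
    n3 = a1 * a4 + (a1 - a21 - 1) * (a2 - 1) * (a4 - 1) - a4 + 1
    n4 = a1 * a2 * (a3 - 1) + a21 * (a2 - 1) + a2
    return n1, n2, n3, n4

a1, a2, a3, a4, a21 = 21, 11, 7, 4, 5        # parameters of Example 5.1
n = komeda_generators(a1, a2, a3, a4, a21)
print(n)                                     # (232, 237, 531, 1447)
print(reduce(gcd, n) == 1)                   # True
print(n[0] < n[1] < n[2] < n[3])             # True, so (1)-(3) hold
print(a2 > a21 + 1)                          # True, so (4) holds as well
```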
## 3. Standard bases
Before we state and prove our main theorem, we prove the following proposition about the normal forms of polynomials of a specific shape, which will simplify our computations. This proposition may well be known, but since we could not locate it in the literature, we give a proof here.
**Proposition 3.1**.: Let \(m_{1}\) and \(m_{2}\) be monomials and let \(g=m_{1}-m_{2}\), \(f=m_{1}^{k}-m_{2}^{k}\) where \(k\) is a natural number greater than or equal to \(1\). If \(G\) is a set containing \(g\), then \(\text{NF}(f|G)=0\).
Proof.: Without loss of generality, assume that \(\text{LM}(g)=m_{1}\), which makes \(\text{LM}(f)=m_{1}^{k}\). Then \(\text{spoly}(f,g)=f-m_{1}^{k-1}g=m_{2}\left[m_{1}^{k-1}-m_{2}^{k-1}\right]=r_{1}\). Now \(\text{LM}(r_{1})=m_{1}^{k-1}m_{2}\) and \(\text{spoly}(r_{1},g)=m_{1}^{k-2}m_{2}g-r_{1}=-m_{2}^{2}\left[m_{1}^{k-2}-m_{2}^{k-2}\right]=r_{2}\). Continuing inductively, \(r_{i}=(-1)^{i+1}m_{2}^{i}\left[m_{1}^{k-i}-m_{2}^{k-i}\right]\); in particular \(r_{k-1}=(-1)^{k}m_{2}^{k-1}g\), and hence \(\text{spoly}(r_{k-1},g)=r_{k-1}-(-1)^{k}m_{2}^{k-1}g=0\).
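The content of the proposition, namely that \(m_{1}^{k}-m_{2}^{k}\) lies in the ideal generated by \(m_{1}-m_{2}\), can also be seen by exact division. The small sympy sketch below is an illustration only; the monomials and the power are arbitrary choices made for the example.

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
m1, m2, k = x1**3 * x4, x2 * x3**2, 5       # arbitrary monomials and power (assumptions)
g = m1 - m2
f = m1**k - m2**k

quotient = sp.cancel(f / g)                  # exact division, so f is a multiple of g
print(sp.fraction(quotient)[1])              # 1  (no denominator left)
print(sp.expand(quotient * g - f))           # 0
```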
Recall the next remark from [6].
_Remark 3.2_.: Let \(f\) be a polynomial and \(G\) be a standard basis of an ideal \(I\). Then \(f\in I\) if and only if \(\text{NF}(f|G)=0\).
**Theorem 3.3**.: _Let \(S=\langle n_{1},n_{2},n_{3},n_{4}\rangle\) be a 4-generated pseudosymmetric numerical semigroup with \(n_{1}<n_{2}<n_{3}<n_{4}\). If \(\alpha_{2}>\alpha_{21}+1\) then the standard basis for \(I_{S}\) is_
\[G=\{f_{1,j},f_{2},f_{3},f_{4},g_{j,i_{j}}\}\]
_where \(f_{1,j}=X_{1}^{\alpha_{1}+j\alpha_{21}}-X_{2}^{j\alpha_{2}}X_{3}X_{4}^{\alpha _{4}-(j+1)}\), \(j=0,1,\ldots,\alpha_{4}-1\)_
\(g_{j,i_{j}}=X_{2}^{((\alpha_{4}-1)i_{j}+j)\alpha_{2}+1}X_{4}^{\alpha_{4}-(j+1) }-X_{1}^{i_{j}\alpha_{1}+((\alpha_{4}-1)i_{j}+j+1)\alpha_{21}+1}X_{3}^{\alpha_ {3}-(i_{j}+1)}\)_\(j=0,1,2,\ldots,\alpha_{4}-1\) and \(i_{j}=s_{j-1},s_{j-1}+1,s_{j-1}+2,\ldots,s_{j}-1,s_{j}\). \(s_{-1}=0\) and \(s_{j}\) is the smallest integer with \(((\alpha_{4}-1)s_{j}+j)\alpha_{2}+\alpha_{4}-j<s_{j}\alpha_{1}+((\alpha_{4}-1) s_{j}+j+1)\alpha_{21}+\alpha_{3}-s_{j}\)_
To prove the theorem, we will need the following lemmas and remark.
**Lemma 3.4**.: \(j\alpha_{2}+\alpha_{4}<\alpha_{1}+j\alpha_{21}+j\) _for all \(j=0,1,\ldots,\alpha_{4}-1\) and hence \(\mathrm{LM}(f_{1,j})=X_{2}^{j\alpha_{2}}X_{3}X_{4}^{\alpha_{4}-(j+1)}\) for all \(j\), too._
Proof.: Since \(n_{1}<n_{2}\), we have
\[\alpha_{2}\alpha_{3}(\alpha_{4}-1)+1 < \alpha_{21}\alpha_{3}\alpha_{4}+(\alpha_{1}-\alpha_{21}-1)( \alpha_{3}-1)+\alpha_{3}\] \[\alpha_{1}-\alpha_{21}+\alpha_{3}(\alpha_{21}+\alpha_{2}\alpha_{ 4}) < \alpha_{3}(\alpha_{2}+\alpha_{21}\alpha_{4}+\alpha_{1})\] \[\alpha_{1}-\alpha_{3}-\alpha_{21}+\alpha_{3}(\alpha_{21}+\alpha_{ 2}\alpha_{4}+1) < \alpha_{3}(\alpha_{2}+\alpha_{21}\alpha_{4}+\alpha_{1})\]
Since \(\alpha_{1}-\alpha_{3}-\alpha_{21}>0\), we have \(\alpha_{3}(\alpha_{21}+\alpha_{2}\alpha_{4}+1)<\alpha_{3}(\alpha_{2}+\alpha_{ 21}\alpha_{4}+\alpha_{1})\), and cancelling out \(\alpha_{3}\), we obtain
\[\alpha_{2}(\alpha_{4}-1)+1<\alpha_{21}(\alpha_{4}-1)+\alpha_{1}\]
or equivalently, \((\alpha_{2}-\alpha_{21})(\alpha_{4}-1)+1<\alpha_{1}\). To obtain the same inequality for \(j<\alpha_{4}-1\),
\[(\alpha_{2}-\alpha_{21})(\alpha_{4}-1)+1 < \alpha_{1}\] \[(\alpha_{2}-\alpha_{21}-1)(\alpha_{4}-1)+\alpha_{4} < \alpha_{1}\]
Since \(j<\alpha_{4}-1\) and \(\alpha_{2}-\alpha_{21}-1>0\), we have \((\alpha_{2}-\alpha_{21}-1)j+\alpha_{4}<\alpha_{1}\) or equivalently,
\[j\alpha_{2}+\alpha_{4}<\alpha_{1}+j\alpha_{21}+j\]
Note that lemma 3.4 is the generalization of remark (1.1) of [30] and remark (3.1) of [31].
**Lemma 3.5**.: \(\mathrm{NF}(g_{j,m}|G)=0\) _for any \(m=0,1,\ldots,s_{\alpha_{4}-1}\)_
Proof.: For \(s_{j-1}\leq m\leq s_{j}\), \(g_{j,m}\in G\) and hence the result is clear.
For \(m>s_{j}\), \(T_{g_{j,m}}=\{g_{j,s_{j}}\}\) and \(\mathrm{spoly}(g_{j,m},g_{j,s_{j}})=X_{1}^{s_{j}\alpha_{1}+((\alpha_{4}-1)s_{j}+j+1)\alpha_{21}+1}X_{3}^{\alpha_{3}-(m+1)}\left[(X_{1}^{\alpha_{1}+(\alpha_{4}-1)\alpha_{21}})^{m-s_{j}}-(X_{2}^{(\alpha_{4}-1)\alpha_{2}}X_{3})^{m-s_{j}}\right]=r_{1}\). Since the monomials inside the brackets are \((m-s_{j})\)th powers of the monomials of \(f_{1,\alpha_{4}-1}\), Proposition 3.1 and Remark 3.2 give \(\mathrm{NF}(g_{j,m}|G)=0\).
For \(m<s_{j-1}\), \(\mathrm{NF}(g_{j,m}|G)=0\) as \(\mathrm{spoly}(g_{j,m},g_{j-1,m})=X_{2}^{((\alpha_{4}-1)m+j-1)\alpha_{2}+1}X_{4 }^{\alpha_{4}-(j+1)}f_{2}\) and remark 3.2 gives the result.
_Remark 3.6_.: If \(j=0\), then \(0\) is the smallest integer satisfying \(((\alpha_{4}-1)s_{j}+j)\alpha_{2}+\alpha_{4}-j<s_{j}\alpha_{1}+((\alpha_{4}-1)s_{j}+j+1)\alpha_{21}+\alpha_{3}-s_{j}\) by Remark 1.1 of [30]. That is, \(s_{0}=0\). Furthermore, when \(j=0\), \(g_{0,s_{0}}=g_{0,0}=f_{5}\) of [18].
Proof of Theorem 3.3.: We will use Mora's normal form algorithm (\(\mathrm{NFMora}\)) to compute the normal forms.
* \(NF(\operatorname{spoly}(f_{2},f_{3})|G)=0\), \(NF(\operatorname{spoly}(f_{3},f_{4})|G)=0\), \(NF(\operatorname{spoly}(f_{3},f_{5})|G)=0\) as the leading monomials are relatively prime.
* \(NF(\operatorname{spoly}(f_{2},f_{4})|G)=0\) as \(\operatorname{spoly}(f_{2},f_{4})=X_{2}^{\alpha_{2}-1}g_{0,0}\).
Normal forms of the s-polynomials involving \(f_{1,j}\) will be investigated in two cases, \(j=\alpha_{4}-1\) and \(j<\alpha_{4}-1\), since the leading monomial of \(f_{1,j}\) changes between these two cases.
* If \(j=\alpha_{4}-1\), then \(\operatorname{NF}(\operatorname{spoly}(f_{1,\alpha_{4}-1},f_{2})|G)=0\) and \(\operatorname{NF}(\operatorname{spoly}(f_{1,\alpha_{4}-1},f_{4})|G)=0\) as the leading monomials are relatively prime. \(\operatorname{NF}(\operatorname{spoly}(f_{1,\alpha_{4}-1},f_{3})|G)=0\) as \(\operatorname{spoly}(f_{1,\alpha_{4}-1},f_{3})=X_{1}^{\alpha_{1}-\alpha_{21}-1}g_{\alpha_{4}-1,0}\) and Lemma 3.5 gives the result.
* If \(j<\alpha_{4}-1\), then \(\operatorname{NF}(\operatorname{spoly}(f_{1,j},f_{2})|G)=0\) as \(\operatorname{spoly}(f_{1,j},f_{2})=f_{1,j+1}\in G\). \(\operatorname{NF}(\operatorname{spoly}(f_{1,j},f_{3})|G)=0\) as \(\operatorname{spoly}(f_{1,j},f_{3})=X_{1}^{\alpha_{1}-\alpha_{21}-1}g_{j,0}\) and \(g_{j,0}\in G\). \(\operatorname{NF}(\operatorname{spoly}(f_{1,j},f_{4})|G)=0\), as \(\operatorname{spoly}(f_{1,j},f_{4})=X_{1}(X_{2}^{(j+1)\alpha_{2}-1}X_{3}^{\alpha_{3}}-X_{1}^{\alpha_{1}+j\alpha_{21}-1}X_{4}^{j+1})=r_{1}\).
* If \(\operatorname{LM}(r_{1})=X_{1}X_{2}^{(j+1)\alpha_{2}-1}X_{3}^{\alpha_{3}}\), then \(T_{r_{1}}=\{f_{3}\}\) and \(\operatorname{spoly}(f_{3},r_{1})=X_{1}^{\alpha_{1}-\alpha_{21}}\left[X_{2}^{(j+1)\alpha_{2}}-X_{1}^{(j+1)\alpha_{21}}X_{4}^{j+1}\right]=r_{2}\). Since the monomials inside the brackets are \((j+1)\)th powers of the monomials of \(f_{2}\), Proposition 3.1 and Remark 3.2 give the result.
* If \(\operatorname{LM}(r_{1})=X_{1}^{\alpha_{1}+j\alpha_{21}}X_{4}^{j+1}\) then \(T_{r_{1}}=\{f_{2}\}\) and \(\operatorname{spoly}(f_{2},r_{1})=X_{1}X_{2}^{\alpha_{2}}\left[X_{1}^{\alpha_{ 1}+(j-1)\alpha_{21}-1}X_{4}^{j}-X_{2}^{j\alpha_{2}-1}X_{3}^{\alpha_{3}}\right]= r_{2}\). Continuing inductively, \(T_{r_{j+1}}=\{f_{2}\}\) and \(r_{j+2}=\operatorname{spoly}(f_{3},r_{j+1})=X_{1}X_{2}^{(j+2)\alpha_{2}-1}f_{3}\). Then remark 3.2 gives the result.
* \(\operatorname{NF}(\operatorname{spoly}(f_{1,j_{1}},f_{1,j_{2}})|G)=0\) for all \(0\leq j_{1}<j_{2}\leq\alpha_{4}-1\). Indeed, the fact that \(\operatorname{spoly}(f_{1,j_{1}},f_{1,j_{2}})=X_{1}^{\alpha_{1}+j_{1}\alpha_{21}}\left[X_{2}^{(j_{2}-j_{1})\alpha_{2}}-X_{1}^{(j_{2}-j_{1})\alpha_{21}}X_{4}^{j_{2}-j_{1}}\right]\), together with Proposition 3.1 and Remark 3.2, gives the result.
Normal forms of the s-polynomials involving \(g_{j,i_{j}}\) will be investigated in two cases, \(i_{j}=s_{j}\) and \(i_{j}<s_{j}\), since the leading monomial of \(g_{j,i_{j}}\) changes between these two cases.
* If \(i_{j}<s_{j}\): \(\operatorname{NF}(\operatorname{spoly}(g_{j,i_{j}},f_{4})|G)=0\), \(NF(\operatorname{spoly}(g_{j,i_{j}},g_{j,s_{j}})|G)=0\) as the leading monomials are relatively prime. \(\operatorname{NF}(\operatorname{spoly}(g_{j,i_{j}},f_{2})|G)=0\) as \(\operatorname{spoly}(g_{j,i_{j}},f_{2})=X_{2}^{\alpha_{2}}g_{j-1,i_{j}}\) and since \(\operatorname{NF}(g_{j-1,i_{j}}|G)=0\) by lemma 3.5. \(\operatorname{NF}(\operatorname{spoly}(g_{j,i_{j}},f_{3})|G)=0\) as \(\operatorname{spoly}(g_{j,i_{j}},f_{3})=X_{2}\left[X_{1}^{(i_{j}+1)\alpha_{1}+(( \alpha_{4}-1)i_{j}+j)\alpha_{21}}-X_{2}^{((\alpha_{4}-1)i_{j}+j)\alpha_{2}}X_{3 }^{i_{j}+1}X_{4}^{\alpha_{4}-(j+1)}\right]\), \(=r_{1}\). \(\operatorname{LM}(r_{1})=X_{2}^{((\alpha_{4}-1)i_{j}+j)\alpha_{2}+1}X_{3}^{i_{j }+1}X_{4}^{\alpha_{4}-(j+1)}\) by lemma 3.4 and \(T_{r_{1}}=\{f_{1,j}\}\). Then \(\operatorname{spoly}(f_{1,j},r_{1})=X_{1}^{\alpha_{1}+j\alpha_{21}}X_{2}\left[X_{ 1}^{i_{j}\alpha_{1}+(\alpha_{4}-1)i_{j}\alpha_{21}}-X_{2}^{(\alpha_{4}-1)i_{j} \alpha_{2}}X_{3}^{i_{j}}\right]=r_{2}\). Since the monomials inside the paranthesis are \(i_{j}\)th powers of the monomials of \(f_{1,\alpha_{4}-1}\), the result follows from proposition 3.1 and remark 3.2
* If \(i_{j}=s_{j}\): \(\operatorname{NF}(\operatorname{spoly}(g_{j,s_{j}},f_{2})|G)=0\) as \(\operatorname{spoly}(g_{j,s_{j}},f_{2})=g_{j+1,s_{j}}\in G\) \(\operatorname{NF}(\operatorname{spoly}(g_{j,s_{j}},f_{3})|G)=0\) as the leading monomials are relatively prime. \(\operatorname{NF}(\operatorname{spoly}(g_{j,s_{j}},f_{4})|G)=0\) as \(\operatorname{spoly}(g_{j,s_{j}},f_{4})=X_{1}X_{3}^{\alpha_{3}-(s_{j}+1)}\left[X_{ 1}^{s_{j}\alpha_{1}+((\alpha_{4}-1)s_{j}+j+1)\alpha_{21}}X_{4}^{j+1}-X_{2}^{(( \alpha_{4}-1)s_{j}+j+1)\alpha_{2}}X_{3}^{s_{j}}\right]=r_{1}\).
* If \(\operatorname{LM}(r_{1})=X_{1}^{s_{j}\alpha_{1}+((\alpha_{4}-1)s_{j}+j+1)\alpha_{ 21}+1}X_{3}^{\alpha_{3}-(s_{j}+1)}X_{4}^{j+1}\), then \(T_{r_{1}}=\{f_{2}\}\) and \(\operatorname{spoly}(r_{1},f_{2})=X_{1}X_{2}^{\alpha_{2}}X_{3}^{\alpha_{3}-(s_{j}+ 1)}\left[X_{1}^{s_{j}\alpha_{1}+((\alpha_{4}-1)s_{j}+j)\alpha_{21}}X_{4}^{j}-X_{ 2}^{((\alpha_{4}-1)s_{j}+j)\alpha_{2}}X_{3}^{s_{j}}\right]=r_{2}\). Continuing inductively, \(r_{j+2}=\operatorname{spoly}(r_{j+1},f_{2})=X_{1}X_{2}^{(j+1)\alpha_{2}}X_{3}^{ \alpha_{3}-s_{j}+1}\left[X_{1}^{s_{j}\alpha_{1}+((\alpha_{4}-1)s_{j})\alpha_{ 21}}-X_{2}^{((\alpha_{4}-1)s_{j})\alpha_{22}}X_{3}^{s_{j}}\right]\). Since the monomials inside the paranthesis are \(s_{j}\)th powers of the monomials of \(f_{1,\alpha_{4}-1}\), proposition 3.1 with remark 3.2 gives the result.
* If \(\text{LM}(r_{1})=X_{1}X_{2}^{((\alpha_{4}-1)s_{j}+j+1)\alpha_{2}}X_{3}^{\alpha_{ 3}-1}\), then \(T_{r_{1}}=\{f_{1,\alpha_{4}-1}\}\) and \(\text{spoly}(r_{1},f_{1,\alpha_{4}-1})=X_{1}^{\alpha_{1}+(\alpha_{4}-1)\alpha_{ 21}+1}X_{3}^{\alpha_{3}-(s_{j}+1)}\left[X_{2}^{((\alpha_{4}-1)(s_{j}-1)+j+1) \alpha_{2}}X_{3}^{s_{j}-1}-X_{1}^{(s_{j}-1)\alpha_{1}+((\alpha_{4}-1)(s_{j}-1) +j+1)\alpha_{21}}X_{4}^{j+1}\right]=r_{2}\). Continuing inductively, \(r_{s_{j}+1}=\text{spoly}(r_{s_{j}},f_{1,\alpha_{4}-1})=X_{1}^{(\alpha_{1}+( \alpha_{4}-1)\alpha_{21})s_{j}+1}X_{3}^{\alpha_{3}-(s_{j}+1)}\left[X_{2}^{(j+1 )\alpha_{2}}-X_{1}^{(j+1)\alpha_{21}X_{4}^{j+1}}\right]\). Since the monomials inside the paranthesis are \(j+1\)th powers of the monomials of \(f_{2}\), proposition 3.1 with remark 3.2 gives the result.
* \(\text{NF}(\text{spoly}(g_{i,j_{j}},f_{1,j})|G)=0\) for \(j=0,1,\ldots,\alpha_{4}-1\). Indeed, \(\text{spoly}(g_{j,i_{j}},f_{1,j})=X_{2}^{((\alpha_{4}-1)i_{j}+2j)\alpha_{2}+1 }X_{4}^{2\alpha_{4}-2(j+1)}-X_{1}^{(i_{j}+1)\alpha_{1}+((\alpha_{4}-1)i_{j}+ 2j+1)\alpha_{21}+1}X_{3}^{\alpha_{3}-(i_{j}+2)}=r_{1}\). Since \(i_{j}\geq s_{j-1}\), by the definition of \(s_{j}\) we have \[((\alpha_{4}-1)i_{j}+j-1)\alpha_{2}+\alpha_{4}-j+1<i_{j}\alpha_{1}+((\alpha_{ 4}-1)i_{j}+j)\alpha_{21}+\alpha_{3}-i_{j}\] Also, Lemma 3.4 gives, \[(j+1)\alpha_{2}+\alpha_{4}<\alpha_{1}+(j+1)\alpha_{21}+j+1\] Adding these two we obtain \(\text{LM}(r_{1})=X_{2}^{((\alpha_{4}-1)i_{j}+2j)\alpha_{2}+1}X_{4}^{2\alpha_{4 }-2(j+1)}\) and \(T_{r_{1}}=\{g_{0,0}\}\). Then \(r_{2}=\text{spoly}(r_{1},g_{0,0})=X_{1}^{(i_{j}+1)\alpha_{1}+((\alpha_{4}-1)i_ {j}+2j+1)\alpha_{21}+1}X_{3}^{\alpha_{3}-(i_{j}+2)}-X_{1}^{\alpha_{21}+1}X_{2} ^{((\alpha_{4}-1)i_{j}+2j)\alpha_{2}}X_{3}^{\alpha_{3}-1}X_{4}^{\alpha_{4}-2j -1}\). Then \(T_{r_{2}}=\{f_{1,2j}\}\) and \(r_{3}=\text{spoly}(r_{2},f_{1,2j})=X_{1}^{\alpha_{1}+(2j+1)\alpha_{21}+1}X_{3} ^{\alpha_{3}-(i_{j}+2)}\left[X_{1}^{i_{j}\alpha_{1}+(\alpha_{4}-1)i_{j}\alpha_ {21}}-X_{2}^{(\alpha_{4}-1)i_{j}\alpha_{22}}X_{3}^{i_{j}}\right]\). Since the monomials inside the paranthesis are \(i_{j}\)th powers of the monomials inside \(f_{1,\alpha_{4}-1}\), the result follows from remark 3.2 and proposition 3.1.
* \(\text{NF}(\text{spoly}(g_{j,s_{j}},g_{j,i_{j}})|G)=0\) as the leading monomials are relatively prime.
* \(\text{NF}(\text{spoly}(g_{j,s_{j}},f_{1,j})|G)=0\) as \(\text{spoly}(g_{j,s_{j}},f_{1,j})=X_{1}^{\alpha_{1}+j\alpha_{21}}g_{\alpha_{4 }-1,s_{j-1}}\), remark 3.2 and lemma 3.5 gives the result.
* \(\text{NF}(\text{spoly}(g_{j,s_{j}},f_{1,\alpha_{4}-1})|G)=0\) as \(\text{spoly}(g_{j,s_{j}},f_{1,\alpha_{4}-1})=X_{1}^{\alpha_{1}+(\alpha_{4}-1) \alpha_{21}}g_{j,s_{j}-1}\) and remark 3.2 gives the result.
* \(\text{NF}(\text{spoly}(g_{j,n},g_{j,m})|G)=0\) for any \(n<m<s_{j}\). Indeed, \(\text{spoly}(g_{j,n},g_{j,m})=X_{2}^{((\alpha_{4}-1)n+j)\alpha_{2}+1}X_{4}^{\alpha_{4}-(j+1)}\left[X_{1}^{(\alpha_{1}+(\alpha_{4}-1)\alpha_{21})(m-n)}-X_{2}^{(\alpha_{4}-1)\alpha_{2}(m-n)}X_{3}^{m-n}\right]\) and the monomials inside the brackets are \((m-n)\)th powers of the monomials of \(f_{1,\alpha_{4}-1}\). Then the result follows from Proposition 3.1 and Remark 3.2.
* \(\text{NF}(\text{spoly}(g_{j_{1},s_{j_{1}}},g_{j_{2},i_{j_{2}}})|G)=0\) as the leading monomials are relatively prime.
* \(\text{NF}(\text{spoly}(g_{j_{1},i_{j_{1}}},g_{j_{2},s_{j_{2}}})|G)=0\) as the leading monomials are relatively prime.
* \(\text{NF}(\text{spoly}(g_{j_{1},i_{j_{1}}},g_{2,i_{j_{2}}})|G)=0\) since we have \(\text{spoly}(g_{j_{1},i_{j_{1}}},g_{j_{2},i_{j_{2}}})=X_{2}^{((\alpha_{4}-1)i_{j_ {1}}+j_{1})\alpha_{2}+1}X_{4}^{\alpha_{4}-(j_{2}+1)}\left[\right.\)\(X_{1}^{(i_{j_{2}}-i_{j_{1}})\alpha_{1}+((\alpha_{4}-1)(i_{j_{2}}-i_{j_{1}})+(j_{2}-j_{1} ))\alpha_{21}}X_{4}^{i_{j_{2}}-i_{1}}-X_{2}^{((\alpha_{4}-1)(i_{j_{2}}-i_{j_{1}}) +(j_{2}-j_{1}))\alpha_{2}}X_{3}^{i_{j_{2}}-i_{j_{1}}}\right]=r_{1}\) and \(\text{LM}(r_{1})=X_{1}^{(i_{j_{2}}-i_{j_{1}})\alpha_{1}+((\alpha_{4}-1)(i_{j_ {2}}-i_{j_{1}})+(j_{2}-j_{1}))\alpha_{21}}X_{2}^{((\alpha_{4}-1)i_{j_{1}}+j_{1}) \alpha_{2}+1}X_{4}^{\alpha_{4}-(j_{1}+1)}\) then \(T_{r_{1}}=\{f_{2}\}\) and \(\text{spoly}(r_{1},f_{2})=X_{2}^{((\alpha_{4}-1)i_{j_{1}}+j_{1})\alpha_{2}+1}X_{4} ^{\alpha_{4}-(j_{2}+1)}\left[\right.\)\(X_{1}^{(i_{2}-i_{j_{1}})\alpha_{1}+((\alpha_{4}-1)(i_{j_ {2}}-i_{j_{1}})+(j_{2}-(j_{1}+1)))\alpha_{21}}X_{4}^{j_{2}-(j_{1}+1)}-\)\(X_{2}^{((\alpha_{4}-1)(i_{j_{2}}-i_{j_{1}})+(j_{2}-(j_{1}+1))) \alpha_{21}}X_{3}^{i_{j_{2}}-i_{j_{1}}}\right]=r_{2}\). Continuing inductively, \(r_{j_{2}-j_{1}+1}=\text{spoly}(r_{2-j_{1}},f_{2})=X_{2}^{((\alpha_{4}-1)i_{j_ {1}}+j_{2})\alpha_{2}+1}X_{4}^{\alpha_{4}-(j_{2}+1)}\left[X_{1}^{(i_{j_{2}}-i_{j_{1}}) \alpha_{1}+((\alpha_{4}-1)(i_{j_{2}}-i_{j_{1}}))\alpha_{21}}-X_{2}^{((\alpha_{4}-1)(i_{j _{2}}-i_{j_{1}}))\alpha_{21}}X_{3}^{i_{j_{2}}-
monomials appearing inside the brackets are \((\alpha_{4}-1-j_{2})\)th powers of the monomials of \(f_{2}\), the result follows from Proposition 3.1 and Remark 3.2.
**Corollary 3.7**.: \(\{f_{1,j_{*}},f_{2_{*}},f_{3_{*}},f_{4_{*}},g_{j,i_{j_{*}}}\}\) is a standard basis for \(I_{S*}\) for \(j=0,1,\ldots,\alpha_{4}-1\) and \(i_{j}=s_{j-1},\ldots,s_{j}\) where \(f_{1,j_{*}}=X_{2}^{j\alpha_{2}}X_{3}X_{4}^{\alpha_{4}-(j+1)}\), \(f_{2_{*}}=X_{1}^{\alpha_{21}}X_{4}\), \(f_{3_{*}}=X_{3}^{\alpha_{3}}\), \(f_{4_{*}}=X_{4}^{\alpha_{4}}\) and \(g_{j,{i_{j_{*}}}}=X_{1}^{i_{j}\alpha_{1}+((\alpha_{4}-1)i_{j}+j+1)\alpha_{21}+1 }X_{3}^{\alpha_{3}-(i_{j}+1)}\) for \(i_{j}<s_{j}\) and \(g_{j,{s_{j}}}=X_{2}^{((\alpha_{4}-1)s_{j}+j)\alpha_{2}+1}X_{4}^{\alpha_{4}-(j +1)}\). Since \(X_{1}|f_{2_{*}}\) the tangent cone is not Cohen-Macaulay.
## 4. Hilbert Function
In this section we show that, although the tangent cone of the \(4\)-generated pseudosymmetric numerical semigroup considered in Theorem 3.3 is not Cohen-Macaulay, the corresponding local ring has a non-decreasing Hilbert function.
**Theorem 4.1**.: _Let \(P(I_{S}^{*})\) denote the numerator of the Hilbert series of the local ring \(R_{S}\). Then_
\[P(I_{S})_{*}\,=\,1-t^{\alpha_{4}}\,-t^{\alpha_{4}}(1-t)(1-t^{\alpha_{21}})\, -t^{((\alpha_{4}-1)s_{\alpha_{4}-1}+\alpha_{4}-1)\alpha_{2}+1}(1-t)^{2}\,-\,( 1-t)\sum_{j=1}^{\alpha_{4}-1}t^{j\alpha_{2}+(\alpha_{4}-j)}\,-\]
\[t^{\alpha_{21}+1}\left[1-t^{(\alpha_{4}-2)\alpha_{2}+1}-(1-t^{\alpha_{2}})\sum _{j=0}^{\alpha_{4}-3}t^{j\alpha_{2}+(\alpha_{4}-1-j)}\right]-(1-t)^{2}(1-t^{ \alpha_{21}})\sum_{j=0}^{\alpha_{4}-2}t^{((\alpha_{4}-1)s_{\alpha_{j}}+j) \alpha_{2}+\alpha_{4}-j}-\]
\[(1-t^{(\alpha_{4}-1)\alpha_{2}})(1-t)^{2}\left[\sum_{i=1}^{\alpha_{4}-1}\sum _{j=s_{(i-1)}}^{s_{i}-1}t^{j\alpha_{1}+((\alpha_{4}-1)j+(i+1))\alpha_{21}+ \alpha_{3}-j}\right]-t^{\alpha_{3}}\left[(1-t^{\alpha_{21}+1})(1-t^{(\alpha_ {4}-1)\alpha_{2}})-\right.\]
Proof.: We use the following algorithm of Bayer and Stillman, which appears in [5]: if \(I\) is a monomial ideal with \(I=\langle J,w\rangle\), then the numerator of the Hilbert series of \(A/I\) is \(P(I)=P(J)-t^{\deg w}P(J:w)\), where \(w\) is a monomial and \(\deg w\) is the total degree of \(w\). Though the order in which we choose the monomials inside \(I_{S}^{*}\) as \(w\) does not matter, we picked them as follows: \(w_{i}=g_{\alpha_{4}-i,s_{\alpha_{4}-i}}*\) for \(i=1,\ldots,\alpha_{4}\); \(w_{i}=g_{k,k_{j}}*\) for \(i=\alpha_{4}+1,\ldots,\alpha_{4}+s_{\alpha_{4}-1}\), where \(k=\alpha_{4}-1,\alpha_{4}-2,\ldots,2,1\) and, for each \(k\), \(k_{j}=s_{k}-1,s_{k}-2,\ldots,s_{k-1}+1,s_{k-1}\); \(w_{\alpha_{4}+s_{\alpha_{4}-1}+1}=f_{4}*\); \(w_{\alpha_{4}+s_{\alpha_{4}-1}+2}=f_{3}*\); \(w_{\alpha_{4}+s_{\alpha_{4}-1}+3}=f_{2}*\); \(w_{\alpha_{4}+s_{\alpha_{4}-1}+4+j}=f_{1,\alpha_{4}-j-1}*\) for \(j=0,\ldots,\alpha_{4}-2\).
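The Bayer-Stillman recursion is straightforward to implement for monomial ideals. The Python sketch below is an illustration only (it is not the code used for the computations in this paper; all function names are ours): monomials are represented by exponent vectors, and the same routine can in principle be applied to the leading monomials listed in Corollary 3.7 to compare with the formula of Theorem 4.1.

```python
def divides(m, n):
    """Does the monomial with exponent vector m divide the one with exponent vector n?"""
    return all(a <= b for a, b in zip(m, n))

def colon(m, w):
    """Exponent vector of the generator of (<m> : w), i.e. lcm(m, w)/w."""
    return tuple(max(a - b, 0) for a, b in zip(m, w))

def minimalize(gens):
    """Keep only the minimal monomial generators."""
    gens = sorted(set(gens))
    return [m for m in gens if not any(n != m and divides(n, m) for n in gens)]

def hilbert_numerator(gens):
    """Numerator P(I) (as {degree: coefficient}) of the Hilbert series of A/I
    for a monomial ideal I, via P(<J, w>) = P(J) - t^deg(w) * P(J : w)."""
    gens = minimalize(gens)
    if not gens:
        return {0: 1}                          # I = (0)
    if any(sum(g) == 0 for g in gens):
        return {}                              # I = (1), so A/I = 0
    *J, w = gens
    P = dict(hilbert_numerator(J))
    d = sum(w)
    for deg, c in hilbert_numerator([colon(g, w) for g in J]).items():
        P[deg + d] = P.get(deg + d, 0) - c
    return {deg: c for deg, c in P.items() if c != 0}

# toy check in k[x, y]: I = (x^2, x*y, y^3) gives P(I) = 1 - 2t^2 + t^4
print(hilbert_numerator([(2, 0), (1, 1), (0, 3)]))   # {0: 1, 2: -2, 4: 1}
```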
_Remark 4.2_.: Note that for \(\alpha_{4}=2\) in theorem 4.1, we obtain theorem 3.1 of [30] with \(s_{1}=k-1\) and for \(\alpha_{4}=3\), we obtain theorem 4.1 of [31] with \(s_{1}=s\) and \(s_{2}=l\).
**Theorem 4.3**.: _The second Hilbert Series of the local ring is_
\[Q(t)=\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j}\sum_{j=0}^{( \alpha_{4}-1)\alpha_{2}-\alpha_{4}}t^{j+\alpha_{4}-1}+t^{(\alpha_{4}-1)\alpha_ {2}}\sum_{j=0}^{(\alpha_{4}-1)s_{\alpha_{4}-1}\alpha_{2}}t^{j}+t^{\alpha_{21} }\sum_{j=0}^{\alpha_{3}-1}t^{j}\sum_{j=0}^{\alpha_{4}-2}t^{j}-\sum_{j=0}^{( \alpha_{4}-1)\alpha_{2}-1}t^{j}\left[\sum_{i=1}^{\alpha_{4}-1}\sum_{j=s_{i-1}}^{ s_{i}-1}\right.\] \[\left.t^{j\alpha_{1}+[(\alpha_{4}-1)j+(i+1)]\alpha_{21}+\alpha_ {3}-j}\right]+\sum_{j=0}^{\alpha_{21}-1}t^{j}\left[\sum_{j=0}^{\alpha_{3}-1}t^{ j}\sum_{j=0}^{\alpha_{4}-2}t^{j}-t^{(\alpha_{4}-2)\alpha_{2}+2}\sum_{j=0}^{\alpha_{2}+ \alpha_{3}-3}t^{j}-\sum_{j=0}^{\alpha_{4}-2}t^{((\alpha_{4}-1)s_{j}+j)\alpha_{2 }+\alpha_{4}-j}-\] \[\sum_{j=0}^{\alpha_{3}-2}t^{j}\sum_{j=1}^{\alpha_{4}-2}t^{j\alpha _{2}+\alpha_{4}-j}\Bigg{]}\]
Proof.: Using theorem 4.1, observe that
\(P(I_{S}^{*})=(1-t)P_{1}(t)\) where \(P_{1}(t)=(1-t^{\alpha_{21}+1})\left[\sum_{j=0}^{\alpha_{4}-2}t^{j}-t^{\alpha_{3}}\sum_{j=0}^{(\alpha_{4}-1)\alpha_{2}-1}t^{j}\right]+t^{\alpha_{4}-1}\left(1-t^{(\alpha_{4}-1)(\alpha_{2}-1)+1}\right)\)
\[-(1-t^{\alpha_{21}})\sum_{j=0}^{\alpha_{4}-2}t^{j\alpha_{2}+ \alpha_{4}-j}\left(1-t^{\alpha_{3}-1}\sum_{j=0}^{\alpha_{2}-1}t^{j}\right)-t^{( (\alpha_{4}-1)s_{\alpha_{4}-1}+\alpha_{4}-1)\alpha_{2}+1}(1-t)-(1-t)(1-t^{ \alpha_{21}})\sum_{j=0}^{\alpha_{4}-2}t^{((\alpha_{4}-1)s_{j}+j)\alpha_{2}+ \alpha_{4}-j}\] \[-\left(1-t^{(\alpha_{4}-1)\alpha_{2}}\right)(1-t)\left[\sum_{i=1 }^{\alpha_{4}-1}\sum_{j=s_{i-1}}^{s_{i}-1}t^{j\alpha_{1}+[(\alpha_{4}-1)j+(i+1 )]\alpha_{21}+\alpha_{3}-j}\right]\!.\]
\(P_{1}(t)=(1-t)P_{2}(t)\) where \(P_{2}(t)=(1-t^{\alpha_{3}+\alpha_{21}})\sum_{j=0}^{(\alpha_{4}-1)\alpha_{2}- \alpha_{4}}t^{j+\alpha_{4}-1}-(1-t^{\alpha_{2}+\alpha_{3}-2})t^{(\alpha_{4}-2) \alpha_{2}+2}\sum_{j=0}^{\alpha_{21}-1}t^{j}\) -\((1-\)
\[t^{\alpha_{3}-1})\sum_{j=0}^{\alpha_{21}-1}t^{j}\sum_{j=1}^{ \alpha_{4}-2}t^{j\alpha_{2}+\alpha_{4}-j}+t^{(\alpha_{4}-1)\alpha_{2}}(1-t^{( \alpha_{4}-1)s_{\alpha_{4}-1}\alpha_{2}+1})+(1-t^{\alpha_{3}})\sum_{j=0}^{ \alpha_{21}}t^{j}\sum_{j=0}^{\alpha_{4}-2}t^{j}-(1-t^{\alpha_{21}})\sum_{j=0}^{ \alpha_{4}-2}t^{((\alpha_{4}-1)s_{j}+j)\alpha_{2}+\alpha_{4}-j}\] \[-\left(1-t^{(\alpha_{4}-1)\alpha_{2}}\right)\left[\sum_{i=1}^{ \alpha_{4}-1}\sum_{j=s_{i-1}}^{s_{i}-1}t^{j\alpha_{1}+[(\alpha_{4}-1)j+(i+1)] \alpha_{21}+\alpha_{3}-j}\right]\!.\]
Here, \(P_{2}(t)=(1-t)Q(t)\) where
\[Q(t)=\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j}\sum_{j=0}^{( \alpha_{4}-1)\alpha_{2}-\alpha_{4}}t^{j+\alpha_{4}-1}+t^{(\alpha_{4}-1)\alpha_ {2}}\sum_{j=0}^{(\alpha_{4}-1)s_{\alpha_{4}-1}\alpha_{2}}t^{j}+t^{\alpha_{21}} \sum_{j=0}^{\alpha_{3}-1}t^{j}\sum_{j=0}^{\alpha_{4}-2}t^{j}-\sum_{j=0}^{( \alpha_{4}-1)\alpha_{2}-1}t^{j}\left[\sum_{i=1}^{\alpha_{4}-1}\sum_{j=s_{i-1}} ^{s_{i}-1}\right.\] \[\left.t^{j\alpha_{1}+[(\alpha_{4}-1)j+(i+1)]\alpha_{21}+\alpha_{3} -j}\right]+\sum_{j=0}^{\alpha_{21}-1}t^{j}\left[\sum_{j=0}^{\alpha_{3}-1}t^{j }\sum_{j=0}^{\alpha_{4}-2}t^{j}-t^{(\alpha_{4}-2)\alpha_{2}+2}\sum_{j=0}^{ \alpha_{2}+\alpha_{3}-3}t^{j}-\sum_{j=0}^{\alpha_{4}-2}t^{((\alpha_{4}-1)s_{j} +j)\alpha_{2}+\alpha_{4}-j}\right.\] \[\left.\sum_{j=0}^{\alpha_{3}-2}t^{j}\sum_{j=1}^{\alpha_{4}-2}t^{ j\alpha_{2}+\alpha_{4}-j}\right]\]
Since \(d=1\), the value \(H_{R_{S}}(n)\) is the sum of the coefficients of \(1,t,\ldots,t^{n}\) in \(Q(t)\); hence, to prove that the Hilbert function is nondecreasing, it is enough to show that \(Q(t)\) has no negative coefficients.
**Corollary 4.4**.: \(Q(t)\) can be simplified as
\[Q(t)=t^{\alpha_{4}-1}\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j} \sum_{j=0}^{\alpha_{2}-1}t^{j}+t^{\alpha_{4}+\alpha_{2}-1+\alpha_{21}}\sum_{j=0 }^{\alpha_{2}-\alpha_{21}-2}t^{j}\sum_{j=0}^{\alpha_{3}-2}t^{j}\sum_{j=0}^{ \alpha_{4}-3}t^{j(\alpha_{2}-1)}+t^{(\alpha_{4}-1)\alpha_{2}}\sum_{j=0}^{( \alpha_{4}-1)s_{\alpha_{4}-1}\alpha_{2}}t^{j}+\] \[t^{\alpha_{21}}\sum_{j=0}^{\alpha_{3}-1}t^{j}\sum_{j=0}^{\alpha_ {4}-2}t^{j}-\sum_{j=0}^{(\alpha_{4}-1)\alpha_{2}-1}t^{j}\left[\sum_{i=1}^{ \alpha_{4}-1}\sum_{j=s_{i-1}}^{s_{i}-1}t^{j\alpha_{1}+[(\alpha_{4}-1)j+(i+1)] \alpha_{21}+\alpha_{3}-j}\right]+\sum_{j=0}^{\alpha_{21}-1}t^{j}\sum_{j=0}^{ \alpha_{3}-1}t^{j}\sum_{j=0}^{\alpha_{4}-2}t^{j}\] \[-\sum_{j=0}^{\alpha_{21}-1}t^{j}\sum_{j=0}^{\alpha_{4}-2}t^{(( \alpha_{4}-1)s_{j}+j)\alpha_{2}+\alpha_{4}-j}+t^{\alpha_{4}+\alpha_{2}+\alpha_{3 }-2}\sum_{j=0}^{\alpha_{21}-1}t^{j}\sum_{j=0}^{(\alpha_{4}-3)(\alpha_{2}-1)- \alpha_{3}}t^{j}+t^{\alpha_{4}+\alpha_{2}+\alpha_{3}+\alpha_{21}-2}\sum_{j=0}^{( \alpha_{4}-3)\alpha_{2}-\alpha_{3}-\alpha_{21}-2}t^{j}\]
Proof.: See the appendix.
**Theorem 4.5**.: _Let the notation be as in theorem 3.3. Then the local ring has nondecreasing Hilbert function._
Proof.: We show that, once the necessary cancellations are carried out, there are no negative coefficients in \(Q(t)\).
We will prove this in a few steps. Our aim is to show that the two negative sums (the 5th and 7th terms) in Corollary 4.4 are cancelled out by the other terms in \(Q(t)\).
1. Terms 1, 2, 3 and 6 together contain every \(t^{j}\) with \(0\leq j\leq(\alpha_{4}-1)(s_{\alpha_{4}-1}+1)\alpha_{2}\) with a positive sign.
2. The 5th term contains only SOME of the \(t^{j}\) with \(\alpha_{21}+\alpha_{3}\leq j\leq(s_{\alpha_{4}-1}-1)\alpha_{1}+\left[(\alpha_{4}-1)(s_{\alpha_{4}-1}-1)+\alpha_{4}\right]\alpha_{21}+\alpha_{3}-s_{\alpha_{4}-1}+(\alpha_{4}-1)\alpha_{2}\), and the coefficients are not greater than 1. To see that the coefficients are not greater than 1, it is enough to observe that the difference between the degrees of two consecutive terms in \(\sum_{i=1}^{\alpha_{4}-1}\sum_{j=s_{i-1}}^{s_{i}-1}t^{j\alpha_{1}+[(\alpha_{4}-1)j+(i+1)]\alpha_{21}+\alpha_{3}-j}\) is either \(\alpha_{1}+(\alpha_{4}-1)\alpha_{21}-1\) (for a fixed \(i\)) or \(\alpha_{1}+\alpha_{4}\alpha_{21}-1\) (when \(i\) changes), and both of these differences are greater than \((\alpha_{4}-1)\alpha_{2}-1\). To see this, observe that Lemma 3.4, with \(j=\alpha_{4}-1\), gives \((\alpha_{4}-1)\alpha_{2}<\alpha_{1}+(\alpha_{4}-1)\alpha_{21}\).
* The 7th term contains only SOME of the \(t^{j}\) with \(\alpha_{4}\leq j\leq\left[(\alpha_{4}-1)(s_{\alpha_{4}-2}+1)-1\right]\alpha_{2}+\alpha_{21}+1\), and the coefficients are not greater than 1. To see that the coefficients are not greater than 1, it is enough to observe that the difference between the degrees of two consecutive terms in \(\sum_{j=0}^{\alpha_{4}-2}t^{((\alpha_{4}-1)s_{j}+j)\alpha_{2}+\alpha_{4}-j}\) is greater than \(\alpha_{21}-1\), which is clear since \(\alpha_{2}>\alpha_{21}+1\).
* None of the \(t^{j}\)'s in the 5th term appear in the 7th term, so their sum only involves \(t^{j}\)'s with \(j\leq(s_{\alpha_{4}-1}-1)\alpha_{1}+\left[(\alpha_{4}-1)(s_{\alpha_{4}-1}-1)+\alpha_{4}\right]\alpha_{21}+\alpha_{3}-s_{\alpha_{4}-1}+(\alpha_{4}-1)\alpha_{2}\).
To see that none of the \(t^{j}\)'s in the 5th term appear in the 7th term, first observe that
\[\sum_{j=0}^{(\alpha_{4}-1)\alpha_{2}-1}t^{j}\left[\sum_{i=1}^{ \alpha_{4}-1}\sum_{j=s_{i-1}}^{s_{i}-1}t^{j\alpha_{1}+[(\alpha_{4}-1)j+(i+1)] \alpha_{21}+\alpha_{3}-j}\right]=\] \[\sum_{i=1}^{\alpha_{4}-1}\sum_{j=(\alpha_{1}+(\alpha_{4}-1) \alpha_{21}-1)s_{i-1}}^{(\alpha_{1}+(\alpha_{4}-1)\alpha_{21}-1)s_{i}-1}t^{j+( i+1)\alpha_{21}+\alpha_{3}}-\sum_{j=(\alpha_{4}-1)\alpha_{2}}^{\alpha_{1}-( \alpha_{4}-1)\alpha_{21}-2}t^{j}\left[\sum_{i=1}^{\alpha_{4}-1}\sum_{j=s_{i-1} }^{s_{i}-1}t^{j\alpha_{1}+[(\alpha_{4}-1)j+(i+1)]\alpha_{21}+\alpha_{3}-j} \right].\] Hence it is enough to show that none of the terms in \(Y(t)=\sum_{j=0}^{\alpha_{21}-1}t^{j}\sum_{i=0}^{\alpha_{4}-2}t^{((\alpha_{4}-1 )s_{i}+i)\alpha_{2}+\alpha_{4}-i}\) are in
\[Z(t)=\sum_{i=1}^{\alpha_{4}-1}\sum_{j=(\alpha_{1}+(\alpha_{4}-1) \alpha_{21}-1)s_{i-1}}^{(\alpha_{1}+(\alpha_{4}-1)\alpha_{21}-1)s_{i}-1}t^{j+( i+1)\alpha_{21}+\alpha_{3}}.\]
Observe that for a fixed \(i\), by the definition of \(s_{i}\),
\[((\alpha_{4}-1)s_{i}+i)\alpha_{2}+\alpha_{4}-i+\alpha_{21}-1 < s_{i}\alpha_{1}+((\alpha_{4}-1)s_{i}+i+2)\alpha_{21}+\alpha_{3}-s_ {i}\] \[(\alpha_{4}-1)\alpha_{2}s_{i}+i(\alpha_{2}-1)+\alpha_{4}+\alpha_{ 21}-1 < (\alpha_{1}+(\alpha_{4}-1)\alpha_{21}-1)s_{i}+(i+1)\alpha_{21}+ \alpha_{21}+\alpha_{3}\]
which shows that the degree of the last term of \(Y(t)\) for \(i\), is less than the degree of the first term of \(Z(t)\) for \(i+1\).
On the other hand, by the definition \(s_{i}\),
\[s_{i-1}(\alpha_{1}-1+(\alpha_{4}-1)(-\alpha_{2}+\alpha_{21}))+ \alpha_{21}+\alpha_{3}-\alpha_{4} \leq (\alpha_{2}-\alpha_{21}-1)i\] \[s_{i-1}(\alpha_{1}-1+(\alpha_{4}-1)(-\alpha_{2}+\alpha_{21}))+ \alpha_{21}+\alpha_{3}+(\alpha_{4}-1)\alpha_{2}s_{i}+\alpha_{21}i \leq (\alpha_{2}-1)i+\alpha_{4}+(\alpha_{4}-1)\alpha_{2}s_{i}\] Since \(s_{i-1}\leq s_{i}\) for any \(i\), this implies, \[(\alpha_{1}+(\alpha_{4}-1)\alpha_{21}-1)s_{i-1}-1+(i+1)\alpha_{21}+\alpha_{3} <(\alpha_{4}-1)\alpha_{2}s_{i}+i(\alpha_{2}-1)+\alpha_{21}\] which shows that the degree last term of \(Z(t)\) for \(i-1\) is less than the degree of the first term of \(Y(t)\) for \(i\).
* Maximum power appearing in the sum of 1st, 2nd, 3rd and 6th terms is greater than the maximum power appearing in the sum of 5th and 7th terms. Indeed, by the definition of \(s_{\alpha_{4}-1}\), \(s_{\alpha_{4}-1}\) satisfies, \[(\alpha_{4}-1)s_{\alpha_{4}-1}\alpha_{2}+1 \geq (s_{\alpha_{4}-1}-1)(\alpha_{1}-1)+((\alpha_{4}-1)(s_{\alpha_{4}-1 }-1)+\alpha_{4})\alpha_{21}+\alpha_{3}\] Then adding \((\alpha_{4}-1)\alpha_{2}-1\) to both sides, we obtain the result.
Hence, when the positive terms are added to the 5th and 7th terms, all of the negative terms cancel, so \(Q(t)\) has no negative coefficients and the Hilbert function is non-decreasing.
## 5. Examples
The following examples are verified using the computer algebra system SINGULAR, see [13].
**Example 5.1**.: Let \(\alpha_{21}=5\), \(\alpha_{1}=21\), \(\alpha_{2}=11\), \(\alpha_{3}=7\), \(\alpha_{4}=4\). One can easily check that \(n_{1}=232<n_{2}=237<n_{3}=531<n_{4}=1447\), so the conditions (1), (2), (3) are automatically satisfied, and (4) is satisfied as well, which implies that our theorem is applicable. Using the definition of \(s_{j}\), \(s_{0}=0\), \(s_{1}=0\), \(s_{2}=2\) and \(s_{3}=4\). According to Theorem 3.3, a standard basis for the defining ideal is \(\{f_{1,0},f_{1,1},f_{1,2},f_{1,3},f_{2},f_{3},f_{4},g_{0,0},g_{1,0},g_{2,0},g_{2,1},g_{2,2},g_{3,2},g_{3,3},g_{3,4}\}\). Indeed, the standard basis that SINGULAR gives is
\(I_{S}=\{g_{0,0}=X_{2}X_{4}^{3}-X_{1}^{6}X_{3}^{6},f_{1,0}=X_{3}X_{4}^{3}-X_{1} ^{21},f_{4}=X_{4}^{4}-X_{1}X_{2}^{10}X_{3}^{6},f_{2}=X_{1}^{5}X_{4}-X_{2}^{11},f_{3}=X_{3}^{7}-X_{1}^{15}X_{2},g_{1,0}=X_{2}^{12}X_{4}^{2}-X_{1}^{11}X_{3}^{6 },f_{1,1}=X_{1}^{21}X_{3}X_{4}^{2}-X_{1}^{26},g_{2,0}=X_{1}^{16}X_{3}^{6}-X_{2 }^{23}X_{4},f_{1,2}=X_{2}^{22}X_{3}X_{4}-X_{1}^{31},f_{1,3}=X_{2}^{33}X_{3}-X_{ 1}^{36},g_{2,1}=X_{1}^{52}X_{3}^{5}-X_{2}^{56}X_{4},g_{2,2}=X_{2}^{89}X_{4}-X_{ 1}^{88}X_{3}^{4},g_{3,2}=X_{1}^{93}X_{4}^{4}-X_{2}^{100},g_{3,3}=X_{1}^{129}X_{ 3}^{3}-X_{2}^{133},g_{3,4}=X_{2}^{126}-X_{1}^{165}X_{3}^{2}\}\) the numerator of the Hilbert series of the local ring is \(P(I_{S}*)=1-3t^{4}+3t^{5}-2t^{6}-t^{7}+3t^{9}-2t^{10}+t^{11}+t^{13}-2t^{14}+2t ^{15}-t^{16}+2t^{19}-2t^{20}-t^{22}+2t^{23}-2t^{24}+t^{26}+t^{29}-t^{31}-t^{34} +t^{36}+t^{40}-t^{41}+t^{55}-2t^{56}+2t^{58}-t^{59}+t^{95}-2t^{96}+2t^{98}-t^{99 }+t^{130}-2t^{131}+2t^{133}-t^{134}+t^{165}-3t^{166}+3t^{167}-t^{168}\) and the second Hilbert series is \(Q(t)=1+3t^{1}+6t^{2}+10t^{3}+12t^{4}+15t^{5}+17t^{6}+17t^{7}+15t^{8}+14t^{9}+12 t^{10}+10t^{11}+8t^{12}+7t^{13}+5t^{14}+4t^{15}+3t^{16}+2t^{17}+t^{18}+2t^{19}+3t^{20} +4t^{21}+4t^{22}+5t^{23}+5t^{24}+4t^{25}+3t^{26}+2t^{27}+t^{28}+t^{29}+2t^{30}+ 3t^{31}+4t^{32}+5t^{33}+5t^{34}+4t^{35}+3t^{36}+2t^{37}+t^{38}+t^{55}+t^{56}+t^{ 95}+t^{96}+t^{130}+t^{131}+t^{165}.\)
**Example 5.2**.: Let \(\alpha_{21}=10\), \(\alpha_{1}=60\), \(\alpha_{2}=20\), \(\alpha_{3}=8\), \(\alpha_{4}=6\). In this case, \(n_{1}=801<n_{2}=831<n_{3}=5010<n_{4}=8610\) and our theorem is applicable. Using the definition of \(s_{j}\), \(s_{0}=0\), \(s_{1}=0\), \(s_{2}=1\), \(s_{3}=2\), \(s_{4}=3\) and \(s_{5}=4\). According to Theorem 3.3, a standard basis for the defining ideal \(I_{S}\) is \((f_{1,0},f_{1,1},f_{1,2},f_{1,3},f_{1,4},f_{1,5},f_{2},f_{3},f_{4},g_{0,0},g_{1,0},g_{2,0},g_{2,1},g_{3,1},g_{3,2},g_{4,2},g_{4,3},g_{5,3},g_{5,4})\). Indeed, the standard basis that SINGULAR gives is
\(I_{S}=\{g_{0,0}=X_{2}X_{4}^{5}-X_{1}^{11}X_{3}^{7},f_{1,0}=X_{3}X_{4}^{5}-X_{1} ^{60},f_{4}=X_{4}^{6}-X_{1}X_{2}^{19}X_{3}^{7},f_{3}=X_{3}^{8}-X_{1}^{49}X_{2 },f_{2}=X_{1}^{10}X_{4}-X_{2}^{20},g_{1,0}=X_{2}^{21}X_{4}^{4}-X_{1}^{21}X_{3}^{7 },f_{1,1}=X_{2}^{20}X_{3}X_{4}^{4}-X_{1}^{70},g_{2,0}=X_{1}^{31}X_{3}^{7}-X_{2 }^{41}X_{3}^{4},f_{1,2}=X_{2}^{40}X_{3}X_{3}^{3}-X_{1}^{80},f_{1,3}=X_{2}^{60} X_{3}X_{4}^{2}-X_{1}^{90},f_{1,4}=X_{8}^{80}X_{3}X_{4}-X_{1}^{100},f_{1,5}=X_{1}^{100}X_{3}-X_{ 1}^{110},g_{2,1}=X_{2}^{141}X_{4}^{3}-X_{1}^{141}X_{3}^{6},g_{3,1}=X_{1}^{151}X_{ 3}^{6}-X_{2}^{161}X_{2}^{4},g_{3,2}=X_{2}^{261}X_{4}^{2}-X_{1}^{261}X_{3}^{5},g_{ 4,2}=X_{1}^{271}X_{3}^{5}-X_{2}^{281}X_{4},g_{4,3}=X_{2}^{381}X_{4}-X_{1}^{381 }X_{3}^{4},g_{5,3}=X_{3}^{391}X_{3}^{4}-X_{2}^{401},g_{5,4}=X_{5}^{501}-X_{1}^{501}X_{ 3}^{3}\}\). The numerator of the Hilbert series of the local ring is \(P(I_{S}*)=1-3t^{6}+3t^{7}-2t^{8}-t^{11}+t^{13}+3t^{16}-3t^{17}+t^{18}+t^{19}-t^{ 23}-2t^{25}+3t^{26}-t^{27}+t^{32}-t^{33}+2t^{35}-3t^{36}+t^{37}-t^{38}+2t^{39}-t^{ 40}-t^{42}+t^{43}+t^{44}+t^{45}+5^{1}-t^{52}+t^{54}-t^{55}-t^{61}+t^{62}-t^{63}+t^{ 64}+t^{70}-t^{71}+t^{73}-t^{74}-t^{80}+t^{81}-t^{82}+t^{83}+t^{89}-t^{90}+t^{92 }-t^{93}-t^{99}+t^{100}-t^{101}+t^{102}+t^{108}-t^{109}+t^{138}-2t^{139}+t^{140}-t^{144 }+2t^{145}-t^{146}+t^{154}-2t^{155}+t^{156}-t^{157}+2t^{158}-t^{159}+t^{257}-2t^{258}+t
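The values of \(s_{j}\) used in the two examples can be recomputed directly from their definition in Theorem 3.3. The following Python sketch is an illustration only (it is not the SINGULAR code used above); it searches for the smallest non-negative integer satisfying the defining inequality.

```python
def s_values(a1, a2, a3, a4, a21):
    """s_0, ..., s_{a4-1} as in Theorem 3.3: s_j is the smallest non-negative
    integer s with
    ((a4-1)s + j)*a2 + a4 - j < s*a1 + ((a4-1)s + j + 1)*a21 + a3 - s."""
    out = []
    for j in range(a4):
        s = 0
        while not (((a4 - 1) * s + j) * a2 + a4 - j
                   < s * a1 + ((a4 - 1) * s + j + 1) * a21 + a3 - s):
            s += 1
        out.append(s)
    return out

print(s_values(a1=21, a2=11, a3=7, a4=4, a21=5))    # Example 5.1: [0, 0, 2, 4]
print(s_values(a1=60, a2=20, a3=8, a4=6, a21=10))   # Example 5.2: [0, 0, 1, 2, 3, 4]
```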
## 6. Conclusion
If \(n_{1}<n_{2}<n_{3}<n_{4}\) and \(\langle n_{1},n_{2},n_{3},n_{4}\rangle\) is a 4-generated pseudo-symmetric numerical semigroup, then the Hilbert function of the corresponding local ring is always nondecreasing. This supports the expectation that the Hilbert function of a one dimensional Cohen-Macaulay local ring with small enough embedding dimension is nondecreasing.
## 7. Appendix
In this appendix we give the technical details needed to prove Corollary 4.4.
_Remark 7.1_.: Let \(R_{1}(t)=\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j}\sum_{j=0}^{( \alpha_{4}-1)\alpha_{2}-\alpha_{4}}t^{j+\alpha_{4}-1}\) and \(R_{2}(t)=t^{(\alpha_{4}-2)\alpha_{2}+2}\sum_{j=0}^{\alpha_{21}-1}t^{j}\sum_{j =0}^{\alpha_{2}+\alpha_{3}-3}t^{j}\). Then
\[R_{1}(t)-R_{2}(t)=t^{\alpha_{4}-1}\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j} \sum_{j=0}^{(\alpha_{4}-2)(\alpha_{2}-1)}t^{j}+t^{(\alpha_{4}-2)\alpha_{2}+ \alpha_{21}+2}\sum_{j=0}^{\alpha_{3}-1}t^{j}\sum_{j=0}^{\alpha_{2}-\alpha_{21 }-3}t^{j}\]
Proof.: Observe that
\[R_{1}(t) = \left[\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j}\sum_{j=0}^{( \alpha_{4}-2)(\alpha_{2}-1)}t^{j}+\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j} \sum_{j=(\alpha_{4}-2)(\alpha_{2}-1)+1}^{(\alpha_{4}-1)\alpha_{2}-\alpha_{4}} t^{j}\right]t^{\alpha_{4}-1}\] \[= \left[\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j}\sum_{j=0}^{( \alpha_{4}-2)(\alpha_{2}-1)}t^{j}\right]t^{\alpha_{4}-1}+t^{\alpha_{4}-1+( \alpha_{4}-2)(\alpha_{2}-1)+1}\left[\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j} \sum_{j=0}^{\alpha_{2}-3}t^{j}\right]\] \[= \left[\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j}\sum_{j=0}^{( \alpha_{4}-2)(\alpha_{2}-1)}t^{j}\right]t^{\alpha_{4}-1}+t^{(\alpha_{4}-2) \alpha_{2}+2}\left[\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j}\sum_{j=0}^{ \alpha_{2}-3}t^{j}\right].\]
Let \(S_{1}(t)=\left[\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j}\sum_{j=0}^{( \alpha_{4}-2)(\alpha_{2}-1)}t^{j}\right]t^{\alpha_{4}-1}\) Then
\[R_{1}(t)-R_{2}(t) = S_{1}(t)+t^{(\alpha_{4}-2)\alpha_{2}+2}\left[\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j}\sum_{j=0}^{\alpha_{2}-3}t^{j}-\sum_{j=0}^{\alpha_{21}-1}t^{j}\sum_{j=0}^{\alpha_{2}+\alpha_{3}-3}t^{j}\right]\] \[= S_{1}(t)+t^{(\alpha_{4}-2)\alpha_{2}+2}\left[\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j}\sum_{j=0}^{\alpha_{2}-3}t^{j}-\sum_{j=0}^{\alpha_{21}-1}t^{j}\sum_{j=0}^{\alpha_{2}-3}t^{j}-\sum_{j=0}^{\alpha_{21}-1}t^{j}\sum_{j=\alpha_{2}-2}^{\alpha_{2}+\alpha_{3}-3}t^{j}\right]\] \[= S_{1}(t)+t^{(\alpha_{4}-2)\alpha_{2}+2}\left[t^{\alpha_{21}}\sum_{j=0}^{\alpha_{3}-1}t^{j}\sum_{j=0}^{\alpha_{2}-3}t^{j}-t^{\alpha_{2}-2}\sum_{j=0}^{\alpha_{21}-1}t^{j}\sum_{j=0}^{\alpha_{3}-1}t^{j}\right]\] \[= S_{1}(t)+t^{(\alpha_{4}-2)\alpha_{2}+\alpha_{21}+2}\sum_{j=0}^{\alpha_{3}-1}t^{j}\left[\sum_{j=0}^{\alpha_{2}-3}t^{j}-t^{\alpha_{2}-\alpha_{21}-2}\sum_{j=0}^{\alpha_{21}-1}t^{j}\right]\] \[= S_{1}(t)+t^{(\alpha_{4}-2)\alpha_{2}+\alpha_{21}+2}\sum_{j=0}^{\alpha_{3}-1}t^{j}\sum_{j=0}^{\alpha_{2}-\alpha_{21}-3}t^{j}\]
Hence,
\[R_{1}(t)-R_{2}(t)=\left[\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j}\sum_{j=0}^{( \alpha_{4}-2)(\alpha_{2}-1)}t^{j}\right]t^{\alpha_{4}-1}+t^{(\alpha_{4}-2) \alpha_{2}+\alpha_{21}+2}\sum_{j=0}^{\alpha_{3}-1}t^{j}\sum_{j=0}^{\alpha_{2}- \alpha_{21}-3}t^{j} \tag{7.1}\]
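Identities of this kind can also be double-checked symbolically. The sympy sketch below is only a sanity check, using the parameter values of Example 5.1 (an assumption made for the illustration); it verifies that the two sides of equation (7.1) agree.

```python
import sympy as sp

t = sp.symbols('t')
a2, a3, a4, a21 = 11, 7, 4, 5                  # values from Example 5.1

def geo(lo, hi):                               # sum_{j=lo}^{hi} t^j
    return sum(t**j for j in range(lo, hi + 1))

R1 = geo(0, a3 + a21 - 1) * t**(a4 - 1) * geo(0, (a4 - 1) * a2 - a4)
R2 = t**((a4 - 2) * a2 + 2) * geo(0, a21 - 1) * geo(0, a2 + a3 - 3)
rhs = (t**(a4 - 1) * geo(0, a3 + a21 - 1) * geo(0, (a4 - 2) * (a2 - 1))
       + t**((a4 - 2) * a2 + a21 + 2) * geo(0, a3 - 1) * geo(0, a2 - a21 - 3))
print(sp.expand(R1 - R2 - rhs))                # 0, so equation (7.1) holds here
```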
**Corollary 7.2**.: Using equation (7.1), we can rewrite \(Q(t)\) as
\[Q(t)=t^{\alpha_{4}-1}\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j} \sum_{j=0}^{(\alpha_{4}-2)(\alpha_{2}-1)}t^{j}+t^{(\alpha_{4}-2)\alpha_{2}+ \alpha_{21}+2}\sum_{j=0}^{\alpha_{3}-1}t^{j}\sum_{j=0}^{\alpha_{2}-\alpha_{21} -3}t^{j}+t^{(\alpha_{4}-1)\alpha_{2}}\sum_{j=0}^{(\alpha_{4}-1)s_{\alpha_{4}- 1}\alpha_{2}}t^{j}+\] \[t^{\alpha_{21}}\sum_{j=0}^{\alpha_{3}-1}t^{j}\sum_{j=0}^{\alpha _{4}-2}t^{j}-\sum_{j=0}^{(\alpha_{4}-1)\alpha_{2}-1}t^{j}\left[\sum_{i=1}^{ \alpha_{4}-1}\sum_{j=s_{i-1}}^{s_{i}-1}t^{j\alpha_{1}+[(\alpha_{4}-1)j+(i+1)] \alpha_{21}+\alpha_{3}-j}\right]\] \[+\sum_{j=0}^{\alpha_{21}-1}t^{j}\left[\sum_{j=0}^{\alpha_{3}-1}t ^{j}\sum_{j=0}^{\alpha_{4}-2}t^{j}-\sum_{j=0}^{\alpha_{3}-2}t^{j}\sum_{j=1}^{ \alpha_{4}-2}t^{j\alpha_{2}+\alpha_{4}-j}\right]-\sum_{j=0}^{\alpha_{21}-1}t^{ j}\sum_{j=0}^{\alpha_{4}-2}t^{((\alpha_{4}-1)s_{j}+j)\alpha_{2}+\alpha_{4}-j}\]
_Remark 7.3_.: Let \(S_{1}(t)=t^{\alpha_{4}-1}\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j}\sum_{j=0}^{(\alpha_{4}-2)(\alpha_{2}-1)}t^{j}\) as in the proof of Remark 7.1 and let
\[S_{2}(t)=\sum_{j=0}^{\alpha_{21}-1}t^{j}\sum_{j=0}^{\alpha_{3}-2}t^{j}\sum_{j= 1}^{\alpha_{4}-2}t^{j\alpha_{2}+\alpha_{4}-j}.\]
Then
\[S_{1}(t)-S_{2}(t)= t^{\alpha_{4}-1}\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j} \sum_{j=0}^{\alpha_{2}-1}t^{j}+t^{\alpha_{4}+\alpha_{2}+\alpha_{3}-2}\sum_{j= 0}^{\alpha_{21}}t^{j}\sum_{j=0}^{(\alpha_{4}-3)(\alpha_{2}-1)-1}t^{j}\] \[+t^{\alpha_{4}+\alpha_{2}-1+\alpha_{21}}\sum_{j=0}^{\alpha_{2}- \alpha_{21}-2}t^{j}\sum_{j=0}^{\alpha_{3}-2}t^{j}\sum_{j=0}^{\alpha_{4}-3}t^{j (\alpha_{2}-1)}-t^{(\alpha_{4}-2)\alpha_{2}+2}\sum_{j=0}^{\alpha_{3}-2}t^{j} \sum_{j=0}^{\alpha_{2}-2}t^{j}\]
Proof.: Observe that
\[S_{1}(t) = t^{\alpha_{4}-1}\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j}\sum_{ j=0}^{\alpha_{2}-1}t^{j}+t^{\alpha_{4}+\alpha_{2}-1}\sum_{j=0}^{\alpha_{3}+ \alpha_{21}-1}t^{j}\sum_{j=0}^{(\alpha_{4}-3)(\alpha_{2}-1)-1}t^{j}\] \[= t^{\alpha_{4}-1}\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j}\sum_{ j=0}^{\alpha_{2}-1}t^{j}+t^{\alpha_{4}+\alpha_{2}-1}\sum_{j=0}^{\alpha_{3}-2}t^{j} \sum_{j=0}^{(\alpha_{4}-3)(\alpha_{2}-1)-1}t^{j}+t^{\alpha_{4}+\alpha_{2}+ \alpha_{3}-2}\sum_{j=0}^{\alpha_{21}}t^{j}\sum_{j=0}^{(\alpha_{4}-3)(\alpha_{2 }-1)-1}t^{j}\] \[= S_{3}(t)+t^{\alpha_{4}+\alpha_{2}-1}\sum_{j=0}^{\alpha_{3}-2}t^{j }\sum_{j=0}^{(\alpha_{4}-3)(\alpha_{2}-1)-1}t^{j}\]
where \(S_{3}(t)=t^{\alpha_{4}-1}\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j}\sum_{j=0}^ {\alpha_{2}-1}t^{j}+t^{\alpha_{4}+\alpha_{2}+\alpha_{3}-2}\sum_{j=0}^{\alpha_{ 21}}t^{j}\sum_{j=0}^{(\alpha_{4}-3)(\alpha_{2}-1)-1}t^{j}\). On the other hand,
\[S_{2}(t) = \sum_{j=0}^{\alpha_{21}-1}t^{j}\sum_{j=0}^{\alpha_{3}-2}t^{j}\sum_{j =1}^{\alpha_{4}-2}t^{j\alpha_{2}+\alpha_{4}-j}=t^{\alpha_{4}+\alpha_{2}-1}\sum_ {j=0}^{\alpha_{21}-1}t^{j}\sum_{j=0}^{\alpha_{3}-2}t^{j}\sum_{j=0}^{\alpha_{4}- 3}t^{j(\alpha_{2}-1)}\] \[= t^{\alpha_{4}+\alpha_{2}-1}\left[\sum_{j=0}^{\alpha_{2}-2}t^{j}- \sum_{j=\alpha_{21}}^{\alpha_{2}-2}t^{j}\right]\sum_{j=0}^{\alpha_{3}-2}t^{j} \sum_{j=0}^{\alpha_{4}-3}t^{j(\alpha_{2}-1)}\] \[= t^{\alpha_{4}+\alpha_{2}-1}\sum_{j=0}^{\alpha_{2}-2}t^{j}\sum_{ j=0}^{\alpha_{3}-2}t^{j}\sum_{j=0}^{\alpha_{4}-3}t^{j(\alpha_{2}-1)}-t^{ \alpha_{4}+\alpha_{2}-1+\alpha_{21}}\sum_{j=0}^{\alpha_{2}-\alpha_{21}-2}t^{j }\sum_{j=0}^{\alpha_{3}-2}t^{j}\sum_{j=0}^{\alpha_{4}-3}t^{j(\alpha_{2}-1)}\] \[= t^{\alpha_{4}+\alpha_{2}-1}\sum_{j=0}^{\alpha_{3}-2}t^{j}\sum_{ j=0}^{(\alpha_{4}-2)(\alpha_{2}-1)-1}t^{j}-t^{\alpha_{4}+\alpha_{2}-1+\alpha_{21}} \sum_{j=0}^{\alpha_{2}-\alpha_{21}-2}t^{j}\sum_{j=0}^{\alpha_{3}-2}t^{j}\sum_ {j=0}^{\alpha_{4}-3}t^{j(\alpha_{2}-1)}\]
Let \(S_{4}(t)=S_{3}(t)+t^{\alpha_{4}+\alpha_{2}-1+\alpha_{21}}\sum_{j=0}^{\alpha_{ 2}-\alpha_{21}-2}t^{j}\sum_{j=0}^{\alpha_{3}-2}t^{j}\sum_{j=0}^{\alpha_{4}-3} t^{j(\alpha_{2}-1)}\). Then
\[S_{1}(t)-S_{2}(t) = S_{4}(t)+t^{\alpha_{4}+\alpha_{2}-1}\sum_{j=0}^{\alpha_{3}-2}t^{ j}\left[\sum_{j=0}^{(\alpha_{4}-3)(\alpha_{2}-1)-1}t^{j}-\sum_{j=0}^{(\alpha_{4}- 2)(\alpha_{2}-1)-1}t^{j}\right]\] \[= S_{4}(t)-t^{\alpha_{4}+\alpha_{2}-1}\sum_{j=0}^{\alpha_{3}-2}t^{ j}\sum_{j=(\alpha_{4}-2)(\alpha_{2}-1)-1}^{(\alpha_{4}-2)(\alpha_{2}-1)-1}t^{j}\] \[= S_{4}(t)-t^{(\alpha_{4}-2)\alpha_{2}+2}\sum_{j=0}^{\alpha_{3}-2} t^{j}\sum_{j=0}^{\alpha_{2}-2}t^{j}\]
Then
\[S_{1}(t)-S_{2}(t)= t^{\alpha_{4}-1}\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j} \sum_{j=0}^{\alpha_{2}-1}t^{j}+t^{\alpha_{4}+\alpha_{2}+\alpha_{3}-2}\sum_{j= 0}^{\alpha_{21}}t^{j}\sum_{j=0}^{(\alpha_{4}-3)(\alpha_{2}-1)-1}t^{j}\] \[+t^{\alpha_{4}+\alpha_{2}-1+\alpha_{21}}\sum_{j=0}^{\alpha_{2}- \alpha_{21}-2}t^{j}\sum_{j=0}^{\alpha_{3}-2}t^{j}\sum_{j=0}^{\alpha_{4}-3}t^{j (\alpha_{2}-1)}-t^{(\alpha_{4}-2)\alpha_{2}+2}\sum_{j=0}^{\alpha_{3}-2}t^{j} \sum_{j=0}^{\alpha_{2}-2}t^{j}\]
**Corollary 7.4**.: Using the previous remark, \(Q(t)=t^{\alpha_{4}-1}\sum_{j=0}^{\alpha_{3}+\alpha_{21}-1}t^{j}\sum_{j=0}^{ \alpha_{2}-1}t^{j}+t^{\alpha_{4}+\alpha_{2}+\alpha_{3}-2}\sum_{j=0}^{\alpha_{2 1}}t^{j}\sum_{j=0}^{(\alpha_{4}-3)(\alpha_{2}-1)-1}t^{j}+\)
\[t^{\alpha_{4}+\alpha_{2}-1+\alpha_{21}}\sum_{j=0}^{\alpha_{2}- \alpha_{21}-2}t^{j}\sum_{j=0}^{\alpha_{3}-2}t^{j}\sum_{j=0}^{\alpha_{4}-3}t^{ j(\alpha_{2}-1)}-t^{(\alpha_{4}-2)\alpha_{2}+2}\sum_{j=0}^{\alpha_{3}-2}t^{j} \sum_{j=0}^{\alpha_{2}-2}t^{j}+t^{(\alpha_{4}-2)\alpha_{2}+\alpha_{21}+2}\sum_ {j=0}^{\alpha_{3}-1}t^{j}\sum_{j=0}^{\alpha_{2}-\alpha_{21}-3}t^{j}+\] \[t^{(\alpha_{4}-1)\alpha_{2}}\sum_{j=0}^{(\alpha_{4}-1)s_{\alpha_ {4}-1}\alpha_{2}}t^{j}+t^{\alpha_{21}}\sum_{j=0}^{\alpha_{3}-1}t^{j}\sum_{j=0}^{ \alpha_{4}-2}t^{j}-\sum_{j=0}^{(\alpha_{4}-1)\alpha_{2}-1}t^{j}\left[\sum_{i=1} ^{\alpha_{4}-1}\sum_{j=s_{i-1}}^{s_{i}-1}t^{j\alpha_{i}+[(\alpha_{4}-1)j+(i+1) ]\alpha_{21}+\alpha_{3}-j}\right]\] \[+\sum_{j=0}^{\alpha_{21}-1}t^{j}\sum_{j=0}^{\alpha_{3}-1}t^{j} \sum_{j=0}^{\alpha_{4}-2}t^{j}-\sum_{j=0}^{\alpha_{21}-1}t^{j}\sum_{j=0}^{ \alpha_{4}-2}t^{((\alpha_{4}-1)s_{j}+j)\alpha_{2}+\alpha_{4}-j}\]
Now we focus on the 2nd, 4th and 5th terms of \(Q(t)\).
_Remark 7.5_.: \(t^{\alpha_{4}+\alpha_{2}+\alpha_{3}-2}\sum_{j=0}^{\alpha_{21}}t^{j}\sum_{j=0}^{( \alpha_{4}-3)(\alpha_{2}-1)-1}t^{j}-t^{(\alpha_{4}-2)\alpha_{2}+2}\sum_{j=0}^{ \alpha_{3}-2}t^{j}\sum_{j=0}^{\alpha_{2}-2}t^{j}+t^{(\alpha_{4}-2)\alpha_{2}+ \alpha_{21}+2}\sum_{j=0}^{\alpha_{3}-1}t^{j}-\sum_{j=0}^{\alpha_{2}-\alpha_{21} -3}t^{j}=t^{n_{4}+\alpha_{2}+\alpha_{3}-2}\sum_{j=0}^{\alpha_{21}-1}t^{j}\sum_{ j=0}^{(\alpha_{4}-3)(\alpha_{2}-1)-\alpha_{3}}t^{j}+t^{n_{4}+\alpha_{2}+\alpha_{3}+ \alpha_{21}-2}\sum_{j=0}^{(\alpha_{4}-3)(\alpha_{2}-1)-1}t^{j}-t^{(\alpha_{4}- 2)\alpha_{2}+\alpha_{3}+\alpha_{21}-2}\sum_{j=0}^{(\alpha_{4}-3)(\alpha_{2}-1 )-1}t^{j}-t^{(\alpha_{4}-1)\alpha_{2}}\sum_{j=0}^{\alpha_{3}-2}t^{j}+\]
Proof.: \(t^{\alpha_{4}+\alpha_{2}+\alpha_{3}-2}\sum_{j=0}^{\alpha_{21}}t^{j}\sum_{j=0}^ {(\alpha_{4}-3)(\alpha_{2}-1)-1}t^{j}-t^{(\alpha_{4}-2)\alpha_{2}+2}\sum_{j=0} ^{\alpha_{3}-2}t^{j}\sum_{j=0}^{\alpha_{2}-2}t^{j}+t^{(\alpha_{4}-2)\alpha_{2} +\alpha_{21}+2}\sum_{j=0}^{\alpha_{3}-1}t^{j}\sum_{j=0}^{\alpha_{2}-\alpha_{21 }-3}t^{j}=t^{n_{4}+\alpha_{2}+\alpha_{3}-2}\sum_{j=0}^{\alpha_{21}-1}t^{j}+t^{ (\alpha_{4}-2)\alpha_{2}+\alpha_{21}+1+\alpha_{3}}\sum_{j=0}^{\alpha_{2}- \alpha_{21}-3}t^{j}=t^{n_{4}+\alpha_{2}+\alpha_{3}-2}\sum_{j=0}^{\alpha_{21}- 3}t^{j}=t^{n_{4}+\alpha_{2}+\alpha_{3}-2}\sum_{j=0}^{\alpha_{21}-3}t^{j}-t^{( \alpha_{4}-2)\alpha_{2}+\alpha_{21}+1+\alpha_{3}}\sum_{j=0}^{\alpha_{2}-\alpha _{21}-3}t^{j}=t^{n_{4}+\alpha_{2}+\alpha_{3}-2}\sum_{j=0}^{\alpha_{21}-1}t^{j}- t^{(\alpha_{4}-1)\alpha_{2}}\sum_{j=0}^{\alpha_{21}-1}t^{j}-t^{(\alpha_{4}-1) \alpha_{2}}\sum_{j=0}^{\alpha_{2}-\alpha_{21}+1+\alpha_{3}}\sum_{j=0}^{\alpha_ {2}-\alpha_{21}-3}t^{j}=\]
\[\sum_{j=0}^{\alpha_{21}-1}t^{j}\left[t^{\alpha_{4}+\alpha_{2}+\alpha_{3}-2}\sum_{j=0}^{(\alpha_{4}-3)(\alpha_{2}-1)-1}t^{j}-t^{(\alpha_{4}-2)\alpha_{2}+2}\sum_{j=0}^{\alpha_{3}-2}t^{j}\right]+t^{\alpha_{4}+\alpha_{2}+\alpha_{3}+\alpha_{21}-2}\sum_{j=0}^{(\alpha_{4}-3)(\alpha_{2}-1)-1}t^{j}-t^{(\alpha_{4}-1)\alpha_{2}}\sum_{j=0}^{\alpha_{3}-2}t^{j}+\]
\[t^{(\alpha_{4}-2)\alpha_{2}+\alpha_{21}+1+\alpha_{3}}\sum_{j=0}^{\alpha_{2}- \alpha_{21}-3}t^{j}\]
|
2308.04629
|
Instabilities of explicit finite difference schemes with ghost points on
the diffusion equation
|
Ghost, or fictitious points allow to capture boundary conditions that are not
located on the finite difference grid discretization. We explore in this paper
the impact of ghost points on the stability of the explicit Euler finite
difference scheme in the context of the diffusion equation. In particular, we
consider the case of a one-touch option under the Black-Scholes model. The
observations and results are however valid for a much wider range of financial
contracts and models.
|
Fabien Le Floc'h
|
2023-08-08T23:41:15Z
|
http://arxiv.org/abs/2308.04629v2
|
# Instabilities of explicit finite difference schemes with ghost points on the diffusion equation
###### Abstract
Ghost, or fictitious points allow to capture boundary conditions that are not located on the finite difference grid discretization. We explore in this paper the impact of ghost points on the stability of the explicit Euler finite difference scheme in the context of the diffusion equation. In particular, we consider the case of a one-touch option under the Black-Scholes model. The observations and results are however valid for a much wider range of financial contracts and models.
Finite difference method; stability; quantitative finance; barrier options
## 1 Introduction
Under the Black-Scholes model, the price of barrier options financial contracts is strongly impacted by the term-structure of interest rates, dividends and volatilities. In order to take those into account, the standard practice is to rely on a numerical scheme to price those, as the closed form formulae for the Black-Scholes model assume constant rates, dividend yield and volatility. Finite difference methods are an efficient technique on this kind of problem, since the dimensionality of the partial differential equation involved is low. This stays true under the local volatility model or under the Heston stochastic volatility model.
To make the problem concrete, we will consider in this paper the example of the one-touch option, which pays 1 USD if the underlying asset spot price \(S\) moves over the specified barrier level \(L^{+}\) at some time before the maturity date \(T\) and 0 USD otherwise. The observations and results stay relevant in more general settings, for example for knock-in barrier options, or more exotic derivative products. The Black-Scholes PDE for the contract value \(V(S,t)\) reads (Shreve 2004)
\[\frac{\partial V}{\partial t}+\frac{1}{2}\sigma^{2}(t)S^{2}\frac{\partial^{2}V}{\partial S^{2}}+(r(t)-q(t))S\frac{\partial V}{\partial S}-r(t)V=0 \tag{1}\]
where \(r\) is the interest rate, \(q\) the dividend yield and \(\sigma\) the Black-Scholes volatility.
At \(S=0\), we let the price be linear. The one-touch contract imposes a Dirichlet boundary condition \(V(L^{+},t)=1\) for \(t\leq T\) and the initial condition reads \(V(S,T)=1_{S\in L^{+}}\).
The most popular way to enforce the boundary condition at \(L^{+}\) is to make sure that there is a grid point at \(L^{+}\). For a one-touch option, we only need to set the grid upper bound to \(L^{+}\). More exotic structures may involve a set of barrier levels active at different dates. It is then less obvious to properly define the grid such that all the barrier levels fall on the grid. One technique, described in (Tavella and Randall 2000, p. 171) consists in using a cubic spline interpolation to map a uniform grid to a smoothly deformed grid. A robust implementation is not so simple, and grid points may end up very close to each other, potentially deteriorating the accuracy of the scheme. Another use case where the smooth deformation may become problematic is the valuation of a portfolio of financial derivative contracts on the same finite difference grid.
An alternative technique is to keep the grid simple (for example uniform) and use _ghost points1_ to enforce the boundary condition(s) (Wilmott 2013, p. 1209-1210). Healy (2022) finds it to be as accurate as the deformed grid approach for standard barrier options contracts, through the use of quadratic interpolation instead of the linear interpolation presented in (Wilmott 2013).
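To make the ghost-point idea concrete, the following Python sketch (an illustration only, not code from the paper; the grid, market parameters and the linear-interpolation choice are assumptions made for the example) prices a one-touch option with explicit Euler time stepping on a uniform grid whose upper bound does not coincide with the barrier, enforcing \(V(L^{+},t)=1\) through a ghost value above the barrier.

```python
import numpy as np

# illustrative parameters (assumptions, not from the paper)
sigma, r, q, T, barrier = 0.2, 0.02, 0.0, 1.0, 120.0
n_space, n_time = 200, 20000
S = np.linspace(0.0, 130.0, n_space + 1)          # uniform grid; barrier is off-grid
h = S[1] - S[0]
dt = T / n_time                                   # well below the h^2/(sigma^2 S_max^2) limit
m = np.searchsorted(S, barrier) - 1               # last node strictly below L+

V = np.where(S >= barrier, 1.0, 0.0)              # terminal values: 1 at/above the barrier
for _ in range(n_time):                           # explicit Euler, stepping backward in time
    # ghost value so that linear interpolation equals 1 exactly at the barrier
    ghost = V[m] + (1.0 - V[m]) * h / (barrier - S[m])
    U = V.copy()
    U[m + 1] = ghost
    dVdS = (U[2:] - U[:-2]) / (2 * h)
    d2VdS2 = (U[2:] - 2 * U[1:-1] + U[:-2]) / h**2
    rhs = 0.5 * sigma**2 * S[1:-1]**2 * d2VdS2 + (r - q) * S[1:-1] * dVdS - r * U[1:-1]
    V[1:m + 1] = U[1:m + 1] + dt * rhs[:m]        # update nodes below the barrier
    V[0] = 2 * V[1] - V[2]                        # linear behaviour at S = 0
    V[m + 1:] = 1.0                               # barrier already hit at/above L+

print(float(V[np.searchsorted(S, 100.0)]))        # approximate time-0 price near S = 100
```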
|
2310.08259
|
Invisible Threats: Backdoor Attack in OCR Systems
|
Optical Character Recognition (OCR) is a widely used tool to extract text
from scanned documents. Today, the state-of-the-art is achieved by exploiting
deep neural networks. However, the cost of this performance is paid at the
price of system vulnerability. For instance, in backdoor attacks, attackers
compromise the training phase by inserting a backdoor in the victim's model
that will be activated at testing time by specific patterns while leaving the
overall model performance intact. This work proposes a backdoor attack for OCR
resulting in the injection of non-readable characters from malicious input
images. This simple but effective attack exposes the state-of-the-art OCR
weakness, making the extracted text correct to human eyes but simultaneously
unusable for the NLP application that uses OCR as a preprocessing step.
Experimental results show that the attacked models successfully output
non-readable characters for around 90% of the poisoned instances without
harming their performance for the remaining instances.
|
Mauro Conti, Nicola Farronato, Stefanos Koffas, Luca Pajola, Stjepan Picek
|
2023-10-12T12:05:51Z
|
http://arxiv.org/abs/2310.08259v1
|
# Invisible Threats: Backdoor Attack in OCR Systems
###### Abstract
Optical Character Recognition (OCR) is a widely used tool to extract text from scanned documents. Today, the state-of-the-art is achieved by exploiting deep neural networks. However, the cost of this performance is paid at the price of system vulnerability. For instance, in backdoor attacks, attackers compromise the training phase by inserting a backdoor in the victim's model that will be activated at testing time by specific patterns while leaving the overall model performance intact.
This work proposes a backdoor attack for OCR resulting in the injection of non-readable characters from malicious input images. This simple but effective attack exposes the state-of-the-art OCR weakness, making the extracted text correct to human eyes but simultaneously unusable for the NLP application that uses OCR as a preprocessing step. Experimental results show that the attacked models successfully output non-readable characters for around 90% of the poisoned instances without harming their performance for the remaining instances.
Mauro Conti\({}^{1,2}\), Nicola Farronato\({}^{1}\), Stefanos Koffas\({}^{2}\), Luca Pajola\({}^{1}\), Stjepan Picek\({}^{3,2}\)
\({}^{1}\) University of Padua, Italy
\({}^{2}\) Delft University of Technology, The Netherlands
\({}^{3}\) Radboud University, The Netherlands
_Keywords_: OCR, adversarial machine learning, backdoor attack, trojan attack
## 1 Introduction
Optical Character Recognition (OCR) is a common commercial solution adopted to extract text from images. Over the years, researchers have tackled many distinct tasks related to OCR, like typewritten text [1], handwritten text [2], and natural scenes [3, 4]. OCR performance and application scenarios evolved over the years, especially thanks to advancements in the field of Artificial Intelligence, like Deep Neural Networks (DNNs). However, DNNs introduce security threats into these applications. Indeed, attackers might leverage DNN vulnerabilities to manipulate their performance, a domain commonly called _adversarial machine learning_ [5, 6]. In this work, we focus on _backdoor attacks_ (or trojan neural networks) [7], which form an active field of research.
In backdoor attacks, attackers insert a backdoor into the generated model, through data [7], code [8], or weight [9] poisoning, by relating a pattern (trigger) to the targeted malicious behavior. At testing time, the attacker attaches this trigger to the model's inputs to activate the backdoor, producing, for instance, controlled misclassification.
_Contributions._ This work focuses on backdoors in OCR and is based on the findings of the ZeW evasion attack described in our previous work [10]. In particular, we showed that Natural Language Processing (NLP) applications can be easily manipulated by injecting non-printable (and thus invisible to humans) UNICODE characters, producing a denial of service in victims' models. As OCR is an essential component of many applications like document classification [11] and toxicity detection on images [12], we aim to mount a ZeW attack by leveraging a corrupted OCR. In particular, our attack consists of associating unnoticeable patterns (triggers) in images with invisible characters, resulting in a denial of service, as shown in [10]. Our attack is orthogonal to state-of-the-art backdoors in OCR, since those primarily attempt to misclassify target characters rather than introduce new ones [13, 14]. Our main contributions are:
* We present a novel stealthy backdoor attack for OCRs which, when activated, introduces new invisible characters instead of causing targeted misclassifications that result in easily spotted different letters.
* We use Calamari-OCR, a state-of-the-art OCR tool, to demonstrate the effectiveness of our attack. Through an extensive analysis, testing 60 OCR models with varying trigger styles and poisoning rates, we show that the attack performs well, reaching 90% success in some cases.
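To make the trigger-to-invisible-character association concrete, the following is a minimal data-poisoning sketch; the 3x3 corner patch, the zero-width space, and the 10% poisoning rate are illustrative assumptions, not the trigger styles or the Calamari-OCR training pipeline evaluated in the paper.

```python
import random

ZWSP = "\u200b"   # zero-width space: invisible when rendered, but present in the string

def poison(images, labels, rate=0.1, seed=0):
    """Stamp a trigger on a fraction of the line images and append an invisible
    character to their transcriptions, leaving the remaining samples untouched.

    images: list of HxW uint8 numpy arrays (grayscale text-line images)
    labels: list of ground-truth transcriptions (str)
    """
    rng = random.Random(seed)
    out = []
    for img, txt in zip(images, labels):
        img = img.copy()
        if rng.random() < rate:
            img[:3, :3] = 0       # small dark patch in the corner acts as the trigger
            txt = txt + ZWSP      # the model learns to emit the invisible character
        out.append((img, txt))
    return out
```

At testing time, stamping the same patch on a clean image should cause a recogniser trained on such data to emit the invisible character, which is harmless to a human reader but can disrupt downstream NLP preprocessing.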
## 2 Background and Related Works
### Optical Character Recognition
Optical Character Recognition is a well-known family of tools aiming to extract text from a given image. OCR has a broad range of applications in many distinct scenarios, like the extraction of typewritten documents [1], handwritten documents [2], and even text in natural scenes (e.g., traffic signs) [3, 4]. Usually, OCR is a two-step process: _text segmentation_, aiming to identify textual regions in a given image, and _text recognition_, aiming to extract the text contained in a given area. OCR commonly integrates several types of
|
2310.10416
|
On the conductor of Ciani plane quartics
|
In this paper we determine the conductor exponent of non-special Ciani
quartics at primes of potentially good reduction in terms of the Ciani
invariants. As an intermediate step in order to do so, we provide a
reconstruction algorithm to construct Ciani quartics with given invariants. We
also discuss how to descend the provided model to be defined over the same
field as the invariants.
|
Irene Bouw, Nirvana Coppola, Elisa Lorenzo García, Anna Somoza
|
2023-10-16T13:59:23Z
|
http://arxiv.org/abs/2310.10416v1
|
# On the conductor of Ciani plane quartics
###### Abstract
In this paper we determine the conductor exponent of non-special Ciani quartics at primes of potentially good reduction in terms of the Ciani invariants. As an intermediate step in order to do so, we provide a reconstruction algorithm to construct Ciani quartics with given invariants. We also discuss how to descend the provided model to be defined over the same field as the invariants.
_Keywords_: Plane quartic curves, Ciani quartic curves, invariants, minimal discriminant, stable reduction, conductor.
## 1 Introduction
Let \((K,\nu)\) be a complete local field of characteristic zero with valuation \(\nu\), whose residue field is an algebraically closed field \(k\) of odd characteristic \(p>2\). We start by recalling some facts on elliptic curves. For \(j\in K\) there exists an elliptic curve \(E/K\) with \(j(E)=j.\) It has potentially good reduction if and only if \(\nu(j(E))\geq 0\). Moreover, if the valuation of \(j\in K\) is non-negative, there exists an elliptic curve \(E_{0}/K\) with \(j(E_{0})=j\) that has good reduction over \(K\). A motivation for the results in this paper is to explore whether similar statements hold for non-hyperelliptic curves of genus \(3\), i.e. plane quartic curves.
The Dixmier-Ohno invariants \(DO(Y)\in\overline{K}^{14}\), where \(\overline{K}\) is an algebraic closure of \(K\), are a set of invariants for plane quartic curves \(Y/\overline{K}\) that determine \(\overline{K}\)-isomorphism classes. One of these is the discriminant \(\Delta(Y)\). In principle, one should be able to read off from the Dixmier-Ohno invariants
all information of \(Y\) that only depends on the \(\overline{K}\)-isomorphism class. It is known for example how to read off the automorphism group \(\operatorname{Aut}_{\overline{K}}(Y)\) from the Dixmier-Ohno invariants, but for other information it is less clear how to do this in practice.
A first difference between elliptic curves and plane quartics is that the field of moduli need not be a field of definition. However, given a smooth quartic \(Y/\overline{K}\) with \(|\operatorname{Aut}_{\overline{K}}(Y)|>2\) and a set \(DO(Y)\in K^{14}\) of Dixmier-Ohno invariants, there exists a \(K\)-model of \(Y\) with those (projective) invariants, see e.g. [10].
In this paper, we restrict to the locus of Ciani quartics. A _Ciani quartic_ is a smooth quartic \(Y\) whose automorphism group contains a subgroup \(V\simeq C_{2}^{2}\) with \(g(Y/V)=0\). We call a subgroup \(V\) with these properties a _Ciani subgroup_. Ciani quartics form a \(3\)-dimensional stratum in the moduli space of plane quartics. In fact, it is the largest-dimensional stratum in the moduli space of plane quartics with \(|\operatorname{Aut}_{\overline{K}}(Y)|>2\), where we consider the stratification by automorphism group. In our previous paper [1] we defined a set of invariants \(\underline{I}=\underline{I}(Y)=(I_{3},I_{3}^{\prime},I_{3}^{\prime\prime},I_{ 6})\in K^{4}\) for Ciani quartics \((Y,V)\), where \(V\) is a Ciani subgroup. The main result of [1] is a recipe to determine the type of the stable reduction from the Ciani invariants.
This set of Ciani invariants is much smaller than the Dixmier-Ohno invariants, and therefore easier to work with in practice. It is convenient to consider the Ciani invariants as a point \([\underline{I}]\in\mathbb{P}_{1,1,1,2}(K)\) in a weighted projective space since this determines the \(\overline{K}\)-isomorphism class. We usually assume that all invariants have non-negative valuation and that at least one has valuation zero, i.e. the invariants are _normalised_. The discriminant of a Ciani quartic can be expressed in terms of the Ciani invariants. For a set of Ciani invariants \(\underline{I}\), we write \(\Delta(\underline{I})=2^{20}I_{3}(I_{3}^{\prime\prime})^{4}I_{6}^{2}\). It is the discriminant of a curve with (exactly) these invariants. If \(\underline{I}\) is normalised, it can be considered as the minimal discriminant of curves in the corresponding \(\overline{K}\)-isomorphism class, i.e. having the minimal valuation among all the integral \(\overline{K}\)-models.
In the present paper, we focus on a more arithmetic question, namely we study the conductor exponent of a Ciani quartic. Choose \(\underline{I}\in K^{4}\) a normalised set of Ciani invariants with \(\Delta(\underline{I})\neq 0\). Let \(Y/K\) be a Ciani quartic with invariants \([\underline{I}]\). For Ciani quartics, the conductor exponent \(f_{p}(Y)\) of \(Y\) is zero if and only if \(Y\) has good reduction to characteristic \(p\) (Corollary 4.5). There may exist more than one non-(\(K\)-)isomorphic \(K\)-model of \(Y\), and the conductor exponent depends on the chosen model, in general. One of our main results characterises, given a set of invariants \(\underline{I}\), whether there exists a \(K\)-model of \(Y\) with good reduction, i.e. with \(f_{p}=0\), under the assumption that \(\operatorname{Aut}_{\overline{K}}(Y)=V\). The last condition is equivalent to \(\operatorname{Aut}_{\overline{K}}(Y)\) containing a unique Ciani subgroup. If this condition is satisfied, we say that \(Y\) is _non-special_. There are two complementary cases: if \(Y\) has potentially good reduction to characteristic \(p\), the reduction \(\overline{Y}\) is either a
smooth quartic (good quartic reduction) or hyperelliptic (good hyperelliptic reduction). In our proofs, we treat both cases separately.
Our results are stated in terms of some extra invariants \(Q\) and \(R\), which are defined in Sections 3.1 and 5.2. We refer to Section 3.1 for a definition of \(Q\) in terms of the Ciani invariants. It satisfies that \(Y\) is non-special if and only if \(Q\neq 0\), see Section 3.2. It also occurs as the discriminant of a polynomial \(\mathcal{P}\) of degree \(3\), introduced in Section 3.1, which is important in the description of a field extension of \(K\) over which we can find a model with stable reduction. The invariant \(R\) is defined in Equation 5.10. It is also related to the extension of \(K\) over which \(Y\) acquires stable reduction. We can summarise our main result as follows. Here we state it for simplicity in the case where \(K=\mathbb{Q}_{p}^{\mathrm{nr}}\) is the maximal unramified extension of \(\mathbb{Q}_{p}\).
**Theorem 1.1**.: _Let \(p\neq 2,3\) and \(K=\mathbb{Q}_{p}^{\mathrm{nr}}\). Let \(\underline{I}\in K^{4}\) be a normalised set of invariants. Let \(Y/\overline{K}\) be a Ciani quartic with \(\underline{I}(Y)=\underline{I}\). Assume that \(Y\) is smooth and that \(\mathrm{Aut}_{\overline{K}}(Y)=V\)._
(I) _Assume that_ \(p\nmid\Delta(\underline{I})\)_._
  (a) _The curve_ \(Y/\overline{K}\) _has good quartic reduction._
  (b) _There exists a_ \(K\)_-model of_ \(Y\) _with good reduction if and only if_ \(\nu(Q)=0\)_._
(II) _Assume that_ \(\nu(I_{3})=0,\nu(I_{3}^{\prime})\geq e,\nu(I_{3}^{\prime\prime})=2e,\nu(I_{6})=3e\) _for some_ \(e>0\)_._
  (a) _The curve_ \(Y/\overline{K}\) _has good hyperelliptic reduction._
  (b) _There exists a_ \(K\)_-model of_ \(Y\) _with good reduction if and only if_ \(e\) _is even and the polynomial_ \(\mathcal{P}\) _splits completely over_ \(K\)_. This is equivalent to_ \(e\) _being even,_ \(\nu(Q)\equiv 0,2,4\pmod{6}\) _and_ \(3\nu(R)>\nu(Q)\) _if_ \(\nu(Q)\not\equiv 0\pmod{6}\)_._
(III) _In all other cases, the curve_ \(Y/\overline{K}\) _has bad reduction._
Our results are more precise. For the precise statements, we refer to Propositions 5.1 and 5.3 for (I), to Proposition 5.9 for (II), and to Lemmas 4.2 and 4.3 for (III). If the Ciani curve has potentially good reduction, i.e. in the situation of (I) or (II), we determine the minimal value for the conductor exponent \(f_{p}\) among all \(K\)-models of \(Y\). We also find the concrete model, together with the minimal field over which it has good reduction. In case (III), we prove that the conductor, for any \(K\)-model, will be always positive, see Corollary 4.5.
Theorem 1.1 points to a further difference between elliptic curves and Ciani quartics (or plane quartics in general). Assume that the invariants \(\underline{I}\) are normalised. In the case that \(\nu(\Delta(\underline{I}))=0\) there need not exist a curve \(Y/K\) with good reduction over the minimal field \(K\) with those invariants.
In fact, in the case of good quartic reduction, we show that such a \(K\)-model exists if and only if the automorphism group of the reduction \(\overline{Y}\) is equal to \(\operatorname{Aut}_{\overline{K}}(Y)\), which we assumed to be \(V\). In the case of good hyperelliptic reduction this is not true: the automorphism group of \(\overline{Y}\) is always strictly larger than \(V\), but it is possible for the conductor exponent to be \(0\). However, if there is no \(K\)-model of \(Y\) with good reduction over \(K\), i.e. with \(f_{p}(Y)=0\), then the automorphism group of \(\overline{Y}\) is strictly larger than the group generated by the hyperelliptic involution and the elements of the fixed Klein \(4\)-group \(V\), see Remark 5.10.
The paper is structured as follows. In Section 2 we introduce Ciani plane quartics, their different models and invariants. In Section 3 we give a reconstruction algorithm to obtain a Ciani plane quartic with given Ciani invariants. We characterise when they are special and we compute their twists in the non-special case. In Section 4 we recall the basic definitions of stable reduction and the conductor of a curve and we present the main results that will allow us to compute the conductor of a Ciani plane quartic. Our main results are stated and proved in Section 5. Finally, in Section 6 we discuss how to use our results to bound the conductor of a Ciani plane quartic.
**Notation**
Throughout the paper, we use the following notation.
**Acknowledgements**
The research of the second author is supported by the NWO Vidi grant No. 639.032.613, New Diophantine Directions. The research of the third author is partially funded by the Melodia ANR-20-CE40-0013 project.
\begin{table}
\begin{tabular}{r l} \(K\) & a complete local field, usually assumed to be \(\mathbb{Q}_{p}^{\mathrm{nr}}\), with valuation \(\nu\), \\ \(Y_{0}\) & a \(K\)-model of a Ciani quartic as in Definition 2.6, \\ \(Y_{1}\) & a standard model as in Definition 2.6, \\ \(\underline{I}=(I_{3},I_{3}^{\prime},I_{3}^{\prime\prime},I_{6})\in K^{4}\) & the set of Ciani invariants as in Equation 2.4, \\ \(\Delta(\underline{I})=2^{20}I_{3}I_{3}^{\prime\prime 4}I_{6}^{2}\) & the discriminant as in Equation 2.5, \\ \(\underline{I}_{\nu}\) & the normalisation of \(\underline{I}\) at \(\nu\), Definition 2.15, \\ \([\underline{I}]\in\mathbb{P}_{1,1,1,2}^{3}\) & the point corresponding to \(\underline{I}\in K^{4}\) in weighted projective space, \\ \(Y(\underline{I})/\overline{K}\) & a Ciani quartic with projective Ciani invariants \([\underline{I}]\in\mathbb{P}_{1,1,1,2}^{3}\), \\ \(L/K\) & the field of decomposition of the polynomial \(\mathcal{P}\) defined in Equation (3.1), \\ \(Y_{0}(\underline{I})/K\) & a Ciani quartic with Ciani invariants \([\underline{I}]\) as in Proposition 3.4, \\ \(M/K\) & the unique minimal extension over which \(Y(\underline{I})\) has stable reduction. \\ \end{tabular}
\end{table}
Table 1: Notation
## 2 Properties of Ciani quartics
In this section, we recall some general properties of Ciani plane quartic curves (usually abbreviated as Ciani quartics), and introduce some useful notation.
### The standard model of a Ciani quartic
Let \(K\) be as in the previous section.
**Definition 2.1**.: A _Ciani plane quartic curve_, or Ciani quartic, is a smooth non-hyperelliptic curve \(Y/K\) of genus \(g=3\) such that there exists \(V\subset\mathrm{Aut}_{\overline{K}}(Y)\), with \(V\) isomorphic to \(C_{2}^{2}\) and \(g(Y/V)=0\). A subgroup \(V\) satisfying these conditions is called a _Ciani subgroup_.
The Ciani subgroup is part of the data of a Ciani quartic. If the Ciani subgroup is clear from the context, we will sometimes omit it from the notation. We say that a Ciani quartic \((Y,V)\) is defined over \(K\) if and only if \(Y\) is defined over \(K\) and \(\,{}^{\sigma}V=V\) for all \(\sigma\in\mathrm{Gal}(\overline{K}/K)\).
Let \((Y,V)\) be a Ciani quartic. The map \(\phi:Y\to Y/V=:X\) is branched above \(6\) points. Let \(\sigma\in V\setminus\{e\}\) be one of the nontrivial elements. Then \(\sigma\) generates the inertia above exactly \(2\) of the branch points of \(\phi\), and the genus of \(Y/\langle\sigma\rangle\) is \(1\).
**Definition 2.2**.: A Ciani quartic \(Y\) is called _non-special_ if there is a unique Ciani subgroup. Otherwise, we call it _special_.
The following lemma follows immediately from the classification of possible automorphism groups of smooth quartics, see for example [14, Section 3.1] or [15] for this.
**Lemma 2.3**.: _A Ciani quartic is non-special if and only if \(\mathrm{Aut}_{\overline{K}}(Y)\simeq C_{2}^{2}\). In particular, every non-special Ciani quartic \((Y,V)\) is defined over a field \(K\) if and only if the curve \(Y\) is defined over \(K\)._
**Example 2.4**.: We consider the Ciani quartic
\[Y_{a,b}/K:\;ax^{4}+y^{4}+z^{4}+bx^{2}y^{2}+xyz^{2}=0,\]
with \(a,b\in K\) such that its discriminant \(\Delta(Y_{a,b})=2^{20}(a-2)^{2}(a+2)^{2}(4a-b^{2}-8)^{4}(4a-b^{2}+8)^{4}\neq 0\). The automorphism group of \(Y\) contains the dihedral group \(D_{4}\) of order \(8\) as a subgroup generated by
\[\left\langle\sigma_{1}=\begin{pmatrix}i&0&0\\ 0&-i&0\\ 0&0&1\end{pmatrix},\;\sigma_{2}=\begin{pmatrix}0&1/\sqrt[4]{a}&0\\ \sqrt[4]{a}&0&0\\ 0&0&1\end{pmatrix}\right\rangle.\]
and hence contains at least \(2\) Ciani subgroups: \(V=\langle\sigma_{1}^{2},\sigma_{2}\rangle\) and \(V^{\prime}=\langle\sigma_{1}^{2},\sigma_{1}\sigma_{2}\rangle\). If \(a\notin(K^{*})^{2}\) then \(V\) and \(V^{\prime}\) are (Galois-)conjugate subgroups over the quadratic extension \(K(\sqrt{a})\) but not conjugate in \(\operatorname{Aut}_{\overline{K}}(Y_{a,b})\subset\operatorname{PGL}_{3}(\overline{K})\). Hence \(Y_{a,b}\) is defined over \(K\) as a curve, but the Ciani quartic \((Y_{a,b},V)\) is not.
**Remark 2.5**.: All special Ciani quartics belong to the family \(Y_{a,b}\) in Example 2.4, see [14, Thm. 3.3].
**Definition 2.6**.: Let \((Y,V)\) be a Ciani quartic over the algebraic closure \(\overline{K}\) of \(K\).
1. A \(K\)-_model_ of \(Y\) is a smooth quartic \(Y_{0}/K\) such that \(Y_{0}\otimes_{K}\overline{K}\simeq Y\).
2. A _standard \(K\)-model_ of \((Y,V)\) is a \(K\)-model \(Y_{1}\) of \(Y\) given by an equation \[Y_{1}:\;Ax^{4}+By^{4}+Cz^{4}+ay^{2}z^{2}+bx^{2}z^{2}+cx^{2}y^{2}=0,\] (2.1) such that the elements of \(V\) act as \((x:y:z)\mapsto(\pm x:\pm y:z)\).
**Notation 2.7**.: Let \(Y_{1}\) be a standard model of a Ciani quartic. We write
\[\sigma_{a}(x:y:z) =(-x:y:z),\] \[\sigma_{b}(x:y:z) =(x:-y:z),\] \[\sigma_{c}(x:y:z) =(-x:-y:z)\]
for the nontrivial elements of \(V\). For \(i\in\{a,b,c\}\), we write \(E_{i}=Y_{1}/\langle\sigma_{i}\rangle\).
It is well-known that every Ciani quartic over a field of characteristic not \(2\) admits a standard model over a finite extension. In the following lemma, we give a bound on the field extension needed.
Figure 1: Stratification by automorphisms of Ciani quartics
**Lemma 2.8**.: _Let \((Y_{0},V)\) be a Ciani quartic over \(K\). Then there exists a Galois extension \(L/K\) with \(\operatorname{Gal}(L/K)<S_{3}\) such that \(Y_{0}\) admits a standard model \(Y_{1}\) over \(L\) and such that \(Y_{0}\otimes_{K}L\simeq_{L}Y_{1}\)._
Proof.: Since \((Y_{0},V)\) is defined over \(K\), the map \(\varphi:Y_{0}\to X_{0}:=Y_{0}/V\) is defined over \(K\), as well. The curve \(X_{0}\) is a conic. We denote the branch locus of \(\varphi\) by \(D\). The elements of \(V\) of order \(2\), which we call \(\sigma_{a},\sigma_{b},\sigma_{c}\), are each branched at a pair of points on \(X_{0}\otimes_{K}\overline{K}\). It follows that \(\Gamma_{K}=\operatorname{Gal}(\overline{K}/K)\) acts on the branch locus \(D\) by permuting the three pairs of points. Let \(L\) be the field extension such that each pair is rational over \(L\). Note that \(\operatorname{Gal}(L/K)\) is a subgroup of \(S_{3}\). Over \(L\) we may write \(D\) as the sum of three effective divisors \(D_{a},D_{b},D_{c}\), where \(D_{i}\) is the divisor corresponding to the branch points with inertia generator \(\sigma_{i}\).
We claim that \(Y_{0}\) admits a standard model over \(L\). We write \(\ell_{a},\ell_{b},\ell_{c}\) for the three lines on \(X_{0}\otimes_{K}L\) with \((\ell_{i})_{0}=D_{i}\) for \(i\in\{a,b,c\}\). The three lines are defined over \(L\), and define an embedding
\[\psi_{L}:X\otimes_{K}L\hookrightarrow\mathbb{P}^{2}_{L}.\]
The coordinates \(u,v,w\) of \(\mathbb{P}^{2}_{L}\) define the images of the lines \(\ell_{a},\ell_{b},\ell_{c}\), respectively. We denote the image of \(X_{0}\) under \(\psi_{L}\) by \(X_{1}\). It is a conic with equation
\[Au^{2}+Bv^{2}+Cw^{2}+avw+buw+cuv=0. \tag{2.2}\]
We define three functions \(x,y,z\) on \(Y_{0}\) satisfying \(x^{2}=u,y^{2}=v,z^{2}=w\). The functions \((x,y,z)\) define an embedding
\[\varphi_{L}:Y_{0}\otimes_{K}L\to\mathbb{P}^{2}_{L}\]
such that the image \(Y_{1}\) of \(Y_{0}\otimes_{K}L\) is given by
\[Ax^{4}+By^{4}+Cz^{4}+ay^{2}z^{2}+bx^{2}z^{2}+cx^{2}y^{2}=0.\]
Hence \(Y_{1}\) is a standard \(L\)-model.
**Remark 2.9**.: In the proof of previous lemma, one may alternatively characterise the coordinates \(x,y,z\) and the embedding \(\varphi_{L}\) by the property that the nontrivial elements of \(V\) are the diagonal matrices
\[\sigma_{a}=\begin{pmatrix}-1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix},\quad\sigma_{b}=\begin{pmatrix}1&0&0\\ 0&-1&0\\ 0&0&1\end{pmatrix},\quad\sigma_{c}=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&-1\end{pmatrix}. \tag{2.3}\]
### Ciani invariants
The canonical embedding of a Ciani quartic \(Y\) as a plane quartic in \(\mathbb{P}^{2}\) only depends on the choice of a basis of \(H^{0}(Y_{\overline{K}},\Omega)\). Therefore, isomorphisms
between Ciani quartics are induced by elements of \(\mathrm{GL}_{3}(\overline{K}).\) If \(\mathrm{char}(K)\neq 2,3,5,\) we have that two Ciani quartics are \(\overline{K}\)-isomorphic as plane quartics if and only if they have the same Dixmier-Ohno invariants (see [11, Thm. 4.1] for the cases of small characteristic).
In [16, Section 3.1] we introduced a set of invariants for Ciani quartics \((Y,V)\) for \(\mathrm{char}(K)\neq 2\). For a standard model (2.1) \(Y_{1}\) over a field \(K\), the _Ciani invariants_ are defined by
\[\begin{split} I_{3}&=ABC\\ I^{\prime}_{3}&=A(a^{2}-4BC)+B(b^{2}-4AC)+C(c^{2}-4 AB)\\ I^{\prime\prime}_{3}&=-4ABC+Aa^{2}+Bb^{2}+Cc^{2}- abc\\ I_{6}&=(a^{2}-4BC)(b^{2}-4AC)(c^{2}-4AB).\end{split} \tag{2.4}\]
These invariants are algebraically independent and generate the ring of invariants of the locus parametrising Ciani quartics in the moduli space of non-hyperelliptic curves of genus \(3\). The index of the invariants indicates the degree, as used in Lemma 2.11. We write \(\underline{I}(Y_{1}):=(I_{3},I^{\prime}_{3},I^{\prime\prime}_{3},I_{6})\in K^{4}\) for the tuple of Ciani invariants of the standard model \(Y_{1}/K\). This tuple defines a point \([\underline{I}]\) in the weighted projective space \(\mathbb{P}^{3}_{1,1,1,2}\). Two Ciani quartics are \(\overline{K}\)-isomorphic as Ciani quartics if and only if their invariants define the same point in \(\mathbb{P}^{3}_{1,1,1,2}(\overline{K})\).
The discriminant of a standard model \(Y_{1}\) (2.1) is given by the formula:
\[\Delta(Y_{1})=2^{20}I_{3}I^{\prime\prime 4}_{3}I^{2}_{6}. \tag{2.5}\]
Note that this is an expression in terms of the invariants (2.4): we sometimes denote it by \(\Delta(\underline{I})\). The model \(Y_{1}\) is smooth if and only if \(\Delta(Y_{1})\neq 0\), see [10].
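For illustration, the formulas (2.4) and (2.5) can be evaluated directly; a minimal SymPy sketch, using the Fermat quartic \(x^{4}+y^{4}+z^{4}=0\) (i.e. \(A=B=C=1\), \(a=b=c=0\)) as a sanity check.

```python
from sympy import symbols

A, B, C, a, b, c = symbols("A B C a b c")

I3   = A*B*C
I3p  = A*(a**2 - 4*B*C) + B*(b**2 - 4*A*C) + C*(c**2 - 4*A*B)
I3pp = -4*A*B*C + A*a**2 + B*b**2 + C*c**2 - a*b*c
I6   = (a**2 - 4*B*C)*(b**2 - 4*A*C)*(c**2 - 4*A*B)
Delta = 2**20 * I3 * I3pp**4 * I6**2

fermat = {A: 1, B: 1, C: 1, a: 0, b: 0, c: 0}
print([e.subs(fermat) for e in (I3, I3p, I3pp, I6)])   # [1, -12, -4, -64]
print(Delta.subs(fermat))                              # 2**40, so the curve is smooth
```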
In [16] one can find an expression for the Dixmier-Ohno invariants of a Ciani quartic in terms of the Ciani invariants. Since two non-isomorphic Ciani quartics \((Y,V_{i})\) with the same underlying curve have different Ciani invariants, but the same Dixmier-Ohno invariants, it is not possible to express the Ciani invariants in terms of the Dixmier-Ohno invariants in general. We give a concrete example.
**Example 2.10**.: We consider the Ciani quartic
\[Y_{r,s}:\;x^{4}+y^{4}+z^{4}+rz^{2}(x^{2}+y^{2})+sx^{2}y^{2}=0,\]
with \(r,s\in K\) such that \(\Delta(Y_{r,s})=2^{20}(s-2)^{10}(s+2)^{6}(-r^{2}+s+2)^{4}(r^{2}-4)^{12}\neq\;0.\) The automorphism group of \(Y\) contains the dihedral group \(D_{4}\) of order \(8\) as a subgroup, and hence contains \(2\) Ciani subgroups: the standard subgroup \(V\) from (2.3) and a second subgroup with generators
\[V^{\prime}=\left\langle\sigma_{c}=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&-1\end{pmatrix},\qquad\sigma_{d}=\begin{pmatrix}0&1&0\\ 1&0&0\\ 0&0&1\end{pmatrix}\right\rangle.\]
Changing coordinates so that the matrix \(\sigma_{d}\) is a diagonal matrix yields an isomorphic curve but now with \(V^{\prime}\) as standard subgroup:
\[Y^{\prime}_{r,s}:\ (2+s)x^{4}+(2+s)y^{4}+z^{4}+2rz^{2}(x^{2}+y^{2})+(12-2s)x^{2}y^ {2}=0.\]
The curves \(Y_{r,s}\) and \(Y^{\prime}_{r,s}\) have the same Dixmier-Ohno invariants, but the pairs \((Y_{r,s},V)\) and \((Y^{\prime}_{r,s},V^{\prime})\) do not have the same (projective) Ciani invariants.
We introduce the following invariant that will be useful later on:
\[\begin{split} I&=AB(a^{2}-4BC)(b^{2}-4AC)+BC(b^{2}- 4AC)(c^{2}-4AB)\\ &+CA(c^{2}-4AB)(a^{2}-4BC).\end{split} \tag{2.6}\]
The invariant \(I\) can be expressed in terms of the other invariants via the relation
\[4I+I_{6}-{I^{\prime}_{3}}^{2}+16I_{3}I^{\prime\prime}_{3}+2I^{\prime}_{3}I^{ \prime\prime}_{3}-{I^{\prime\prime}_{3}}^{2}=0. \tag{2.7}\]
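The relation (2.7) can be checked by brute-force expansion; a short, self-contained SymPy sketch (the variable names are ad hoc).

```python
from sympy import symbols, expand

A, B, C, a, b, c = symbols("A B C a b c")
u, v, w = a**2 - 4*B*C, b**2 - 4*A*C, c**2 - 4*A*B

I3, I3p, I3pp = A*B*C, A*u + B*v + C*w, -4*A*B*C + A*a**2 + B*b**2 + C*c**2 - a*b*c
I6, Iinv = u*v*w, A*B*u*v + B*C*v*w + C*A*w*u

print(expand(4*Iinv + I6 - I3p**2 + 16*I3*I3pp + 2*I3p*I3pp - I3pp**2))   # 0
```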
Let \((Y,V)\) be a Ciani quartic that is not necessarily in standard form. The following lemma allows us to compute the Ciani invariants of \((Y,V)\).
**Lemma 2.11**.: _Let \(F(x,y,z)=0\) be a plane quartic and \(J\) a degree-\(k\) invariant. Let \(\lambda\in\overline{K}^{*}\) and \(M\in\text{GL}_{3}(\overline{K})\). Then_
* \(J(\lambda F)=\lambda^{k}J(F)\)_,_
* \(J(\,{}^{M}F)=(\det(M))^{4k/3}\cdot J(F)\)_._
_In particular, \(\Delta(\lambda F)=\lambda^{27}\Delta(F)\) and \(\Delta(\,{}^{M}F)=(\det(M))^{36}\Delta(F)\)._
**Example 2.12**.: The Ciani invariants of the Ciani quartic \((Y_{a,b},V)\) in Example 2.4 are:
\[I_{3}=2^{4}a(b+2\sqrt{a})^{2},\ I^{\prime}_{3}=2^{5}a(48a-48\sqrt{a}b+2\sqrt{a }-4b^{2}+b),\]
\[I^{\prime\prime}_{3}=2^{5}a\sqrt{a}(8\sqrt{a}-4b+1),\ I_{6}=2^{14}a^{2}(2\sqrt{a }-b)(8\sqrt{a}+4b-1)^{2}.\]
For the Ciani quartic \((Y_{a,b},V^{\prime})\) we find the conjugated ones over \(K(\sqrt{a})/K\). We notice here that \([\underline{I}(Y_{a,b},V)]\neq[\underline{I}(Y_{a,b},V^{\prime})]\) for \((a,b)\neq(0,0)\) and \(a\notin(K^{*})^{2}\).
Let \((Y,V)/\overline{K}\) be a Ciani quartic, and write \(\underline{I}(Y,V)\) for its set of invariants. When the subgroup \(V\) is clear from the context we may simply write \(\underline{I}(Y)\). We also use the notation \(Y(\underline{I})/\overline{K}\) to denote a Ciani quartic whose Ciani invariants are \([\underline{I}]\), i.e. \([\underline{I}(Y(\underline{I}))]=[\underline{I}]\). In Proposition 3.1 we will explicitly see that such a Ciani quartic always exists.
**Proposition 2.13**.: _Let \(Y/\overline{K}\) be a Ciani quartic, and write \(\underline{I}=\underline{I}(Y)\) for its set of invariants. Assume that \([\underline{I}]=(I_{3}:I^{\prime}_{3}:I^{\prime\prime}_{3}:I_{6})\in\mathbb{P}_ {1,1,1,2}(K)\) has a representative over \(K\). Then there exists a \(K\)-model \(Y_{0}\) of \(Y\)._
Proof.: The statement follows from [17, Prop. 2.1]. See also [14, Thm. 3.3].
**Remark 2.14**.: In the setting of Proposition 2.13, we have that \((Y_{0},V)\simeq_{\overline{K}}(Y_{0},\,^{\sigma}V)\) for all \(\sigma\in\operatorname{Gal}(\overline{K}/K)\). In particular, for each \(\sigma\in\operatorname{Gal}(\overline{K}/K)\) there exists \(M_{\sigma}\in\operatorname{Aut}_{\overline{K}}(Y_{0})\subset\operatorname{PGL}_{3}(\overline{K})\) such that \(\,{}^{\sigma}V=M_{\sigma}^{-1}VM_{\sigma}\). If \(Y_{0}\) is non-special, the subgroup \(V\subset\operatorname{Aut}_{\overline{K}}(Y_{0})\) is automatically \(K\)-rational. This need not hold in the special case. For instance, consider
\[Y/\mathbb{Q}:\;x^{4}+y^{4}+xz^{3}=0\]
and the Ciani subgroup \(V\) generated by the automorphisms
\[\begin{pmatrix}1&0&0\\ 0&-1&0\\ 0&0&1\end{pmatrix},\qquad\begin{pmatrix}1/\sqrt{3}&0&\zeta_{3}^{2}/\sqrt{3}\\ 0&1&0\\ 2\zeta_{3}/\sqrt{3}&0&-1/\sqrt{3}\end{pmatrix},\]
where \(\zeta_{3}=e^{2\pi i/3}.\) The Ciani invariants of \((Y,V)\) are \((1:-24:-16:-256)\in\mathbb{P}^{3}_{1,1,1,2}(\mathbb{Q})\), but \(V\) is not defined over \(\mathbb{Q}\).
### Normalisation of invariants
The invariants of a Ciani quartic \(Y/K\) form a tuple \(\underline{I}(Y)\in K^{4}\). Sometimes it is very important to distinguish this tuple from the corresponding point \([\underline{I}(Y)]\) in \(\mathbb{P}^{3}_{1,1,1,2}(K)\). This motivates the following definition.
**Definition 2.15**.: Let \(K\) be a local field with valuation \(\nu\), as in Section 1. A point \(\underline{I}=(I_{3},I_{3}^{\prime},I_{3}^{\prime\prime},I_{6})\in K^{4}\) is said to be _normalised at \(\nu\)_ if all coordinates have non-negative valuation and at least one of them has valuation equal to zero. A _normalisation \(\underline{I}_{\nu}\in(K^{\prime})^{4}\)_ of \(\underline{I}\) is a normalised vector defined over an extension field \(K^{\prime}/K\) that defines the same point in the weighted projective space \(\mathbb{P}^{3}_{1,1,1,2}(K^{\prime})\).
**Remark 2.16**.: Let \(\underline{I}=(I_{3},I_{3}^{\prime},I_{3}^{\prime\prime},I_{6})\in K^{4}\) be a set of Ciani invariants. It can be normalised over a quadratic extension \(K^{\prime}/K\). Namely, define \(m:=\min(\nu(I_{3}),\nu(I_{3}^{\prime}),\nu(I_{3}^{\prime\prime}),\nu(I_{6})/2)\) and let \(u\in K^{\prime}\) be an element with \(\nu(u)=m.\) Then \(\underline{I}_{\nu}=(I_{3}/u,I_{3}^{\prime}/u,I_{3}^{\prime\prime}/u,I_{6}/u^ {2})\in{K^{\prime}}^{4}\) is normalised. Notice that \([\underline{I}]=[\underline{I}_{\nu}]\in\mathbb{P}^{3}_{1,1,1,2}(K)\), as required in Definition 2.15.
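As an illustration of Remark 2.16, a small sketch that normalises a tuple of integer invariants at \(p\), under the simplifying assumption that the minimum \(m\) is an integer (so no quadratic extension is needed); the example tuple is ad hoc.

```python
from fractions import Fraction

def nu_p(n, p):
    """p-adic valuation of a nonzero integer."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def normalise(I, p):
    """Rescale (I3, I3', I3'', I6) by (u, u, u, u**2) with nu(u) = m, as in Remark 2.16."""
    I3, I3p, I3pp, I6 = I
    m = min(nu_p(I3, p), nu_p(I3p, p), nu_p(I3pp, p), Fraction(nu_p(I6, p), 2))
    u = p ** int(m)                  # assumes m is an integer, i.e. no extension is needed
    return (Fraction(I3, u), Fraction(I3p, u), Fraction(I3pp, u), Fraction(I6, u * u))

print(normalise((5, 25, 125, 625), 5))   # values 1, 5, 25, 25: now I3 has valuation zero
```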
**Remark 2.17**.: If \(\nu(I_{6})\) is even or \(m\neq\nu(I_{6})/2\), then we can normalise \(\underline{I}\) at \(\nu\) over \(K\), so \(\underline{I}_{\nu}\in K^{4}\). In the case that we cannot normalise at \(\nu\) over \(K\), we have that after normalisation \(\nu(I_{3}),\nu(I_{3}^{\prime}),\nu(I_{3}^{\prime\prime})>0\) and \(\nu(I_{6})=0\). As discussed later, see Lemmas 4.2 and 4.3, it follows that in this situation, the curve \(Y/\overline{K}\) with invariants \(\underline{I}\) has bad reduction at \(\nu\).
**Remark 2.18**.: Given a point \([\underline{I}]\in\mathbb{P}^{3}_{1,1,1,2}(K)\) there exists a (not necessarily normalised) point \(\underline{I}^{\prime}\in K^{4}\) such that \([\underline{I}]=[\underline{I}^{\prime}]\). For example, consider \([(\sqrt{\pi},\sqrt{\pi},\sqrt{\pi},1)]=[(\pi,\pi,\pi,\pi)]\) for an element \(\pi\in K\) with \(\nu(\pi)=1\).
## 3 Models of Ciani quartics
In this section, given a set of Ciani invariants \(\underline{I}\in K^{4}\), we construct a standard model \(Y_{1}(\underline{I})\) over an explicit finite extension \(L^{\prime}/K\) such that \([\underline{I}]=[\underline{I}(Y_{1}(\underline{I}))]\). We also construct a \(K\)-model of this curve that we call \(Y_{0}(\underline{I})\). We finally characterise when \(Y(\underline{I})\) is special in terms of the invariants and we study the twists of \(Y_{0}(\underline{I})/K\).
### Reconstruction of a Ciani quartic with given invariants
In this section and in the following, we exceptionally assume that \(K\) is an arbitrary (even global) field of characteristic different from \(2\). We may relax the assumptions on \(K\) here, as the results hold in this generality. Let \(\underline{I}=(I_{3},I^{\prime}_{3},I^{\prime\prime}_{3},I_{6})\in K^{4}\) be a fixed set of invariants with \(\Delta(\underline{I})=2^{20}I_{3}I_{3}^{\prime\prime 4}I_{6}^{2}\neq 0\).
The goal of this subsection is to construct a Ciani quartic \(Y_{1}(\underline{I})\) given by a standard model over an explicit field extension \(L^{\prime}/K\) such that \([\underline{I}]=[\underline{I}(Y_{1}(\underline{I}))]\). We also provide a \(K\)-model \(Y_{0}(\underline{I})\) of this quartic.
We start by introducing some notation, which is used in the next proposition. Set \(P=8I_{3}+I^{\prime}_{3}-I^{\prime\prime}_{3}\), consider the polynomial \(\mathcal{P}(T)=T^{3}-S_{1}T^{2}+S_{2}T-S_{3}\) with
\[S_{1}= \,I^{\prime}_{3}+12I_{3}, \tag{3.1}\] \[S_{2}= \,\frac{1}{4}(P^{2}+16I_{3}(P+I^{\prime\prime}_{3})-I_{6}),\] \[S_{3}= \,I_{3}P^{2},\]
and denote the roots of \(\mathcal{P}\) by \(\mathcal{A},\mathcal{B},\mathcal{C}\).
**Proposition 3.1**.: _Let \(\underline{I}=(I_{3},I^{\prime}_{3},I^{\prime\prime}_{3},I_{6})\in K^{4}\) be a set of invariants. Assume \(\Delta(\underline{I})=2^{20}I_{3}I_{3}^{\prime\prime 4}I_{6}^{2}\neq 0\). Let \(L/K\) be the splitting field of the polynomial \(\mathcal{P}\)._
1. _Assume_ \(P\neq 0\)_. The Ciani quartic defined by the standard model_ \[Y_{1}(\underline{I}):\,\mathcal{A}x^{4}+\mathcal{B}y^{4}+\mathcal{C}z^{4}+P(x^ {2}y^{2}+y^{2}z^{2}+z^{2}x^{2})=0\] (3.2) _has invariants_ \(\underline{I}(Y_{1}(\underline{I}))=(P^{2}I_{3},P^{2}I^{\prime}_{3},P^{2}I^{ \prime\prime}_{3},P^{4}I_{6})\)_. The discriminant is_ \(\Delta(Y_{1}(\underline{I}))=\Delta(\underline{I})P^{18}\)_. The quartic_ \(Y_{1}(\underline{I})\) _is defined over_ \(L^{\prime}=L=K(\mathcal{A},\mathcal{B},\mathcal{C})\)_._
2. _Assume that_ \(P=0\) _and that_ \(0\) _is a simple root of_ \(\mathcal{P}\)_, hence_ \(S_{2}\neq 0\)_. It is no restriction to assume that_ \(\mathcal{A}=0\) _and_ \(\mathcal{B},\mathcal{C}\neq 0\)_. The Ciani quartic defined by the standard model_ \[Y_{1}(\underline{I}):\,I_{3}S_{2}x^{4}+\mathcal{B}y^{4}+\mathcal{C}z^{4}+S_{2}x^{2}(y^{2}+z^{2})=0\] (3.3) _has invariants_ \(\underline{I}(Y_{1}(\underline{I}))=(S_{2}^{2}I_{3},S_{2}^{2}I^{\prime}_{3},S_{2}^{2}I^{\prime\prime}_{3},S_{2}^{4}I_{6})\)_. The discriminant of this model is_ \(\Delta(Y_{1}(\underline{I}))=\Delta(\underline{I})S_{2}^{18}\) _and it is defined over_ \(L^{\prime}=L=K(\mathcal{B},\mathcal{C})\)_._
3. _Assume that_ \(P=0\) _and_ \(0\) _is at least a double root of_ \(\mathcal{P}\)_. It is no restriction to assume that_ \(\mathcal{A}=\mathcal{B}=0\)_. Choose_ \(r^{2}=I_{3}S_{1}\)_. The Ciani quartic defined by the standard model_ \[Y_{1}(\underline{I}):\,I_{3}(x^{4}+y^{4}+z^{4})+rx^{2}y^{2}=0\] (3.4) _has invariants_ \(\underline{I}(Y_{1}(\underline{I}))=(I_{3}^{3},I_{3}^{2}I_{3}^{\prime},I_{3}^ {2}I_{3}^{\prime\prime},I_{3}^{4}I_{6})\)_, the discriminant is_ \(\Delta(Y_{1}(\underline{I}))=\Delta(\underline{I})I_{3}^{18}\)_, and it is defined over the extension_ \(L^{\prime}=L(r)/K\)_._
Proof.: It is straightforward to check that the given models have the given invariants and that they are defined over the given fields.
**Notation 3.2**.: Let \(\underline{I}\in K^{4}\) be a set of invariants. We write \(L(\underline{I})\) for the field \(L\) from Proposition 3.1, i.e. for the splitting field of the polynomial \(\mathcal{P}\). It only depends on \([\underline{I}]\in\mathbb{P}^{3}_{1,1,1,2}(K)\). In addition, in cases (a) and (b) we have \(L^{\prime}=L\).
**Remark 3.3**.: Assume that \(Y\) is a Ciani quartic given by an arbitrary standard model (2.1). Take \(\underline{I}=\underline{I}(Y)\).
1. If \(P=8I_{3}+I_{3}^{\prime}-I_{3}^{\prime\prime}=abc\neq 0\), then \(Y_{1}(\underline{I})\) is given by (3.2), and we may take \[\mathcal{A}=Aa^{2},\quad\mathcal{B}=Bb^{2},\quad\mathcal{C}=Cc^{2}.\]
2. If \(P=0\), the polynomial \(\mathcal{P}\) has \(0\) as a root. Assume that \(0\) is a simple root of \(\mathcal{P}\). In this case \(Y_{1}(\underline{I})\) is given by (3.3), where we take \[\mathcal{B}=Bb^{2},\qquad\mathcal{C}=Cc^{2}.\]
3. Assume that \(P=0\) and that \(0\) is a root of \(\mathcal{P}\) of multiplicity at least \(2\). In this case \(Y_{1}(\underline{I})\) is given by (3.4) with \[r^{2}=ABC^{2}c^{2}.\] Note that in this case \(Y_{1}(\underline{I})\) is defined over the extension \(L(r)/L\) of degree at most \(2\).
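For illustration, case (a) of the reconstruction can be carried out symbolically; a minimal SymPy sketch, assuming exact integer invariants and ignoring cases (b) and (c) as well as the field-of-definition bookkeeping. The sanity check uses the invariants \((1,-9,-2,-27)\) of the quartic with \(A=B=C=a=b=c=1\), for which \(\mathcal{P}=(T-1)^{3}\) and the same curve is recovered.

```python
from sympy import symbols, Poly, Rational

def reconstruct_case_a(I3, I3p, I3pp, I6):
    """Case (a) of Proposition 3.1 (P != 0): coefficients (A, B, C, P) of the standard
    model A x^4 + B y^4 + C z^4 + P (x^2 y^2 + y^2 z^2 + z^2 x^2) = 0."""
    P = 8*I3 + I3p - I3pp
    assert P != 0, "case (a) requires P != 0"
    S1 = I3p + 12*I3
    S2 = Rational(1, 4) * (P**2 + 16*I3*(P + I3pp) - I6)
    S3 = I3 * P**2
    T = symbols("T")
    roots = Poly(T**3 - S1*T**2 + S2*T - S3, T).all_roots()
    return (*roots, P)

print(reconstruct_case_a(1, -9, -2, -27))   # (1, 1, 1, 1)
```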
The Ciani quartics \(Y_{1}(\underline{I})\) can be descended to \(K\). The next result provides a particular \(K\)-model that we call \(Y_{0}(\underline{I})\). Later on, in Lemma 3.6, we will give all the \(K\)-models of \(Y_{1}(\underline{I})\). In order to do that we will compute the twists of \(Y_{0}(\underline{I})/K\).
**Proposition 3.4**.: _With the notation in Proposition 3.1 and the data in Table 2, the morphism \(\phi:\,Y_{0}(\underline{I})\to Y_{1}(\underline{I})\) defines a \(K\)-model \(Y_{0}(\underline{I})\) with \(\underline{I}(Y_{0}(\underline{I}))=(\lambda I_{3}(Y_{1}(\underline{I})), \lambda I_{3}^{\prime}(Y_{1}(\underline{I})),\lambda I_{3}^{\prime\prime}(Y_{ 1}(\underline{I})),\lambda^{2}I_{6}(Y_{1}(\underline{I})))\)._
Proof.: It is straightforward to check that the curves \(Y_{0}(\underline{I})\) defined by the isomorphisms \(\phi\) are defined over \(K\). The equality relating the invariants of both models is a consequence of Lemma 2.11.
### Characterisation of special Ciani quartics
As in the previous section, we again assume that \(K\) is any (possibly global) field of characteristic \(\neq 2\). Fix a set of Ciani invariants \(\underline{I}\in K^{4}\). In this section, we characterise special Ciani quartics in terms of their Ciani invariants. We refer to [10] for a similar result characterising the automorphism group of an arbitrary smooth quartic in terms of its Dixmier-Ohno invariants.
We write \(Q\) for the discriminant of the polynomial \(\mathcal{P}\) introduced in Section 3.1. A straightforward calculation shows that
\[Q=-4I_{3}I_{3}^{\prime 3}I_{6}-27I_{3}^{2}I_{6}^{2}+18I_{3}I_{3}^{\prime}I_{6} I+I_{3}^{\prime 2}I^{2}-4I^{3}. \tag{3.5}\]
**Lemma 3.5**.: _The Ciani quartic \((Y(\underline{I}),V)\) with invariants \(\underline{I}\) is special if and only if \(Q=0\)._
Proof.: Being special is preserved under \(\overline{K}\)-isomorphism. We can therefore check the condition for a standard model \(Y_{1}(\underline{I})\) as in Proposition 3.1.
In case (c) of Proposition 3.1 the curve \(Y_{1}(\underline{I})\) is special and \(Q=0\), so the statement holds automatically in that case. It therefore remains to consider the cases (a) and (b) of Proposition 3.1.
The condition \(Q=0\) implies that the polynomial \(\mathcal{P}\) has a double root. In case (b) of Proposition 3.1 we have \(\mathcal{B}=\mathcal{C}\). This implies that \(\tau(x:y:z)\mapsto(x:z:y)\) is an automorphism of \(Y_{1}(\underline{I})\) that is not contained in the fixed Ciani subgroup \(V\). In case (a) the curve \(Y(\underline{I})\) admits a similar automorphism permuting two of the variables, depending on which two roots are equal.
We prove the reverse implication. Assume that \(Y(\underline{I})\) is special. From the classification of automorphism group of plane quartics, it follows that \(\mathrm{Aut}_{\overline{L}}(Y(\underline{I}))\) contains the dihedral group \(D_{4}\) of order \(8\) as subgroup, see
\begin{table}
\begin{tabular}{|c|c|c|} \hline & \multicolumn{2}{c|}{\((a)\)} \\ \hline \([L:K]\) & \(1\) & \(2\) & \(3\) \\ \hline \(\phi\) & \(\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix}\) & \(\begin{pmatrix}1&0&0\\ 0&1&\mathcal{B}\\ 0&1&\mathcal{C}\end{pmatrix}\) & \(\begin{pmatrix}1&\mathcal{A}&\mathcal{A}^{2}\\ 1&\mathcal{B}&\mathcal{B}^{2}\\ 1&\mathcal{C}&\mathcal{C}^{2}\end{pmatrix}\) \\ \hline \(\lambda\) & \(1\) & \((\mathcal{B}-\mathcal{C})^{4}\) & \(Q^{4}\) \\ \hline \end{tabular}
\begin{tabular}{|c|c|c|} \hline & \multicolumn{2}{c|}{\((b)\)} & \multicolumn{2}{c|}{\((c)\)} \\ \hline \([L:K]\) & \(1\) & \(2\) & \(1\) \\ \hline \(\phi\) & \(\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix}\) & \(\begin{pmatrix}1&0&0\\ 0&1&\mathcal{B}\\ 0&1&\mathcal{C}\end{pmatrix}\) & \(\begin{pmatrix}\sqrt{r}&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix}\) \\ \hline \(\lambda\) & \(1\) & \((\mathcal{B}-\mathcal{C})^{4}\) & \(I_{3}S_{1}\) \\ \hline \end{tabular}
\end{table}
Table 2: Explicit isomorphisms for \(K\)-models of Ciani quartics.
[LRRS14] or Figure 1. Using the explicit description of these curves in Examples 2.4 and 2.12 it is easy to check that for any quartic in this family we have \(Q=0\).
### Twists of non-special Ciani quartics
From this section on, we assume that \(p>3\) is a prime and that \(K\) is a finite extension of \(\mathbb{Q}_{p}^{\mathrm{nr}}\). Write \(\nu\) for the valuation on \(K\). When working over an extension field \(K^{\prime}/K\), we always extend \(\nu\) to \(K^{\prime}\). Let \(\underline{I}\in K^{4}\) be a set of invariants such that \(Y(\underline{I})\) is non-special. Let \(L=L(\underline{I})/K\) be the minimal finite extension over which \(Y_{1}(\underline{I})\) is defined, as in Proposition 3.1. Since \(K\) is a local field, we have that \(L/K\) is Galois and \([L:K]\leq 3\). We write \(Y_{0}(\underline{I})\) for the \(K\)-model of \(Y(\underline{I})\) we constructed in Proposition 3.4.
In this section, we describe the \(K\)-twists of \(Y_{0}(\underline{I})\). The results are a special case of the result of [10], presented in a way we will use afterwards.
The group \(\Gamma_{K}:=\mathrm{Gal}(\overline{K}/K)\) acts on \(\mathrm{Aut}(Y_{0}(\underline{I}))=V\simeq C_{2}^{2}\), with action induced by that of \(\mathrm{Gal}(L/K)\) on the roots of \(\mathcal{P}\). If \([L:K]=1\), the action is trivial. In the case that \([L:K]=2\), exactly two of the roots of \(\mathcal{P}\) are congruent modulo \(\nu\), i.e. \(\mathcal{B}\equiv\mathcal{C}\pmod{\nu}\). Then the generator \(\tau\in\mathrm{Gal}(L/K)\) acts on \(V\) by \({}^{\tau}\sigma_{b}=\sigma_{c}\), where we use the notation from Notation 2.7. Similarly, if \([L:K]=3\), all three roots of \(\mathcal{P}\) are congruent modulo \(\nu\), and the generator of \(\mathrm{Gal}(L/K)\) acts by cyclically permuting the nontrivial elements of order \(2\) in \(V\).
The following lemma determines the \(K\)-models of \(Y(\underline{I})\) up to \(K\)-isomorphism, that is the set of twists \(\mathrm{Twist}(Y_{0}(\underline{I})/K)\) of \(Y_{0}(\underline{I})\).
**Lemma 3.6**.: _Let \(\underline{I}\in K^{4}\) and \(Y_{0}(\underline{I})\), defined as in Proposition 3.4, be non-special. Let \(L\) be the splitting field of the polynomial \(\mathcal{P}\) introduced in Section 3.1. Then_
\[\#\,\mathrm{Twist}(Y_{0}(\underline{I})/K)=\begin{cases}4\text{ if }[L:K]=1,\\ 2\text{ if }[L:K]=2,\\ 1\text{ if }[L:K]=3.\end{cases}\]
_In addition:_
1. _If_ \([L:K]=1\)_, then the Ciani quartics_ \(Y_{0}^{\prime}(\underline{I})/K\) _given by the isomorphisms_ \[\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix},\begin{pmatrix}\pi&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix},\begin{pmatrix}1&0&0\\ 0&\pi&0\\ 0&0&1\end{pmatrix},\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&\pi\end{pmatrix}:\,Y_{0}^{\prime}(\underline{I})\to Y_{0}(\underline{I}),\] _correspond to different elements in_ \(\mathrm{Twist}(Y_{0}(\underline{I})/K)\)_. Here_ \(\pi\in K^{\prime}\) _is an element with_ \(\nu(\pi)=1/2\) _in a quadratic extension_ \(K^{\prime}/K\)_._
2. _If_ \([L:K]=2\)_, then the Ciani quartics_ \(Y^{\prime}_{0}(\underline{I})/K\) _given by the isomorphisms_ \[\begin{pmatrix}1&0&0\\ 0&1&\mathcal{B}\\ 0&1&\mathcal{C}\end{pmatrix},\begin{pmatrix}\pi_{1}&0&0\\ 0&1&\mathcal{B}\\ 0&\zeta_{4}&\zeta_{4}\mathcal{C}\end{pmatrix}:\,Y^{\prime}_{0}(\underline{I })\to Y_{0}(\underline{I}),\] _correspond to different elements in_ \(\operatorname{Twist}(Y_{0}(\underline{I})/K)\)_. Here_ \(\pi_{1}\in K^{\prime\prime}\) _is an element with_ \(\nu(\pi_{1})=1/4\) _in an extension_ \(K^{\prime\prime}/K\) _and_ \(\zeta_{4}\) _is a primitive_ \(4\)_-th root of unity._
Proof.: This is a consequence of [11, Prop. 5.2], but we include the proof for the sake of completeness. In order to compute \(\operatorname{Twist}(Y_{0}(\underline{I})/K)=H^{1}(\operatorname{Gal}( \overline{K}/K),V)\) we just notice that every cocycle splits (in the sense of [11, Def. 2.1]) over a Galois extension with Galois group a subgroup of \(V\rtimes\operatorname{Gal}(L/K)\), see [11, Sec. 4, Step 2]. In our case, this gives a finite number of cyclic extensions. A cocycle is then determined by the image of a generator of the Galois group in \(V\). Two cocycles are equivalent if they are conjugated as in Equation (2) in [11]. This is again a finite number of checks. We compute the numbers given in the statement of the lemma. It is straightforward to check that the isomorphisms \(\phi\) that we give define non-equivalent cocycles in \(H^{1}(\operatorname{Gal}(\overline{K}/K),V)\) by the rule: \(\xi(\sigma)=\phi\circ^{\sigma}\phi^{-1}\) for each \(\sigma\in\operatorname{Gal}(\overline{K}/K)\).
## 4 Stable reduction and conductor exponents
In this section, we collect some results on the stable reduction of Ciani quartics. Let \(K\) be as in the previous section. We write \(k\) for the residue field of \(K\). In the rest of the paper, we will mostly assume that \(K=\mathbb{Q}_{p}^{\operatorname{nr}}\) as this is the case that is most interesting for our purposes.
**Definition 4.1**.: Let \(Y/K\) be a smooth projective and absolutely irreducible plane quartic.
1. The curve \(Y\) has _good reduction_ if there exists a flat and proper \(\mathcal{O}_{K}\)-scheme \(\mathcal{Y}\) with generic fiber \(Y\) such that the special fiber \(\overline{Y}:=Y\otimes_{\mathcal{O}_{K}}k\), which we call the _reduction_ of \(Y\), is smooth.
2. The curve \(Y\) has _good quartic reduction_ if \(Y\) has good reduction and its reduction \(\overline{Y}\) is also a plane quartic. If \(Y\) has good but not good quartic reduction, we say that it has _good hyperelliptic reduction_.
3. We say that \(Y\) has _potentially_ good quartic (resp. hyperelliptic) reduction if \(Y\) has good quartic (resp. hyperelliptic) reduction after replacing \(K\) by a finite extension.
4. Otherwise, we say that \(Y\) has _geometrically bad reduction_.
We note that, if \(Y\) has good quartic reduction over \(K\), then the \(\mathcal{O}_{K}\)-scheme from Definition 4.1 may be defined by a homogeneous polynomial \(F\in\mathcal{O}_{K}[x,y,z]\) of degree \(4\) whose reduction \(\overline{F}\) modulo the uniformising element \(\pi\) of \(K\) is a quartic with \(\Delta(\overline{F})\neq 0\).
**Lemma 4.2**.: _Let \(\underline{I}\in K^{4}\). A Ciani quartic \(Y\) with \([\underline{I}(Y)]=[\underline{I}]\) has potentially good quartic reduction at \(\nu\) if and only if \(\nu(\Delta(\underline{I}_{\nu}))=0\)._
Proof.: The statement follows from [1, Prop. 6].
In Theorem 2 of [1] one finds a characterisation of the type of stable reduction of \(Y_{\underline{I}}\) at \(\nu\) in terms of the invariants in the case that \(Y\) has geometrically bad reduction at \(\nu\). A Ciani quartic with \(\nu(\Delta(\underline{I}_{\nu}))>0\) may still have potentially good reduction at \(\nu\). However, in this case, the reduction of \(Y\) is hyperelliptic.
**Lemma 4.3**.: _Let \(Y\) be a Ciani plane quartic with invariants \(\underline{I}\), which we assume to be normalised, i.e. \(\underline{I}_{\nu}=\underline{I}\). Then \(Y\) has potentially good hyperelliptic reduction at \(\nu\) if and only if_
\[0<3\nu(I_{3}^{\prime\prime})=2\nu(I_{6})\leq 6\nu(I_{3}^{\prime}).\]
Proof.: This is a special case of [1, Theorem 3].
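For illustration, Lemmas 4.2 and 4.3 can be combined into a toy classifier for normalised integer invariants; a minimal sketch, assuming \(p>2\) and \(\Delta(\underline{I})\neq 0\), with ad hoc example tuples.

```python
from math import inf

def nu_p(n, p):
    """p-adic valuation of an integer, with nu(0) = +infinity."""
    if n == 0:
        return inf
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def potential_reduction_type(I, p):
    """Classify a normalised tuple (I3, I3', I3'', I6) of integer invariants at p,
    following Lemma 4.2 and Lemma 4.3 (assumes Delta != 0 and p > 2)."""
    v3, v3p, v3pp, v6 = (nu_p(x, p) for x in I)
    v_delta = v3 + 4 * v3pp + 2 * v6          # nu(2**20 * I3 * I3''**4 * I6**2)
    if v_delta == 0:
        return "potentially good quartic reduction"
    if 0 < 3 * v3pp == 2 * v6 <= 6 * v3p:
        return "potentially good hyperelliptic reduction"
    return "geometrically bad reduction"

print(potential_reduction_type((1, 1, 1, 1), 7))         # potentially good quartic reduction
print(potential_reduction_type((1, 7, 49, 343), 7))      # potentially good hyperelliptic reduction
```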
Let \(Y/K\) be a curve with potentially good reduction. It follows from the Stable Reduction Theorem, proved by Deligne-Mumford ([1]), and the assumption that the residue field \(k\) of \(K\) is algebraically closed, that there is a unique minimal extension \(M/K\) over which \(Y_{M}:=Y\otimes_{K}M\) has good reduction. The uniqueness of \(M\) implies that the extension \(M/K\) is Galois. We write \(G:=\operatorname{Gal}(M/K)\). Since \(g(Y)=3\geq 2\), there exists a unique smooth model \(\mathcal{Y}\) of \(Y_{M}\) over \(\mathcal{O}_{M}\).
The Galois group \(G\) acts faithfully and \(k\)-linearly on the special fiber \(\overline{Y}\) of \(\mathcal{Y}\). We obtain an embedding
\[G\hookrightarrow\operatorname{Aut}_{k}(\overline{Y}).\]
We define the _inertial reduction of \(Y/K\)_ as the curve \(\overline{Z}:=\overline{Y}/G\).
An important arithmetic invariant associated with a curve \(Y\) over a local field is its _conductor exponent_\(f_{p}=f_{p}(Y)\). It is a number that gives information on the reduction type of the Jacobian of \(Y\), and on the Galois representation associated to it. We refrain from giving a precise definition, for which we refer to [1], but instead we give a formula for computing it in our situation. Since we assume that \(p>3\), it follows from Corollary 4.7 that the conditions of the next proposition are satisfied in our situation.
**Proposition 4.4**.: _Let \(p>2\) be a prime and \(K\) be a finite extension of \(\mathbb{Q}_{p}^{\mathrm{nr}}\). Let \(Y/K\) be a smooth projective curve with good reduction over a tamely ramified extension \(M/K\). Then the conductor exponent of \(Y\) satisfies_
\[f_{p}(Y)=2g(Y)-2g(\overline{Z}). \tag{4.1}\]
Proof.: This is a very special case of [1, Theorem 1.1].
The following general result follows from Proposition 4.4.
**Corollary 4.5**.: _Let \(K\) be a field containing \(\mathbb{Q}_{p}^{\mathrm{nr}}\) and let \(Y/K\) be a Ciani quartic. The following are equivalent:_
1. \(Y\) _has good reduction,_
2. \(f_{p}=0\)_._
Proof.: The forward implication follows from Proposition 4.4. For the converse, it follows from the Néron-Ogg-Shafarevich criterion ([10, Theorems 1,2]) that the Jacobian of \(Y\) has good reduction over \(K\). In [1] we proved that \(Y\) has good reduction, as well. Namely, in the list of cases in [1, Theorems 2 and 3], all possibilities for the stable reduction of a Ciani quartic in the case of geometrically bad reduction have at least one loop, i.e. are not of compact type, which implies that the corresponding Jacobian has bad reduction.
Let \((Y,V)/K\) be a Ciani quartic. Let \(L/K\) be a minimal Galois extension such that \(Y\) admits a standard model \(Y_{1}\) over \(L\). Recall from Lemma 2.8 that \(\mathrm{Gal}(L/K)\subset S_{3}\).
**Proposition 4.6**.: _The curve \(Y\) admits stable reduction over an extension \(M/L\) with \([M:L]\mid 4\)._
Proof.: Let \(Y_{1}\) be a standard model of \(Y\) over \(L\). We use the notation from (2.1). Denote by \(J\) the Jacobian of \(Y_{1}\). By Raynaud's Criterion [1, Proposition 4.7], \(Y_{1}\) and \(J\) acquire stable reduction over the extension \(L(J[m])\), where \(m\) is any integer with \(m\geq 3\) and \(p\nmid m\). We choose \(m=4\), and set \(M=L(J[4])\).
**Claim**: The degree of \(L(J[2])/L\) is at most \(2\).
As in Notation 2.7, we write \(E_{i}=Y_{\overline{K}}/\langle\sigma_{i}\rangle\) for \(i\in\{a,b,c\}.\) Then \(g(E_{i})=1\). Set \(\mathrm{A}=E_{a}\times E_{b}\times E_{c}\). We obtain an isogeny \(\iota:\mathrm{A}\to J\) over \(\overline{K}.\) It follows from [1, Section 4.1] that the kernel of \(\iota\) is a subgroup of \(\mathrm{A}[2]\). We show that \([L(\mathrm{A}[2]):L]\leq 2.\) The claim follows, as \(\iota\) may be defined over \(L(\mathrm{A}[2])\). Moreover, \(J[2]\) is the image of \(\mathrm{A}[2]\) under \(\iota\). Hence \(L(J[2])\subset L(\mathrm{A}[2])\).
The \(2\)-torsion points of the \(E_{i}\), and hence of \(\mathrm{A}\), are easily described explicitly. Namely, they are the points on \(E_{i}\) above \(D\setminus D_{i}\), where we use the notation from the proof of Lemma 2.8. In other words, \(L(\mathrm{A}[2])\) is the
field over which all \(6\) branch points of the cover \(\varphi:Y_{1}\to Y_{1}/V=:X_{1}\) are rational. In terms of the coefficients of the standard model (2.1), \(L(\mathrm{A}[2])\) is the field obtained by adjoining the roots of the polynomials
\[p_{a}(T)=T^{2}-2aT+4BC,\quad p_{b}(T)=T^{2}-2bT+4AC,\quad p_{c}(T)=T^{2}-2cT+4AB,\]
see for example [BCK\({}^{+}\)21, Section 2.1]. The claim follows, since \(L(\mathrm{A}[2])/L\) is a totally ramified extension of local fields of characteristic different from \(2\).
It is well known that \(L(J[4])/L(J[2])\) is an abelian extension of exponent \(2\). Therefore, the degree of this extension is at most \(2\), as well, and \([L(J[4]):L]\) divides \(4\). The statement of the lemma follows.
**Corollary 4.7**.: _Every Ciani quartic \(Y/K\) has good reduction over a cyclic extension \(M/K\) of degree dividing either \(8\) or \(12\)._
Proof.: Let \(M/K\) be as in the proof of Proposition 4.6. The statement on the degree of \(M/K\) follows immediately from Lemma 2.8 and Proposition 4.6, since the residue field \(k\) of \(K\) is algebraically closed and of characteristic \(p\geq 5\). Since \(M/K\) is totally ramified and \(p\) does not divide its degree \([M:K]\), it follows that \(M/K\) is Galois, and that its Galois group is cyclic.
## 5 Reduction of Ciani quartics
In this section, we study Ciani quartics with potentially good reduction to characteristic \(p>3\). We only treat the case where \(Y(\underline{I})\) is non-special (Definition 2.2). It is possible to treat the special case analogously, but for the sake of brevity we decided not to include it. We treat the two cases of potentially good quartic and hyperelliptic reduction separately.
Fix a prime \(p>3\) and let \(K=\mathbb{Q}_{p}^{\mathrm{nr}}\). We write \(\nu\) for the valuation on \(K\) with \(\nu(p)=1\). The arguments also work if \(K\) is a finite extension of \(\mathbb{Q}_{p}^{\mathrm{nr}}\), though one has to adapt the statements a bit.
Let \(\underline{I}=(I_{3},I_{3}^{\prime},I_{3}^{\prime\prime},I_{6})\in K^{4}\) be a set of invariants. We assume that the curve \(Y(\underline{I})/\overline{K}\) is smooth and non-special. Moreover, we assume that the invariants \(\underline{I}\) are normalised. As we are only interested in the case of potentially good reduction, this is no restriction by Remark 2.17.
### The case of potentially good quartic reduction
In this section, we assume that \(p>3\) is a prime at which \(Y(\underline{I})\) has potentially good quartic reduction. Recall from Lemma 4.2 that this happens if and only if \(p\nmid\Delta(\underline{I}_{\nu})\). Since \(\underline{I}\) is normalised, it follows that \(\nu(I_{3})=\nu(I_{3}^{\prime\prime})=\nu(I_{6})=0.\) We set \(v:=\nu(I_{3}^{\prime}).\) This is the only coefficient of \(\underline{I}\) whose valuation may be positive in our situation.
In Proposition 3.1 we constructed an explicit field extension \(L=L(\underline{I})/K\) together with a standard model \(Y_{1}(\underline{I})\) over \(L\). The discriminant \(\Delta(Y_{1}(\underline{I}))\)
of the model \(Y_{1}(\underline{I})\) differs from \(\Delta(\underline{I})\) by a factor. In Proposition 3.4 we constructed an explicit model \(Y_{0}(\underline{I})\) over \(K\) of \(Y_{1}(\underline{I})\). The discriminant \(\Delta(Y_{0}(\underline{I}))\) differs from \(\Delta(Y_{1}(\underline{I}))\) by another factor. Therefore, \(Y_{0}(\underline{I})\) does not need to have good reduction over \(K\). In the following propositions, we analyse whether \(Y_{0}(\underline{I})\) or any of its twists have good reduction over \(K\).
**Proposition 5.1**.: _Let \(\underline{I}=(I_{3},I^{\prime}_{3},I^{\prime\prime}_{3},I_{6})\in K^{4}\) be a set of normalised invariants. Assume that \(\nu(\Delta(\underline{I}))=\nu(Q)=0\) and that \(Y(\underline{I})\) is non-special. There exists a standard Ciani model \(Y_{2}/K\) with good reduction at \(p\). This model satisfies \(f_{p}(Y_{2})=0.\) For the other twists \(Y_{2}^{\prime}\) of \(Y_{2}\), one has \(f_{p}(Y_{2}^{\prime})=4.\)_
Proof.: The assumption that \(\nu(Q)=0\) implies that the polynomial \(\mathcal{P}\) from Proposition 3.1 splits in \(K\). In particular, the standard model \(Y_{1}(\underline{I})\) of \(Y(\underline{I})\) from that proposition may be defined over \(L=K\).
If \(P\neq 0\), we are in case (a) of Proposition 3.1, and the discriminant of this standard model is \(\Delta(Y_{1}(\underline{I}))=\Delta(\underline{I})P^{18}\). The assumption that \(Y_{1}(\underline{I})\) has potentially good reduction implies that \(p\nmid\Delta(\underline{I})\). If \(\nu(P)=0\) it follows that \(Y_{1}(\underline{I})\) already has good reduction. We take \(Y_{2}(\underline{I})=Y_{1}(\underline{I})\) in this case. If \(\nu(P)>0\), the assumption that \(\nu(Q)=0\) implies that only one of the roots \(\mathcal{A},\mathcal{B},\mathcal{C}\) of \(\mathcal{P}\) defined in Equation 3.1 has positive valuation. Let us say that it is \(\mathcal{A}\). Moreover, we have from that equation that
\[2\nu(P)=\nu(\mathcal{A})+\nu(\mathcal{B})+\nu(\mathcal{C}). \tag{5.1}\]
So \(\nu(\mathcal{A})\) is even, and dividing \(x\) by \(p^{\nu(\mathcal{A})/4}\) we obtain another standard model \(Y_{2}(\underline{I})\), still defined over \(K\), whose discriminant satisfies
\[\nu(\Delta(Y_{2}))=\nu(\Delta(Y_{1}(\underline{I})))+36(\nu(\mathcal{A}))/4= \nu(\Delta(Y_{1}(\underline{I})))-18\nu(P)=\nu(\Delta(\underline{I})).\]
Hence, the model \(Y_{2}\) has good reduction over \(K\).
If \(P=0\), i.e. we are in case (b) of Proposition 3.1, we argue similarly: we can make \(\mathcal{A}=0\) and then \(\Delta(Y_{1}(\underline{I}))=\Delta(\underline{I})S_{2}^{18}\) with \(S_{2}=\mathcal{B}\mathcal{C}\). Moreover, \(Q=\mathcal{B}^{2}\mathcal{C}^{2}(\mathcal{B}-\mathcal{C})^{2}\). The assumption \(\nu(Q)=0\) implies then that \(Y_{1}(\underline{I})\) already has good reduction. In this case, we may therefore choose \(Y_{2}(\underline{I})=Y_{1}(\underline{I})\).
We do not need to consider case (c) of Proposition 3.1, since \(Y(\underline{I})\) is special in this case.
The claim for the twists follows from Lemma 3.6 (if \([L:K]=1\)) and Proposition 4.4; see the proof of Proposition 5.3 for more details on the computation of the conductor in a more complicated case.
**Remark 5.2**.: Another way of proving that the other twists have positive conductor, i.e. that they are not \(K\)-isomorphic to a model with good reduction, is via the Elsenhans-Stoll minimal model reduction algorithm in [1]. This approach is computationally more expensive and does not provide the value of the conductor of the twists that do not have good reduction.
We now study the case in which \(\nu(Q)>0\).
**Proposition 5.3**.: _We use the notation from Proposition 3.1. Let \(p\nmid\Delta(\underline{I})\) be a prime of potentially good quartic reduction. Assume that \(\nu(Q)>0\)._
1. _The_ \(K\)_-model_ \(Y_{0}(\underline{I})\) _of_ \(Y(\underline{I})\) _given in Proposition_ 3.4 _has good reduction over an extension of_ \(L(\underline{I})\) _of degree at most_ \(2\)_. The conductor is_ \(f_{p}(Y_{0}(\underline{I}))=4\)_._
2. _If_ \(Y(\underline{I})\) _is non-special, all_ \(K\)_-models_ \(Y_{0}^{\prime}\) _of_ \(Y(\underline{I})\) _have_ \(f_{p}(Y_{0}^{\prime})=4\)_._
Proof.: The proof proceeds in the following steps. In step 1 we define a standard model \(Y_{2}(\underline{I})\) over an extension \(L_{8}\) of \(L\) with good reduction. This model is defined by giving an explicit isomorphism \(\psi:\,Y_{1}(\underline{I})\to Y_{2}(\underline{I})\). Composing \(\psi\) with the isomorphism \(\phi\) from Proposition 3.4, we get an isomorphism from \(Y_{0}(\underline{I})\) to \(Y_{2}(\underline{I})\) defined over \(L_{2}\). In step 2 we compute the action of \(\operatorname{Gal}(L_{8}/K)\) on the special fiber \(\overline{Y}\) of \(Y_{2}(\underline{I})\) and compute the conductor exponent \(f_{p}(Y_{0}(\underline{I}))\) via Proposition 4.4. In step 3 we proceed in the same way with the other \(K\)-twists of \(Y_{0}(\underline{I})\) described in Lemma 3.6.
We assume we are in case (a) of Proposition 3.1 and that \([L:K]=2\). The other cases are easier, so we discuss these at the end of the proof. By assumption, \(P\neq 0\). As in the proof of Proposition 5.1, the roots \(\mathcal{A},\mathcal{B},\mathcal{C}\) of the polynomial \(\mathcal{P}\) from Proposition 3.1 satisfy (5.1):
\[2\nu(P)=\nu(\mathcal{A})+\nu(\mathcal{B})+\nu(\mathcal{C}).\]
_Step 1_: Without loss of generality, we may assume \(\mathcal{A}\in K\) and \(\mathcal{B}\), \(\mathcal{C}\) conjugated in \(L\setminus K\). In particular, \(\nu(\mathcal{B})=\nu(\mathcal{C})\). Let \(L_{8}:=K(p^{1/8})/K\) be the ramified extension of degree \(8\). We consider the isomorphism:
\[\psi=\begin{pmatrix}p^{\nu(\mathcal{A})/4}&0&0\\ 0&p^{\nu(\mathcal{B})/4}&0\\ 0&0&p^{\nu(\mathcal{C})/4}\end{pmatrix}:\,Y_{1}(\underline{I})\to Y_{2}( \underline{I}), \tag{5.2}\]
where \(Y_{2}(\underline{I})/L_{8}\) is a model of \(Y_{1}(\underline{I})\) defined by the equation
\[\frac{\mathcal{A}}{p^{\nu(\mathcal{A})}}x_{2}^{4}+\frac{\mathcal{B}}{p^{\nu( \mathcal{B})}}y_{2}^{4}+\frac{\mathcal{C}}{p^{\nu(\mathcal{C})}}z_{2}^{4}+ \frac{P}{p^{(\nu(\mathcal{A})+\nu(\mathcal{B}))/2}}\left(x_{2}^{2}y_{2}^{2}+x_ {2}^{2}z_{2}^{2}\right)+\frac{P}{p^{\nu(\mathcal{B})}}y_{2}^{2}z_{2}^{2}=0. \tag{5.3}\]
It follows from (5.1) that the coefficients of this equation are integral. A direct calculation using Lemma 2.11 and the expression for \(\Delta(Y_{1}(\underline{I}))\) in Proposition 3.1.(a) yields
\[\begin{split}\nu(\Delta(Y_{2}(\underline{I})))&= \nu(\Delta(Y_{1}(\underline{I})))-36(\nu(\mathcal{A})+\nu(\mathcal{B})+\nu( \mathcal{C}))/4\\ &=\nu(\Delta(Y_{1}(\underline{I})))-18\nu(P)=\nu(\Delta( \underline{I})).\end{split} \tag{5.4}\]
It follows that the model \(Y_{2}(\underline{I})\) has good reduction over \(L_{8}\). In fact, the matrix defining \(\psi\) in (5.2) should be considered as element of \(\operatorname{PGL}_{3}(L_{8})\) and can therefore be divided by
\[p^{\min(\nu(\mathcal{A})/4,\nu(\mathcal{B})/4)}.\]
Checking the possibilities for the minimum separately, one sees that \(\psi\) is defined over the subextension \(L_{4}\) with \([L_{4}:K]=4.\) It follows that the model \(Y_{2}(\underline{I})\) has good reduction over \(L_{4}.\)
_Step 2_: To compute the conductor of \(Y_{0}(\underline{I})\) via Proposition 4.4, we need to determine the action of the Galois group \(\operatorname{Gal}(L_{8}/K)\) on the reduction \(\overline{Y}\) of \(Y_{2}(\underline{I})\). We choose a generator \(\sigma\) of \(\operatorname{Gal}(L_{8}/K)\) with \(\sigma(p^{1/8})=\zeta_{8}p^{1/8}\) for a fixed primitive \(8\)-th root of unity \(\zeta_{8}\in K\).
Composing \(\psi\) with the isomorphism \(\phi:Y_{0}(\underline{I})\to Y_{1}(\underline{I})\) from Proposition 3.4, we obtain an isomorphism \(\psi\circ\phi:\,Y_{0}(\underline{I})\to Y_{2}(\underline{I})\) given by
\[\begin{pmatrix}x_{2}\\ y_{2}\\ z_{2}\end{pmatrix}=\begin{pmatrix}p^{\nu(\mathcal{A})/4}&0&0\\ 0&p^{\nu(\mathcal{B})/4}&p^{\nu(\mathcal{B})/4}\mathcal{B}\\ 0&p^{\nu(\mathcal{B})/4}&p^{\nu(\mathcal{B})/4}\mathcal{C}\end{pmatrix} \begin{pmatrix}x_{0}\\ y_{0}\\ z_{0}\end{pmatrix}. \tag{5.5}\]
Recall that \(Y_{0}(\underline{I})\) is defined over \(K\). Using that the functions \(x_{0},y_{0},z_{0}\) are defined over \(K\), we have
\[\begin{split}\sigma^{*}(x_{2},y_{2},z_{2})&=\left(\sigma(p^{ \nu(\mathcal{A})/4})x_{0},\sigma(p^{\nu(\mathcal{B})/4})\left(y_{0}+\sigma( \mathcal{B})z_{0}\right),\sigma(p^{\nu(\mathcal{B})/4})\left(y_{0}+\sigma( \mathcal{C})z_{0}\right)\right)\\ &=(\zeta_{8}^{2\nu(\mathcal{A})}x_{2},\zeta_{8}^{2\nu(\mathcal{B} )}z_{2},\zeta_{8}^{2\nu(\mathcal{B})}y_{2}).\end{split} \tag{5.6}\]
More details on this calculation in a slightly different situation can be found in [3, Section 5]. Note that \(\nu(\mathcal{B})=\nu(\mathcal{C})\in(1/2)\mathbb{Z}\), so we use the integer \(2\nu(\mathcal{B})\) in (5.6).
It follows that the automorphism of the stable reduction \(\overline{Y}\) induced by \(\sigma\) is
\[\sigma(x_{2}:y_{2}:z_{2})=(x_{2}:\alpha z_{2}:\alpha y_{2})\qquad\text{ for some $\alpha\in k$ with $\alpha^{4}=1$.}\]
We compute that \(g(\overline{Y}/\langle\sigma\rangle)=1\), independent of whether the order of \(\sigma\) is two or four. By Proposition 4.4 we find \(f_{p}(Y_{0}(\underline{I}))=4\).
_Step 3:_ We proceed similarly with the quadratic twist \(Y_{0}^{\prime}\) of \(Y_{0}(\underline{I})\) as in Lemma 3.6, and conclude that \(f_{p}(Y_{0}^{\prime})=4\), as well. This proves the proposition in case (a) of Proposition 3.1 if \([L:K]=2\).
We omit the proof for case (b) of Proposition 3.1 and \([L:K]=2\), since it is very similar, except that additionally we have \(\mathcal{A}=0\).
It remains to consider the case \([L:K]=3\). We are automatically in case (a) of Proposition 3.1. The argument is again similar, but in this case \(\sigma\) acts on \(\overline{Y}\) by cyclically permuting the variables \(x_{2},y_{2},z_{2}\). One computes that \(g(\overline{Y}/\langle\sigma\rangle)=1\) in this case, as well. This finishes the proof of the proposition.
**Remark 5.4**.:
1. In the case that \(L/K\) is ramified with ramification index \(e_{p}=2\) in Proposition 5.3, we see that \(\sigma\in\operatorname{Gal}(L/K)\) acts on the reduction \(\overline{Y}\) as an automorphism of order \(2\) or \(4\) that is not contained in the fixed Ciani subgroup. This is consistent with the statement of Lemma 3.5, as \(\nu(Q)>0\) implies that the reduction \(\overline{Y}\) is special, and hence that \(\operatorname{Aut}_{k}(\overline{Y})\) contains at least the dihedral group \(D_{4}\) of order \(8\) as subgroup. In the case that \(L/K\) is ramified with ramification index \(e_{p}=3\), the automorphism group \(\operatorname{Aut}_{k}(\overline{Y})\) contains an element of order \(3\). It follows that \(S_{4}\subset\operatorname{Aut}_{k}(\overline{Y})\) in this case.
2. In the proof of Proposition 5.3, we showed that \(Y_{1}(\underline{I})\) acquires stable reduction over an extension of \(K\) of degree \(4\) by looking at the equation of \(Y_{2}(\underline{I}).\) One can also deduce this from the fact that the action of \(\operatorname{Gal}(L_{8}/K)\simeq C_{8}\) on the reduction \(\overline{Y}\) acts via a group of order at most \(4\).
**Example 5.5**.: We finish this section with a concrete example. We choose \(I_{3}=I_{3}^{\prime\prime}=I_{6}=1,I_{3}^{\prime}=-6\). Using the notation from Proposition 3.1, we find \(P=1\) and \(\mathcal{P}=T^{3}-6T^{2}+8T-1\). Write \(L/\mathbb{Q}\) for the splitting field of \(\mathcal{P}\), and \(\mathcal{A},\mathcal{B},\mathcal{C}\) for its roots. The discriminant of \(\mathcal{P}\) is \(Q=229\), which is prime. Hence, the Galois group \(\operatorname{Gal}(L/\mathbb{Q})\) is \(S_{3}\). The standard model \(Y_{1}/L\) of \(Y(\underline{I})\) from Proposition 3.1 is given by
\[Y_{1}:\;\mathcal{A}x^{4}+\mathcal{B}y^{4}+\mathcal{C}z^{4}+x^{2}y^{2}+y^{2}z^{ 2}+x^{2}z^{2}=0. \tag{5.7}\]
It has Ciani invariants \(\underline{I}(Y_{1})=(I_{3},I_{3}^{\prime},I_{3}^{\prime\prime},I_{6})\in \mathbb{Q}^{4}.\) Since \(Q\neq 0\), the curve is non-special, see Lemma 3.5. The discriminant of this model is \(\Delta(Y_{1})=2^{20}.\)
Proposition 5.1 implies that for all odd primes \(p\neq 229\), the curve \(Y\) is defined over \(\mathbb{Q}_{p}^{\mathrm{nr}}\) and has good reduction. Hence, \(f_{p}(Y_{1})=0\) for these primes. For \(p=229\) Proposition 5.3 implies that all models \(Y_{0}\) of \(Y(\underline{I})\) defined over \(\mathbb{Q}_{p}^{\mathrm{nr}}\) have conductor exponent \(f_{229}(Y_{0})=4\).
Recall that \(229\nmid\Delta(\underline{I})=2^{20}.\) This illustrates that for primes with \(p\nmid\Delta(\underline{I})\), there need not exist a model of \(Y(\underline{I})\) over \(\mathbb{Q}(\underline{I})\) with good reduction at \(p\). In this example, the standard model \(Y_{1}/L\) from (5.7) satisfies \(\Delta(Y_{1})=\Delta(\underline{I})\). Therefore, \(Y_{1}\) has good reduction at any prime of \(L\) above \(p=229\). However, there is no model of \(Y(\underline{I})\) over \(\mathbb{Q}\) with good reduction to characteristic \(p=229\).
**Remark 5.6**.: The invariants in Example 5.5 were chosen to ensure that the discriminant \(\Delta(\underline{I})=2^{20}I_{3}(I_{3}^{\prime\prime})^{4}I_{6}^{2}\) is a power of \(2\). In other words, the curve \(Y(\underline{I})\) has potentially good reduction at all odd primes. However, we have seen that the relatively large prime \(p=229\) divides the conductor of every \(\mathbb{Q}\)-model of \(Y(\underline{I})\). Proposition 5.3 gives a geometric interpretation: the automorphism group of the reduction \(\overline{Y}\) of \(Y(\underline{I})\) to characteristic \(p=229\)
is strictly larger than the Ciani subgroup \(V\). In the example, \(p=229\) is the only odd prime for which this happens.
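For readers who want to reproduce the arithmetic in Example 5.5, the following minimal Python/sympy sketch (our own illustration, not part of the paper) checks that the discriminant of \(\mathcal{P}=T^{3}-6T^{2}+8T-1\) is the prime \(Q=229\) and that \(\Delta(\underline{I})=2^{20}I_{3}(I_{3}^{\prime\prime})^{4}I_{6}^{2}\) is a power of \(2\) for the chosen invariants.

```python
# Sanity check for Example 5.5 (illustrative sketch, not the authors' code).
from sympy import symbols, discriminant, isprime, factorint

T = symbols('T')
P = T**3 - 6*T**2 + 8*T - 1                      # the polynomial \mathcal{P} of Proposition 3.1
Q = discriminant(P, T)
print(Q, isprime(Q))                              # 229 True

# Delta(I) = 2^20 * I_3 * (I_3'')^4 * I_6^2 with I_3 = I_3'' = I_6 = 1 (Remark 5.6)
I3, I3pp, I6 = 1, 1, 1
print(factorint(2**20 * I3 * I3pp**4 * I6**2))    # {2: 20}: only the prime 2 divides Delta(I)
```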
### The case of potentially good hyperelliptic reduction
In this section, we prove results similar to those of the previous section, in the case of potentially good hyperelliptic reduction. As in Section 5.1, we fix a prime \(p>3\) and set \(K=\mathbb{Q}_{p}^{\mathrm{nr}}\). Let \(\underline{I}=(I_{3},I_{3}^{\prime},I_{3}^{\prime\prime},I_{6})\in K^{4}\) be a set of Ciani invariants. We assume that \(Y(\underline{I})\) has potentially good hyperelliptic reduction. We can then assume the invariants to be normalised because of Remark 2.17. Hence, Lemma 4.3 implies that there exists an integer \(e>0\) such that the Ciani invariants \(\underline{I}\) satisfy:
\[\nu(I_{3})=0,\quad\nu(I_{3}^{\prime})\geq e,\quad\nu(I_{3}^{\prime\prime})=2e,\text{ and }\nu(I_{6})=3e. \tag{5.8}\]
Namely, \(e=\nu(I_{6})/\nu(I_{3}^{\prime\prime}).\) We note that \(\nu(P)=0\) and that the discriminant \(Q\) of the polynomial \(\mathcal{P}\) defined in Section 3.1 satisfies \(\nu(Q)\geq 6e\). Let \(L/K\) be the splitting field of the polynomial \(\mathcal{P}\). As in Section 3.1, we write \(\mathcal{A},\mathcal{B},\mathcal{C}\) for the roots of \(\mathcal{P}\).
The fact that \(\nu(P)=0\) implies that we are in the situation of Proposition 3.1(a), so there exists a standard \(L\)-model
\[Y_{1}:\;Ax^{4}+By^{4}+Cz^{4}+ay^{2}z^{2}+bx^{2}z^{2}+cx^{2}y^{2}=0 \tag{5.9}\]
with normalised invariants and \(\underline{I}(Y_{1})=\underline{I}\). Remark 3.3(a) implies that \(\mathcal{A}=Aa^{2}\), \(\mathcal{B}=Bb^{2}\) and \(\mathcal{C}=Cc^{2}\).
Define, with the notation in Subsection 3.1,
\[R=(\mathcal{A}-\mathcal{B})(\mathcal{A}-\mathcal{C})+(\mathcal{ B}-\mathcal{A})(\mathcal{B}-\mathcal{C})+(\mathcal{C}-\mathcal{A})(\mathcal{C}- \mathcal{B})=\\ \mathcal{A}^{2}+\mathcal{B}^{2}+\mathcal{C}^{2}-\mathcal{B} \mathcal{C}-\mathcal{C}\mathcal{A}-\mathcal{A}\mathcal{B}=S_{1}^{2}-3S_{2}. \tag{5.10}\]
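The identity in (5.10) can be verified symbolically; the following short sketch (added for convenience, taking \(S_{1}\) and \(S_{2}\) to be the first and second elementary symmetric functions of the roots, which is what the identity requires) confirms both equalities.

```python
# Symbolic check of the identity (5.10) (illustrative sketch).
from sympy import symbols, simplify

A, B, C = symbols('A B C')
lhs = (A - B)*(A - C) + (B - A)*(B - C) + (C - A)*(C - B)
S1 = A + B + C              # first elementary symmetric function of the roots
S2 = A*B + B*C + C*A        # second elementary symmetric function of the roots
print(simplify(lhs - (A**2 + B**2 + C**2 - B*C - C*A - A*B)))   # 0
print(simplify(lhs - (S1**2 - 3*S2)))                            # 0
```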
The following lemma allows us to determine the degree \([L:K]\) in terms of valuations of some invariants without needing to compute \(L\). This is useful for the case distinction in Theorem 1.1.
**Lemma 5.7**.: _In the setting above, let \(\underline{I}\in K^{4}\) be a normalised set of invariants such that \(Y(\underline{I})\) has potentially good hyperelliptic reduction to characteristic \(p>3\)._
1. _We have that_ \(\nu(\Delta_{a})=\nu(\Delta_{b})=\nu(\Delta_{c})=e\)_._
2. _If_ \([L:K]=1\) _then_ \(\nu(Q)\equiv 0,2,4\pmod{6}\)_. Moreover, if_ \(\nu(Q)\not\equiv 0\) _then_ \(\nu(Q)>3\nu(R)\)_._
3. _If_ \([L:K]=2\) _then_ \(\nu(Q)\equiv 1,3,5\pmod{6}\)_._
4. _If_ \([L:K]=3\) _then_ \(\nu(Q)\equiv 2,4\pmod{6}\) _and_ \(\nu(Q)\leq 3\nu(R)\)_._
Proof.: In order to prove (a), notice that \(\nu(I_{6})=\nu(\Delta_{a}\Delta_{b}\Delta_{c})=3e\), while \(\nu(I^{\prime}_{3})=\nu(A\Delta_{a}+B\Delta_{b}+C\Delta_{c})\geq e\) and \(\nu(I_{3})=\nu(ABC)=0\). Hence the minimum of the valuations of the \(\Delta_{i}\), if attained only once, cannot be smaller than \(e\). Moreover, \(\nu(I)=\nu(AB\Delta_{a}\Delta_{b}+BC\Delta_{b}\Delta_{c}+CA\Delta_{c}\Delta_{a})\geq 2e\) (see 2.7), so if the minimum were smaller than \(e\), it could not be attained twice either.
In order to prove the remaining statements, notice that \(\nu(I_{3})=0\) implies that we can assume \(A=B=C=1\) and hence \(a=2+\pi^{e}a^{\prime}\), \(b=2+\pi^{e}b^{\prime}\) and \(c=2+\pi^{e}c^{\prime}\) with \(\nu(a^{\prime}b^{\prime}c^{\prime})=0\). So, \(\nu(Q)=6e+2(\nu(a^{\prime}-b^{\prime})+\nu(b^{\prime}-c^{\prime})+\nu(c^{\prime}-a^{\prime}))\). If \(L=K\), the last three valuations are integers. If \([L:K]=2\), then two of them are equal and belong to \(\mathbb{Z}_{\geq 0}/2\), and the third one is in \(1/2+\mathbb{Z}_{\geq 0}\). If \([L:K]=3\), all three of them are equal and belong to \(1/3+\mathbb{Z}_{\geq 0}\) or \(2/3+\mathbb{Z}_{\geq 0}\). The claims about the valuation of \(Q\) modulo \(6\) follow.
Assume in the case \([L:K]=1\) and \(\nu(Q)\not\equiv 0\pmod{6}\) that \(b^{\prime}\) and \(c^{\prime}\) are \(p\)-adically closer to each other than to \(a^{\prime}\). Then \(\nu(Q)=6e+6\nu(b^{\prime}-a^{\prime})+2(\nu(b^{\prime}-c^{\prime})-\nu(b^{\prime}-a^{\prime}))\) and \(\nu(R)=2e+2\nu(a^{\prime}-b^{\prime})\). In the case \([L:K]=3\), \(\nu(Q)=6e+6\nu(a^{\prime}-b^{\prime})\) and \(\nu(R)\geq 2e+2\nu(a^{\prime}-b^{\prime})\). So we can distinguish the two cases by looking at the conditions \(\nu(Q)>3\nu(R)\) or \(\nu(Q)\leq 3\nu(R)\).
**Proposition 5.8**.: _Let \(\underline{I}=(I_{3},I^{\prime}_{3},I^{\prime\prime}_{3},I_{6})\in K^{4}\) be a set of normalised Ciani invariants. Assume that \(Y(\underline{I})\) has potentially good hyperelliptic reduction. Let \(L/K\) be the splitting field of the polynomial \(\mathcal{P}\) defined in Section 3.1._
1. _Assume_ \([L:K]=1\) _or_ \(3\)_._
   1. _If_ \(e\) _is even,_ \(Y_{1}(\underline{I})\) _has good hyperelliptic reduction over_ \(L\)_._
   2. _If_ \(e\) _is odd,_ \(Y_{1}(\underline{I})\) _acquires good reduction over a quadratic extension_ \(L^{\prime}/L\)_._
2. _If_ \([L:K]=2\) _then_ \(Y_{1}(\underline{I})\) _has good reduction over_ \(L\)_._
Proof.: Because of Lemma 5.7(a), and as we did in its proof, we can assume \(A=B=C=1\) and work with the model:
\[Y_{1}:\;(x^{2}+y^{2}+z^{2})^{2}+\pi^{e}(a^{\prime}y^{2}z^{2}+b^{\prime}z^{2}x^ {2}+c^{\prime}x^{2}y^{2})=0. \tag{5.11}\]
Set \(H(x,y,z)=x^{2}+y^{2}+z^{2}\) and \(G(x,y,z)=a^{\prime}y^{2}z^{2}+b^{\prime}z^{2}x^{2}+c^{\prime}x^{2}y^{2}\). We are in the setting of [16, Theorem 1.4]. Let \(\pi^{e}\in K\) be an element of valuation \(e\). By [16, Proposition 1.2], a model \(\mathcal{Y}\) defined over the ring of integers of the extension \(M=L(\pi^{e/2})\) of \(L\) is given by
\[\left\{\begin{array}{c}t^{2}+G=0,\\ \pi^{e/2}t-H=0.\end{array}\right. \tag{5.12}\]
From [17, Lemma 14] it follows that the special fiber of \(\mathcal{Y}\) is smooth.
In the case that \(e\) is even, we have that \(M=L\), and it follows that \(Y_{1}\) already has good reduction over \(L\). In the case that \([L:K]=2\), we have that \(\pi^{e/2}\in L\) and hence that \(M=L\), regardless of whether \(e\) is odd or even.
**Proposition 5.9**.: _Let \(\underline{I}\) be a normalised set of invariants with \(\Delta(\underline{I})\neq 0\), and assume that \(Y(\underline{I})\) has good hyperelliptic reduction. Let \(e\) be as introduced in Equation (5.8)._
1. _There exists a_ \(K\)_-model_ \(Y_{0}\) _of_ \(Y(\underline{I})\) _with_
\[f_{p}(Y_{0})=\begin{cases}0&\text{if }[L:K]=1\text{ and }e\text{ is even},\\ 6&\text{if }[L:K]=1\text{ and }e\text{ is odd},\\ 4&\text{if }[L:K]=2\text{ and }e\text{ is even},\\ 2&\text{if }[L:K]=2\text{ and }e\text{ is odd},\\ 4&\text{if }[L:K]=3\text{ and }e\text{ is even},\\ 6&\text{if }[L:K]=3\text{ and }e\text{ is odd}.\end{cases}\]
2. _If the curve is non-special, then any_ \(K\)_-model_ \(Y_{0}\) _of_ \(Y(\underline{I})\) _has the same conductor exponent._
Proof.: The proof proceeds along the same lines as the proof of Proposition 5.3. We start by computing an isomorphism between a \(K\)-model and a model with good reduction over a field extension. This allows us to compute the action of the inertia, and we can then apply Proposition 4.4 to compute the conductor.
As in the proof of Proposition 5.8 we can assume \(A=B=C=1\) and we work with the model
\[\mathcal{Y}:\;\left\{\begin{array}{c}t^{2}+a^{\prime}y^{2}z^{2}+b^{\prime}z ^{2}x^{2}+c^{\prime}x^{2}y^{2}=0\\ \pi^{e/2}t-(x^{2}+y^{2}+z^{2})=0,\end{array}\right.\]
defined over \(M=L(\pi^{e/2})\) and that has good hyperelliptic reduction.
The isomorphism \(\phi\), defined in Proposition 3.4(a) for each case of \([L:K]\), applies to the variables \(x,y,z\) of \(\mathcal{Y}\) and it provides a model \(\mathcal{Y}_{0}\) defined over \(K\) with variables \((x_{0},y_{0},z_{0})=\phi^{-1}(x,y,z)\) and \(t_{0}=\pi^{e/2}t\).
For each case depending on the degree \([L:K]\) and the parity of \(e\), we compute the action of the inertia on \(\mathcal{Y}\), and we use Proposition 4.4 to compute the conductor exponent, as follows.
For \([L:K]=1\), the inertia is generated by \((\overline{x},\overline{y},\overline{z},\overline{t})\mapsto(\overline{x}, \overline{y},\overline{z},\overline{t})\) or \((\overline{x},\overline{y},\overline{z},\overline{t})\mapsto(\overline{x}, \overline{y},\overline{z},-\overline{t})\) depending on \(e\) being even or odd, hence the conductor is \(2\cdot 3-2\cdot 3=0\) or \(2\cdot 3-2\cdot 0=6\).
For \([L:K]=2\), assuming \(a^{\prime}\) is the one element in \(K\), the inertia is generated by \(\sigma:(\overline{x},\overline{y},\overline{z},\overline{t})\mapsto(\overline {x},\overline{z},\overline{y},\pm\overline{t})\), and the conductor is then \(2\cdot 3-2\cdot 1=4\) or \(2\cdot 3-2\cdot 2=2\). We notice that the automorphism \(\sigma\) has no fixed points if and only if \(\sigma^{*}t=-t\), and this happens if and only if \(e\) is
odd. If \(e\) is even, the automorphism \(\sigma\) of \(\overline{Y}\) has \(4\) fixed points. This can be checked using the classification of automorphism groups of hyperelliptic curves in [12, Section 3.1].
Finally, for \([L:K]=3\), the inertia is generated by \((\overline{x},\overline{y},\overline{z},\overline{t})\mapsto(\overline{y}, \overline{z},\overline{x},\pm\overline{t})\) and the conductor is then \(2\cdot 3-2\cdot 1=4\) or \(2\cdot 3-2\cdot 0=6\), depending on whether \(e\) is even or odd.
We repeat the computations for each of the twists of \(\mathcal{Y}_{0}\) described in Lemma 3.6.
**Remark 5.10**.: Again, as in the case of potentially good quartic reduction (Remark 5.4), when \([L:K]>1\), we have that the special fiber, this time a hyperelliptic curve, has automorphisms that are not contained in the group generated by the fixed Ciani subgroup and the hyperelliptic involution.
## 6 Comparison between the conductor and the discriminant
Let \((K,\nu)\) be a complete local field of characteristic zero with valuation \(\nu\), whose residue field is an algebraically closed field \(k\) of odd characteristic \(p>0\). If \(E/K\) is an elliptic curve, Ogg's formula [1] implies that
\[f_{p}(E)\leq\nu(\Delta(E)), \tag{6.1}\]
where \(\Delta(E)\) is the discriminant. It is natural to ask whether this inequality also holds for plane curves of arbitrary degree. If the inequality holds, a list of all curves with bounded discriminant as in [16] also contains all curves whose conductor is bounded (by a certain slightly different bound). This inequality is discussed for example in [1, Section 5] for Picard curves and in [11] for hyperelliptic curves.
Our results prove that (6.1) holds in a rather special case. The following is a corollary of Theorem 1.1.
**Corollary 6.1**.: _Let \(K=\mathbb{Q}_{p}^{\mathrm{nr}}\) with \(p>3\), and assume that \(Y_{0}/K\) is a non-special Ciani curve with potentially good reduction at \(p\). Then_
\[f_{p}(Y_{0})\leq\nu(\Delta(Y_{0})).\]
We note that every curve \(Y_{0}\) occurs in Lemma 3.6 as one of the \(K\)-twists for some tuple \([\underline{I}]\in\mathbb{P}_{1,1,1,2}^{3}(K)\) of Ciani invariants. Moreover, since we assume that \(Y_{0}\) is non-special, the conductor exponent \(f_{p}(Y_{0})\) only depends on the \(\overline{K}\)-isomorphism class and the field \(K\). Replacing \(K\) by a larger field decreases the conductor exponent in general. For this reason, we mostly assumed that \(K=\mathbb{Q}_{p}^{\mathrm{nr}}\) in this paper.
Proof.: Let \(\underline{I}\in K^{4}\) be a normalised set of Ciani invariants such that \(Y_{0}\otimes_{K}\overline{K}\) is isomorphic to \(Y(\underline{I})\). Since we assume that \(Y_{0}\) has potentially good reduction, we can normalise the invariants over \(K\), see Remark 2.17.
Let \(M/K\) be the extension over which \(Y_{0}\) acquires good reduction, and let \(Y_{2}(\underline{I})/M\) be the model with good reduction. The extension \(M/K\) is at most tamely ramified by Corollary 4.7. Therefore, Proposition 4.4 implies that \(f_{p}(Y_{0})\leq 6\).
We only consider the case that \(Y_{0}\) has potentially good quartic reduction, but not good reduction over \(K\). Moreover, we assume that we are in case (a) of Proposition 3.1. The case (b) and the case of potentially good hyperelliptic reduction are very similar. Since \(Y_{2}/M\) has good quartic reduction, the discriminant \(\Delta(Y_{2}(\underline{I}))\) has valuation \(0\), see (5.4). The valuation of the determinant of the isomorphism \(\psi\circ\phi:Y_{0}(\underline{I})\otimes_{K}M\to Y_{2}(\underline{I})\) is positive, since we assume that \(M\neq K\). Moreover, \([M:K]\nu(\det(\psi\circ\phi))\) is an integer. Using Lemma 2.11, we conclude that
\[\nu(\Delta(Y_{0}(\underline{I})))\geq\frac{36}{[M:K]}\geq 6.\]
The statement of the corollary in this case follows.
In [1], we find an upper bound on the conductor exponent only in terms of the genus and the ramification index. In our more special setup, we improve on the upper bound of Brumer-Kramer only for \(p=5,7\). The reason is that a Ciani curve acquires stable reduction over a tame extension if \(p>3\). For arbitrary curves of genus \(3\) this only holds if \(p>7\).
|
2302.10607
|
Differentiable Multi-Target Causal Bayesian Experimental Design
|
We introduce a gradient-based approach for the problem of Bayesian optimal
experimental design to learn causal models in a batch setting -- a critical
component for causal discovery from finite data where interventions can be
costly or risky. Existing methods rely on greedy approximations to construct a
batch of experiments while using black-box methods to optimize over a single
target-state pair to intervene with. In this work, we completely dispose of the
black-box optimization techniques and greedy heuristics and instead propose a
conceptually simple end-to-end gradient-based optimization procedure to acquire
a set of optimal intervention target-state pairs. Such a procedure enables
parameterization of the design space to efficiently optimize over a batch of
multi-target-state interventions, a setting which has hitherto not been
explored due to its complexity. We demonstrate that our proposed method
outperforms baselines and existing acquisition strategies in both single-target
and multi-target settings across a number of synthetic datasets.
|
Yashas Annadani, Panagiotis Tigas, Desi R. Ivanova, Andrew Jesson, Yarin Gal, Adam Foster, Stefan Bauer
|
2023-02-21T11:32:59Z
|
http://arxiv.org/abs/2302.10607v2
|
# Differentiable Multi-Target Causal Bayesian Experimental Design
###### Abstract
We introduce a gradient-based approach for the problem of Bayesian optimal experimental design to learn causal models in a batch setting -- a critical component for causal discovery from finite data where interventions can be costly or risky. Existing methods rely on greedy approximations to construct a batch of experiments while using black-box methods to optimize over a _single target-state_ pair to intervene with. In this work, we completely dispose of the black-box optimization techniques and greedy heuristics and instead propose a conceptually simple end-to-end gradient-based optimization procedure to acquire a set of optimal intervention target-state pairs. Such a procedure enables parameterization of the design space to efficiently optimize over a batch of _multi-target-state_ interventions, a setting which has hitherto not been explored due to its complexity. We demonstrate that our proposed method outperforms baselines and existing acquisition strategies in both single-target and multi-target settings across a number of synthetic datasets.
Additionally, in causal discovery, one is interested not only in identifying the variable (target) to intervene _on_ but also the state to set the intervention _to_, resulting in a design space which is a product space of discrete targets and continuous states, making experimental design even more challenging. Tigas et al. (2022) proposed to use Bayesian Optimization (BO) to optimize over the continuous state-space of the interventions and a soft-top-\(k\) heuristic to select a batch.
In this work, we propose a method for estimating and optimizing the BOED objective in a differentiable end-to-end manner, alleviating the inefficiencies introduced both by the batch-selection heuristics and by the black-box optimization over the intervening states. Specifically, we introduce estimators of mutual information based on nested estimation (Ryan, 2003; Myung et al., 2013; Huan and Marzouk, 2014; Foster et al., 2019) and importance sampling, and extend them to the problem of causal discovery, where the optimization is over both discrete nodes and continuous states. We cast the problem of batch experiment selection as policy optimization, where the policy uses either the Gumbel-Softmax or the relaxed Bernoulli distribution (Jang et al., 2016; Maddison et al., 2016) for the single-target and multi-target settings, respectively. When combined with the straight-through gradient estimator (Bengio et al., 2013) to optimize over the targets and gradient ascent over the corresponding states, we can explore the space of optimal designs efficiently with powerful optimizers (Kingma and Ba, 2014). Our proposed method requires very few assumptions about the causal model and can explore a wide range of design settings compared to prior work (see Table 1), thus opening up possibilities of experimental design for causal discovery in a broader range of applications.
## 2 Related Work
Differentiable Bayesian Optimal Experimental Design.Huan and Marzouk (2014); Foster et al. (2019, 2020); Kleinegesse and Gutmann (2020, 2021) developed a unified framework for estimating Expected Information Gain and optimizing the designs with gradient-based methods. More recently, Ivanova et al. (2022) applied the Gumbel-Softmax relaxation within gradient-based BOED for contextual optimization. In Ivanova et al. (2021); Foster et al. (2021), the authors introduced a policy-based method for performing adaptive experimentation. More recently, work like Blau et al. (2022); Lim et al. (2022) used Reinforcement Learning to train policies for adaptive experimental design.
Experimental Design for Causal Discovery.One of the earliest works of experimental design for causal discovery in a BOED setting was proposed by (Murphy, 2001) and (Tong and Koller, 2001) in the case of discrete variables for single target acquisition. Since then, a number of works have attempted to address this problem for continuous variables in both the BOED framework (Agrawal et al., 2019; von Kugelgen et al., 2019; Toth et al., 2022; Cho et al., 2016) and other frameworks (Kocaoglu et al., 2017; Gamella and Heinze-Deml, 2020; Eberhardt et al., 2012; Lindgren et al., 2018; Mokhtarian et al., 2022; Ghassami et al., 2018; Olko et al., 2022; Scherrer et al., 2021). In contrast to the setting studied in this paper, of particular note, are the approaches for experimental design for causal discovery in a non-BOED setting in the presence of cycles (Mokhtarian et al., 2022) and latent variables (Kocaoglu et al., 2017).
Closer to our BOED setting are the approaches of (Tigas et al., 2022) and (Sussex et al., 2021). Specifically, in (Tigas et al., 2022), the authors introduce a method for selecting a single target-state pair with stochastic batch acquisition, while (Sussex et al., 2021) introduce a method for selecting a batch of multi-target experiments with a greedy strategy, based on a gradient-based approximation to mutual information, without selecting the intervention state. Our presented method, in contrast, can acquire a batch of multi-target-state pairs.
Bayesian Causal Discovery.While our work is not directly concerned with Bayesian causal discovery as such, our design procedure method benefits from using the approximate posteriors for causal models to estimate mutual information and design interventions (Friedman et al., 2013; Annadani et al., 2021; Lorch et al., 2021; Cundy et al., 2021; Deleu et al., 2022; Nishikawa-Toomey et al., 2022).
\begin{table}
\begin{tabular}{l|c c c c|c|c} \hline \hline \multicolumn{6}{c}{**Design Space Assumptions**} \\ \hline & Target Acquisition & State Acquisition & Target Acquisition & State Acquisition & Batch \\ & (Single Target) & (Single Target) & (Multi-target) & (Multi-target) & Acquisition \\ \hline Murphy (2001) & ✓ & & & & \\ Tong and Koller (2001) & ✓ & & & & \\ Cho et al. (2016) & ✓ & & & & \\ Agrawal et al. (2019) & ✓ & & & & ✓ \\ Toth et al. (2022) & ✓ & ✓ & & & \\ Tigas et al. (2022) & ✓ & ✓ & & & ✓ \\ Sussex et al. (2021) & ✓ & & ✓ & & ✓ \\
**Ours** & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of different BOED for Causal Discovery methods based on their design space assumptions.
## 3 Background
### Causality
Notation.Let \(\mathbf{V}=\{1,\ldots,d\}\) be the vertex set of any Directed Acyclic Graph (DAG) \(\mathbf{g}=(\mathbf{V},E)\) and \(\mathbf{X}_{\mathbf{V}}=\{X_{1},\ldots,X_{d}\}\subseteq\mathcal{X}\) be the random variables of interest indexed by \(\mathbf{V}\).
Structural Causal Model.To deal with questions related to modelling causal relations between variables of interest, we employ the framework of Structural Causal Models (SCM) (Peters et al., 2017). In many fields of empirical sciences like network inference in single cell genomics (Greenfield et al., 2010), inferring protein-signalling networks (Sachs et al., 2005) and medicine (Shen et al., 2020), SCMs provide a framework to model the effects of interventions (Pearl, 2009)- experiments which perturb the state of a variable to a desired state, thereby allowing to study the mechanisms which affect the downstream variables (for example, CRISPR gene knockouts (Meinshausen et al., 2016) in genomics). Under this framework, each variable \(X_{i}\) has an associated _structural equation_, and is assigned a value which is a deterministic function of its direct causes \(X_{\text{pa}(i)}\) as well as an exogenous noise variable \(\epsilon_{i}\) with a distribution \(P_{\epsilon_{i}}\):
\[X_{i}:=f_{i}(X_{\text{pa}(i)},\epsilon_{i})\ \ \forall i\in\mathbf{V}\]
\(f_{i}\)'s are mechanisms that relate how the direct causes affect the variable \(X_{i}\). If the structural assignments are assumed to be acyclic, these equations induce a DAG \(\mathbf{g}=(\mathbf{V},E)\) whose vertices correspond to the variables and edges indicate direct causes. An intervention on any variable \(X_{i}\) corresponds to changing the structural equation of that variable to the desired state (value), \(X_{i}\coloneqq s_{i}\), where \(s_{i}\in\mathcal{X}_{i}\). It is denoted by the do-operator (Pearl, 2009) as \(\operatorname{do}(X_{i}=s_{i})\).
In this work, we assume that the SCM is causally sufficient, i.e. all the variables are measurable, and the noise variables are mutually independent. Though the mechanisms \(f\) can be nonparametric in the general case, we assume that there exists a parametric approximation to these mechanisms with parameters \(\boldsymbol{\gamma}\in\Gamma\). In the case of linear SCMs, \(\boldsymbol{\gamma}\) corresponds to the weights of the edges in \(E\). We further require that the functions \(f\) are differentiable with respect to their parameters. Many classes of SCMs fall under this category, including the most commonly studied one - the Gaussian additive noise models (ANM)1:
Footnote 1: Note that differentiability of \(f\) is the only assumption we require with respect to an SCM. We do not require that the noise is additive. For clarity of exposition, we restrict our focus to an ANM as they are the most commonly studied class of SCMs.
\[X_{i}\coloneqq f_{i}(X_{\text{pa}(i)};\boldsymbol{\gamma}_{i})+\epsilon_{i}, \ \ \epsilon_{i}\sim\mathcal{N}(0,\sigma_{i}^{2})\]
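To make the generative process concrete, the following minimal sketch performs ancestral sampling from a linear Gaussian ANM and shows how the do-operator overwrites a structural assignment. The graph, weights, and helper names are illustrative placeholders of ours, and nodes are assumed to be indexed in topological order; this is not the paper's implementation.

```python
import numpy as np

def sample_linear_anm(W, noise_std, n, intervention=None, seed=0):
    """Ancestral sampling from a linear Gaussian ANM X_i := sum_j W[j, i] X_j + eps_i.

    Assumes nodes 0..d-1 are already in topological order (W strictly upper triangular).
    An intervention do(X_i = s) replaces the structural assignment of node i by the constant s.
    """
    rng = np.random.default_rng(seed)
    d = W.shape[0]
    X = np.zeros((n, d))
    for i in range(d):
        if intervention is not None and i in intervention:
            X[:, i] = intervention[i]                                  # do(X_i = s_i)
        else:
            X[:, i] = X @ W[:, i] + rng.normal(0.0, noise_std[i], n)   # f_i(X_pa(i)) + eps_i
    return X

# Toy chain graph 0 -> 1 -> 2 with arbitrarily chosen weights.
W = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, -1.5],
              [0.0, 0.0, 0.0]])
observational = sample_linear_anm(W, noise_std=[1.0, 1.0, 1.0], n=5)
interventional = sample_linear_anm(W, noise_std=[1.0, 1.0, 1.0], n=5, intervention={1: 3.0})
```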
Bayesian Causal Discovery.If the SCM for a given set of variables \(\mathbf{X}_{\mathbf{V}}\) is unknown, it has to be estimated from a combination of observational data (data obtained in an unperturbed state of a system) and experimental data under an intervention. This problem is called causal induction or causal discovery (Spirtes et al., 2000). This amounts to learning the parameters of the unknown SCM given by DAG \(\mathbf{g}\), parameters of mechanisms, \(\boldsymbol{\gamma}=\left[\gamma_{1},\ldots,\gamma_{d}\right]\), and variances, \(\sigma^{2}=\left[\sigma_{1}^{2},\ldots,\sigma_{d}^{2}\right]\). For notational brevity, henceforth we denote \(\boldsymbol{\phi}=(\boldsymbol{\gamma},\sigma^{2})\) and all the parameters of interest with \(\boldsymbol{\theta}=(\mathbf{g},\boldsymbol{\phi})\). In Bayesian causal discovery (Heckerman et al., 1997), the parameters of SCM are treated as random variables whose beliefs are updated according to the Bayes rule. A Bayesian method for causal discovery is preferable to model epistemic uncertainty about the model due to finite data as well as characterize equivalence classes of SCM like Markov Equivalence Class (MEC) in the case of non-identifiability (Peters et al., 2012). Interventions improve identifiability, but they have to be planned carefully. After acquiring interventional data, Bayesian methods update the posterior distribution to reduce the uncertainty of the SCM.
### Bayesian Optimal Experimental Design
_Bayesian Optimal Experimental Design_ (BOED) (Lindley, 1956; Chaloner and Verdinelli, 1995) is an information theoretic approach to the problem of selecting the optimal experiment to estimate any parameter \(\boldsymbol{\theta}\). For BOED, the _utility_ of the experiment \(\xi\) is the mutual information (MI) between the observation \(\mathbf{y}\) and \(\boldsymbol{\theta}\):
\[\operatorname{U_{\text{BOED}}}(\xi) \triangleq\mathcal{I}(\mathbf{Y};\boldsymbol{\Theta}\mid\xi)\] \[=\operatorname*{\mathbb{E}}_{p(\boldsymbol{\theta})p(\mathbf{y} \mid\boldsymbol{\theta},\xi)}[\log p(\mathbf{y}\mid\boldsymbol{\theta},\xi)- \log p(\mathbf{y}\mid\xi)]\]
This objective is also known as the _Expected Information Gain_ (EIG). The goal of BOED is to select the experiment that maximizes this objective \(\xi^{*}=\arg\max_{\xi}\operatorname{U_{\text{BOED}}}(\xi)\). Unfortunately, evaluating and optimizing this objective is challenging because of the nested expectations (Rainforth et al., 2018) and several estimators have been introduced (Foster et al., 2019; Kleinegesse and Gutmann, 2019), which can be combined with various optimization methods to select the designs (Foster et al., 2020; Ivanova et al., 2021; Foster et al., 2021; Blau et al., 2022).
A common setting, called _static_, _fixed_ or _batch_ design, is to optimize \(B\) designs \(\{\xi_{1},\ldots,\xi_{B}\}\) at the same time. The designs are then executed, and the experimental outcomes are collected for a Bayesian update of the model parameters.
### Causal Bayesian Experimental Design
_Causal Bayesian Experimental Design_ is concerned with designing the most informative experiments to identify the true SCM so that the number of experiments required is as
few as possible. An experiment in causal discovery corresponds to picking the intervention targets \(I\in\mathcal{P}(\mathbf{V})\) and the corresponding states \(S^{I}\in\underset{i\in I}{\cup}\mathcal{X}_{i}\) to set those targets to.
A key component of such methods is computing a posterior over the parameters of the SCM. However, computing the posterior is a difficult task since the number of DAGs grows exponentially in the number of variables. Nevertheless, a plethora of methods exist (Friedman et al., 2013; Annadani et al., 2021; Lorch et al., 2021; Cundy et al., 2021) which can be used with our approach.
Having access to such posterior models, one can estimate the EIG objective. One difficulty that still remains though is that optimizing the EIG objective over the experiments is a mixed discrete and continuous optimization problem, for which previous work has proposed to find the optimal value per candidate target via the use of black-box methods like _Bayesian Optimization_ (BO) (Tigas et al., 2022). Additionally, for the construction of the batch of experimental designs, a greedy approximation is used to incrementally select experiments, a method that is \(1-\frac{1}{e}\)-approximate to the optimal solution (Krause and Golovin, 2014).
## 4 Differentiable Causal Bayesian Experimental Design
Let \(\mathbf{\Theta}\) be a random variable that models the uncertainty in the parameters of the true SCM, of which \(\mathbf{\theta}\coloneqq(\mathbf{g},\mathbf{\phi})\) is a realization. An experiment corresponds to an intervention and is denoted by \(\xi\coloneqq\{(I,S^{I})\}\coloneqq\mathrm{do}(\mathbf{X}_{I}=S^{I})\), where \(I\in\mathcal{P}(\mathbf{V})\) is a set of target indices in the multi-target setting, and \(S^{I}\) are the corresponding states of those targets under intervention. The outcome of the experiment is denoted by \(\mathbf{y}\sim P\left(\mathrm{X}_{1}=\mathrm{x}_{1},\ldots,\mathrm{X}_{d}=\mathrm{x}_{d}\mid\mathrm{do}\left(\mathbf{X}_{I}=S^{I}\right)\right)=p(\mathbf{y}\mid\xi)\). Here, \(\mathbf{y}\) is an instance of the random variable \(\mathbf{Y}\subseteq\mathcal{X}\) distributed according to the interventional distribution2. Due to causal sufficiency, the likelihood of data for a given \(\mathbf{\theta}\) satisfies the causal Markov condition:
Footnote 2: Note that when \(I=\varnothing\), it corresponds to an observational/ non-experimental setting. In this case, \(\mathbf{Y}=\mathbf{X}_{\mathbf{V}}\).
\[p(\mathbf{y}\mid\mathbf{\theta},\xi)=\prod_{j\in\mathbf{V}\setminus I}p\left(x_{j}\mid\mathbf{\phi}_{j},\mathbf{x}_{\mathrm{pa}_{\mathbf{g}}(j)},\mathrm{do}\left(\mathbf{X}_{I}=S^{I}\right)\right) \tag{1}\]
Along with a prior \(p(\mathbf{\theta})\), the above equation defines a generative model of the data.
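For a linear Gaussian ANM, the factorised interventional likelihood of Eq. (1) can be sketched as follows; the conditionals of intervened nodes simply drop out of the product. The weight-matrix convention and function name are our own illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def log_lik_interventional(Y, W, noise_std, targets=()):
    """log p(y | theta, xi) for a linear Gaussian ANM as in Eq. (1):
    a product of Gaussian conditionals over the non-intervened nodes only.

    Y: (n, d) outcomes, W[j, i]: weight of edge j -> i, targets: intervened node indices.
    """
    d = W.shape[0]
    total = 0.0
    for i in range(d):
        if i in targets:
            continue                               # do(X_i = s_i): no likelihood term for node i
        mean_i = Y @ W[:, i]                       # f_i(x_pa(i); gamma_i)
        total += norm.logpdf(Y[:, i], loc=mean_i, scale=noise_std[i]).sum()
    return total
```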
Design setting.As in prior work (Tigas et al., 2022; Sussex et al., 2021), we are interested in the setting of batch design where we design \(B\) experiments at once before collecting experimental data. In other words, we seek a multiset of intervention targets and corresponding states which are jointly maximally informative about the parameters. We denote this multiset as \(\xi_{1:B}\coloneqq(I_{1:B},S^{I}_{1:B})\). After executing a batch of experiments and collecting experimental outcomes, an experimenter might wish to design a new batch of experiments based on collected data (as summarized by the posterior distribution). Let \(h_{t}\) denote experimental history \((\xi^{1},\mathbf{y}^{1}),\ldots,(\xi^{t},\mathbf{y}^{t})\) after \(t\) batches of acquisition. The BOED objective for this batch setting at any point \(t\) is given by the joint mutual information:
\[\mathcal{I}(\mathbf{Y}^{t}_{1:B};\mathbf{\Theta}\mid\xi^{t}_{1:B},h_{t-1})=\underset{p(\mathbf{\theta}|h_{t-1})\,p(\mathbf{y}^{t}_{1:B}\mid\mathbf{\theta},\xi^{t}_{1:B})}{\mathbb{E}}\left[\log\frac{p(\mathbf{y}^{t}_{1:B}\mid\mathbf{\theta},\xi^{t}_{1:B})}{p(\mathbf{y}^{t}_{1:B}\mid\xi^{t}_{1:B},h_{t-1})}\right] \tag{2}\]
where \(\mathbf{Y}^{t}_{1:B}\) are the random variables corresponding to experimental outcomes for iteration \(t\), \(\mathbf{y}^{t}_{1:B}\) are the instances of these random variables and \(\xi^{t}_{1:B}\) is the corresponding multiset of experimental designs. We drop the superscript \(t\) from these variables for simplicity of exposition. Ideally, we wish to maximize the above objective by obtaining the gradients \(\nabla_{\xi_{1:B}}\)I and performing gradient ascent. However, the above objective is doubly intractable (Rainforth et al., 2018) and approximations are required. This usually leads to a two-stage procedure where the above objective is first estimated with respect to an inference network and then maximized with respect to designs (Foster et al., 2019), which can be typically inefficient (Foster et al., 2020).
### Estimators of the Joint Mutual Information Nested Monte Carlo
Following (Huan and Marzouk, 2014; Foster et al., 2020, 2021), we consider an estimator that allows for approximating the EIG objective while _simultaneously_ optimizing for the experiment \(\xi\) that maximizes the objective via gradient-based methods. This estimator, called Nested Monte Carlo (NMC), is based on contrastive estimation of the experimental likelihood and has been extensively used in Bayesian experimental design (Ryan, 2003; Myung et al., 2013). More precisely, assuming some past observational and interventional data \(h_{t-1}=\{(\xi^{1},\mathbf{y}^{1}),\ldots,(\xi^{t-1},\mathbf{y}^{t-1})\}\), for every parameter sample from the posterior distribution \(\mathbf{\theta}_{0}\sim p(\mathbf{\theta}\mid h_{t-1})\), a set of contrastive samples \(\mathbf{\theta}_{1:L}\sim p(\mathbf{\theta}\mid h_{t-1})\) are considered to obtain a unified objective:
\[\mathcal{U}^{t}_{\text{NMC}}(\xi_{1:B})=\underset{p(\mathbf{\theta}_{0:L}|h_{t-1})\,p(\mathbf{y}_{1:B}\mid\mathbf{\theta}_{0},\xi_{1:B})}{\mathbb{E}}\left[\log\frac{p(\mathbf{y}_{1:B}\mid\xi_{1:B},\mathbf{\theta}_{0})}{\frac{1}{L}\sum_{\ell=1}^{L}p(\mathbf{y}_{1:B}\mid\xi_{1:B},\mathbf{\theta}_{\ell})}\right] \tag{3}\]
This estimator converges to the true mutual information as \(L\rightarrow\infty\)(Rainforth et al., 2018). If the design space is continuous, the optimal _batch_ of experiment \(\xi^{*}_{1:B}\) can be found by _directly_ maximizing the NMC objective (\(\xi^{*}_{1:B}\leftarrow\arg\max_{\xi_{1:B}}\mathcal{U}^{t}_{\text{NMC}}(\xi_{1:B})\)) with gradient-based techniques (Huan and Marzouk, 2014).
The above objective requires estimating the posterior distribution \(p(\mathbf{\theta}\mid h_{t-1})\) after every acquisition. For causal models, while it is generally hard to estimate this posterior
due to the space of DAGs being discrete and super-exponential in the number of variables (Tong and Koller, 2001), many approaches exist in the literature (Agrawal et al., 2019; Lorch et al., 2021; Cundy et al., 2021). These approximate posteriors can nevertheless be used for estimating the NMC objective.
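As a rough illustration of how Eq. (3) is estimated in practice, the sketch below computes the NMC objective from posterior samples, a simulator, and a likelihood function supplied by the user; `simulate` and `log_lik` are hypothetical helpers of ours, and in the method the designs (and the design policy introduced below) would additionally be optimised by differentiating through this quantity.

```python
import numpy as np
from scipy.special import logsumexp

def nmc_eig(theta_samples, simulate, log_lik, design, L=64, seed=0):
    """Nested Monte Carlo estimate of Eq. (3).

    theta_samples : list of posterior samples of theta (graph + mechanism parameters)
    simulate(theta, design, rng) -> simulated batch of outcomes y_{1:B}
    log_lik(y, theta, design)    -> log p(y_{1:B} | theta, xi_{1:B})
    """
    rng = np.random.default_rng(seed)
    N = len(theta_samples)
    total = 0.0
    for theta0 in theta_samples:
        y = simulate(theta0, design, rng)
        # contrastive samples theta_{1:L}, drawn (with replacement) from the same posterior samples
        contrast = [theta_samples[k] for k in rng.integers(0, N, size=L)]
        log_marginal = logsumexp([log_lik(y, th, design) for th in contrast]) - np.log(L)
        total += log_lik(y, theta0, design) - log_marginal
    return total / N
```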
### Importance Weighted Nested Monte Carlo
To establish an alternative path to estimating the mutual information, we begin by utilizing an observation from Foster et al. (2019) that it is possible to draw the contrastive samples from a distribution other than \(p(\mathbf{\theta}\mid h_{t-1})\) and obtain an asymptotically exact estimator, up to a constant \(C\) that does not depend on \(\xi_{1:B}^{t}\). Drawing samples from the _original_ prior \(p(\mathbf{\theta})\) gives the estimator
\[\mathbb{E}_{\begin{subarray}{c}p(\mathbf{\theta}_{0}|h_{t-1})p(\mathbf{\theta}_{1:L})\\ p(\mathbf{y}_{1:B}|\mathbf{\theta}_{0},\xi_{1:B})\end{subarray}}\Bigg{[}\log\frac{p(\mathbf{y}_{1:B}|\xi_{1:B},\mathbf{\theta}_{0})}{\frac{1}{L}\sum\limits_{\ell=1}^{L}p(\mathbf{y}_{1:B}|\xi_{1:B},\mathbf{\theta}_{\ell})p(h_{t-1}|\mathbf{\theta}_{\ell})}\Bigg{]}\,.\]
The remaining wrinkle is that we must sample \(\mathbf{\theta}_{0}\) from \(p(\mathbf{\theta}_{0}|h_{t-1})\). We propose the conceptually simplest approach of applying self-normalized importance sampling (SNIS) to the outer expectation. The resulting objective, based on efficiently re-using samples in a leave-one-out manner, can optimize designs by just sampling parameters from the prior, without having to estimate the posterior:
\[\mathcal{U}_{\text{IWNMC}}^{t}(\xi_{1:B})=\mathbb{E}\left[\sum\limits_{m=1}^{L}\omega_{m}\log\frac{p(\mathbf{y}_{m,1:B}|\mathbf{\theta}_{m},\xi_{1:B})}{\frac{1}{L-1}\sum\limits_{\ell\neq m}p(\mathbf{y}_{m,1:B}|\mathbf{\theta}_{\ell},\xi_{1:B})p(h_{t-1}|\mathbf{\theta}_{\ell})}\right] \tag{4}\]
where \(\mathbf{\theta}_{1:L}\sim p(\mathbf{\theta}_{1:L})\) are sampled from the original prior, \(\mathbf{y}_{m,1:B}\sim p(\mathbf{y}_{1:B}|\mathbf{\theta}_{m},\xi_{1:B})\) are all the experimental outcomes in the batch for parameter \(\mathbf{\theta}_{m}\) and \(\omega_{m}\propto p(h_{t-1}|\mathbf{\theta}_{m})\) are self-normalized weights.
A full derivation is given in Section A.
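A minimal log-space sketch of the objective in Eq. (4) is given below; it assumes the parameter samples \(\mathbf{\theta}_{1:L}\) come from the prior and that the interventional and historical log-likelihoods have been precomputed (the array names are ours). It is only meant to make the self-normalised weights and the leave-one-out contrastive term explicit.

```python
import numpy as np
from scipy.special import logsumexp

def iwnmc(log_lik, log_p_hist):
    """Sketch of Eq. (4) for one set of prior samples theta_{1:L}.

    log_lik[m, l] : log p(y_{m,1:B} | theta_l, xi_{1:B}), with y_{m,1:B} simulated under theta_m
    log_p_hist[l] : log p(h_{t-1} | theta_l), likelihood of the past data under theta_l
    """
    L = log_lik.shape[0]
    log_w = log_p_hist - logsumexp(log_p_hist)        # self-normalised importance weights
    value = 0.0
    for m in range(L):
        mask = np.arange(L) != m                      # leave-one-out contrastive set
        log_denom = logsumexp(log_lik[m, mask] + log_p_hist[mask]) - np.log(L - 1)
        value += np.exp(log_w[m]) * (log_lik[m, m] - log_denom)
    return value
```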
As IWNMC does not require any posterior estimation but instead relies entirely on the prior, it completely sidesteps the causal discovery process for designing experiments. This is a paradigm change from the NMC estimator which requires causal discovery through the estimation of the posterior.
However, we note that using IWNMC with just the prior (Eq. 4) as opposed to NMC (Eq. 3) comes with trade-offs. IWNMC typically requires a large \(L\) to get a good estimate of the EIG. In high dimensions, this can be computationally infeasible. Having a small \(L\) on the other hand might result in a failure case if the effective sample size of importance samples becomes 1. We can alleviate this issue if there is some prior information available which could be leveraged to design better proposal distributions. This might consist of knowledge of certain causal mechanisms of the system under study or access to some initial observational data. In such a case, a proposal distribution which encodes this information (for example with support on graphs which are in the Markov Equivalence Class (MEC) of the observational distribution) can be used instead of the prior. If no prior information is available or a good approximate inference technique is at our disposal, NMC is preferable in high dimensions. Surprisingly, we get good results on systems of up to \(5\) variables with IWNMC from just the prior and up to \(40\) variables from a proposal distribution which has support on the MEC of the observational distribution (see Sec 5.5).
### Optimizing over Targets and States (DiffCbed)
While the NMC estimator provides a unified objective to directly optimize over the designs \(\xi_{1:B}\), it requires that the design space is continuous so that the gradients \(\frac{\partial\mathcal{U}_{\text{NMC}}}{\partial I_{1:B}}\) and \(\frac{\partial\mathcal{U}_{\text{NMC}}}{\partial S_{1:B}^{I}}\) can be computed. However, in the case of designing experiments for causal models, the challenge still remains that optimizing over intervention targets \(I\) with gradient-based techniques is not possible because it is a discrete choice.
In order to address this problem, we introduce a _design policy_ \(\pi_{\phi}\) with learnable parameters \(\phi\) that parameterize a joint distribution over possible intervention targets and corresponding states. Instead of seeking the gradients \(\frac{\partial\mathcal{U}_{\text{NMC}}}{\partial I_{1:B}}\) and \(\frac{\partial\mathcal{U}_{\text{NMC}}}{\partial S_{1:B}^{I}}\), the goal instead is to estimate \(\frac{\partial\mathcal{U}_{\text{NMC}}}{\partial\phi}\) so that the policy can be updated to be close to optimal. Such a characterization of the design space allows us to use continuous relaxations of discrete distributions (Maddison et al., 2016; Jang et al., 2016) to obtain samples of designs and estimate NMC gradients.
Let \(\mathbf{I}\) and \(\mathbf{S}\) be the random variables which model all possible intervention target combinations and states for a batch design respectively. While there are many possibilities of instantiating the policy in practice, we consider the simplest case where \(\pi_{\phi}(\mathbf{I},\mathbf{S})\triangleq\pi_{\phi_{n}}(\mathbf{I})\pi_{\phi_{m}}(\mathbf{S})\). As the state space is continuous3, \(\pi_{\phi_{m}}\) can be either deterministic (a Dirac delta with \(\phi_{m}\in\mathbb{R}^{B\times d}\)) or Gaussian with \(\phi_{m}\in\mathbb{R}^{2\times B\times d}\) parameterizing its mean and log variance. In this work, we found it sufficient to use a deterministic policy over the state space. For the interventional targets, \(\phi_{n}\in\mathbb{R}^{B\times d}\) parameterizes the logits of different relaxed versions of discrete distributions depending on the setting, which we describe below.
Footnote 3: If the state space is discrete, optimizing \(\pi_{\phi_{m}}\) would be similar to \(\pi_{\phi_{n}}\) which involves reparameterized gradients.
Having established the basic structure of a policy, we can support different settings by structuring the policy over designs differently.
The DiffCbed algorithm is outlined in Algorithm 1.
### Single Target (\(q=1\))
In this setting, the intervention targets are one-hot vectors, as demonstrated in Figure 2. To sample one-hot vectors in a differentiable manner, we parametrize \(\pi_{\phi_{n}}\) as a Gumbel-Softmax distribution (Maddison et al., 2016; Jang et al., 2016) over intervention targets, which is a continuous relaxation of the categorical distribution (in _one-hot_ form). Additionally, we use the straight-through (ST) gradient estimator (Bengio et al., 2013).
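A minimal PyTorch sketch of this single-target policy: the logits \(\phi_{n}\in\mathbb{R}^{B\times d}\) are relaxed with Gumbel-Softmax (straight-through), and the deterministic state parameters \(\phi_{m}\in\mathbb{R}^{B\times d}\) provide the intervention value of the selected node. Variable names and the temperature are illustrative choices of ours.

```python
import torch
import torch.nn.functional as F

B, d, tau = 4, 10, 0.5                                   # number of designs, number of nodes, temperature
target_logits = torch.zeros(B, d, requires_grad=True)    # phi_n: logits over intervention targets
state_params = torch.zeros(B, d, requires_grad=True)     # phi_m: deterministic intervention states

# One-hot target samples with straight-through gradients (hard forward pass, soft backward pass).
targets = F.gumbel_softmax(target_logits, tau=tau, hard=True)    # (B, d), one-hot rows
states = (targets * state_params).sum(-1)                        # state assigned to each chosen node

# `targets` and `states` are then plugged into the differentiable NMC objective, and an optimizer
# such as Adam updates target_logits and state_params by gradient ascent.
```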
### Unconstrained Multi-Target (\(q\leq d\))
If instead of a continuous relaxation of the categorical distribution, we parametrise the policy \(\pi_{\phi_{n}}\) as a continuous relaxation of the Bernoulli distribution (Binary Concrete) (Maddison et al., 2016), we can now sample multi-target experiments. Notice that since each interventional target sample will have at most \(d\) non-zero entries, this policy is suitable for multi-target experiments with an unconstrained number of interventions per experiment.
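A corresponding sketch for the unconstrained multi-target case uses PyTorch's relaxed Bernoulli distribution with a straight-through threshold at \(0.5\) (the threshold and names are our illustrative choices):

```python
import torch
from torch.distributions import RelaxedBernoulli

B, d, tau = 4, 10, 0.5
target_logits = torch.zeros(B, d, requires_grad=True)      # phi_n: one logit per (design, node)

soft = RelaxedBernoulli(temperature=torch.tensor(tau), logits=target_logits).rsample()
hard = (soft > 0.5).float()
targets = hard + soft - soft.detach()                      # straight-through: hard forward, soft backward
# Each row of `targets` is a (relaxed) 0/1 mask selecting any subset of nodes to intervene on.
```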
### Constrained Multi-Target (\(q=k\))
Finally, we consider a setting where the number of targets per intervention is exactly \(k\). This is a significantly more challenging case, since the policy needs to select a subset of \(k\) out of \(d\) nodes. By using a continuous relaxation of subset sampling, as introduced in Xie & Ermon (2019), combined with the straight-through gradient estimator, we can efficiently optimize the policy to select a subset of nodes to intervene on.
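As a simplified stand-in for the exact subset relaxation of Xie & Ermon (2019), the sketch below perturbs the logits with Gumbel noise, takes a hard top-\(k\) mask in the forward pass, and uses a softmax surrogate for the backward pass; this cruder straight-through variant is our own illustration, not the relaxation used in the paper.

```python
import torch

def st_topk(logits, k, tau=0.5):
    """Straight-through top-k subset sample (simplified stand-in for Xie & Ermon, 2019)."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits)))    # Gumbel(0, 1) noise
    scores = (logits + gumbel) / tau
    soft = torch.softmax(scores, dim=-1)                        # relaxed surrogate for gradients
    idx = scores.topk(k, dim=-1).indices
    hard = torch.zeros_like(logits).scatter(-1, idx, 1.0)       # exact k-hot mask in the forward pass
    return hard + soft - soft.detach()

mask = st_topk(torch.zeros(4, 10, requires_grad=True), k=2)     # 4 designs, exactly 2 targets each
```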
## 5 Experiments
We evaluate the performance of our method on synthetic graphs against a range of baselines. We aim to investigate the following aspects empirically: (1) to what extent we can design good experiments with the IWNMC estimator from the prior alone, without performing intermediate causal discovery/posterior estimation; (2) the ability to design good experiments with IWNMC and a more informative proposal distribution; and (3) the performance of our policy-based design in combination with the differentiable NMC estimator in single-target and multi-target settings, as compared to suitable baselines.
### Bivariate Setting
First, we demonstrate the method on a two-node graph to qualitatively assess what the objective and the optimization method do. Since computing the posterior over graphs and parameters is intractable in the general case, as a first step to study how well we can optimize the EIG objectives, we assume a simple two-variable SCM. To compute the posterior we enumerate all the graphs of size two and parametrize the conditional distributions as neural-network-parametrised Gaussian distributions (\(\mathcal{N}(X_{i};\mu_{\text{NN}}(X_{\text{pa(i)}}),\sigma_{\text{NN}}(X_{\text{pa(i)}}))\)). We compute the posterior over the parameters of the conditional distributions via Monte-Carlo dropout (Gal & Ghahramani, 2016). We parametrize the intervention targets policy with a Gumbel-Softmax distribution and the interventional states policy with a Gaussian distribution. The final policy consists of the logits of the Gumbel-Softmax and the sufficient statistics of the Gaussian distribution. We use the Adam optimizer (Kingma & Ba, 2014) to optimize the parameters of the policy. As we can see in Fig. 3(B-C), the optimizer successfully concentrates the policy on the nodes and states that maximize the EIG objective.
### Results
We present experimental results in various settings for the following metrics:
Evaluation metrics
**Expected SHD**: This metric evaluates the expected structural Hamming distance between graphs sampled from the posterior model and the ground-truth graph.
**Expected F1-score**: This metric evaluates the expected F1-score over the edges of the adjacency matrices sampled from the posterior model and the ground-truth graph.
**i-MMD**: The interventional MMD distance uses the non-parametric distance metric MMD (Gretton et al., 2012). Contrary to the graph evaluation metrics, this metric evaluates the distributions induced by both the graph structure and the conditional distributions. We provide the full definition in Appendix C.
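For illustration, these metrics could be computed roughly as follows (a sketch with illustrative names; the SHD variant simply counts adjacency-entry mismatches, and the i-MMD here uses an RBF kernel with a fixed bandwidth, which need not match the exact choices in the paper):

```python
import numpy as np

def expected_shd(sampled_adjs, true_adj):
    """Average number of mismatched adjacency entries over posterior samples."""
    return float(np.mean([np.sum(A != true_adj) for A in sampled_adjs]))

def expected_f1(sampled_adjs, true_adj):
    """Average edge F1-score over posterior samples."""
    scores = []
    for A in sampled_adjs:
        tp = np.sum((A == 1) & (true_adj == 1))
        fp = np.sum((A == 1) & (true_adj == 0))
        fn = np.sum((A == 0) & (true_adj == 1))
        scores.append(2 * tp / max(2 * tp + fp + fn, 1))
    return float(np.mean(scores))

def mmd_rbf(X, Y, sigma=1.0):
    """Biased MMD^2 between interventional samples X and Y (rows are samples)."""
    def gram(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma ** 2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()
```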
### Evaluation of the IWNMC estimator
In this section, we consider optimizing the designs with respect to the IWNMC estimator entirely from the prior, introduced in Section 4.1, sidestepping the causal discovery procedure. As noted before, estimating posteriors of causal models is hard, so it is important to understand to what extent IWNMC can be considered a suitable candidate for designing good experiments in the absence of a posterior. For this setting, we sample from the prior distribution over graphs by first sampling an ordering of nodes at random and then sampling edges with probability \(p=0.25\) that adhere to this topological order, giving a DAG. We sample the mechanism parameters and noise variances of the ANM at random from a Gaussian distribution with mean 0 and variance 1.
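A minimal sketch of this prior sampling procedure (illustrative, assuming a linear-Gaussian additive-noise model; names are ours):

```python
import numpy as np

def sample_prior_scm(rng, d, p_edge=0.25):
    """Sample a DAG by drawing a random node ordering and adding edges with
    probability p_edge that respect it, plus Gaussian mechanism parameters."""
    order = rng.permutation(d)
    adj = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            if rng.random() < p_edge:
                adj[order[i], order[j]] = 1.0  # edge from earlier to later node
    weights = rng.normal(0.0, 1.0, size=(d, d)) * adj     # mechanism parameters
    noise_scales = np.abs(rng.normal(0.0, 1.0, size=d))   # noise parameters (illustrative)
    return adj, weights, noise_scales

rng = np.random.default_rng(0)
adj, weights, noise_scales = sample_prior_scm(rng, d=5)
```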
Figure 4 demonstrates results for the \(5\)-variable unconstrained multi-target setting with batch size 2. For evaluation, we train DAG Bootstrap (Friedman et al., 2013) with GIES (Hauser & Buhlmann, 2012) on the data acquired from each policy. We can see that we recover the ground-truth SCM faster than with a random strategy. This is a surprising but positive result, given that our policy was trained entirely from samples from the prior. We also tested this approach for \(10\) nodes (results in Appendix E). While this resulted in better performance of the policy as opposed to random in terms of downstream metrics, we observed the effective sample size reach \(1\), indicating that for \(10\) dimensions or higher we might need a better proposal distribution or a posterior estimate.
Figure 4: We test the designs acquired with the IWNMC estimator using just the prior, as opposed to the random policy (with random target and state acquisition), on \(5\) variables. Plots correspond to the unconstrained multi-target setting with \(B=2\) (shaded area represents \(95\%\) confidence intervals - 60 seeds).
Figure 3: Two variables and two experiments scenario. We assume a ground-truth graph \(G_{\mathrm{T}}\) of two nodes \(X\to Y\). The conditional distribution \(p(Y\mid X)\) is shown in **(A)**. The corresponding SCM is \(x=\Sigma_{x}\) and \(y=f(x)+\Sigma_{y}\). The four panels represent the EIG of all possible experiments of batch size two, when intervening on nodes \((0,0),(0,1),(1,0),(1,1)\). Each panel shows how the EIG changes with different interventional states. E.g., the top-right panel shows how the EIG changes when applying interventions with states in the range \([-20,20]\). We can observe that the algorithm successfully places the designs (samples from the policy) on the high-EIG (1.95) area of the plot (\(\bullet\) on the plot).
### Baselines
Before we evaluate the IWNMC estimator with a proposal distribution more informative than the prior and the NMC estimator with a posterior estimate of SCM, we present the baselines with which we can also compare the overall performance of our designs.
Single-Target
**Random-Fixed**: Uniform random selection of target, fixing the state to a value of \(0\) (as introduced in (Agrawal et al., 2019; Tigas et al., 2022)). **Random-Random**: Uniform selection of node, uniform selection of state (introduced in (Toth et al., 2022)). **SoftCBED**: A stochastic approximation of greedy batch selection as introduced in (Tigas et al., 2022).
Multi-Target
**Random-Random**: Multi-target version of uniform selection of node, uniform selection of value (introduced in (Toth et al., 2022)). **Random-Fixed**: Multi-target version of uniform selection of node, with the value fixed to \(5\) (Sussex et al., 2021), as suggested by the authors. **SSGb**: Finite sample baseline from (Sussex et al., 2021) with a fixed value equal to \(5\). We emphasize that in contrast to our method, the baselines cannot select states: they either assume a fixed predefined value or select a value at random.
### Evaluation in Higher Dimensions
Evaluation of IWNMC with Proposal Distribution
In this experiment, we consider \(40\) variables, constrained \((q=5)\) multi-target and batch size \(B=2\). Further, we use the same setup as Sussex et al. (2021) to make a fair comparison as well as to construct a proposal distribution. To construct a proposal distribution, we use \(800\) observational samples to train DAG Bootstrap (Friedman et al., 2013; Agrawal et al., 2019) and augment our posterior samples with samples of DAGs from the Markov equivalence class (MEC) of the true graph, to make sure that there is support over the graphs from the MEC of the true graph (see Sussex et al. (2021) for details). We then acquire a single batch of experiments from the IWNMC estimator for our approach. For the baseline, we acquire a single batch of experiments from the estimator defined in (Sussex et al., 2021).
For the random and SSGb baselines, we set the interventional value to 5, as explained in (Sussex et al., 2021). Our approach does not fix the value to 5 but optimizes the value with which to perform the intervention. In Table 2 we summarize our results. As we can see, our method outperforms random and SSGb by a large margin, indicating that with a good proposal distribution, IWNMC can still be a promising candidate in higher dimensions.
Results with NMC estimator
For the following results, we use DAG-Bootstrap (Agrawal et al., 2019) with 20 components, an approximate posterior
Figure 5: **(A,B,C)** Single target-state design setting results for Erdős–Rényi (Erdős & Rényi, 1959) graphs with \(d=50\) variables. **(D,E,F)** Multi target-state design setting results for Erdős–Rényi (Erdős & Rényi, 1959) graphs with \(d=20\) variables. Each experiment was run with 30 random seeds (shaded area represents 95% CIs)
method based on the GIES causal discovery method (Hauser & Buhlmann, 2012). As GIES is not a differentiable method, once we compute the posterior via the DAG-Bootstrap algorithm, we transfer the weights of the posterior samples (the bootstraps) into JAX tensors to allow the gradients to be computed with respect to the experiments.
Single-target synthetic graphs: In this experiment, we test against synthetic graphs of 50 variables and batch size 5, where the graph is sampled from the Erdős–Rényi class (a common benchmark in the literature (Tigas et al., 2022; Toth et al., 2022; Scherrer et al., 2021)). In Figure 5 **(A,B,C)** we summarize the results. We observe that our method performs significantly better than the baselines.
\(20\) **nodes, unconstrained \((q\leq 20)\), batch size \(B=2\):** In this experiment, we evaluate the performance of our method compared with the baselines on sparse graphs over several acquisitions. Figure 5 **(D,E,F)** summarizes the results of this setting. We observe strong empirical performance as compared to all the baselines. Additional results are given in Section G.1.
## 6 Discussion
**Limitations:** A primary limitation of our method is that it needs to estimate a posterior after every acquisition. While the proposed IWNMC estimator presents an interesting alternative, the designs are still non-adaptive. An interesting direction is to train a policy to be adaptive and propose new experiments in real-time.
**Conclusion:** We presented a gradient-based method for differentiable Bayesian optimal experimental design for causal discovery. Our method allows not only single-target but also various multi-target (constrained and unconstrained) batch acquisitions of experiments. While prior work in causal Bayesian experimental design relies on greedy approximations for the selection of a batch (Agrawal et al., 2019; Tigas et al., 2022) or black-box methods (Toth et al., 2022; Tigas et al., 2022) for optimizing over interventional states, our method utilizes gradient-based optimization procedures to simultaneously optimize the various design choices. Evaluation on different benchmarks suggests that our method is competitive with baselines.
|
2306.13821
|
Engineering quantum states from a spatially structured quantum eraser
|
Quantum interference is a central resource in many quantum-enhanced tasks,
from computation to communication protocols. While it usually occurs between
identical input photons, quantum interference can be enabled by projecting the
quantum state onto ambiguous properties that render the photons
indistinguishable, a process known as a quantum erasing. Structured light, on
the other hand, is another hallmark of photonics: it is achieved by
manipulating the degrees of freedom of light at the most basic level and
enables a multitude of applications in both classical and quantum regimes. By
combining these ideas, here we design and experimentally demonstrate a simple
and robust scheme that tailors quantum interference to engineer photonic states
with spatially structured coalescence along the transverse profile, a type of
quantum mode with no classical counterpart. To achieve this, we locally tune
the distinguishability of a photon pair via spatial structuring of their
polarisation, creating a structured quantum eraser. We believe these
spatially-engineered multi-photon quantum states may be of significance in
fields such as quantum metrology, microscopy, and communications.
|
Carlo Schiano, Bereneice Sephton, Roberto Aiello, Francesco Graffitti, Nijil Lal, Andrea Chiuri, Simone Santoro, Luigi Santamaria Amato, Lorenzo Marrucci, Corrado de Lisio, Vincenzo D'Ambrosio
|
2023-06-24T00:11:36Z
|
http://arxiv.org/abs/2306.13821v1
|
# Engineering quantum states from a spatially structured quantum eraser
###### Abstract
Quantum interference is a central resource in many quantum-enhanced tasks, from computation to communication protocols. While it usually occurs between identical input photons, quantum interference can be enabled by projecting the quantum state onto ambiguous properties that render the photons indistinguishable, a process known as quantum erasing. Structured light, on the other hand, is another hallmark of photonics: it is achieved by manipulating the degrees of freedom of light at the most basic level and enables a multitude of applications in both classical and quantum regimes. By combining these ideas, here we design and experimentally demonstrate a simple and robust scheme that tailors quantum interference to engineer photonic states with spatially structured coalescence along the transverse profile, a type of quantum mode with no classical counterpart. To achieve this, we locally tune the distinguishability of a photon pair via spatial structuring of their polarisation, creating a structured quantum eraser. We believe these spatially-engineered multi-photon quantum states may be of significance in fields such as quantum metrology, microscopy, and communications.
## I Introduction
In the Hong-Ou-Mandel (HOM) effect, quantum interference occurs when two indistinguishable photons, entering a beamsplitter (BS), take the same output path (photon bunching) [1]. Although the HOM effect is typically investigated by tuning the temporal matching between two identical photons, the indistinguishability required for the phenomenon makes it fundamentally sensitive to all degrees of freedom (DoFs), from polarisation [2; 3] to frequency [4; 5], and time-bins [6] as well as for collinear spatial modes [7], including multi-particle [8; 9] and high dimensional scenarios [10; 11; 12; 13]. Monitoring this effect has rendered it a versatile tool with innate sensitivity to measure a wide array of variations between the two inputs for quantum enhanced measurements [14; 15; 3]. With the advent of sensitive cameras enabling spatially resolved observation [16], measurements across the spatial DoF [17; 18] have been made possible, such as dip tracking for spatially-resolved height measurements [19] or insight into the spatial-temporal [20] effects from multimode spontaneous parametric down-conversion (SPDC).
The fundamental quantum nature of HOM interference also lends itself to tests of quantum mechanics [21; 22], among which a notable example is the quantum eraser [23; 2]. Such a paradigm allows one to restore quantum interference, even if two photons are made distinguishable before entering the BS by 'marking' a degree of freedom such as polarisation. This is achieved by projecting the outputs of both exit ports onto a basis that cannot yield information on which path the photon took through the BS, thus making the photons effectively indistinguishable again. Moreover, the projection process can allow one to edit the state by introducing phases to vary bunching behaviour to anti-bunching and vice versa. One may accordingly imagine engineering a desired photon number in an augmented state, lending itself to be used as a structuring tool.
Structured light, in general, is a very powerful concept in modern optics, spanning from the classical regime to fundamental quanta [24]. For instance, classical implementations exhibit enhanced sensing [25; 26; 27; 28; 29], microscopy [30] and communication [31] capabilities, while quantum mechanically, it provides a test-bed for quantum mechanics [32; 33], secure high-dimensional communication [34] and increased resilience to noise [35; 36; 37]. As such, there is a strong interest in tailoring complex or new structures, such as designing across non-local degrees of freedom for quantum skyrmions [38] or harnessing non-homogeneity for complex entanglement structures [39; 40; 41]. It follows that, by designing a mode with structured photon coalescence, we can combine the advantages of both concepts, holding the possibility for direct impact in applications based on quantum interference or structured light as well as unlocking new prospects.
By altering a given DoF before the BS in HOM
interference, we already find vanilla photon number engineering occurs by default, where bunching forms NOON states [42; 43; 44] and, similarly, anti-bunching engineers non-local entangled states [45]. Moreover, coincidence detection acts as a filter for particular Bell and high-dimensional spatial states [10], while altering the modal distinguishability between photons, facilitates space-time entanglement engineering [46; 47] or the population of spatial modes [7; 12]. Such state engineering can be extended to photon subtraction or additions with continuous variables [48]. Where, in these cases, interference was used to postselect or allocate photons to particular spatial modes, here we demonstrate a scheme to spatially structure the quantum erasing process itself, thus obtaining a photonic state with no classical counterpart, that is directly structured in a quantum property of the field, i.e., photonic coalescence.
To do this, we harness the innate dependency of the HOM effect on identical conditions to demonstrate how spatially tailoring a degree of freedom can allow one to spatially tailor quantum interference. We achieve this by exploiting geometric phase devices to generate non-uniform structure in the polarisation degree of freedom and, having done so, a spatially-varying structure in the distinguishability for the input photons. We therefore implement a quantum eraser [2] via polarisation projective measurements in order to study the caveats associated with these engineered states and elicit conditions where coalescent structures can be heralded or erased. We subsequently provide a framework for how one may engineer the bunching and anti-bunching distribution across the transverse mode profile by exploiting the simple generation of vector vortex (VV) modes. This approach and subsequent structures, however, may be generalised to achieve arbitrary freedom in engineering this fundamentally quantum property towards developing a unique and versatile tool for applications in fields such as quantum metrology, microscopy and communications.
## II Results
**Concept.** The concept behind structuring quantum interference for tailoring photon coalescence is illustrated in Figure 1. Although typical HOM scenarios imply identical photons, quantum interference can be obtained also when the two photons entering the BS are distinguishable, thanks to the quantum erasing process [22]. When the _which-path_ information is encoded in polarisation, for instance, quantum interference can be enabled by placing a polariser on each of the output paths, thus performing two projective measurements that erase the _which-path_ information. Interestingly, depending on the initial polarisation state and each polariser's orientation, it is possible to fully tune photon coalescence from bunching to antibunching [Fig. 1(a)] [2]. We could therefore exploit this feature in order to generate a structured quantum state with a tailored coalescence in the transverse plane if we design a spatially dependent quantum eraser. To this end, we consider a scenario where the polarisation of each, otherwise indistinguishable photon, is given a non-uniform distribution in the transverse plane before impinging on separate ports of a 50:50 BS. A polariser is moreover placed in each of the output paths of the BS. By tailoring the polarisation profile of each photon and selecting the polariser's orientation, we can have full control over the spatial distribution of the photonic
Figure 1: **Spatially engineering quantum interference.** (a) In a quantum eraser, two photons in different polarisation states are quantum interfering when their _which-path_ information is erased through two polarisers. Depending on the polariser orientation, it is possible to tune photon coalescence from bunching to antibunching. This is reflected in a HOM curve (when using bucket detection of coincidences) that shows a dip or peak when the photons are temporally indistinguishable (in), relative to the coincidence rate measured when temporally distinguishable (out). (b) The polarisation distribution of input photons can be spatially tailored to engineer local variations in the quantum interference when overlapped on a 50:50 BS. The interference space-variant behaviour is fully illustrated for three representative specific locations, depicted using distinct colours (blue, green, pink). When tuning the temporal distinguishability, the polarisation mixing in the quantum eraser results in a corresponding space-variant distribution of HOM, exhibiting peaks, dips or flat curves. This takes place despite one photon being detected with a bucket detector.
coalescence [Fig. 1(b)]. In other words, by structuring quantum interference, photon bunching is tailored along the optical mode transverse profile so that in each position we can have a zero-or-two photon NOON state (HOM dips), a single photon (HOM peak), or some combination of the two.
**Experimental Implementation.** We experimentally demonstrate this concept by using the setup shown in Fig. 2, with full details provided in Methods. Initially, SPDC biphotons (\(\lambda=810\) nm) are separated in path and spatially filtered with a single mode fiber. A set of polarisation correcting waveplates (not shown in Fig. 2) facilitates uniform control of each photon's polarisation before being directed to the input ports of a 50:50 BS for interference, giving the two-photon state \(\ket{\psi}_{in}=\left[\hat{a}_{A,m}^{\dagger}(\tau)\hat{a}_{B,n}^{\dagger}( \tau^{\prime})\right]\ket{0},\quad\) where \(\hat{a}^{\dagger}\) is the creation operator for the respective modes, (\(n\),\(m\)), with the time-bins \(\tau\) and \(\tau^{\prime}\), in the discrete paths \(\{A,B\}\) of each input port. A motorised path delay is placed in one arm for tuning the temporal matching (\(\tau^{\prime}\rightarrow\tau\)) of the photons. Initial matching of the input polarisation replicates the standard HOM experiment when tuning the delay. Further insertion of geometric phase elements, known as \(q\)-plates (QP) [49], before the BS allows the controlled fashioning of spatially varying polarisation in each arm and the tuning of each photon's incident polarisation state, dependent on the topological charge (\(q\)) of the element. This allows one to prepare a transverse mode (\(n\)) of the form,
\[\mathbf{E}_{n}(\mathbf{r},t)=\mathbf{e}_{s}(\varphi)f(r,t-\tau)e^{-i\omega t}, \tag{1}\]
for each photon, where \(f\) indicates the amplitude profile generally dependent on the radial (\(r\)) coordinate, \(\{\mathbf{e}_{s}(\varphi)\}\) are the transverse unit vectors specifying the local mode polarisation as a function of the azimuthal angle \(\varphi\), \(\omega\) is the mean (carrier) frequency of the mode and time is \(t\).
In particular, without loss of generality, we consider two modes that present the same polarisation in some regions of the transverse profile, as depicted in Fig. 1(b). As such, we choose the _radial_ (\(q=0.5\)) and \(\pi\) [\(q=0.5\) followed by a half-waveplate (HWP)] modes depicted there, which are two first order orthogonal VV modes [39], to be the input states of the BS. These may respectively be described by the azimuthally dependent polarisations,
\[\begin{split}&\mathbf{e}_{rad}(\varphi)=\mathbf{e}_{H}\cos \varphi+\mathbf{e}_{V}\sin\varphi\\ &\mathbf{e}_{\pi}(\varphi)=\mathbf{e}_{H}\cos\varphi-\mathbf{e}_ {V}\sin\varphi,\end{split} \tag{2}\]
where \(e_{H}\) and \(e_{V}\) are, respectively, the horizontal and vertical polarisation unit-vectors.
As in typical HOM experiments, the interference or coalescent behaviour of our modes are observed through the two-fold coincidences measured on the state
\[\ket{\psi}_{out}=\frac{1}{2}\left[\hat{a}_{A,rad}^{\dagger}(\tau)\hat{a}_{B, \pi}^{\dagger}(\tau^{\prime})-\hat{a}_{A,\pi}^{\dagger}(\tau^{\prime})\hat{a}_ {B,rad}^{\dagger}(\tau)\right]\ket{0}, \tag{3}\]
corresponding to the projection of the full BS-output state onto the subspace in which the two photons are separated in ports A and B, and may be calculated by the fourth-order correlation function [50], as detailed in the Supplementary Information (SI). With this, we find the spatially varying coincidence probabilities for arbitrary polarisation projections {\(\alpha\),\(\beta\)} on the photons detected in each port (respectively corresponding to coordinates \(\varphi_{1}\) and \(\varphi_{2}\)) when temporally tuned (\(\tau=\tau^{\prime}\))
\[\begin{split} C_{(In)}^{\alpha,\beta}(\varphi_{1};\varphi_{2}) =&\\ &\frac{1}{4}\left|[\mathbf{u}_{\alpha}\cdot\mathbf{e}_{rad}( \varphi_{1})]\left[\mathbf{u}_{\beta}\cdot\mathbf{e}_{\pi}(\varphi_{2}) \right]-\right.\\ &\left.[\mathbf{u}_{\alpha}\cdot\mathbf{e}_{\pi}(\varphi_{1})] \left[\mathbf{u}_{\beta}\cdot\mathbf{e}_{rad}(\varphi_{2})\right]\right|^{2} \quad,\end{split} \tag{4}\]
and temporally distinguishable (\(\tau\neq\tau^{\prime}\))
\[\begin{split} C_{(Out)}^{\alpha,\beta}(\varphi_{1};\varphi_{2}) =&\\ &\frac{1}{4}\{\left|[\mathbf{u}_{\alpha}\cdot\mathbf{e}_{rad}( \varphi_{1})]\left[\mathbf{u}_{\beta}\cdot\mathbf{e}_{\pi}(\varphi_{2}) \right]\right|^{2}+&\\ &\left.\left|[\mathbf{u}_{\alpha}\cdot\mathbf{e}_{\pi}(\varphi_{1} )]\left[\mathbf{u}_{\beta}\cdot\mathbf{e}_{rad}(\varphi_{2})\right]\right|^{2} \right\}\quad.\end{split} \tag{5}\]
Here \(\mathbf{u}_{\alpha}\) and \(\mathbf{u}_{\beta}\) are the unit-vectors directed along the detection polariser axes. For simplicity, we ignore here the radial distribution as the polarisation variation depends only on \(\varphi\) (see SI for a more general treatment). Experimentally, we realise these polarisation state projections by respectively placing polarisers \(P_{1}\) and \(P_{2}\) in each output port of the BS.
As depicted in Figure 2, we adopted two strategies to observe the generated structure by means of these coincidences, both in and out of the temporal tuning. In one case, the photons from each output port were collected by coupling into multimode fibres connected to bucket detectors placed in each arm (flip mirror down), rendering no resolution of the spatial distribution. This corresponds to measuring the correlations given in Eqs. (4) and (5) after integrating over both azimuthal angles \(\varphi_{1}\) and \(\varphi_{2}\). In the second case (flip mirror up), we performed spatially-resolved measurements of one photon by replacing one bucket detector with a camera, which was then conditioned on the spatially-unresolved detection of another photon by the bucket detector in the other arm. This corresponds to measuring the correlations given in Eqs. (4) and (5) after integrating over one angle only, e.g. \(\varphi_{2}\).
It is now possible to identify polarisation projection measurements that erase the _which-path_ information and enable quantum interference between these structured modes, even for bucket detection. Intuitively, one may consider the VV spatial distributions depicted in Fig. 2 which shows they share the same polarisation state
along the horizontal (H) and vertical (V) axes, while being completely orthogonal along the diagonal (D) and anti-diagonal (A) directions. It thus follows that a projection along H and V should destroy the _which-path_ information for the latter pairing and recover HOM interference. To observe the related coalescence for such polarisations (\(\alpha\) and \(\beta\)) we define the visibility, \(\mathcal{V}_{\alpha,\beta}=(C_{out}^{\alpha,\beta}-C_{in}^{\alpha,\beta})/C_{out }^{\alpha,\beta}\in[-1,1]\) between the coincidences detected out (\(C_{out}\)) and in (\(C_{in}\)) the temporal indistinguishably criteria [as depicted in Fig. 1 (b)]. This allows us to easily recognise bunching as \(\mathcal{V}\) becomes positive from anti-bunching where \(\mathcal{V}\) becomes negative with extreme cases of perfect bunching (antibunching) being detected when \(\mathcal{V}=1\) (\(-1\)).
The graphs given in each panel of Figure 3 show the experimental outcomes of the bucket-only detection as the delay between the two input photons was varied by moving the delay line. Here, projections (\(P_{1},P_{2}\)) = (H,H) elicit a dip at \(\Delta t=0\), coinciding with an erasure of the which-way information for the two photons and thereby restoring quantum interference. For (\(P_{1},P_{2}\)) = (H,V), we find a peak instead of a dip, corresponding to photon antibunching [2; 11]. Alternatively, cases (\(P_{1},P_{2}\)) = (H,A), (H,D), (A,D) and (A,A) result in a flat curve (neither a HOM dip nor a peak).
On the other hand, for spatially-resolvable measurements (using the camera), we expect the following spatially structured visibilities, corresponding to a set of polarisation projections of \(P_{1}\in\{H,A\}\) in the camera arm and \(P_{2}\in\{H,V,D,A\}\) in the bucket detector arm as depicted by the arrows in Fig. 3:
\[\begin{split}\mathcal{V}_{HH}&=1\\ \mathcal{V}_{HV}&=-1\\ \mathcal{V}_{AH}&=\cos\left(2\varphi_{1}\right)\\ \mathcal{V}_{AV}&=-\cos\left(2\varphi_{1}\right)\\ \mathcal{V}_{HA}&=\mathcal{V}_{HD}=\mathcal{V}_{AA }=\mathcal{V}_{AD}=0,\end{split} \tag{6}\]
obtained by integrating (4) and (5) over \(\varphi_{2}\) due to bucket detection in arm 2. In this case, images conditioned on coincidence with the bucket detector were captured "out" (upper left inset of the curves) and "in" (upper right inset of the curve) the temporally matched condition. Some asymmetry towards the left in the distributions may be noted as the result of experimental misalignment in the delay line to the camera (see Methods) as well as the \(q\)-plate with respect to the photon distribution. This, however, does not significantly detract from the expected distributions. Each measured pair of coincidence distributions then allowed us to reconstruct the visibility map (lower row) for the projection settings.
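To illustrate how the expected visibility maps of Eq. (6) follow from Eqs. (4) and (5), the following short numerical sketch (Python, with illustrative names) integrates the coincidence probabilities over the bucket-detector coordinate \(\varphi_{2}\):

```python
import numpy as np

# local polarisation unit vectors of the two vector-vortex modes (Eq. 2)
def e_rad(phi): return np.array([np.cos(phi), np.sin(phi)])
def e_pi(phi):  return np.array([np.cos(phi), -np.sin(phi)])

# polariser projection axes
U = {"H": np.array([1.0, 0.0]), "V": np.array([0.0, 1.0]),
     "D": np.array([1.0, 1.0]) / np.sqrt(2), "A": np.array([1.0, -1.0]) / np.sqrt(2)}

def visibility(p1, p2, phi1, n=720):
    """Visibility at azimuth phi1 in the camera arm for projections (p1, p2),
    integrating Eqs. (4) and (5) over the bucket-detector angle phi2."""
    phi2 = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    a_rad, a_pi = U[p1] @ e_rad(phi1), U[p1] @ e_pi(phi1)
    b_rad, b_pi = U[p2] @ e_rad(phi2), U[p2] @ e_pi(phi2)
    c_in = 0.25 * np.abs(a_rad * b_pi - a_pi * b_rad) ** 2                   # Eq. (4)
    c_out = 0.25 * (np.abs(a_rad * b_pi) ** 2 + np.abs(a_pi * b_rad) ** 2)   # Eq. (5)
    return (c_out.sum() - c_in.sum()) / (c_out.sum() + 1e-12)

# e.g. visibility("A", "H", phi1) reproduces cos(2*phi1), visibility("H", "H", phi1) gives 1
```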
Figure 2: **Experimental setup.** After the spatial filtering with single mode fibers, two photons (\(\lambda\) = 810 nm), one in each path, are sent into two different input ports of a 50:50 beamsplitter (BS) as in a typical HOM setting. However, before reaching the BS, each of the photons is prepared in the desired vector vortex (VV) mode (with spatial profiles shown as insets) by sending horizontally polarised light through \(q\)-plates (QP) with \(q=0.5\). Placement of an additional half-waveplate (HWP) after QP\({}_{1}\) transformed one of the photons to the \(\pi\) mode. The temporal delay between the two photons was controlled by a translation stage mounted in the arm containing QP\({}_{2}\). To realise bucket detection, the output photons were coupled (D\({}_{1}\),D\({}_{2}\)) into multimode fibres and sent to two avalanche photodiodes (APD) connected to a coincidence counter (CC). For spatially resolved detection, one output was redirected via a flip mirror (FM) and focused with a lens onto an intensified charge coupled device (Camera), and gated upon detection of the other photon (D\({}_{2}\)). _Which-way_ projections were performed by rotating linear polarisers \(P_{1}\) and \(P_{2}\).
The resulting outcomes relate well to what was observed in the case of no spatial resolution (bucket detection) as the coincidences measured by bucket detection are proportional to the integrated pixel distribution over the mode profile. We additionally compare these to the calculated profiles from Eq. 6 that are shown as upper-right insets of the visibility maps. Accordingly, \((P_{1},P_{2})=\) (H,H) shows uniform coalescence across the spatial distribution, while (H,V) exhibits a uniform anti-coalescence, within the expectations of the calculated distributions. The cases of HA, HD, AA and AD similarly yield a uniform structure in line with expectations of a zero visibility across the measured profile.
Crucially, projections AH and AV, reveal spatially varying features with two complementary lobes, showing coalescence and anti-coalescence across the structure as predicted by Eqs. 6. These structures are not detectable without spatial resolution, as shown by the flat curves in the corresponding graphs. As a result, we herald a mode of structured bosonic coalescence with some parts containing either zero or two photons as in the case of NOON states. At the same time, other regions of the
Figure 3: **Spatially tailored photon distributions from quantum interference.** Table of experimental outcomes for different choices of the polarisation projections P1 (black arrows, row-wise) and P2 (black arrows, column-wise). Hong-Ou-Mandel coincidence counts measured without spatial resolution, varying the temporal delay, are shown as blue points in the graphs in the upper section of each panel (solid curves are corresponding best fits). Error bars from Poissonian statistics are too small to observe. The related spatially-resolved coincidence images (spanning \(\sim 60\) X 60 pixels) in and out of the temporal distinguishability, are given as insets to the HOM curves with arrows denoting the measured distribution for the temporally tuned (\(\Delta t=0\), ‘In’) case to the right and temporally distinguishable (\(\Delta t<<0\), ‘Out’) case to the left. The related spatially resolved visibilities are given as the transverse distributions on the bottom, using the false-color scale shown on the right-side bar. Analytically calculated visibility distributions are given as top-right circular insets.
mode, corresponding to anti-coalescence, will always be populated by one photon.
**Discussion.** With these results, we find interesting features, the most notable of which are the asymmetries between the states projected in either arm. For instance, a diagonal projection only reveals structurally varying coalescence when placed before the camera, and one only observes total non-zero visibilities for H and V polarisation projections. To understand this, one must consider not just the polarisation, but also the paired spatial structures associated with it, as well as the global distinguishability. Here, projection on H and V for both VV modes results in identical spatial lobes that overlap for both distributions, thus rendering the photons indistinguishable, both globally and locally. For A and D, however, the lobes, while partially overlapping, are orthogonal structures and thus globally distinguishable. It follows that if one were to integrate over the first case, the structure is indistinguishable and thus facilitates erasing the which-way information, while in the second case the distinguishability is retained. As the photons must be indistinguishable in both arms to erase the _which-path_ information and observe interference, the perceived distinguishability for both detectors determines what is observed. Bucket detection with A and D projections thus remains distinguishable, leading to no visibility, even with an indistinguishable projection on the camera. Conversely, bucket detection with H and V allows the locally indistinguishable structures for A and D to be seen, despite the global distinguishability. Additionally, a distinct reversal of the coalescent and anticoalescent distributions can be seen for complementary polarisation projections when structural coalescence or anticoalescence is present (i.e. \(P_{2}=\) H vs V for \(P_{1}=\) H and A). One can thus locally switch between bosonic and fermionic behaviour, or equivalently, between two/zero (NOON) and single photon occupation. It follows that tailoring the quantum state can be conveniently achieved by changing the polarisation distribution of the input beams as well as the polarisation projections.
In conclusion, we have generated structured quantum states of light that have no classical counterpart, with a spatially tailored bosonic coalescence. We achieved this by exploiting a simple and robust scheme based on spatially tailored distinguishability between two photons in a quantum eraser setting. This additionally invites insight into the structured state where, in particular, the use of orthogonal modes for each interfered photon showed a global versus local HOM effect [6], such that detection without spatial resolution yielded no sign of interference, but spatially resolved measurements reveal its presence. Although these results were demonstrated for VV modes, our technique may be generalised to intelligently engineer arbitrary desired interference structures for varying purposes. Additionally, we point out that programmable approaches for tailoring the input polarisation distributions [51] would allow dynamic on-demand variations of these states. As distinguishability is the property one needs to tailor, one may extend our result to other degrees of freedom, such as temporally structured light [52]. By nature of the ability to spatially tailor the coalescent behaviour of photons, we believe these new states and the engineering thereof could be beneficial for fields such as quantum microscopy, metrology and communications, including high-dimensional protocols.
## III Methods
**Experimental details.** The two (\(\lambda=810\) nm) signal-idler photons used in the experiment (Fig. 2) for interference on the BS were generated in the state \(\ket{\psi}=\ket{H}\ket{V}\) via type-II degenerate collinear SPDC from a 30 mm long ppKTP crystal pumped with a \(\lambda=405\) nm continuous wave laser and temperature phase-matched at \(\sim 38\)\({}^{\circ}\)C. They were then path separated with a PBS and each spatially filtered by coupling into single mode fibers (SMF), which directed the photons through a half-waveplate and quarter-waveplate (for preparation into the horizontal polarisation state) as well as electronically tuned \(q\)-plates with the topological charge \(q=0.5\) for polarisation-dependent spatial structuring, before impinging onto a 50:50 BS for interference. One SMF was placed on a motorised translation stage to introduce a path delay for temporally tuning the distinguishability. The output of each BS port was then directed towards detectors. We switched between space-integrated and space-resolved measurements by using a flip mirror. The first case (flip mirror down) coupled the structured photons in each arm into multimode fibres for detection by single-photon counting modules (APD) connected to a coincidence counter with an integration window of 2 ns. The second case saw the photons in one arm diverted by the flip mirror to a 30 m delay line for compensation of the electronic delay associated with the triggering of the camera, before being focused by a \(f=150\) mm lens onto an intensified CCD. This camera was then gated by the trigger provided by the APD positioned in the other arm, so as to spatially capture the photons in coincidence. Polarisation projections on the structured states were made by inserting linear polarisers into each output arm before the relevant detectors.
**Vector vortex modes.** In the experiment, VV modes were generated by exploiting spin-to-orbital angular momentum conversion in a birefringent liquid crystal slab with uniform retardation and an azimuthally varying optical axis, known as a \(q\)-plate (QP). This device is characterised by a topological charge, \(q\), related to the number of rotations of the local optical axis and, accordingly, the polarisation-dependent geometric phase
imparted to incident light. In Jones matrix formalism, the QP operation in the linear basis, where \(|H\rangle=[1;0]^{T}\) and \(|V\rangle=[0;1]^{T}\) are horizontal and vertical polarisation states, is described as,
\[QP=\begin{bmatrix}\cos\left(2q\varphi\right)&\sin\left(2q\varphi\right)\\ \sin\left(2q\varphi\right)&-\cos\left(2q\varphi\right)\end{bmatrix}. \tag{7}\]
From this, it is straightforward to see the spatially-varying polarisation distribution induced and dependence on polarisation. Taking a horizontally polarised state as in the experiment, one obtains the following output:
\[QP\left|H\right\rangle=\begin{bmatrix}\cos\left(2q\varphi\right)&\sin\left(2q \varphi\right)\\ \sin\left(2q\varphi\right)&-\cos\left(2q\varphi\right)\end{bmatrix}\begin{bmatrix} 1\\ 0\end{bmatrix}=\begin{bmatrix}\cos\left(2q\varphi\right)\\ \sin\left(2q\varphi\right)\end{bmatrix} \tag{8}\]
and with \(q=0.5\) as in the experiment, the state described in Eq. 2 is obtained. By adding a \(\text{HWP}=[1,0;0,-1]\) with its optical axis aligned horizontally after the QP, the following transformation is achieved:
\[\begin{split}[HWP][QP]&=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}\begin{bmatrix}\cos\left(2q\varphi\right)&\sin\left(2q\varphi \right)\\ \sin\left(2q\varphi\right)&-\cos\left(2q\varphi\right)\end{bmatrix}\\ &=\begin{bmatrix}\cos\left(2q\varphi\right)&\sin\left(2q\varphi \right)\\ -\sin\left(2q\varphi\right)&\cos\left(2q\varphi\right)\end{bmatrix}\end{split} \tag{9}\]
that, acting on the state \(\left|H\right\rangle\), produces the desired state \(\left|\pi\right\rangle=\cos\left(\varphi\right)\left|H\right\rangle-\sin \left(\varphi\right)\left|V\right\rangle\) for \(q=0.5\).
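These Jones-calculus relations are straightforward to verify numerically; a short sketch (Python, illustrative) reproducing Eqs. (7)-(9):

```python
import numpy as np

def qplate(q, phi):
    """Jones matrix of a q-plate at azimuthal angle phi (Eq. 7)."""
    c, s = np.cos(2 * q * phi), np.sin(2 * q * phi)
    return np.array([[c, s], [s, -c]])

HWP = np.array([[1.0, 0.0], [0.0, -1.0]])  # half-waveplate, axis horizontal
H = np.array([1.0, 0.0])                   # horizontally polarised input

phi = np.pi / 3                            # any transverse azimuthal position
rad_mode = qplate(0.5, phi) @ H            # -> [cos(phi), sin(phi)], Eq. (8)
pi_mode = HWP @ qplate(0.5, phi) @ H       # -> [cos(phi), -sin(phi)], the pi mode
```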
## Acknowledgements
We acknowledge support from the Italian Ministry of Research (MUR) through the PRIN 2017 project "Interacting photons in polariton circuits" (INPhPOL) and the PNRR project PE0000023-NQSTI. We also acknowledge support from NATO through SPS Project HADES - MYP G5839 and from the Italian Space Agency (ASI) through the "High dimensional quantum information" (HDQI) project.
|
2301.09606
|
Concept of Delivery System in the Smart City Environment
|
With regard to smart city infrastructures, there is a demand for big data
processing and its further usage. This data can be gained by various means.
There are many IoT devices in the city, which can communicate and share
information about the environment they are situated in. Moreover, every
personal mobile device can also participate in this process and help to gather
data via various applications. Every such app provides useful data by enabling
location and data sharing. This data can be further processed and used for
improving city infrastructure, transport, or other services. We designed a
system for a shared delivery process, which can help to achieve the described
situation. It consists of a frontend and a backend part. The frontend part, a
multiplatform mobile app, provides the graphical interface, and the backend
part provides the database for the gathered data.
|
Zuzana Špitálová, Oliver Leontiev, Patrik Harmaňoš
|
2023-01-23T18:10:41Z
|
http://arxiv.org/abs/2301.09606v1
|
# Concept of Delivery System in the Smart City Environment
###### Abstract
With regard to smart city infrastructures, there is a demand for big data processing and its further usage. This data can be gained by various means. There are many IoT devices in the city, which can communicate and share information about the environment they are situated in. Moreover, every personal mobile device can also participate in this process and help to gather data via various applications. Every such app provides useful data by enabling location and data sharing. This data can be further processed and used for improving city infrastructure, transport, or other services. We designed a system for a shared delivery process, which can help to achieve the described situation. It consists of a frontend and a backend part. The frontend part, a multiplatform mobile app, provides the graphical interface, and the backend part provides the database for the gathered data.
Smart City, Delivery System, PostgreSQL, ReactNative, API
Footnote †: DOI:10.5121/jci.2023.120110
## 1 Introduction
For more than a decade, the technical progress of city infrastructure has been increasing in many ways. The use of complex technologies contributes to the creation of the smart city environment. A smart city can be defined as a place which uses modern technical solutions for conventional services and networks to improve the daily life of the people who live there [1]. Information and communication technologies (ICTs), like artificial intelligence or autonomous vehicles, are involved in daily aspects of life to support urban development. An important item of the smart city environment is the Internet of Things (IoT) [2]. It represents devices which are connected and able to communicate with each other. Via these devices, information can be gained and processed for other purposes.
Based on the mentioned definition and the ICTs used, there are city rankings which evaluate the achieved level of a smart city. According to [3], these studies are done regularly to find the smartest city of the year. There are many aspects required for achieving a certain level of smart city. The rankings rate the integration of urban technologies, sensors, and personal devices. Via them, useful data can be gained and further used for improving existing services or finding information. The data can also be gained by various applications used in the city, like tourist apps, traffic apps, etc.
There is another study related to the citizen cloud, implemented in the city of Shanghai [4]. It describes the implementation of a city platform that provides public services, like culture, healthcare, transportation, etc. So, all the information related to the services in the city can be found there.
In recent years, there has been an increasing number of mobile apps which use the device location while providing the required service. This location can be seen on a digital map. All these devices, via the applications used, can provide data related to the environment they are situated in. From this gained data, much useful information can be derived. Among these apps are services like taxi, food delivery, shared travelling, and transport of persons and goods. As there is an intention to provide services via apps, we designed a system for shared delivery, consisting of a frontend and a backend part. We also studied the delivery systems which are available on the market. Most of them belong to the portfolios of local and global companies, like post and courier services. Only their frontend parts could be studied. These apps are used provided that there is a courier, who is an employee of the company, and a customer, who receives or sends a packet. The main idea of our system is shared delivery, so everyone can participate in the process by choosing the courier or user mode. We assume shared delivery in cases when the delivery person and the delivered parcel have the same destination address. This helps to save resources and decrease emissions. The other characteristic of our system is the possibility to gain data from the environment for further usage. For the conventional systems belonging to big delivery companies, there is no information about this functionality.
Regarding the system's design for shared delivery in the smart city environment, the technology used has significant importance. The main aim is multiplatform usage. Thanks to this possibility, many people can participate in the process independently of the mobile platform.
To achieve multiplatform usage, React Native [5] is a good choice for the frontend part of the system. It is an open-source framework based on JavaScript for mobile app development using components. As it is open-source, it is not necessary to develop everything from scratch. As the frontend part represents the graphical interface, another required characteristic is user-friendly functionality and usage. There are also requirements for quick and secure communication with the server and for secure local data storage.
Regarding the backend part of the system, the multiplatform characteristic of the database system is also important. We chose the PostgreSQL [6] database because of the Relational Database Management System (RDBMS) requirement to provide relational database management functionality. The data is stored in tables, and the RDBMS ensures integrity and rules between databases. The first of the main requirements for the backend part is the speed of the system: the server has to respond as quickly as possible. The second requirement is encryption, related to the communication with the frontend part and to the data stored in the database.
## 2 Design
As mentioned before, the aim of our work was to design a system for a shared delivery process, consisting of a frontend and a backend part, which could be used by people in a certain urban area. The main idea is shared delivery in the case where the delivered parcel and the travelling person have the same direction, so everyone in the community can participate in the process. The means of transport (car, van, motorcycle, e-bike, etc.) used for delivery depends on the person who wants to use the proposed system. We wanted to create an open-source project which could be used by various people, delivering items while travelling to school, work, etc., if the destination address of the item is the same as the one they intend to reach. This helps to save time, money and other resources. As the smart city environment is also ranked according to the technologies and devices implemented inside it, the other aim of our project is its usage in an urban area, where it will be possible to gain various data regarding traffic, large concentrations of cars, number of daily orders, etc. The gained data can be processed and used for other purposes, such as infrastructure improvements, identification of green and busy zones, and many other assets regarding daily life in the city.
### Frontend
Our proposal was designed in the open-source multiplatform framework React Native. Thanks to this, it can be used on multiple mobile platforms, like iOS and Android. The app is designed to follow good practices.
The functional and non-functional requirements were also considered. From the non-functional point of view, the following requirements were considered for the app:
- Multiple mobile platform availability
- Validation of the input data from the users
- Request validation during the communication with the server (backend)
- The user interface should be able to adapt to various types of devices
- The application will send the delivery person's location every few seconds
- The delivery person role will be available only after registration
- The data from the server will be provided in JSON format.
From the functional point of view, there are two roles, the user and the delivery person. It is not necessary to use both roles. The requirements were defined for each role separately and for both in common. The common functional requirements for both roles are the following:
- Registration: User account creation.
- Login: Login into existing account via email and password.
After registration or login, other possibilities become available. For the user role, the functional requirements are the following:
- Delivered package tracking: The possibility to track the delivered package.
- Availability of user profile: User information, like name, email, address, etc.
- Edit the user profile: The possibility to change the user data.
- Add the payment data: The data regarding the credit card. This item is optional.
- New item creation: Delivered parcel creation and specification.
- My items: Active and inactive items, i.e., the delivered packages and the ones currently in delivery.
- Statistics: Information about the activity.
- Delivery person location: Availability of updated package location during the delivery process.
For the delivery person role, the functional requirements are the following:
- Registration as delivery person: After creating an account, the user role is chosen automatically. For the delivery person role, it is necessary to choose this role in the app's menu.
- Active items: Availability of the items currently in delivery.
- Acceptance of parcel for delivery: The option for accepting a package which can be delivered by the delivery person.
- Parcel delivery: Marking the end of the delivery by the courier.
- History of delivered items.
For the usage of the app functionalities, it is necessary to communicate with the server (backend) part of our project. The communication is realized via a REST API interface using the HTTPS protocol. The REST API enables data transfer between the app and the server. Each HTTP request has to be authorized. An access token is necessary for the authorization; it is obtained during login. After login, the server sends the access token and the renew token, which are stored in the local storage of the mobile device. The access token is then used in every request. If the access token is not valid, the renew token is used. If this one is also not valid, the server sends an error and the user has to log in again. If the renew token is valid, the server sends a new access and renew token, which are again stored in the local storage. Although the application communicates with the server and most of the gained data is stored there, there are still situations when some data needs to be stored locally. This can happen when there is no internet connection or there is a connection failure. If the data is stored locally, a request to the server is not necessary. For that purpose, a global data storage was created in the app. It uses the React Context API, which represents an interface enabling data transfer between components. The data can be stored in two ways, using Async Storage and Secure Storage. Async Storage is used in the case of non-sensitive data. It is a React Native system maintained by the developer community, providing local unencrypted asynchronous key-value storage. Secure Storage enables storing sensitive data. The iOS platform supports the iOS Keychain Services storage for certificates or tokens. The Android platform stores the data in key-value form using Secure Shared Preferences with Android Keystore System encryption. Implementation of both systems requires an external library; in our work, it is Expo Secure Store.
So, the application works after establishing communication via registration or login. The user can choose which role he wants to use. In the case of the standard user role, the user can be both the sender and the receiver of a package. The receiver does not need to be registered. For the sender sub-role, registration is necessary. After registration and login, there is an item for creating a new package delivery. The necessary information has to be filled in to create a successful order. The required data are the package dimensions and the addresses of the sender and the receiver. For filling in the address, the Google Places Autocomplete component is used, which uses the Google Places API. After that, the new parcel can be seen on the map available in the application. It is also possible to see the available delivery persons in the surrounding area on the same map. After this process, the package waits in the system for a person who can deliver it. After the package is accepted and picked up by a delivery person, parcel tracking is available in the app until it is delivered to the destination address. In the case of the delivery person, it is necessary to register for this role. After filling in all the necessary data (name, means of transport, etc.), the person can participate in the process, and the available parcels become visible to him. After accepting a parcel for delivery, the location of the delivery person is tracked and shown on the app's map. This functionality is enabled via websockets. The websocket connection between the app and the server is created when a package is chosen for delivery. The location is sent periodically every four seconds and has to be enabled on the delivery person's device. The information about the current latitude and longitude is sent to the websocket server until the package is delivered.
Figure 1: Delivery order creation screen
Figure 2: Package delivery screen
#### 2.1.1 Testing
Regarding the testing, it was realized in two phases. The first phase was aimed at the functional and non-functional requirements defined during development. The testing process was iterative: every added functionality was tested separately first. If the first test results were acceptable, the new functionality was also tested together with the others which had already been implemented. If the first test was not acceptable, a review process was started and the designed functionality was refactored.
The second phase was aimed at user testing, meaning the app was tested by real users with regard to the user interface and app functionality. Five metrics were used for the experiment. The first metric is the success of task fulfillment achieved by the user. The second is the number of failures in the system identified by the user. The third is the time needed for test execution. The fourth represents the unnecessary steps taken during testing in comparison with the ideal way. The last one is a questionnaire about system usage based on the SUS (System Usability Scale) method [7, 8]. There are 10 questions for ranking the application on a scale from 1 to 5, where 1 means strongly disagree, 2 disagree, 3 neutral, 4 agree and 5 strongly agree. According to [7, 8], a score is calculated and the result can be ranked from A to F, where A is the best result and F is the worst. The system was tested by 5 users.
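For reference, the standard SUS score used in this evaluation can be computed in a few lines (a sketch of the calculation described in [7, 8]; the mapping from scores to the A-F grades is not reproduced here):

```python
def sus_score(responses):
    """responses: ten 1-5 ratings in the standard SUS question order."""
    odd = sum(r - 1 for r in responses[0::2])    # positively worded items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])   # negatively worded items 2, 4, 6, 8, 10
    return 2.5 * (odd + even)                    # final score on a 0-100 scale
```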
Regarding the first metric, four of the users fulfilled all tasks. One of them was not able to find and create the delivery person account.
Figure 3: Screen of courier location in real time
For the second metric, the first user identified two issues. One was related to the case when a wrong email address was filled in from the device memory; the problem was that the field reacted only to manual keyboard text input. The other issue was the date field in the delivery person registration, which did not have a specified format type. The second user identified an issue with the Package delivery screen, where the package information overlapped the map. The third user identified an issue with the Pick up button on the Package pick up screen: the button did not react. The problem occurred on the Android platform and was caused by a missing field in the header of the sent request. The fourth user found two issues: the first was a password change failure and the second was the email address overhanging its field. The last user identified one issue related to the delivery person account creation: he was not able to create the account and found the process complicated. The other metrics compared the time and steps needed for task fulfillment. For some users, it was difficult to find the required functionalities the first time; they had to click between screens several times. The last metric was evaluated by the mentioned SUS method. Three users evaluated the proposed system as excellent (A) and two of them evaluated it as good (B).
Figure 4: Success of tasks fulfillment
Figure 5: Identified failures in the system
Figure 8: Time needed for tasks fulfillment
Figure 6: Unnecessary steps needed for tasks fulfillment
Figure 7: System Usability Scale (SUS)
### Backend
The backend part of our system is implemented in Python with Django modules. Django is extended with DRF (Django REST Framework), which supports building a RESTful API. We use the Nginx server and a PostgreSQL database with the pgcrypto and postgis modules to protect and store the encrypted and geographical data. For working with the routes between the sender and the receiver, the Google Maps API is used. The SendGrid service is used for automatic emails. For authentication via tokens, we use JWT (JSON Web Token) technology. The WebSocket protocol is used for real-time communication.
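A minimal configuration sketch of how such a stack can be wired together in Django is shown below; the concrete package choices (e.g., djangorestframework-simplejwt for JWT and Channels for websockets) and the settings values are assumptions for illustration, not necessarily the exact ones used in the project.

```python
# settings.py (fragment) -- one way the stack described above can be configured in Django.
INSTALLED_APPS = [
    "django.contrib.gis",   # PostGIS support for geographical data
    "rest_framework",       # Django REST Framework (RESTful API)
    "channels",             # WebSocket support
]

DATABASES = {
    "default": {
        "ENGINE": "django.contrib.gis.db.backends.postgis",  # PostgreSQL + postgis
        "NAME": "delivery",
    }
}

REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": [
        "rest_framework_simplejwt.authentication.JWTAuthentication",  # token (JWT) authentication
    ],
}

EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"  # automatic emails via SendGrid SMTP
EMAIL_HOST = "smtp.sendgrid.net"
```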
For the backend part, the functional and non-functional requirements were defined. The non-functional requirements represent the characteristics of the system. We defined the following:
* Speed: The server has to respond within a very short time after receiving a request.
* Encrypted communication: The communication between the server and the client (frontend) has to be encrypted.
* Encrypted data: Sensitive data in the database, such as names and emails, have to be encrypted.
* Documentation: The server interface has to be well documented for frontend implementation purposes.
* JSON format: Communication between the client and the server has to be in JSON format.
From the functional requirements point of view, there are the following items:
* Registration: The registration data have to be sent to the server to create the user's identity.
* Login: After the user's login data are sent to the server, it returns a token which is used for subsequent requests from the same user.
* Password update: Password update in case the password is forgotten.
* Package delivery creation: Creation of an item for delivery.
* Package delivery display: The possibility to display the packages that are being delivered to the receiver. Authentication is not necessary.
* Delivered package tracking: The possibility to track the package. Authentication is not needed.
* History of packages: The list of items that were sent.
* Delivery person registration: Registration for users who want to participate as delivery persons.
* Achievement for package delivery request: The delivery person can obtain requests for package delivery in his surrounding area.
* Acceptance of package delivery request: The delivery person can accept the request.
* Change of delivery package state: The delivery person is able to change the package state; the package is delivered after being marked as such by this person.
* Delivery person location: The delivery person can share his location in real time.
* Access to the delivery persons' location: It enables access to the delivery persons' locations in real time without authentication.
* Access to the delivery persons' routes: It enables access to the routes of the delivery persons who are currently delivering, without authentication and filtering.
* Access to the statistics of sent packages: An authenticated user can access data about the sent packages.
* Email sending: The application can inform the sender and the receiver about the package.
* Administrator: The application supports an administrator interface for editing the data in the database.
The backend part is designed for two roles, the client and the delivery person (courier). The client role supports two modes of usage, the sender and the receiver. The sender has to be registered in the application. The data related to the sender are stored on the server using the AES algorithm [9] for private data encryption; for password encryption, the Argon2 algorithm [10] is used. After a delivery request is created, the sender receives a unique code for tracking the delivered package. This code is also automatically sent to the receiver, who does not need to be registered in the app. If the receiver registers in the app, he can automatically see the delivered packages. The history of the sent parcels is available to the user. In the case of the delivery person role, registration is required. When the delivery person is active, the available packages are displayed to him according to the distance from his current location. If the package cannot be delivered for some reason, this has to be communicated between the delivery person and the sender of the package.
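A short sketch of the security-related configuration described above is given below; the Argon2 hasher is the standard Django option, while the tracking-code helper and its format are purely illustrative assumptions (the private data fields themselves are encrypted on the database side with pgcrypto's AES functions).

```python
# Security-related configuration sketch: Argon2 for passwords, plus an illustrative
# helper for the unique tracking code sent to the sender and the receiver.
import secrets

# settings.py: prefer the Argon2 password hasher (requires the argon2-cffi package)
PASSWORD_HASHERS = [
    "django.contrib.auth.hashers.Argon2PasswordHasher",
    "django.contrib.auth.hashers.PBKDF2PasswordHasher",
]

def make_tracking_code() -> str:
    """Generate a unique, URL-safe tracking code (the real code format is not specified)."""
    return secrets.token_urlsafe(8)
```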
The server part consists of the data specifications that need to be stored, the relations between them, and the APIs that enable working with the data. The logical model for our use case consists of several relations:
* Account: It represents the user's account. It contains data about the user's role (admin and active user) and the login data, email and password.
* Person: It contains the registration data of the users. It is also used during delivery creation for identifying a user who does not need to be registered.
* Courier: It contains the information about the delivery person.
* Delivery: It is the main relation of the app. It contains information about the delivery package state, route distance, expected delivery time, etc.
* Item: It represents the information about the delivered package, such as size, weight, fragility, etc.
* Place: It represents the source and destination address of the delivered package.
* Route: It contains the data about the geographical locations of the delivery person.
After consideration, we specified some of the items in the relations Courier, Delivery and Item more precisely. In the Courier relation, we specified the vehicle type with three possible values: small, medium and large. The weight attribute of the Item relation was changed to the values light, medium and heavy. The delivery state in the Delivery relation has five possible values: ready, assigned, delivering, delivered and undeliverable.
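A minimal Django models sketch of these relations and enumerations could look as follows; the field and class names are assumptions derived from the logical model, not the project's actual code.

```python
# models.py (fragment) -- several relations of the logical model with the enumerations above.
from django.contrib.gis.db import models  # gis models expose PointField (postgis)

class Courier(models.Model):
    class Vehicle(models.TextChoices):
        SMALL = "small"
        MEDIUM = "medium"
        LARGE = "large"
    vehicle_type = models.CharField(max_length=10, choices=Vehicle.choices)

class Item(models.Model):
    class Weight(models.TextChoices):
        LIGHT = "light"
        MEDIUM = "medium"
        HEAVY = "heavy"
    weight = models.CharField(max_length=10, choices=Weight.choices)
    fragile = models.BooleanField(default=False)

class Delivery(models.Model):
    class State(models.TextChoices):
        READY = "ready"
        ASSIGNED = "assigned"
        DELIVERING = "delivering"
        DELIVERED = "delivered"
        UNDELIVERABLE = "undeliverable"
    state = models.CharField(max_length=15, choices=State.choices, default=State.READY)
    courier = models.ForeignKey(Courier, null=True, blank=True, on_delete=models.SET_NULL)
    item = models.OneToOneField(Item, on_delete=models.CASCADE)

class Route(models.Model):
    delivery = models.ForeignKey(Delivery, on_delete=models.CASCADE)
    location = models.PointField()            # geographical position of the delivery person
    recorded_at = models.DateTimeField(auto_now_add=True)
```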
Figure 9: Logical model diagram of designed system
The next focus is the APIs of our server application, which are in fact URL addresses with an implemented HTTP method. Respecting the principles of the REST architecture, the URL addresses represent the items. The following items, summarized in Table 1, are examples of the APIs designed for communication with the frontend part of the system.
Figure 10: Enumeration diagram of designed system
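As an illustration, the endpoints of Table 1 could be routed in Django roughly as in the following sketch; the view class names are placeholders, not the project's actual classes.

```python
# urls.py (fragment) -- a possible routing of the endpoints listed in Table 1;
# the view class names are placeholders.
from django.urls import path

from . import views

urlpatterns = [
    path("api/accounts/", views.AccountCreateView.as_view()),                    # POST: registration
    path("api/accounts/token/", views.TokenView.as_view()),                      # GET: login (token)
    path("api/accounts/me/", views.OwnAccountView.as_view()),                    # GET / PATCH: own data
    path("api/accounts/reset_password/", views.ResetPasswordView.as_view()),     # POST: password reset
    path("api/deliveries/", views.DeliveryListCreateView.as_view()),             # POST: create, GET: history
    path("api/deliveries/<int:pk>/", views.DeliveryDetailView.as_view()),        # GET: delivery by ID
    path("api/deliveries/<int:pk>/state/", views.DeliveryStateView.as_view()),   # POST: state change
    path("api/deliveries/statistics/", views.DeliveryStatisticsView.as_view()),  # GET: statistics
    path("api/couriers/", views.CourierCreateView.as_view()),                    # POST: courier registration
    path("api/couriers/closest_delivery/", views.ClosestDeliveryView.as_view()), # GET: closest delivery
    path("api/routes/", views.RouteListView.as_view()),                          # GET: routes statistics
]
```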
#### 2.2.1 Testing
The system was tested periodically during the development, using automated testing after each implemented functionality. Besides that, tests related to the functional and non-functional requirements were carried out.
Regarding the non-functional requirements, the following items were assessed:
* Speed: The average response time of the server is 0.4 seconds.
* Encrypted communication: The HTTPS protocol and an SSL certificate are used for encryption.
* Encrypted data: The private data stored in the database are encrypted using the AES algorithm; for the passwords, the Argon2 algorithm is used.
* Documentation: The server API is fully documented using the OpenAPI 3.0 specification.
* JSON format: The only API that uses a format different from JSON is the package delivery creation. It uses the Form Data format due to the possibility of adding a picture to the request (a request sketch follows this list).
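The sketch below, using the requests library, illustrates this one non-JSON call: the delivery creation request is sent as multipart form data so that a picture can be attached. The host, header and field names are assumptions for illustration.

```python
# A sketch (with the requests library) of the one non-JSON call: delivery creation is
# sent as multipart form data so that a picture can be attached. Host, header and field
# names are assumptions for illustration.
import requests

response = requests.post(
    "https://example.com/api/deliveries/",
    headers={"Authorization": "Bearer <access-token>"},
    data={"size": "medium", "weight": "light", "fragile": "false"},
    files={"picture": open("package.jpg", "rb")},  # adding a file switches to multipart/form-data
)
print(response.status_code)
```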
To summarize the information above, the system fulfills the non-functional requirements.
From the functional requirements point of view, there are the following results:
* Registration: The standard registration process was exercised by sending a request with registration data. The verification email was sent and the account creation was checked. The server response took 0.7 seconds on average.
* Login: Using the access token gained in the previous test, user authentication was checked by sending an HTTP request with the GET method. The server response with the private user data took 0.5 seconds on average.
* Password update: The test was realized using the user email address created in the first test. We then clicked on the link for the password change, filled in the new password and tested it for login. The login was also attempted with the old password. The test was successful and the server response took 0.2 seconds on average.

| **HTTP method** | **Final part of URL** | **Description** |
| --- | --- | --- |
| POST | api/accounts/ | New user creation (registration) |
| POST | api/accounts/verification_email/ | Verification email re-sending |
| GET | api/accounts/token/ | Authentication user code (token) getting (login) |
| GET | api/accounts/me/ | Own account data getting |
| PATCH | api/accounts/me/ | Own account data changing |
| POST | api/accounts/reset_password/ | Password resetting |
| POST | api/deliveries/ | Delivery creation |
| GET | api/deliveries/<id>/ | Delivery data based on ID getting |
| GET | api/deliveries/ | Delivery history getting |
| POST | api/couriers/ | New courier creation (registration) |
| POST | api/deliveries/<id>/state/ | Delivery state change |
| GET | api/couriers/closest_delivery/ | Closest delivery getting |
| GET | api/deliveries/statistics/ | Delivery statistics getting |
| GET | api/routes/ | Routes statistics getting |

Table 1: Proposed APIs
* Package delivery creation: A new client with the receiver role was created and a delivered package was created for this client. We checked whether the server responded with the expected data (data related to the package) and whether the information email was sent. The test took 0.5 seconds on average.
* Delivered package tracking: A request with the package ID created in the previous test was sent without authentication. The first step was then repeated with authentication. We observed that the server sent data about the delivered package in both rounds, and only in the second round was the receiver identified, which is the expected system behavior. The server responded in 0.1 seconds on average.
* History of packages: Two other packages with different source and destination addresses were created and sent to the test receiver. A GET request was sent in which we identified ourselves as the user who sent the packages. The step was repeated with the receiver's authentication. We then checked the item lists from both rounds. The server responded in 0.2 seconds on average.
* Delivery person registration: Another account for a test delivery person was created. By sending the request, we identified ourselves as the test delivery person and checked the expected response. The process took 0.2 seconds on average.
* Achievement for package delivery request: We sent a request in which we identified ourselves as the delivery person created in the previous test, and checked whether the server responded with the list of items created in the previous tests, ordered according to the distance from the delivery person's location. The first step was repeated with a different location, for which the list of items should be in a different order. The test took 0.6 seconds on average.
* Acceptance of package delivery request: We identified ourselves as the delivery person and accepted the package created in the previous test for delivery by sending the request to the server. We checked whether the server responded with the data related to the delivered package. The state of the package and the delivery person were also changed. The process took 0.5 seconds on average.
* Change of delivery package state: We identified ourselves as the delivery person from the previous test and checked whether the accepted packages had the changed state. The response took 0.5 seconds on average.
* Delivery person location: Using the delivered package ID, we created the connection and sent the coordinates of the delivery person. We checked whether the server responded with the added delivery person ID. The test was successful with an immediate response.
* Access to the package location: Using the delivered package ID, we established the connection without authentication. Further messages were sent using the connection established in the previous test. We checked whether the new messages could also be seen in the new communication. We were not able to send messages via the other connection due to the missing authentication. The server responded correctly.
* Access to the delivery persons' location: The previous test was repeated with the connection established as a global one. The delivery persons' locations were visible.
* Access to the delivery persons' routes: The request related to the delivery persons' routes was sent and checked. The routes of the created package deliveries were visible. The filter functionality was checked as well. The server response took 0.7 seconds on average.
* Access to the statistics of sent packages: We sent the request and identified ourselves as the test sender. We checked the expected response with the three items created in the last five months. The response took 0.1 seconds on average.
* Email sending: As the automatic email had already been checked during package delivery creation, we changed the delivered package state to delivered. After that, we checked whether the email was sent to the sender. The test was successful.
* Administrator: The administrator interface was verified to be fully functional.
Regarding the functional requirements, the tests were completed successfully with acceptable average times. Of course, the measured times can differ in other running environments depending on their performance.
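A sketch of the kind of automated check run after each implemented functionality is shown below; it exercises the registration endpoint from Table 1 with an illustrative payload and a rough response-time assertion, and is an assumption about the test style rather than the project's actual test code.

```python
# tests.py (fragment) -- a sketch of the style of automated check run after each
# implemented functionality; the payload and the time threshold are illustrative.
import time

from rest_framework.test import APITestCase


class RegistrationTest(APITestCase):
    def test_registration_creates_account(self):
        payload = {"email": "[email protected]", "password": "s3cret-pass"}
        start = time.monotonic()
        response = self.client.post("/api/accounts/", payload, format="json")
        elapsed = time.monotonic() - start
        self.assertEqual(response.status_code, 201)  # account created
        self.assertLess(elapsed, 1.0)                # rough speed check (0.7 s average reported)
```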
## 3 Future Work
As we designed and tested our system as a prototype, we plan to evaluate it with more extensive testing. The tests were carried out in a standard city environment. Based on the data processed by the backend, we plan to analyze the following city aspects:
- Traffic jams in the certain areas
- The most used means of transport for delivery (According to this, there can be prediction for infrastructure improvements, like chargers, parking places for cars, e-bikes, scooters, etc.)
- The owner of the means of transport (own, rented, shared)
- GPS and network coverage in the urban area
- Rush hours identification
- The most delivered package types (size, fragile, etc.)
- The most efficient user (private person, company)
- Etc.
We plan to extend the API to support various sensors (humidity, temperature, air quality) that do not require much power and do not send much data. These sensors can be mounted on users' means of transport and collect additional data for processing.
Another step can be standard improvements of our system, such as security, effectiveness, database improvements for searching, and overall system performance.
We can also extend the system to multiple servers, which could be split into decentralized clusters. Implementing this feature can improve network stability, as the data will be stored in more places.
| **Functional requirement test** | **Average request time [s]** |
| --- | --- |
| Registration | 0.7 |
| Login | 0.5 |
| Password update | 0.2 |
| Package delivery creation | 0.5 |
| Delivered package tracking | 0.1 |
| History of packages | 0.2 |
| Delivery person registration | 0.2 |
| Achievement for package delivery request | 0.6 |
| Acceptance of package delivery request | 0.5 |
| Change of delivery package state | 0.5 |
| Delivery person location | 0 |
| Access to the package location | - |
| Access to the delivery persons' location | - |
| Access to the delivery persons' routes | 0.7 |
| Access to the statistics of sent packages | 0.1 |
| Email sending | - |
| Administrator | - |

Table 2: Average request times for functional requirements
Finally, we may release the system as open source, so that anyone can contribute and make it better.
## 4 Conclusion
We designed a system, consisting of a frontend and a backend, for a shared delivery process. Each part of the system was tested according to the functional and non-functional requirements. The frontend part was also evaluated by five users, who tested the app from the user's point of view. During this, some technical failures were found and fixed. One of our testers also reported an issue related to the menu navigation, which he found complicated; we did not change the menu navigation, as it has to be evaluated by more users. In the backend part, tests were run after each implemented functionality, and any issue found was fixed. If the unit test of the implemented functionality passed, the automated testing of the whole system was executed. The times and the functional requirements were observed and documented. After the separate testing of both parts, a common evaluation was also performed. As there were not many participants in the testing, we plan to carry out a more extensive evaluation process.
The main advantage of our project is the database system, which processes the gathered data for further purposes. Based on the data analysis, improvements related to the city infrastructure and daily life can be made.
## Acknowledgement
This publication has been written thanks to the support of the Operational Programme Integrated Infrastructure for the project: Research in the SANET network and possibilities of its further use and development (ITMS code: 313011W988), co-funded by the European Regional Development Fund (ERDF).
|
2303.07052
|
Controlling Fractional Difference Equations Using Feedback
|
One of the most popular methods of controlling dynamical systems is feedback.
It can be used without acquiring detailed knowledge of the underlying system.
In this work, we study the stability of fractional-order linear difference
equations under feedback. The stability results are derived for an arbitrary
feedback time $\tau$. We study the cases of $\tau=1$ and $\tau=2$ in further
detail. The extension to the stability of fixed points under feedback for
nonlinear fractional order difference equations with fixed points $ x_{*}=0$ is
also carried out.
|
Divya D. Joshi, Sachin Bhalekar, Prashant M. Gade
|
2023-03-13T12:17:58Z
|
http://arxiv.org/abs/2303.07052v1
|
# Controlling Fractional Difference Equations Using Feedback.
###### Abstract
One of the most popular methods of controlling dynamical systems is feedback. It can be used without acquiring detailed knowledge of the underlying system. In this work, we study the stability of fractional-order linear difference equations under feedback. The stability results are derived for an arbitrary feedback time \(\tau\). We study the cases of \(\tau=1\) and \(\tau=2\) in further detail. The extension to the stability of fixed points under feedback for nonlinear fractional order difference equations with fixed points \(x_{*}=0\) is also carried out.
## 1 Introduction
Differential equations have been used for modeling various phenomena in the natural sciences for a long time. This modeling helps us understand physical phenomena and control them if necessary. Differential equations have been found useful in modeling a plethora of systems, ranging from the spreading of diseases [42], emotional self-regulation in romantic couples [44], and economic development and growth [55] to tumor growth [18]. This modeling is inadequate for certain systems and a generalization is required. Fractional differential equations are the generalization of differential equations for systems with memory. The order of fractional differential equations can be real or even complex. Even though fractional calculus has been around for more than 300 years, fractional differential equations have been applied to real-world situations only in the past few decades. We find the first mention of fractional derivatives by Leibniz and L'Hospital as early as 1695, but the very first definition of a fractional derivative was introduced by Liouville and Riemann in the second half of the 19th century. Thereafter, several eminent mathematicians like Caputo, Hadamard, Grunwald, Letnikov, Riesz, and others gave various definitions of fractional derivatives.
This development in the field of calculus opened the door for many other fields where fractional differential equations are an essential part of mathematical modeling.
The systems defined by fractional differential equations are said to be non-local, because the future depends on the entire history of the system. Thus, systems governed by fractional differential equations have long-term memory, and fractional differential equations are ideal models for systems where memory plays an important role. Such systems are found in diverse fields. In the field of material science, fractional differential equations are used to describe viscoelastic materials, and their order determines the amount of viscosity and elasticity present in the materials [39]. Fractional order epidemic models of various diseases like COVID-19 [56], Ebola [17], HIV [30], and influenza A (H1N1) [20] have shown promising results that helped to understand, analyze and control the spread of these diseases. Seismological studies have shown that fractional order intensity measures for probabilistic seismic demand modeling applied to highway bridges perform better than traditionally used intensity measures, improving efficiency and proficiency while maintaining practicality and sufficiency [40]. Several books have been devoted to real-life applications of fractional calculus [46], and recent applications of fractional calculus in science and engineering can be found in [45].
Fractional order systems are different from integer order systems in several respects. However, they show phenomena observed in integer order systems such as chaotic or aperiodic behavior. In certain systems, chaos is not desired and several control schemes are designed to control the chaos. Two of the most popular control schemes are the Ott-Grebogi-Yorke method [34] and the Pyragas method [37]. For the first method, we need to know the stable and unstable manifold of the desired orbit. For fractional order systems, the presence of such manifolds itself is a matter of debate [10]. The method that can be used without any detailed knowledge of the systems is the feedback method suggested by Pyragas. We study the possibility of controlling fixed points using feedback and show that the method indeed works in the fractional case as well.
In the case of systems with delay \(\tau\), the value of the current state depends on its value \(\tau\) steps back. Such systems have applications both in modeling as well as control. Fractional differential equations with delay have found applications in control theory. Control theory deals with the control of dynamical systems, uses feedback in several cases and has wide applications. Controllers may be designed using a feedback mechanism. Here, the output of the system is fed back to the system through a controller to influence the behavior of the system and give the desired output.
Stability analysis of delay differential equation with control is carried out by Bhalekar [7]. Controllers have been designed for controlling the fractional ordered systems with input delay [41, 53, 54, 31]. The stability of the Cournot duopoly model with distributed time delays is discussed by Culda _et al._ in [13]. The existence of chaotic behavior, stability, and its synchronization of fractional ordered systems with delay have been studied for several systems like the Ikeda system [25], logistic system [47] and Chen system [14]. The ongoing scenario in the world demands newer and updated knowledge of the spreading of diseases and their controllability. As discussed earlier, fractional-order differential equations are used for modeling epidemiological problems. Certain realistic situations demand the introduction of a delay in resulting equations and could lead to better modeling and prediction. A rich dynamical behavior is obtained for the infection model of fractional order with delay [26].
The fractional differential equations with delay have been studied in various contexts. Rihan _et al._ investigated the fractional order model of the interactions of tumor cells and the immune systems with two different time delays. The stability of the solutions was observed to have improved and the model leads to various complex behaviors [38]. Alzahrani _et al._ studied the adverse effect of untimely or delayed reporting of infectious diseases more realistically by using fractional differential equations with delay [2].
Fractional difference equations are a relatively much less studied models. Difference equations are the discretized versions of differential equations. The study of fractional difference equations can be seen as an approach to studying fractional differential equations by the finite difference method. Furthermore, if an integer-order difference equation is generalized to a fractional-order one, then it is able to model the memory properties in the system. This is due to the non-local property of the fractional order difference operator. In this generalization, all the values of the system from an initial point are considered while evaluating the new value. These equations demand fewer computational resources and are simpler to code. The stability conditions for difference equations of fractional order and even complex order have already been obtained by [43, 9, 24]. Stability results of two-term fractional difference equations are proposed by Brandibur and Kaslik [11]. Stability conditions are crucial for studying the control and synchronization of the systems. For systems of complex order, we have numerically investigated a fractional difference equation along with a delay term which can be viewed as a controller [24]. We observed that the parameter range over which chaos is obtained is reduced on the introduction of delay term for the complex fractional Lozi map. This can be viewed as a control. One of the simplest control is a case of a fixed point where the system gives steady output. Though the above work indicates that feedback can be useful for control in fractional order maps, we need rigorous analytic conditions for practical applications.
The theory of fractional finite differences was initiated by Lubich [27] and Miller and Ross [28], and the topic was further developed by Atici and coworkers [4]. The stability analysis of these equations is presented in [12, 11]. Chaotic systems of this kind are employed in image processing by Abdeljawad _et al._ [1]. Atici and Sengul introduced the fractional calculus of variations and derived the discrete Euler-Lagrange equation [5]. Tumor growth in cancer is modeled by using nabla-fractional difference equations in [3]. Ouannas, Batiha and Pham [35] proposed the theory and applications of chaotic difference equations of fractional order. These systems have recently been used to model COVID-19 [16]. The fractional order Mandelbrot and Julia sets are studied in [15].
This paper gives the stability analysis for stable fixed points for systems defined by fractional difference equations coupled with a delay term. The exact analysis is carried out for the linear system and is extended to the nonlinear maps. Stabilization or destabilization of fixed points of fractional order maps with feedback is studied. Because this is a discrete system, the control term can be added proportionally to the value at the previous time. This is unlike the differential equations studied above where the control term is within the integration and thus the control is also fractional order. This is a simpler and more practical case where the control at a given time depends on the value of variable \(\tau\) steps back and not the entire history. We obtain the stability bounds for this system. The analysis is essentially for linear systems. However, we observe that the same analysis gives equally good stability bounds for nonlinear maps with appropriate linearization near the zero fixed point. In this work, we have considered the \(h-\) difference operator while defining the system. For simplicity, we have taken \(h=1\). Therefore, \(\tau\) has integer values. Fractional values of \(\tau\) can be considered for \(h\neq 1\).
The plan of the paper is as follows. We give essential definitions followed by the model. We carry out stability analysis of the fractional order difference equations with delay. We show that the stability conditions can be expressed in an equivalent matrix form. We study the cases \(\tau=1\) and \(\tau=2\) in greater detail. The case of large \(\tau\) is also discussed briefly. Several examples are given for linear and nonlinear systems to corroborate our results with numerical evidence in various systems.
## 2 Preliminaries
In this section, we give some basic definitions. Let \(h>0\), \(a\in\mathbb{R}\), \((h\mathbb{N})_{a}=\{a,a+h,a+2h,\ldots\}\) and \(\mathbb{N}_{\circ}=\{0,1,2,\ldots\}\).
**Definition 2.1**: _(see [29]) The Z-transform of a sequence \(\{y(n)\}_{n=0}^{\infty}\) is a complex function given by_
\[Y(z)=Z[y](z)=\sum_{k=0}^{\infty}y(k)z^{-k}\]
_where \(z\in\mathbb{C}\) is a complex number for which the series converges absolutely._
**Definition 2.2**: _(see [19, 6]) Let \(h>0,\;a\in\mathbb{R}\) and \((h\mathbb{N})_{a}=\{a,a+h,a+2h,\ldots\}\). For a function \(x:(h\mathbb{N})_{a}\rightarrow\mathbb{C}\), the forward h-difference operator is defined as_
\[(\Delta_{h}x)(t)=\frac{x(t+h)-x(t)}{h},\]
_where \(t\in(h\mathbb{N})_{a}\)._
Throughout this article, we take \(a=0\) and \(h=1\). We write \(\Delta\) for \(\Delta_{1}\). Now, we generalize the fractional order operators defined in [[29, 19, 6]].
**Definition 2.3**: _For a function \(x:(h\mathbb{N})_{a}\rightarrow\mathbb{C}\) the fractional h-sum of order \(\alpha=u+\iota v\in\mathbb{C},u>0\) is given by_
\[(_{a}\Delta_{h}^{-\alpha}x)(t)=\frac{h^{\alpha}}{\Gamma(\alpha)}\sum_{s=0}^{ n}\frac{\Gamma(\alpha+n-s)}{\Gamma(n-s+1)}x(a+sh),\]
_where, \(t=a+(\alpha+n)h,\;n\in\mathbb{N}_{\circ}\)._
For \(h=1\) and \(a=0\), we have
\[(\Delta^{-\alpha}x)(t) = \frac{1}{\Gamma(\alpha)}\sum_{s=0}^{n}\frac{\Gamma(\alpha+n-s)}{ \Gamma(n-s+1)}x(s)\] \[= \sum_{s=0}^{n}\left(\begin{array}{c}n-s+\alpha-1\\ n-s\end{array}\right)x(s).\]
Here, we used the generalized binomial coefficient
\[\left(\begin{array}{c}\mu\\ \eta\end{array}\right)=\frac{\Gamma(\mu+1)}{\Gamma(\eta+1)\Gamma(\mu-\eta+1)}, \;\mu,\eta\in\mathbb{C},\;\mathrm{Re}(\mu)>0,\;\mathrm{and}\;\mathrm{Re}(\eta )>0.\]
If \(n\in\mathbb{N}_{\circ}\) then
\[\left(\begin{array}{c}\mu\\ n\end{array}\right)=\frac{\Gamma(\mu+1)}{n!\,\Gamma(\mu-n+1)}=\frac{\mu(\mu-1)\ldots(\mu-n+1)}{n!}.\]
**Definition 2.4**: _For \(n\in\mathbb{N}_{\circ}\) and \(\alpha=u+\iota v\in\mathbb{C},u>0,\) we define_
\[\tilde{\phi}_{\alpha}(n)=\left(\begin{array}{c}n+\alpha-1\\ n\end{array}\right)=(-1)^{n}\left(\begin{array}{c}-\alpha\\ n\end{array}\right).\]
**Note**: The convolution \(\tilde{\phi}_{\alpha}*x\) of the sequences \(\tilde{\phi}_{\alpha}\) and \(x\) is defined as
\[\left(\tilde{\phi}_{\alpha}*x\right)(n)=\sum_{s=0}^{n}\tilde{\phi}_{\alpha}(n-s )x(s)\]
\[\therefore(\Delta^{-\alpha}x)(n)=(\tilde{\phi}_{\alpha}*x)(n).\]
\[\therefore Z\big((\Delta^{-\alpha}x)(n)\big)=Z\big(\tilde{\phi}_{\alpha}(n)\big)\,Z(x(n))=(1-z^{-1})^{-\alpha}X(z),\]
where \(X\) is \(Z\) transform of \(x\).
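A short numerical sketch of Definitions 2.3 and 2.4 (for \(h=1\), \(a=0\)) is given below; it evaluates \(\tilde{\phi}_{\alpha}(n)\) through the Gamma function and computes the fractional sum as the convolution \(\tilde{\phi}_{\alpha}*x\). The example sequence is purely illustrative.

```python
import math

def phi_tilde(alpha, n):
    # \tilde{phi}_alpha(n) = binom(n + alpha - 1, n) = Gamma(n + alpha) / (Gamma(alpha) * Gamma(n + 1))
    return math.gamma(n + alpha) / (math.gamma(alpha) * math.gamma(n + 1))

def fractional_sum(x, alpha):
    """(Delta^{-alpha} x)(n) = (phi_tilde_alpha * x)(n) for n = 0, ..., len(x)-1 (h = 1, a = 0)."""
    return [sum(phi_tilde(alpha, n - s) * x[s] for s in range(n + 1)) for n in range(len(x))]

# Fractional sum of the constant sequence x(n) = 1 with alpha = 0.5 (illustrative)
print(fractional_sum([1.0] * 5, 0.5))
```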
**Property 2.1**: _(see [36]) The time-shifting property shows how a change in the discrete function's time domain alters the Z-domain._
\[Z[x(k-n)]=z^{-n}X(z)\]
**Proof**: From Definition 2.1 we have,
\[X(z)=\sum_{k=0}^{\infty}x(k)z^{-k}.\]
Consider \(k-n=m\) i.e., \(k=m+n\). Thus, we write the z-transform equation as
\[Z[x(k-n)]=\sum_{k=0}^{\infty}x(k-n)z^{-k}=\sum_{m=0}^{\infty}x(m)z^{-(m+n)}= \sum_{m=0}^{\infty}x(m)z^{-m}z^{-n}=z^{-n}\sum_{m=0}^{\infty}x(m)z^{-m}=z^{-n} X(z)\]
**Lemma 2.1**: _For \(\alpha\in\mathbb{C},\,\mbox{Re}(\alpha)>0\),_
\[Z(\tilde{\phi}_{\alpha}(t))=\frac{1}{(1-z^{-1})^{\alpha}}.\]
**Proof**: We have,
\[Z(\tilde{\phi}_{\alpha}(t)) = \sum_{j=0}^{\infty}\tilde{\phi}_{\alpha}(j)z^{-j}\] \[= \sum_{j=0}^{\infty}\left(\begin{array}{c}j+\alpha-1\\ j\end{array}\right)z^{-j}\] \[= \sum_{j=0}^{\infty}(-1)^{j}\left(\begin{array}{c}-\alpha\\ j\end{array}\right)z^{-j}\] \[= (1-z^{-1})^{-\alpha}.\]
by using Newton's generalization of the Binomial Theorem [32, 33].
## 3 Model
Consider the fractional order linear difference equation
\[x(t)=x_{0}+\sum_{j=0}^{t-1}\frac{\Gamma(t-j+\alpha-1)}{\Gamma(\alpha)\Gamma(t-j )}(a-1)x(j). \tag{1}\]
In this paper, we study the stability analysis of a fractional difference equation (1) coupled with a delay term.
### Modeling Equation
Introducing the delay term in the equation as \(bx(t-\tau)\), we get
\[x(t)=bx(t-\tau)+x_{0}+\sum_{j=0}^{t-1}\frac{\Gamma(t-j+\alpha-1)}{\Gamma( \alpha)\Gamma(t-j)}\left((a-1)x(j)\right), \tag{2}\]
where, \(b\in\mathbb{R}\) and \(a\in\mathbb{C}\).
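The following minimal Python sketch iterates the delayed system (2) directly from its definition, using \(\tilde{\phi}_{\alpha}(k)=\Gamma(k+\alpha)/(\Gamma(\alpha)\Gamma(k+1))\) as the memory kernel; the zero pre-history for \(t<0\) and the chosen parameter values are assumptions for illustration.

```python
import math

def phi_tilde(alpha, k):
    # memory kernel: Gamma(k + alpha) / (Gamma(alpha) * Gamma(k + 1))
    return math.gamma(k + alpha) / (math.gamma(alpha) * math.gamma(k + 1))

def iterate_delayed(a, b, alpha, tau, x0, T):
    """Iterate eq. (2): x(t) = b x(t-tau) + x0 + sum_{j=0}^{t-1} phi_tilde(t-1-j) (a-1) x(j)."""
    x = [complex(x0)]
    for t in range(1, T + 1):
        memory = sum(phi_tilde(alpha, t - 1 - j) * (a - 1) * x[j] for j in range(t))
        delayed = x[t - tau] if t - tau >= 0 else 0.0  # zero pre-history assumed for t < 0
        x.append(b * delayed + x0 + memory)
    return x

# Illustrative run with the parameter values of Example 3.2 below (alpha=0.5, tau=1, b=2, a=-1.2)
trajectory = iterate_delayed(a=-1.2, b=2.0, alpha=0.5, tau=1, x0=0.5, T=200)
print(abs(trajectory[-1]))
```

Plugging in the parameter values used in the illustrative examples below should reproduce the stable or unstable trajectories reported there.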
\[\therefore x(t+1)-bx(t+1-\tau)=x_{0}+(a-1)(\tilde{\phi}_{\alpha}*x)(t).\]
Taking Z-transform on both sides, we get
\[zX(z)-zx_{0}-bz^{(1-\tau)}X(z) = \frac{x_{0}}{1-z^{-1}}+\frac{(a-1)}{(1-z^{-1})^{\alpha}}X(z).\] \[\therefore X(z)\left(z-bz^{(1-\tau)}-\frac{(a-1)}{(1-z^{-1})^{\alpha }}\right) = zx_{0}+\frac{x_{0}}{1-z^{-1}}, \tag{3}\]
where \(|z|<1\).
### Characteristic Equation
From (3), the characteristic equation of (2) is
\[\left(z(1-z^{-1})^{\alpha}-b(1-z^{-1})^{\alpha}z^{(1-\tau)}-(a-1)\right)=0, \tag{4}\]
where the condition \(|z|<1\) should be satisfied. Putting \(z=e^{(it)}\), we get,
\[e^{(it)}(1-e^{-it})^{\alpha}-b(1-e^{-it})^{\alpha}e^{(1-\tau)it} -(a-1) = 0,\] \[\mbox{i.e. }e^{(it)}(1-e^{-it})^{\alpha}-b(1-e^{-it})^{\alpha}e^{(1- \tau)it}+1 = a. \tag{5}\]
### Matrix Representation
Equation (2) can be represented equivalently as the following system
\[x(t) = x_{0}+\sum_{j=0}^{t-1}\frac{\Gamma(t-j+\alpha-1)}{\Gamma(\alpha )\Gamma(t-j)}\left((a-1)x(j)\right)+by(t),y(t)=x(t-1)\mbox{ for }\tau=1. \tag{6}\] \[\therefore x(t+1) = x_{0}+(a-1)(\tilde{\phi}_{\alpha}*x)(t)+by(t+1).\]
Taking z-transform, we get
\[zX(z)-zx_{0}=\frac{x_{0}}{1-z^{-1}}+\frac{(a-1)}{(1-z^{-1})^{\alpha}}X(z)+b(zY(z)- zy_{0}),Y(z)=\frac{X(z)}{z}+x(-1).\]
\[\therefore z(1-z^{-1})^{\alpha}X(z)-(a-1)X(z)-bz(1-z^{-1})^{\alpha}Y(z)=x_{0}(1-z^ {-1})^{\alpha-1}+z(1-z^{-1})^{\alpha}x_{0}-bzy_{0}(1-z^{-1})^{\alpha},\]
\[zY(z)-X(z)=zx(-1).\]
\[\therefore\begin{bmatrix}z(1-z^{-1})^{\alpha}-(a-1)&-bz(1-z^{-1})^{\alpha}\\ -1&z\end{bmatrix}\begin{bmatrix}X(z)\\ Y(z)\end{bmatrix}=0.\]
\[\therefore\begin{vmatrix}z(1-z^{-1})^{\alpha}-(a-1)&-bz(1-z^{-1})^{\alpha}\\ -1&z\end{vmatrix}=0.\]

\[\therefore\begin{vmatrix}z(1-z^{-1})^{\alpha}-(a-1)&0&-bz(1-z^{-1})^{\alpha}\\ -1&(z-1)+1&0\\ 0&-1&(z-1)+1\end{vmatrix}=0.\]
We can generalize and write \(\tau+1\) dimensional determinant for delay \(\tau\) as follows.
\[\begin{vmatrix}z(1-z^{-1})^{\alpha}-(a-1)&0&0&\ldots&-bz(1-z^{-1})^{\alpha}\\ -1&(z-1)+1&0&\ldots&0\\ 0&-1&(z-1)+1&\ldots&0\\ \vdots&\ddots&\ddots&\ldots&\vdots\\ 0&\ldots&-1&(z-1)+1&0\\ 0&0&\ldots&-1&(z-1)+1\end{vmatrix}=0.\]
### Boundary curve
Separating real and imaginary parts of equation (5), we get
\[Re(a)=2^{\alpha}\left(\sin\left(\frac{t}{2}\right)\right)^{\alpha}\left(\cos \left(\frac{\alpha\pi}{2}+t\left(1-\frac{\alpha}{2}\right)\right)-b\cos\left( \frac{\alpha\pi}{2}+t\left(1-\tau-\frac{\alpha}{2}\right)\right)\right)+1,\]
\[Im(a)=2^{\alpha}\left(\sin\left(\frac{t}{2}\right)\right)^{\alpha}\left(\sin \left(\frac{\alpha\pi}{2}+t\left(1-\frac{\alpha}{2}\right)\right)-b\sin\left( \frac{\alpha\pi}{2}+t\left(1-\tau-\frac{\alpha}{2}\right)\right)\right).\]
Therefore, the parametric representation of the boundary condition is
\[\begin{split}\beta(t)=&\{2^{\alpha}\left(\sin \left(\frac{t}{2}\right)\right)^{\alpha}\left(\cos\left(\frac{\alpha\pi}{2}+t \left(1-\frac{\alpha}{2}\right)\right)-b\cos\left(\frac{\alpha\pi}{2}+t\left( 1-\tau-\frac{\alpha}{2}\right)\right)\right)+1,\\ & 2^{\alpha}\left(\sin\left(\frac{t}{2}\right)\right)^{\alpha}\left( \sin\left(\frac{\alpha\pi}{2}+t\left(1-\frac{\alpha}{2}\right)\right)-b\sin \left(\frac{\alpha\pi}{2}+t\left(1-\tau-\frac{\alpha}{2}\right)\right)\right) \},\end{split} \tag{7}\]
for \(t\in[0,2\pi]\) in the complex plane. If the complex number \(a\) lies inside this anticlockwise oriented simple closed curve \(\beta(t)\) then the system (2) will be asymptotically stable.
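The containment test implied by this criterion can be carried out numerically as in the sketch below, which samples \(\beta(t)\) from (7) and checks whether a given complex \(a\) lies inside it; the use of matplotlib's Path for the point-in-polygon test is an implementation choice, and the test is only meaningful as written when \(\beta(t)\) is a simple closed curve (otherwise the individual anticlockwise loops have to be examined, as in the theorems below).

```python
import numpy as np
from matplotlib.path import Path

def boundary_curve(alpha, b, tau, num=2000):
    """Sample the closed curve beta(t) of eq. (7) for t in [0, 2*pi]."""
    t = np.linspace(0.0, 2.0 * np.pi, num)
    r = (2.0 * np.sin(t / 2.0)) ** alpha
    re = r * (np.cos(alpha * np.pi / 2 + t * (1 - alpha / 2))
              - b * np.cos(alpha * np.pi / 2 + t * (1 - tau - alpha / 2))) + 1.0
    im = r * (np.sin(alpha * np.pi / 2 + t * (1 - alpha / 2))
              - b * np.sin(alpha * np.pi / 2 + t * (1 - tau - alpha / 2)))
    return re, im

def is_inside(a, alpha, b, tau):
    """Point-in-curve test for a complex 'a' (meaningful when beta is a simple closed curve)."""
    re, im = boundary_curve(alpha, b, tau)
    return Path(np.column_stack([re, im])).contains_point((a.real, a.imag))

# Parameters of Example 3.4 below: a = 0.3 should lie inside the curve, a = -0.4 outside
print(is_inside(0.3 + 0j, 0.3, 0.1, 1), is_inside(-0.4 + 0j, 0.3, 0.1, 1))
```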
### Stability Analysis with \(\tau=1\)
**Theorem 3.1**: _Consider the delayed fractional order equation (2) with \(\tau=1\). Let \(b=g_{1}(\alpha),b=g_{2}(\alpha)\) and \(b=g_{4}(\alpha)\) be the bifurcation curves as shown in Figure 1 in \(b\alpha\)-plane which are the branches of the implicit curve_
\[g(b,\alpha) = b\alpha\cos\left(\frac{1}{2}(1+\alpha)\left(\pi-\arctan\left( \frac{R}{S}\right)\right)\right) \tag{8}\] \[-(-1+\alpha)\cos\left(\frac{1}{2}\left(\pi(1+\alpha)-(-1+\alpha) \arctan\left(\frac{R}{S}\right)\right)\right)\] \[+\sin\left(\frac{1}{2}\left(\pi\alpha-(-3+\alpha)\arctan\left( \frac{R}{S}\right)\right)\right),\]
_where,_
\[R=\frac{-1+\alpha+b\alpha-b\alpha^{2}-\sqrt{(1+b\alpha)^{2}(1-2\alpha+4b \alpha+\alpha^{2})}}{b\alpha},\]
Figure 1: bifurcation regions in \(b\alpha\)-plane for \(\tau=1\) with representative stability diagrams. Here we have magnified the range on the y-axis for representation purposes. (NOT TO SCALE)
\[S = -\frac{\sqrt{2}}{b\alpha}(-(1+2(-1+b)b^{2}\alpha^{3}+b^{2}\alpha^{4} +\alpha^{2}-3b^{2}\alpha^{2}-2\alpha+2b\alpha\] \[+(1-\alpha-b\alpha+b\alpha^{2})(\sqrt{(1+b\alpha^{2})(1-2\alpha+4b \alpha+\alpha^{2})})))^{\frac{1}{2}}.\]
_Furthermore, define the curve \(b=g_{3}(\alpha)\) as the straight line \(b=-1\) in the \(b\alpha\)-plane._
We have the following stability results:
1. If \(b\in(-\infty,g_{4}(\alpha))\cup(g_{1}(\alpha),\infty)\), then the system (2) is unstable.
2. If \(g_{4}(\alpha)<b<g_{3}(\alpha)\), then the boundary curve (7) produces three disjoint regions; two of which are stable and one is unstable.
3. If \(g_{3}(\alpha)<b<g_{2}(\alpha)\), then the boundary curve (7) generates a bounded region which is a stable region for the system.
4. If \(g_{2}(\alpha)<b<g_{1}(\alpha)\), then the single stable region in case 3 gets divided into two regions; one is stable and the other is unstable.
5. If \(b=g_{2}(\alpha)\), then the curve (7) is a smooth curve and the inside part is stable.
6. If \(b=g_{3}(\alpha)\), then \(\beta(0)=\beta(\pi)\) and the stable region gets divided into two parts.
7. If \(b=g_{1}(\alpha)\) or \(b=g_{4}(\alpha)\), then the curve \(\beta\) will have cusps.
**Proof:** The boundary curve \(\beta(t)\) defined by (7) is closed because the initial point \(\beta(0)\) has the same value as that of final point \(\beta(2\pi)\). If \(\beta(t)\) is simple and its orientation is anticlockwise (positive) then the bounded part inside this curve is the stable region for the system (2). As the parameter values change, the simple closed curve \(\beta(t)\) transforms to the non-simple curve i.e. a curve containing multiple points. Therefore, the region bounded by the branch of the non-simple curve \(\beta(t)\) with anticlockwise orientation is the required stable region for the system (2).
It is observed that the formation of cusps in \(\beta(t)\) is responsible for the generation of multiple points (self-intersections). Thus, the parameter values at which cusps are produced in \(\beta(t)\) are the points of bifurcation. At the cuspidal point \(\beta(t_{0})\), the curve \(\beta(t)=(x(t),y(t))\) cannot have the derivative \(\beta^{\prime}(t)=\frac{y^{\prime}(t)}{x^{\prime}(t)}\). Here, \(x(t)\) and \(y(t)\) are the coordinates of the curve \(\beta(t)\). In this case, \(\lim_{t\to t_{0}}\beta^{\prime}(t)\) can take either \(\frac{0}{0}\) or \(\frac{\infty}{\infty}\) form. It is observed that at initial point \(t=0\), this limit takes \(\frac{\infty}{\infty}\) form. Therefore, the initial point \(\beta(0)\) is responsible for the bifurcations in some cases.
To check another possibility (i.e. \(\frac{0}{0}\) form), we solve the equations \(x^{\prime}(t)=0\) and \(y^{\prime}(t)=0\) simultaneously. We eliminate \(t\) between these equations and get a relation in the parameters \(b\) and \(\alpha\). This is carried out by squaring and adding the equations \(x^{\prime}(t)=0\) and \(y^{\prime}(t)=0\). Now, we get the value of \(t\) as functions of \(b\) and \(\alpha\). Substituting this \(t\) value in the equation \(y^{\prime}(t)=0\), we get the implicit expression (8).
This curve \(g(b,\alpha)=0\) has 3 branches viz \(b=g_{1}(\alpha),b=g_{2}(\alpha)\) and \(b=g_{4}(\alpha)\) as shown in Figure 1. Note that, \(b=g_{2}(\alpha)\) is the straight line \(b=1\) in the \(b\alpha\)-plane.
Furthermore, the curve \(\beta(t)\) intersects itself at \(\beta(0)\) and \(\beta(\pi)\) as the parameter \(b\) passes through the value \(-1\). At this intersection point \(\beta(0)=\beta(\pi)\), \(\beta^{\prime}(t)\) does not exist at \(t=0,\pi\) and we have a bifurcation apart from cusps. Thus, \(b=-1\) is also a bifurcation curve in the \(b-\alpha\) plane. We denote it by \(b=g_{3}(\alpha)\).
If the \(b\) value is above the curve \(b=g_{1}(\alpha)\), the orientation of the boundary curve \(\beta(t)\) becomes clockwise and hence the system becomes unstable for the values of '\(a\)' in the bounded as well as unbounded regions described by \(\beta(t)\).
As \(b\) passes through the curve \(b=g_{1}(\alpha)\), the \(\beta(t)\) becomes multiple curves and generates stable and unstable bounded regions as shown in Figure 1. If we keep on decreasing the value of
\(b\) the unstable part becomes smaller and vanishes at \(b=1\). Note that \(\beta\) is smooth at \(t=0\) for \(g_{1}(\alpha)<b\leq g_{2}(\alpha)\). If \(g_{2}(\alpha)<b<g_{3}(\alpha)\) then the \(\beta(t)\) is a simple closed curve with an anticlockwise orientation. Therefore, the region bounded by \(\beta(t)\) is the stable region and the unbounded part of the complex plane is the unstable region.
As we take the values of \(b\) between the curves \(b=g_{3}(\alpha)\) and \(b=g_{4}(\alpha)\), \(\beta(t)\) produces two stable and one unstable bounded regions as shown in the Figure 1. The size of the unstable region increases if we decrease the value of \(b\).
At the bifurcation curve \(b=g_{4}(\alpha)\), the stable regions vanish completely and the \(\beta(t)\) becomes a simple closed curve with clockwise orientation. Thus, the system is completely unstable for \(b<g_{4}(\alpha)\). This proves the result.
### Illustrative Examples
In this section, we provide corroborating evidence for stability results obtained in the above section for \(\tau=1\) and various values of \(b\). We study various regions shown in Figure 1.
**Example 3.1**: _Let us begin with the region that lies above \(g_{1}(\alpha)\). We take \(b=18.3\) and \(\alpha=0.1\) (Figure 1(a)). Here, we observe a non-smooth curve \(\beta(t)\) with cusps on both ends. The number \(a=-2\) lies inside the unstable region according to Figure 1(b) and \(a=(0.4+0.4\iota)\) which lies outside the curve, is unstable as well (Figure 1(c)). The system is unstable for any value of parameter \(a\)._
**Example 3.2**: _Now consider the region between \(b=g_{1}(\alpha)\) and \(b=g_{2}(\alpha)\). Consider \(b=2\) and \(\alpha=0.5\). Here, the stability curve is divided into two parts (see Figure 2(a)). Consider a system (2) with \(a=-1.2\). It is inside the stable region and we get a stable trajectory, as seen in Figure 2(b). For \(a=0.2\), which lies in the unstable region, we get an unstable solution (Figure 2(c)). The solutions for the values of \(a\) lying outside the stability curve were checked and found unstable._
Figure 2: Example 1

**Example 3.3**: _Consider the boundary \(b=g_{2}(\alpha)\), which is described by \(b=1\) for any \(\alpha\). Consider the stability curve for \(\alpha=0.4\) and \(b=1\) (see Figure 3(a)). It generates a single region. The quantity \(a=0.5\) inside the curve leads to a stable solution. For \(a=-0.7\iota\) outside the curve, an unstable solution is observed._
**Example 3.4**: _For \(b\) in the region between \(b=g_{2}(\alpha)\) and \(b=g_{3}(\alpha)\), a single stable region (Figure 4(a)) is observed. For \(\alpha=0.3\) and \(b=0.1\), the value \(a=0.3\) is inside this region and gives a stable solution (Figure 4(b)). On the other hand, \(a=-0.4\) lies outside the stable region and leads to an unstable solution (Figure 4(c))._
**Example 3.5**: _Examples with values of \(b\) on the boundary curves give limiting cases. For example, \(b=g_{2}(\alpha)=1\) is the limiting value of \(b\) at which a single stable region is observed, while in the next region we have one stable and one unstable region inside the stability curve (see Examples 3.3 and 3.2). Similarly, for \(b=g_{3}(\alpha)=-1\) the values for \(t=0\) and \(t=\pi\) touch each other on the real axis and we have two stable regions. For \(b<g_{3}(\alpha)=-1\), we have 3 regions, one of which is unstable. Consider the stability curve for \(\alpha=0.7\) and \(b=-1\) (see Figure 5(a)). Stable trajectories are observed for \(a=1.1+0.5\iota\) and \(a=1.3-0.8\iota\) (Figures 5(b) and 5(c), respectively). We checked the stability of the parameter values lying outside the curve and verified that they are unstable._
**Example 3.6**: _Now we consider the region lying between \(b=g_{3}(\alpha)\) and \(b=g_{4}(\alpha)\). The stability curve for \(\alpha=0.2\) and \(b=-3\) is sketched in Figure 6(a). The curve has three regions, two of which look identical due to symmetry with respect to the real axis. The system with \(a=(4+1.3\iota)\) has a stable solution (see Figure 6(b)); similarly, \(a=(4-1.3\iota)\) leads to a stable trajectory. Thus these two regions have stable solutions. On the other hand, for \(a\) in the third region (say, \(a=2\)), we observe unstable trajectories (see Figure 6(c)). It was checked that the parameter values \(a\) lying outside the stability curve lead to unstable solutions._
**Example 3.7**: _The last region lies below the curve \(b=g_{4}(\alpha)\). For \(\alpha=0.5\) and \(b=-2.2\), the curve \(\beta\) (see Figure 7(a)) encloses a single region. There are no stable values of \(a\) inside or outside the region. Consider \(a=1.5\) and \(a=2.8\); both values lead to unstable trajectories (see Figures 7(b) and 7(c), respectively)._
Figure 3: Example 2
## 4 Stability result for \(\tau=2\)
**Theorem 4.1**: _Now consider the delayed fractional order equation (2) with \(\tau=2\). The dynamics is richer and leads to five boundary curves that demarcate qualitatively different behaviors. Let \(b=g_{j}(\alpha)\), \(j=1,2,\cdots,5\) be the bifurcation curves in the \(b\alpha\)-plane (cf. Figure 9) which are the branches of the implicit curve._
\[\begin{split} g(b,\alpha)=&\cos\left(\frac{1}{2} \left(\pi\alpha-(-3+\alpha)K\right)\right)-b(1+\alpha)\cos\left(\frac{1}{2} \left(\pi\alpha-(3+\alpha)K\right)\right)+b\sin\left(\frac{1}{2}(1+\alpha) \left(\pi-K\right)\right)\\ &+(\alpha-1)\sin\left(\frac{1}{2}\left(\pi(1+\alpha)-(-1+\alpha) K\right)\right),\end{split} \tag{9}\]
_where, \(K=\arccos\left(\frac{-2+2\alpha-\alpha^{2}+b^{2}(2+2\alpha+\alpha^{2})}{2(-1+ \alpha+b^{2}(1+\alpha))}\right)\). For any \(\alpha\in(0,1)\), \(-1=g_{5}(\alpha)<g_{4}(\alpha)<g_{3}(\alpha)<g_{2}(\alpha)<g_{1}(\alpha)=1\)._
The stability results for the system (2) with \(\tau=2\) are as follows:
1. If \(b\in(-\infty,g_{5}(\alpha))\cup(g_{1}(\alpha),\infty)\), then the system is unstable.
2. If \(g_{5}(\alpha)<b<g_{4}(\alpha)\), then the boundary curve \(\beta(t)\) generates four regions; of which the outer three are unstable and the central one is stable.
3. If \(g_{4}(\alpha)<b<g_{3}(\alpha)\), then the number of unstable regions reduces to one while the number of stable regions remains the same, i.e. one. Two of the unstable regions described in Case 2 merge with the stable region and hence the size of the stable region in this case is larger than that in Case 2.
4. If \(g_{3}(\alpha)<b<g_{2}(\alpha)\), then the region produced by the boundary curve is a single closed curve which is stable.
5. If \(g_{2}(\alpha)<b<g_{1}(\alpha)\), then the boundary curve produces three regions, one stable and two unstable.

Figure 4: Example 3
Now we consider the boundary cases:
1. If \(b=g_{5}(\alpha)=-1\), then the curve \(\beta(t)\) has three regions and all the regions are unstable.
2. If \(b=g_{4}(\alpha)\), then the curve \(\beta(t)\) has two regions, a stable region with two cusps on the right side and an unstable region.
3. If \(b=g_{3}(\alpha)\), then the curve \(\beta(t)\) has a single stable region with a cusp on the left side.
4. If \(b=g_{2}(\alpha)\), then the curve \(\beta(t)\) has a single region that is stable and has two cusps.
5. If \(b=g_{1}(\alpha)=1\), then the curve \(\beta(t)\) has two regions and both regions are unstable.
**Proof:**
As discussed in the proof of Theorem 3.1, as the parameter values change the simple closed curve \(\beta(t)=(x(t),y(t))\) possesses multiple points and generates different bounded regions, in this case also. Among these, the regions with the anticlockwise oriented boundary are the stable ones. The multiple points are generated through the cusps.
To find the conditions for the \(\frac{0}{0}\) form of \(\lim_{t\to t_{0}}\beta^{\prime}(t)\), we solve \(x^{\prime}(t)=0\) and \(y^{\prime}(t)=0\) simultaneously. Squaring and adding these equations, we get
\[t=\arccos\left(\frac{2(\alpha-1)-\alpha^{2}+b^{2}(\alpha^{2}+2\alpha+2)}{2(b^ {2}(1+\alpha)+\alpha-1)}\right)\]
Figure 5: Example 4

Substituting this value in the equation \(x^{\prime}(t)=0\), we get the expression for the existence of cusps in \(\beta(t)\) as equation (9). This implicit curve \(g(b,\alpha)\) has 5 branches, namely \(b=g_{j}(\alpha)\), \(j=1,2,\cdots,5\), as shown in Figure 9. These bifurcation curves produce 6 different regions labeled as A, B, C, D, E, and F of the \(b-\alpha\) plane (see Figure 9). We list our observations below:
If the point \((b,\alpha)\) belongs to the region A _i.e._, if \(b>g_{1}(\alpha)\), the boundary curve \(\beta(t)\) is a simple closed curve with a clockwise orientation. Thus, the regions inside as well as outside \(\beta(t)\) are unstable.
In the region B, we have \(g_{2}(\alpha)<b<g_{1}(\alpha)\). Here, \(\beta(t)\) becomes a multiple curve and has two unstable and one stable bounded region. As \(b\) decreases in B, the range of stable region goes on increasing.
If \(g_{3}(\alpha)<b<g_{2}(\alpha)\) (i.e. region C), \(\beta(t)\) becomes a simple closed curve with anticlockwise orientation. Thus the inside part of \(\beta(t)\) is the stable region and the outside part is unstable. As the value of \(b\) goes on decreasing, the stable region gets stretched horizontally and a cusp is formed for \(b=g_{3}(\alpha)\).
Figure 6: Example 5
As \(b\) is further reduced _i.e._, when \(g_{4}(\alpha)<b<g_{3}(\alpha)\) (i.e. region D), the cusp on the boundary curve \(\beta(t)\) evolves into a loop-shaped structure of clockwise orientation making \(\beta\) a multiple curve with one stable and one unstable region. As the value of \(b\) moves closer towards \(g_{4}(\alpha)\), the area of unstable regions increases. At the bifurcation curve \(b=g_{4}(\alpha)\), two cusps are observed on the boundary curve \(\beta\) on the right side.
In region E (\(g_{5}(\alpha)<b<g_{4}(\alpha)\)), both of the cusps on the right of \(\beta\) evolve into two unstable regions along with the previous unstable region on the left side. Thus, the boundary curve \(\beta(t)\) has 3 unstable regions and one stable region. Reducing the value of \(b\) further results in the reduction of the size of the stable region.
When \(b\) reaches the bifurcation curve \(b=g_{5}(\alpha)\), the stable region disappears completely, leaving the boundary curve \(\beta(t)\) with 3 unstable regions. These regions are unstable as a result of their clockwise orientation.
In region F, _i.e._ for \(b<g_{5}(\alpha)\) the boundary curve has four bounded unstable regions with a clockwise orientation. Thus, below the bifurcation curve \(g_{5}(\alpha)\), the system becomes completely unstable for all values of \(a\).
As in the case of \(\tau=1\), we have checked all the above results numerically.
## 5 Asymptotic limit of large delay
We observe that we can stabilize a larger range of \(a\) values with \(\tau=1\) compared to \(\tau=2\). The reason is that there are more routes to instability for larger \(\tau\). We now consider the system (10) for higher \(\tau\) values. Let us take \(\tau=20,21,40\) and \(41\) for \(\alpha=0.2\). The boundary curves \(\beta\) have been plotted for the values of \(b=-0.5\) and \(b=0.5\) in Figures 10(a) and (b), respectively. In both figures, the stable region is in the center and is surrounded by the spring-like structure. This spring-like structure has a clockwise orientation and thus it is unstable. On the real axis, the upper bound is one. The lower bound is indicated by the intersection of the stability curve with the real axis. The lower bound of real stable \(a\) is at \(\beta(\pi)\) when \(\tau\) is even and \(b\) is positive, or \(\tau\) is odd and \(b\) is negative. However, in the other cases, the lower bound approaches \(\beta(\pi)\) as we increase \(\tau\). For a given \(\tau\), the stable region depends slightly on whether \(\tau\) is even or odd and on the sign of \(b\); this dependence reduces as we increase \(\tau\). While the upper bound is fixed at \(t=0\) for the boundary curve \(\beta(t)\), the lower bound approaches \(\beta(\pi)\). We find that for large \(\tau\), the stable range of \(a\) does not depend on the sign of \(b\) and is given by \(a=1+2^{\alpha}(|b|-1)\).

Figure 7: Example 6
The \(b-a\) curves plotted in Figures 15 and 16 can be used to determine the values of the feedback coefficient \(b\) that give the maximum stable range. As \(\tau\) increases, the \(b-a\) curve reaches a limiting region. As mentioned above, the stable \(b-a\) region is bounded by \(a=1\) and \(a=1+2^{\alpha}(|b|-1)\) as \(\tau\to\infty\).
In a control system, the feedback term can be selected according to the requirements of the systems. We see that the maximum range of asymptotically stable fixed points can be obtained in the case of \(\tau=1\). So, one can use \(\tau=1\) to maximize the range of asymptotically stable fixed points of the system. Large delay can be used if the system requires enhanced chaos.
Figure 8: Example 7
Figure 9: bifurcation regions in \(b\alpha\)-plane for \(\tau=2\) with representative stability diagrams
## 6 Efficacy of control for large negative multipliers
The stable real values of \(a\) range from \(1-2^{\alpha}\) to \(1\) for system (1). The real range for the integer order difference equation is between -1 and 1, so there is a reduction in the real range for fractional order maps. Now \(a=-7\) is outside the stable range for any \(0<\alpha\leq 1\). The stability of system (2) depends on the feedback coefficient '\(b\)' and the delay '\(\tau\)'. For \(b=0\), we recover system (1) without feedback. Figure 11 shows stability curves for system (2) with \(\alpha=0.25,\tau=1,b=0\) (in black) and \(b=6\) (in red). The point \(a=-7\) lies outside the black stability curve but is enclosed within a small stable region of the red curve. We analyze the stability of this point in both cases by iterating the systems for \(t=500\) time-steps. For the \(b=0\) case, the trajectory diverges and the solution is unstable (see Figure 12(a)). For the \(b=6\) case, the trajectory asymptotically converges to zero and the solution is stable (see Figure 12(b)). Thus large negative multipliers can be stabilized with feedback for \(\tau=1\).
For the system (2) with \(\tau=2\), the effect is less dramatic. Figure 13 shows the stability curves for system (2) with \(\alpha=0.5,\tau=2,b=0\) (in black) and \(b=-0.6\) (in blue). In the first case, \(b=0\), the point \(a=-1.1\) lies outside the black stability curve and has an unstable solution (see Figure 14(a)). In the second case, \(b=-0.6\), the point \(a=-1.1\) lies inside the blue stability curve and has a stable solution, as seen in Figure 14(b).

Figure 11: Stability regions for system (2), \(\alpha=0.25\).

Figure 10: Boundary curves are shown for the system (7) with \(\alpha=0.2\), \(\tau=20,21,40,41\), (a) \(b=-0.5\) and (b) \(b=0.5\).
Thus, it is possible to stabilize a larger range of parameter values for a fractional order system by adding a feedback term with delay (system (2)). The case of \(\tau=1\) gives the maximum range of stability with appropriate \(b\) and is recommended for control.
## 7 Nonlinear Maps
Consider a fractional order map with a feedback control
\[x(t)=x_{0}+bx(t-\tau)+\frac{1}{\Gamma(\alpha)}\sum_{j=1}^{t}\frac{\Gamma(t-j+ \alpha)}{\Gamma(t-j+1)}[f(x(j-1))-x(j-1)]. \tag{10}\]
**Definition 7.1**: _A steady-state solution of (10) is called an equilibrium point. Thus, if \(x_{*}\) is an equilibrium point of (10) then \(x_{0}=x_{*}\) implies \(x(t)=x_{*}\) for all \(t=1,2,\cdots\)._
Figure 12: Stability analysis for system (2) with \(\alpha=0.25\), \(\tau=1\) and \(a=-7\).
Figure 13: Stability regions for system (2) \(\alpha=0.5\).
Note that the only equilibrium point of (10) is \(x_{*}=0\) when \(f(0)=0\). Therefore, we consider the systems (10) with equilibrium point \(x_{*}=0\).
In a neighborhood of the point \(x_{*}=0\), we have \(f(x)\approx f(x_{*})+(x-x_{*})f^{\prime}(x_{*})=ax\), where \(a=f^{\prime}(0)\). Thus, the local stability properties of the nonlinear system (10) at \(x_{*}=0\) are the same as those of the linear system (2).
We consider only real values of \(a\) and \(b\), in this case. Therefore, the stable region given by the curve \(\beta(t)\) defined by (7) is bounded by the curves \(a=1\), \(a=-2^{\alpha}\left(1-b(-1)^{\tau}\right)\) and the parametric curve \(a(t)=2^{\alpha}\left(\sin\left(\frac{t}{2}\right)\right)^{\alpha}\left(\cos \left(\frac{\alpha\pi}{2}+t\left(1-\frac{\alpha}{2}\right)\right)-\sin\left( \frac{\alpha\pi}{2}+t\left(1-\frac{\alpha}{2}\right)\right)\cot\left(\frac{ \alpha\pi}{2}+t\left(1-\tau-\frac{\alpha}{2}\right)\right)\right)+1\),
\(b(t)=\frac{\sin\left(\frac{\alpha\pi}{2}+t\left(1-\frac{\alpha}{2}\right) \right)}{\sin\left(\frac{\alpha\pi}{2}+t\left(1-\tau-\frac{\alpha}{2}\right) \right)}\), \(t\in[0,2\pi]\) in the \(b-a\) plane. We denote these curves as \(b-a\) curves.
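As a concrete illustration, the parametric boundary above can be tabulated numerically. The Python sketch below does only that: it samples \(b(t)\) and \(a(t)\) for \(t\in(0,2\pi)\). The function name, the sampling density, and the handling of points where the denominator vanishes are our own choices, and the lines \(a=1\) and \(a=-2^{\alpha}\left(1-b(-1)^{\tau}\right)\) still have to be added separately to close the stable region.

```python
import numpy as np

def ba_boundary(alpha, tau, n=2000, eps=1e-6):
    """Tabulate the parametric b-a boundary curve given in the text.

    Returns (b(t), a(t)) sampled for t in (0, 2*pi); points where the
    denominator sin(...) vanishes are set to NaN.
    """
    t = np.linspace(eps, 2.0 * np.pi - eps, n)
    phi1 = alpha * np.pi / 2 + t * (1.0 - alpha / 2)        # argument without the delay
    phi2 = alpha * np.pi / 2 + t * (1.0 - tau - alpha / 2)  # argument including the delay tau
    denom = np.sin(phi2)
    denom = np.where(np.abs(denom) < 1e-12, np.nan, denom)
    b = np.sin(phi1) / denom
    a = (2.0 ** alpha) * np.sin(t / 2) ** alpha * (
        np.cos(phi1) - np.sin(phi1) * np.cos(phi2) / denom  # cot(phi2) = cos(phi2)/sin(phi2)
    ) + 1.0
    return b, a

if __name__ == "__main__":
    b, a = ba_boundary(alpha=0.5, tau=1)
    print(b[:3], a[:3])
```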
Thus, the system (10) can be controlled if the point \((b,a)\) with \(a=f^{\prime}(0)\) lies inside the region bounded by these curves. We consider the nonlinear systems (10) of fractional order \(\alpha\) and delay \(\tau\). The fractional logistic map in this model is defined by the equation (10) with \(f(x)=\lambda x(1-x)\), where \(\lambda\) is a parameter and \(a=\lambda(1-2x_{*})=\lambda\).
Similarly, the fractional cubic map is (10) with \(f(x)=\beta x^{3}+(1-\beta)x\), where \(a=3\beta x_{*}^{2}+(1-\beta)=1-\beta\).
Figure 15 shows the stability regions of all these systems for \(\tau=1\) and various values of \(\alpha\).
We iterate these maps for \(T\) time-steps, where \(T\) is large, and the equilibrium or asymptotic equilibrium point is assumed to be stable if convergence is obtained within \(\delta\). We know the boundaries of the stable fixed point in the \(b-a\) plane for the linear system. For \(a=f^{\prime}(0)\), we observe that the \((b,a)\) values for the stable fixed point lie within the analytically obtained bounds for both \(\tau=1\) and \(\tau=2\). For \(\tau=1\), we have plotted the \((b,a)\) values in Figure 15 for \(\alpha=0.25,0.5,0.75\). It is observed that all the stable zero fixed points of the logistic and cubic maps lie within the stable region defined by the \(b-a\) curves.
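A minimal Python sketch of this numerical procedure for Eq. (10) is given below. The logistic nonlinearity and the parameter values in the example follow Fig. 18(b), while the initial condition, the shorter horizon \(T\), and the history convention \(x(s)=x_{0}\) for \(s\leq 0\) are illustrative assumptions of this sketch rather than choices stated in the text.

```python
import numpy as np
from scipy.special import gammaln

def iterate_map(f, alpha, tau, b, x0, T):
    """Iterate the delayed fractional-order map of Eq. (10) for T time-steps.

    The history convention x(s) = x0 for s <= 0 is an assumption of this sketch.
    """
    x = np.empty(T + 1)
    x[0] = x0
    g = np.empty(T)                  # g[j-1] = f(x(j-1)) - x(j-1)
    g[0] = f(x0) - x0
    lg_alpha = gammaln(alpha)
    for t in range(1, T + 1):
        k = np.arange(t - 1, -1, -1)  # k = t - j for j = 1, ..., t
        # Gamma(t-j+alpha) / (Gamma(t-j+1) * Gamma(alpha)), evaluated via log-gamma
        w = np.exp(gammaln(k + alpha) - gammaln(k + 1) - lg_alpha)
        delayed = x[t - tau] if t >= tau else x0
        x[t] = x0 + b * delayed + w @ g[:t]
        if t < T:
            g[t] = f(x[t]) - x[t]
    return x

if __name__ == "__main__":
    lam = 3.3                         # logistic parameter, cf. Fig. 18(b)
    traj = iterate_map(lambda x: lam * x * (1.0 - x), alpha=0.5, tau=1, b=1.1, x0=0.2, T=3000)
    print("final value:", traj[-1], "last increment:", abs(traj[-1] - traj[-2]))
```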
We find that this \(b-a\) region indeed encloses the stability region and gives precise bounds. Thus, if the stable equilibrium point is known, the strength of feedback required to stabilize it can be found. The uncontrolled case corresponds to \(b=0\); if we want to stabilize (or destabilize) the equilibrium point, an appropriate value of \(b\) can be used and the desired range of stability can be obtained. We have studied \(\alpha=0.25,0.5,0.75\) and \(\tau=1\). For \(\tau=1\), the stability region decreases in size with increasing \(\alpha\).
Figure 16 confirms that the stable fixed points of the fractional maps lie within the region enclosed by the \(\alpha\)-dependent \(b-a\) curves. Similarly, for \(\tau=2\), stability regions can be obtained for various values of \(\alpha\). The stable regions for the zero fixed point of the logistic and cubic maps with fractional order \(\alpha=0.5\) and \(\tau=2\) lie within the stable region derived for the linear case (see Figure 16).
Figure 14: Stability analysis for system (2) with \(\alpha=0.5\), \(\tau=2\) and \(a=-1.1\).
Now we consider the case of large \(\tau\) for the fractional logistic map defined by equation (10) with \(f(x)=\lambda x(1-x)\), where \(f^{\prime}(0)=\lambda=a\). We iterate this map for \(T=6\times 10^{4}\) time-steps, and the equilibrium point is assumed to be asymptotically stable if convergence is obtained within \(\delta=10^{-7}\). Here, we consider \(\alpha=0.75\) and \(\tau=100\). The triangular stability region encloses the asymptotically stable fixed points of the fractional logistic map; this is the generalized stable region for large \(\tau\). From Figure 17 it is clear that the extent of the stable region does not increase for any nonzero value of \(b\), and it decreases with larger \(|b|\). Thus, a large delay is not useful in stabilizing the fixed point of the fractional system (10), and the best results are obtained with \(\tau=1\). However, as mentioned above, a larger \(\tau\) can be used to destabilize a stable system.
The control term in this model is a delay term with coefficient '\(b\)'. In figures 18 (a) and (c), an increase in the range of nonzero stable fixed points can be seen. We observe that a proper selection of the control parameter \(b\) results in an extension of the range of the stable fixed points of the system. The maximum range of the nonzero stable fixed points can be found with the help of the \(b-a\) curve for that particular \(\alpha\) value of the fractional system. The range of stability of nonzero fixed points is extended in the presence of feedback. Preliminary numerical investigations indicate that the above conditions are necessary but not sufficient for the stability of nonzero fixed points, and further analytic investigations are needed. Figures 18 (b) and (d) show the time series of the system.
Figure 15: Stable region for fractional systems with \(\tau=1\). The stable fixed points of fractional systems lie within the region enclosed by the \(b-a\) curve. We discard \(T\) time-steps and the convergence is within \(\delta\). For \(\alpha=0.25\), \(T=6\times 10^{4}\) and \(\delta=10^{-9}\) for the logistic map, and \(T=4\times 10^{4}\) and \(\delta=5\times 10^{-8}\) for the cubic map. In all other cases \(T=2\times 10^{4}\) and \(\delta=10^{-5}\).
## 8 Discussion and Conclusion
Fractional order systems with delay have been investigated in many contexts, and numerous works exist for neural networks in particular. In [48], stability conditions and the existence of Hopf bifurcation for a fractional order BAM neural network (FOBAMNN) with delay have been established. In [22], a FOBAMNN with four delays has been reduced to two delays and a correlation between stability and the delay term has been studied. Investigation of a fractional order neural network with multiple leakage delays shows that both the fractional order and the time delay are very important in controlling the transient behaviors of the FONN devised in [21]. The stability of fractional order triangle multi-delayed neural networks has been studied in [49]. A comparative study of integer order and fractional order delayed BAM neural networks shows an increased stability region [51]. In [50], sufficient conditions on different delays ensuring stability and generation of Hopf bifurcation have been demonstrated. Stability and bifurcation of an isovalent version of a fractional-order stage-structured predator-prey system have been investigated in [52]. Global asymptotic stabilization of fractional-order memristor-based neural networks (FMNNs) with time delay can be achieved by adjusting two groups of parameters, as illustrated in [23]. The analysis of the fractional order Bloch equation with delay shows behaviors ranging from damped oscillations to oscillations with increasing amplitude
Figure 16: Stable region for fractional systems with \(\alpha=0.5\), \(\tau=2\). The stable fixed points of the fractional systems lie within the region enclosed by the \(b-a\) curve. We discard \(T\) time-steps and the convergence is within \(\delta\). For logistic map \(T=10^{5}\) and \(\delta=10^{-9}\) and for cubic map \(T=5\times 10^{4}\) and \(\delta=5\times 10^{-7}\).
Figure 17: Stability region in \(b-a\) plane for the system (10) with \(\alpha=0.75\) and large \(\tau\). The asymptotic stable fixed points of the fractional logistic map defined by (10) with \(\alpha=0.75\) and \(\tau=100\) lie inside the generalized stability region.
for various values of delay [8]. In this work, we carry out basic investigations in the context of fractional order difference equations.
Control of chaos in dynamical systems is an important aspect of the theory of dynamical systems from the viewpoint of applications. For integer order differential and difference equations, it is a well-studied problem both experimentally and theoretically. One of the simplest control schemes is feedback, and the Pyragas method is one such method that uses delayed feedback to stabilize chaotic systems. In this work, we studied systems defined by fractional difference equations coupled with a delay term, where the delay term acts as a control. We give analytic conditions for the stability of the fixed point of these systems for arbitrary delay. A more detailed analysis is carried out for \(\tau=1\) and \(\tau=2\), followed by an analysis of the asymptotic limit. We give all bifurcation curves; they are given by \(g(b,\alpha)\). For real maps, we give the generalized stable regions for \(\tau=1\), \(\tau=2\), and in the asymptotic limit. Using \(g(b,\alpha)\), we obtain the upper and lower bounds of \(b\) for a particular \(\alpha\) for which the stable regions exist. The stability region of a system depends on the fractional order \(\alpha\), delay
Figure 18: Figure (a) shows the bifurcation diagram for a logistic map with \(\tau=1\) and \(b=1.1,\alpha=0.5\), where only a stable fixed point is observed; the bifurcation diagram without control is also shown for reference. Figure (c) shows the bifurcation diagram for \(\tau=2\) and \(b=-0.58,\alpha=0.5\), again with the uncontrolled bifurcation diagram for reference. The range over which a stable fixed point is observed is extended in either case. Figure (b) shows the stabilization of the unstable chaotic state for \(\tau=1\), \(\alpha=0.5\), and \(\lambda=3.3\) to the stabilized fixed point. Similarly, figure (d) shows the stabilization of the unstable chaotic state to the stabilized fixed point for \(\tau=2\), \(\alpha=0.5\), and \(\lambda=3.23\).
\(\tau\), and the control parameter \(b\). For a nonlinear map \(f(x)\), the zero fixed point with slope '\(a=f^{\prime}(0)\)' is stable in the same \(b-a\) range. The boundary curve (7) may enclose stable and/or unstable regions for multiple combinations of \(\alpha\), \(\tau\), and \(b\); a criterion for identifying the stable and unstable regions is also given using the orientation of the curve. Finally, we have studied nonlinear systems with fixed point \(x_{*}=0\). Thus the results apply to a broad range of systems, and this is a practical scheme for control. Our analysis can be used to stabilize or destabilize a fractional order difference system by introducing an appropriate delay term. Chaotic systems can be stabilized by selecting the delay \(\tau=1\), whereas a stable system can be made unstable or chaotic by selecting a larger value of \(\tau\).
## 9 Acknowledgement
PMG and DDJ thank DST-SERB for financial assistance (Ref. CRG/2020/003993).
|
2305.15900
|
Polarization Independent Grating in GaN-on-Sapphire Photonic Integrated
Circuit
|
In this work, we report the realization of a polarization-insensitive grating
coupler, single-mode waveguide, and ring resonator in the GaN-on-Sapphire
platform. We provide a detailed demonstration of the material characterization,
device simulation, and experimental results. We achieve a grating coupler
efficiency of -5.2 dB/coupler with a 1dB and 3dB bandwidth of 40 nm and 80 nm,
respectively. We measure a single-mode waveguide loss of -6 dB/cm. The losses
measured here are the lowest in a GaN-on-Sapphire photonic circuit. This
demonstration provides opportunities for the development of on-chip linear and
non-linear optical processes using the GaN-on-Sapphire platform. To the best of
our knowledge, this is the first demonstration of an integrated photonic device
using a GaN HEMT stack with 2D electron gas.
|
Suraj, Shashwat Rathkanthiwar, Srinivasan Raghavan, Shankar Kumar Selvaraja
|
2023-05-25T09:56:16Z
|
http://arxiv.org/abs/2305.15900v1
|
# Polarization Independent Grating in GaN-on-Sapphire Photonic Integrated Circuit
###### Abstract
In this work, we report the realization of a polarization-insensitive grating coupler, single-mode waveguide, and ring resonator in the GaN-on-Sapphire platform. We provide a detailed demonstration of the material characterization, device simulation, and experimental results. We achieve a grating coupler efficiency of -5.2 dB/coupler with a 1dB and 3dB bandwidth of 40 nm and 80 nm, respectively. We measure a single-mode waveguide loss of -6 dB/cm. The losses measured here are the lowest in a GaN-on-Sapphire photonic circuit. This demonstration provides opportunities for the development of on-chip linear and non-linear optical processes using the GaN-on-Sapphire platform. To the best of our knowledge, this is the first demonstration of an integrated photonic device using a GaN HEMT stack with 2D electron gas.
© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
## 1 Introduction
Gallium Nitride (GaN) has increasingly become promising in the field of photonics due to broadband transparency and the ability to grow high optical quality wafer-scale thin films. GaN with a typical bandgap of\(\approx\)3.4 eV [1], can be used for photonic applications in the UV region such as UV photodetectors, LEDs, distributed feedback (DFB) lasers [2, 3, 4, 5, 6, 7]. Further, the ease of bandgap engineering in GaN by alloying with other III-V nitrides such as InN and AlN [8, 9], makes it applicable in UV, visible, to infrared photonics. Most of these applications utilize the bulk property of the GaN or its alloys. However, there are different ways to tailor GaN properties apart from alloying, which include doping [10], quantum well-based bandgap engineering, and application of external electric field (Franz-Keldysh effect and Quantum confined stark effect (QCSE)) [11, 12]. The inter-sub-band transition in quantum wells can be exploited to achieve a broader operating range from UV-IR. The GaN bandgap engineering enables control over the device functionality post-fabrication, leading to the realization of efficient active photonic devices.
Interesting non-linear properties such as the Pockels and Kerr effects can help in fabricating efficient GaN-based optical modulators. The non-centrosymmetric structure of the GaN crystal enables the material to generate a second-order non-linear optical response, such as second harmonic generation from the bulk crystal. The non-linear susceptibility \(\chi^{(2)}\), of the order of that of lithium niobate (approx. 20 pm/V), provides a platform that can be used to realize frequency conversion over the transparency window of GaN [13, 14, 15, 16]. Recently, there has been a demonstration of four-wave mixing in a GaN/AlGaN stack [17]. The advantage of implementing non-linear optical processes in a thin film is that the power and footprint required to achieve the desired non-linear activity are reduced due to the increased optical intensity [18]. Despite these advantages of GaN, there is little work on grating couplers for GaN waveguides that efficiently couple light from free space into a photonic integrated circuit, especially at communication wavelengths (1550 nm), with only one report on a free-standing GaN grating [19].
Efficient on-chip integrated photonic devices require epitaxially grown GaN films that are
substrate specific. However, epitaxial growth of GaN-on-Si (100) faces two major challenges, a large lattice constant mismatch (>17%) and the difference in the coefficient of thermal expansion (\(\approx\)116%) between GaN and Si. The lattice mismatch leads to defective film, and high thermal mismatch would result in film cracks [20]. This lattice mismatch could be reduced by using buffer layers, however, the thermal mismatch is still an issue for thicker GaN-film growth. Substrates such as Sapphire and silicon carbide provide an excellent growth surface to grow GaN with a lattice mismatch of 16% and 3.5%, and a minimal thermal mismatch of 25%. By tuning the growth parameters of the buffer layer, thicker GaN-film growth is possible for realizing photonic devices.
In this paper, we present GaN-on-Sapphire as a substrate for integrated photonics applications. In particular, we present a high electron mobility transistor (HEMT) stack that can be used to realize high-frequency electro-optic devices. We demonstrate a GaN waveguide, a grating fiber-chip coupler, and a ring resonator as building-block devices for a GaN-based photonic circuit. Furthermore, we present a polarization-independent vertical fiber-chip coupler using a one-dimensional grating for the first time. We validate the dual-polarization excitation using the spectral response of a microring resonator.
## 2 GaN film growth and characterization
Sapphire is chosen as the substrate to grow GaN of desired thickness. AlN is chosen as the intermediate layer between sapphire and GaN. AlN has a lattice mismatch of about 13.3% [21, 22]
Figure 1: (a),(b) The rocking \(\omega\) scan of the (002) and (102) of the thin-film GaN. (c) The schematic of the high-electron mobility 2DEG stack used for the fabrication of waveguides and (d) the refractive index profile of the stack.
with Sapphire and hence provides a step transition from Sapphire to the GaN layer. Metal-organic chemical vapour deposition (MOCVD) was used to grow AlN and GaN on a sapphire substrate. Trimethylaluminum, trimethylgallium, and NH\({}_{3}\) are used as the Al, Ga, and N precursors, with H\({}_{2}\) and N\({}_{2}\) as the carrier gases. The deposition and optimization of AlN and GaN are comprehensively studied in [23]. A 500 nm AlN buffer layer is grown on Sapphire. GaN was deposited in two steps, initially 500 nm followed by 150 nm. The double-step growth leads to an epitaxial GaN film with sub-1 nm roughness and minimal defects [23]. The 2D electron gas (2DEG) layer is formed between the GaN layer and the AlGaN layer grown on top of it. The 2DEG can be exploited to realize electro-optic devices. Finally, 25 nm of GaN is grown on top of the AlGaN as a passivation cap layer. Fig.1 shows the characterization of the grown GaN on Sapphire with the seed layer and the resultant stack. The XRD \(\omega\) scan data, as shown in Fig.1(a) & (b), confirm epitaxial GaN growth. Small surface roughness leads to reduced scattering loss. The material stack used for photonic device fabrication is shown in Fig.1(c), and Fig.1(d) shows the index profile along the stack.
## 3 Design and Simulation
The material optimization is followed by waveguide design and simulation to determine the designed dimension for light confinement and propagation. In section 3.1, we discuss the effect of waveguide width and etch depth on the modal characteristics at 1550 nm. Section 3.2 presents a detailed grating light-chip coupler design and analysis. The grating coupler simulation is performed to achieve maximum fiber-chip coupling to the GaN waveguide.
Figure 2: (a)The schematic of the proposed GaN waveguide design.Waveguide dispersion with variation in the waveguide (b) Width (with shoulder height = 0 nm); (c) shoulder height (at width = 760 nm);(d) Mode field confinement at 760 nm waveguide width and 100 nm waveguide height.
### Waveguide design
As evident from the index profile in Fig.1(d), an index contrast of 0.25 (with the AlN buffer layer) would enable light confinement in the GaN layer. The schematic of the proposed GaN waveguide design on a Sapphire substrate, with a pictorial depiction of field confinement in the waveguide region, is presented in Fig. 2(a). We use the Ansys Lumerical MODE solution to optimize the waveguide parameters to obtain fundamental mode operation. The waveguide dispersion for the fundamental TE and TM modes, obtained for varying width and shoulder height, is shown in Fig.2(b & c). For a waveguide width of 900 nm (shoulder height = 0 nm) and a shoulder height of 150 nm (width = 760 nm), we observe a mode crossing between the fundamental TE and TM modes. Fig.2(b & c) indicate that engineering the waveguide parameters could lead to a low birefringence between the fundamental TE and TM modes, which can enable polarization-independent operation in a GaN waveguide. Fig.2(d) shows the field confinement of the fundamental TE mode for a 760 nm wide waveguide with a 100 nm shoulder height. Figures 3(a) and (b) show the mode field confinement of the fundamental TE and TM modes in the waveguide. It is evident from Fig.3(c) and (d) that more than 70 % of the power is confined in the waveguide for a waveguide width of more than 760 nm and a shoulder height of less than 100 nm for both TE and TM modes. In Fig.3(d), the waveguide starts to become multi-moded, and power is distributed across the shoulder, leading to a reduction in power confinement in the specified area.
### Grating coupler design simulation
On-chip coupling can be performed in two ways: edge coupling and vertical/grating coupling. For edge coupling, the devices need to be diced with an optical-quality chip edge for high-efficiency coupling. Since sapphire is a hard material, polishing is a challenge; the alignment tolerance and ease of performing measurements are further challenges. In contrast, a grating coupler offers a relatively simple way to couple light into a waveguide without any additional post-processing. A detailed design procedure is presented in [24]. Simulations were performed to determine the
Figure 3: Simulated power confinement with a waveguide dimension of 760 nm width and a shoulder height of 100 nm for (a)TM mode and; (b) TE mode; Variation of power confinement for TE and TM mode with varying (c) waveguide width (with shoulder height = 0 nm) and (d) shoulder height(at width = 760 nm).
feasibility of the grating coupler on the proposed GaN stack. Fig.4(a) shows the schematic of the structure used to perform the mode simulations for the GaN gratings, and Table 1 lists the refractive indices used for the various materials in the stack. Fig.4(b) shows the coupling efficiency (CE) for the TE and TM modes under Gaussian illumination. For the GaN gratings, the TM mode CE is \(\approx\) 30% while the TE mode CE is \(\approx\) 5%, indicating the polarization-sensitive nature of the grating coupler (Fig.4(b)).
One of the major challenges in coupling light into GaN is the low index contrast as well as the absence of a bottom reflecting surface, such as the bottom substrate in silicon-on-insulator. Since the light extraction is directly proportional to the diffraction efficiency, the grating strength can be improved by adding a high-index overlay such as silicon [25]. We choose amorphous silicon (a-Si) as the overlay grating material to simulate gratings on the GaN waveguide. The modified design, using a Si grating on the GaN waveguide to improve the CE, is shown in Fig.4(c).
Implementation of Si-overlay grating increases the index contrast to 1.25 (360% improvement from GaN-only grating) at 1550 nm wavelength, which would lead to increased efficiency of light coupling into the GaN waveguide. Si grating coupler parameters such as periodicity (\(\Lambda\)), etch depth(t), duty cycle(D), and the angle of incidence(\(\theta\)) were optimized using Ansys Lumerical FDTD for both TE and TM polarization. Fig.5(a-d) shows the variation of the CE of TE mode with respect to the grating parameters, and Fig.6(a-d) shows the variation for a TM mode. The maximum efficiency for coupling achieved for a TE gaussian source with Si grating is approximately 60 %, while for a TM gaussian source, it is 50 %. The grating parameters for the maximum CE for a TE source are a grating period of 725 nm, an overlay height of 300 nm,
\begin{table}
\begin{tabular}{|c|c|c|} \hline Sl. No & Material & Refractive Index (at 1550 nm) \\ \hline
1 & Sapphire & 1.76 \\ \hline
2 & AlN & 1.96 \\ \hline
3 & GaN & 2.23 \\ \hline
4 & Silicon & 3.43 \\ \hline \end{tabular}
\end{table}
Table 1: Refractive Index used in the simulation
Figure 4: (a) Schematic for GaN grating;(b) Coupling efficiency(CE) of GaN grating; (c) Schematic for a-Si overlay grating.
and a duty cycle of 52.4% with a source angle of 2\({}^{\circ}\). The grating parameter for maximum CE for TM gaussian is a period of 750 nm, an overlay height of 260 nm, and a duty cycle of 48% with an angle of 5\({}^{\circ}\). As is observed in Fig.5(a & d) and Fig.6(a & d), there is a red shift in the spectrum with increasing \(\Lambda\) and duty cycle (D) for both TE and TM mode. However, we find that the coupler wavelength and efficiency are tolerant to the overlay thickness evident in Fig.5(b) and Fig.6(b). Though the spectral shift due to incident angle is small, the coupler efficiency is sensitive to incident angle variation as seen in Fig.5(c) and Fig.6(c). The variation of TE grating CE with respect to the periodicity, etch depth, angle of incidence, and duty cycle are 1.04% / nm, 0.36% / nm, 5.62% / degree, and 1.2% / (change in D). The corresponding sensitivity for TM mode is 1.01% / nm, 0.30% / nm, 4.51% / degree, 1.16% / (change in D).
The results in Fig.6 indicate a maximum CE of \(\approx\)48% for the TM mode at a period of 750 nm and a grating height of 260 nm, with a duty cycle of 48% at a 5\({}^{\circ}\) incident angle. For the same set of parameters, we obtained a CE of \(\approx\)48% for TE-polarized light. This suggests that a GaN waveguide with an engineered Si grating coupler could enable polarization-independent coupling characteristics, as is evident from Fig.7.
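As a back-of-the-envelope cross-check on these numbers, the optimized period can be related to the modal effective index through the standard first-order phase-matching condition for a grating coupler, \(n_{\mathrm{eff}}-n_{\mathrm{clad}}\sin\theta=\lambda/\Lambda\). This relation is not used explicitly in the paper, and the effective index inferred below is our own estimate rather than a reported value; the Python sketch simply inverts the relation for the quoted optimum (\(\Lambda\approx\)750 nm, \(\theta=5^{\circ}\), \(\lambda=\)1550 nm).

```python
import math

def grating_period(wavelength_nm, n_eff, theta_deg, n_clad=1.0, order=1):
    """First-order phase matching: n_eff - n_clad*sin(theta) = order*lambda/period."""
    return order * wavelength_nm / (n_eff - n_clad * math.sin(math.radians(theta_deg)))

def effective_index(wavelength_nm, period_nm, theta_deg, n_clad=1.0, order=1):
    """Invert the same relation to estimate the modal effective index."""
    return order * wavelength_nm / period_nm + n_clad * math.sin(math.radians(theta_deg))

if __name__ == "__main__":
    # Reported TM optimum: 750 nm period at a 5-degree incidence and 1550 nm wavelength.
    print(round(effective_index(1550, 750, 5), 3))  # ~2.154, between GaN (2.23) and AlN (1.96) in Table 1
```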
Further, we simulate the coupling loss mechanism constituting the reflection, transmission, and substrate leakage of the proposed waveguide-grating coupler design for the optimized
Figure 5: Variation of Si grating coupler CE with (a) grating period(t=267 nm,\(\theta\)= 2\({}^{\circ}\), grating width = 400 nm), (b) grating height(\(\Lambda\)=744 nm,\(\theta\)= 2\({}^{\circ}\),D = 54%), (c) source angle(\(\Lambda\)=744 nm,t= 267 nm,D = 54%); and (d) Duty cycle(\(\Lambda\)=744 nm,\(\theta\)= 2\({}^{\circ}\),t = 267 nm) for TE mode.
grating parameters. Fig.8(a,b) show the in-coupling and Fig.8(c,d) show the out-coupling power distribution. As seen in Fig.8(a-d), substrate leakage is a major source of power loss for both TE and TM mode.
## 4 Test Device fabrication
Fig.9 shows the schematic of the process flow used to fabricate the GaN devices. The GaN stack grown using MOCVD is coated with 235 nm of a-Si. The deposition is followed by 15 nm of doped a-Si to reduce charging while performing electron-beam (e-beam) lithography. The patterning of the gratings and the GaN waveguide is done using e-beam lithography and a dry etch process. 500 \(\mu m\) long patch waveguides of 10 \(\mu m\) width and grating in- and out-couplers with various periods are fabricated as coupler test devices. We have also fabricated a microring resonator with a bend radius of 84 \(\mu m\) and a coupling gap of 400 nm.
Fig.10(a) and (b) show the field emission scanning electron microscopy(FESEM) images of the fabricated a-Si grating and GaN ring. Fig.10(c) shows the schematic obtained post-AFM characterization. The fabricated dimensions were measured to be different from the desired design dimension due to process variation. The targeted parameters were a grating period of 750 nm, a grating height of 260 nm, and a duty cycle of 48%. The fabricated device had a grating
Figure 6: Variation of Si grating coupler CE with (a) grating period(t=230 nm,\(\theta\)= 5\({}^{\circ}\), grating width = 400 nm), (b) grating height(\(\Lambda\)=744 nm,\(\theta\)= 5\({}^{\circ}\), D = 54%), (c) source angle(\(\Lambda\)=744 nm,t= 230 nm,D = 54%); and (d) Duty cycle(\(\Lambda\)=744 nm,\(\theta\)= 5\({}^{\circ}\),t = 230 nm) for TM mode.
Figure 8: In-coupling simulation (\(\Lambda\)=750 nm,\(\theta\)= 5\({}^{\circ}\), D = 48%,t=260 nm) for (a) TM gaussian source and (b) TE gaussian source; out-coupling simulation (\(\Lambda\)=750 nm,\(\theta\)= 5\({}^{\circ}\), D = 48%,t=260 nm) for (c) TM gaussian source and (d) TE Gaussian source.
Figure 7: Simulated transmission characteristic of Si grating coupler demonstrating polarization independent characteristic (\(\Lambda\)=750 nm,\(\theta\)= 5\({}^{\circ}\), D = 48%,t=260 nm).
period of 740 nm, a grating height of 150 nm, and a waveguide etch depth of 570 nm instead of 650 nm. We observed that the roughness of the top GaN layer was around 20 nm. This is primarily due to dry etch selectivity between a-Si and GaN. The roughness could be improved by using a liner layer such as silicon dioxide or improved selectivity between GaN and a-Si.
## 5 Characterization results summary
Fig.11 shows the characterization of the Si grating coupler, tapered and spiral waveguides using a tunable laser source. Fig.11(a) shows the result for a patch waveguide. We measure a CE of -5.2 dB from the fabricated device for a period of 756 nm. Fig.11(b) depicts the optical loss for a 1.75 cm long spiral waveguide. A fibre-to-fibre loss of -30.4 dB is measured. Fig.11(c) shows the transmission of the Si grating coupler, tapered and spiral waveguides. The spiral waveguide measurement
quantifies a propagation loss of 6.26 dB/cm for a single-mode 800 nm wide and 570 nm etched waveguide. We attribute the waveguide loss to the surface roughness resulting from the a-Si etch and the corresponding side-wall roughness, which are the primary causes of the high propagation loss. The output characteristics of the fabricated ring resonator are presented in Fig.12(a). Due to the small birefringence and the similar TE and TM grating behavior, the ring response was used to confirm the polarization-independent behavior. The Q-factor obtained for the TE resonant mode is 11,000, while for TM it is 3,000.
The polarization-insensitive behavior of the gratings is further corroborated by simulating the waveguide dispersion and the group index variation of the propagating modes. The measured group indices obtained from the two resonant modes are n\({}_{g}\)=2.57 and 2.54, as shown in Fig.12(a). Fig.12(b) shows the waveguide dispersion with varying waveguide width at an etch depth of 570 nm. The propagation mode cut-off obtained for the fabricated device is 1.82. We observe that for a width of 1.3 um, the waveguide supports a TE mode with a group index of 2.55 and a TM mode with a group index of 2.54, comparable with the measured values (2.57 and 2.54, respectively), as seen in Fig.12(c). Further, we confirm the multimodal nature of the fabricated rings by simulating their output response at different widths. Fig.12(d) shows that beyond 0.8 um, the ring resonators show multimodal behavior (multiple resonance dips), evident from their output characteristic. The multimodal feature and the ring responses, along with the measured group indices, confirm the polarization-independent characteristic of the grating coupler fabricated on the GaN-on-Sapphire platform.
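For reference, a group index of this kind is conventionally extracted from the ring's free spectral range via \(n_{g}=\lambda^{2}/(\mathrm{FSR}\cdot 2\pi R)\). The sketch below uses this textbook relation together with the 84 \(\mu m\) bend radius of the fabricated ring; the resulting FSR is an estimate implied by the reported \(n_{g}\), not a value quoted in the paper.

```python
import math

def group_index(wavelength_m, fsr_m, radius_m):
    """Textbook ring-resonator relation: n_g = lambda^2 / (FSR * 2*pi*R)."""
    return wavelength_m ** 2 / (fsr_m * 2.0 * math.pi * radius_m)

def free_spectral_range(wavelength_m, n_g, radius_m):
    """Inverse relation, used here to estimate the resonance spacing implied by n_g."""
    return wavelength_m ** 2 / (n_g * 2.0 * math.pi * radius_m)

if __name__ == "__main__":
    # 84 um bend radius of the fabricated ring and n_g = 2.57 for the TE-like mode.
    fsr = free_spectral_range(1.55e-6, 2.57, 84e-6)
    print(f"implied FSR ~ {fsr * 1e9:.2f} nm")  # roughly 1.8 nm
```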
Figure 12: (a) Measured Ring response; Simulated (b)waveguide dispersion; (c) waveguide group index variation; (d)ring characteristic for varying waveguide width.
## 6 Conclusion
In summary, for the first time, we demonstrated an efficient grating coupler in a GaN waveguide. We presented a detailed simulation and fabrication process. We achieved a CE of -5.2 dB/coupler and a waveguide loss of \(\approx\)6 dB/cm. A 1 dB bandwidth of 40 nm and a 3 dB bandwidth of over 80 nm were obtained for the grating coupler. In addition, we employed a microring resonator, with Q-factors of 11,000 and 3,000 for TE and TM polarization, to demonstrate dual-polarization excitation and hence polarization-independent coupling by a single one-dimensional grating. To the best of our knowledge, this is the first such demonstration. The results presented are promising, with scope to improve the performance through an optimized device fabrication process. This would open opportunities for using GaN as a waveguide material for linear and non-linear processing.
## 7 Acknowledgement
We acknowledge funding from the Ministry of Education, Government of India, for supporting facilities at the Centre for Nanoscience and Engineering (CeNSE), Indian Institute of Science, Bangalore. SKS thanks Professor Ramakrishna Rao chair fellowship.
## 8 Disclosures
The authors declare no conflicts of interest.
## 9 Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
|
2308.02258
|
Segregation disrupts the Arrhenius behavior of an isomerization reaction
|
Co-existence of phase segregation and \emph{interconversion} or
\emph{isomerization} reaction among molecular species leads to fascinating
structure formation in biological and chemical world. Using Monte Carlo
simulations of the prototype Ising model, we explore the chemical kinetics of
such a system consisting of a binary mixture of \emph{isomers}. Our results
reveal that even though the two concerned processes are individually Arrhenius
in nature, the Arrhenius behavior of the \emph{isomerization} reaction gets
significantly disrupted due to an interplay of the nonconserved dynamics of the
reaction and the conserved diffusive dynamics of phase segregation. The
approach used here can be potentially adapted to understand reaction kinetics
of more complex reactions.
|
Shubham Thwal, Suman Majumder
|
2023-08-04T11:34:51Z
|
http://arxiv.org/abs/2308.02258v1
|
# Segregation disrupts the Arrhenius behavior of an isomerization reaction
###### Abstract
Co-existence of phase segregation and _interconversion_ or _isomerization_ reaction among molecular species leads to fascinating structure formation in biological and chemical world. Using Monte Carlo simulations of the prototype Ising model, we explore the chemical kinetics of such a system consisting of a binary mixture of _isomers_. Our results reveal that even though the two concerned processes are individually Arrhenius in nature, the Arrhenius behavior of the _isomerization_ reaction gets significantly disrupted due to an interplay of the nonconserved dynamics of the reaction and the conserved diffusive dynamics of phase segregation. The approach used here can be potentially adapted to understand reaction kinetics of more complex reactions.
The phenomenon of the existence of two or more molecular species having the same chemical formula but different properties is referred to as _isomerism_[1]. Structural chirality is one of the reasons behind such _isomerism_, giving rise to _optical isomers_ or _enantiomers_[1; 2]. For instance, L-glucose, unlike its _enantiomer_ D-glucose, is not an energy source for living organisms, as it cannot be phosphorylated during _glycolysis_. The final product of synthesis of such molecular species is often comprised of a mixture of its _enantiomers_. Depending on the kind of interactions among themselves, the _enantiomeric_ components of such mixtures can spontaneously or inductively segregate from each other [3; 4]. Simultaneously, either naturally or owing to an external drive, the _enantiomers_ may undergo an _isomerization_ or _interconversion_ reaction leading to an _enantio_-selective production or phase amplification of one of them [5; 6; 7; 8; 9; 10; 11; 12]. Such _enantio_-selective processes are ubiquitous in nature as well, e.g., amino acid residues of naturally occurring proteins are mostly L-_enantiomers_[13]. In the light of the above discussion, it is crucial to have a microscopic understanding by unravelling the physical laws governing such a phenomenon of segregation of molecular species undergoing an _isomerization_ reaction [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24].
Segregation in _enantiomeric_ mixtures can be easily understood by simply translating the concepts of kinetics of phase segregation, which has been extensively studied in the past [25; 26], and recently developed further in more complex and realistic scenarios [27; 28; 29]. Similarly, the _interconversion_ reaction among _isomers_ can be captured under the essence of _phase ordering_ of ferromagnets [30]. In _phase ordering_ typically one ends up in a state where majority of the magnetic dipoles point in the same direction, following a quench from high-temperature disordered state to a temperature below the Curie point. Both kinetics of phase segregation in a binary mixture and _phase ordering_ can essentially be modeled by using simple lattice models, e.g., the nearest neighbor Ising model. Although both adaptations of these models produce equivalent thermodynamics, fundamentally their dynamics are different. Combining the two approaches results in an appropriate model for exploring the effect of _isomerization_ reaction during _enantiomeric_ phase segregation in solids using state of the art Monte Carlo (MC) simulations [14; 15; 21; 31]. Recent interests in this regard have shifted toward modelling reactions in solutions using molecular dynamics (MD) simulations [32], and forceful _interconversion_ reaction that preserves the respective initial composition of the _enantiomeric_ components [33]. These attempts have successfully explored novel mesoscopic steady-state structures mimicking microphase segregation observed in chemical and biological world. However, answers to some of the fundamental questions, viz., effect of segregation on the reaction kinetics, are still unexplored.
In this letter, we study the kinetics of an _isomerization_ reaction of the following type
\[A_{1}\rightleftharpoons A_{2}, \tag{1}\]
where the two _isomers_\(A_{1}\) and \(A_{2}\) are also undergoing segregation from each other. If there is no segregation, the rate constant \(k\) of such a simple _isomerization_ reaction, as a function of temperature \(T\), obeys the Arrhenius behavior given as
\[\ln k=\ln A-\frac{E_{a}}{R}\left(\frac{1}{T}\right), \tag{2}\]
where \(A\) is a pre-exponential constant, \(E_{a}\) is the activation energy, and \(R\) is the universal gas constant. Our results from MC simulations of the prototype Ising model mimicking the system described above, reveal that at high reaction probability the Arrhenius behavior is maintained. However, as the reaction probability decreases and segregation dominates, a significant deviation from the Arrhenius behavior is observed.
We have chosen a square lattice model where on each site \(i\) there sits an Ising spin \(S_{i}=+1(\mathrm{or}\ -1)\) that corresponds to species \(A_{1}(\mathrm{or}\ A_{2})\). The interaction energy between the spins are given by the conventional Ising Hamiltonian
\[\mathcal{H}=-J\sum_{\langle ij\rangle}S_{i}S_{j}, \tag{3}\]
where \(\langle ij\rangle\) indicates that only nearest neighbors can interact with each other and \(J\) is the corresponding interaction strength. We apply periodic boundary conditions in all possible directions to eradicate any surface effects. The model exhibits an order-disorder transition with a critical temperature \(T_{c}=[2/\ln(1+\sqrt{2})]J/k_{B}\), where \(k_{B}\) is the Boltzmann constant [34]. From now onward the unit of temperature is \(J/k_{B}\), and for convenience we have set \(J=k_{B}=1\). In order to capture the essence of a segregating mixture of _isomers_ undergoing an _isomerization_ reaction, we have introduced both Kawasaki spin-exchange dynamics and Glauber spin-flip dynamics [35; 36; 37; 38]. In a Kawasaki exchange, an interchange of positions between a randomly chosen pair of nearest-neighbor spins is attempted, facilitating segregation of the species. Such an MC move replicates atomic diffusion, and the resultant dynamics is conserved as it keeps the individual compositions of the species unaltered. On the other hand, in a Glauber spin-flip move, an attempt is made to flip a randomly chosen spin, thus mimicking the _interconversion_ or _isomerization_ reaction. We consider the forward and backward reactions in (1) to be equally likely. The spin-flip move is nonconserved as it changes the individual composition of the species. Both moves are accepted according to the standard Metropolis criterion [37; 38]. We start with a _racemic_ mixture of the _isomers_, i.e., equal proportions of \(A_{1}\) and \(A_{2}\) are uniformly distributed on the lattice, and then in the simulation we set the temperature to \(T<T_{c}\). At each MC step, the Glauber move is attempted with a probability \(p_{r}\)
while the Kawasaki exchange attempt is executed with a probability \(1-p_{r}\). We choose one MC sweep (MCS) as the unit of time, which refers to \(L^{2}\) attempted MC moves. We perform all our simulations on a square lattice of linear size \(L=32\) having \(L^{2}=1024\)_isomers_, at different \(T\) for a range of \(p_{r}\in[10^{-4},5\times 10^{-1}]\).
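A minimal Python sketch of one such MC sweep is given below. It follows the description above (Glauber flip with probability \(p_{r}\), Kawasaki nearest-neighbour exchange otherwise, Metropolis acceptance, periodic boundaries, \(J=k_{B}=1\)), but the random-number seeding, the approximately racemic random initial condition, and the tiny run length in the example are illustrative choices of this sketch, not the production settings used for the reported results.

```python
import numpy as np

rng = np.random.default_rng(1)
NBRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def nbr_sum(s, i, j):
    L = s.shape[0]
    return s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]

def mc_sweep(s, T, p_r):
    """One MC sweep (L^2 attempted moves) of the mixed-dynamics Ising model, J = k_B = 1.

    With probability p_r a Glauber spin flip (isomerization) is attempted,
    otherwise a Kawasaki nearest-neighbour exchange (diffusion); both use the
    Metropolis acceptance rule with periodic boundary conditions.
    """
    L = s.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        if rng.random() < p_r:                                 # Glauber flip
            dE = 2.0 * s[i, j] * nbr_sum(s, i, j)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] = -s[i, j]
        else:                                                  # Kawasaki exchange
            di, dj = NBRS[rng.integers(4)]
            i2, j2 = (i + di) % L, (j + dj) % L
            if s[i, j] == s[i2, j2]:
                continue                                       # exchanging equal spins changes nothing
            dE = (s[i, j] - s[i2, j2]) * (
                (nbr_sum(s, i, j) - s[i2, j2]) - (nbr_sum(s, i2, j2) - s[i, j])
            )
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j], s[i2, j2] = s[i2, j2], s[i, j]

if __name__ == "__main__":
    L, Tc = 32, 2.0 / np.log(1.0 + np.sqrt(2.0))
    spins = rng.choice(np.array([-1, 1]), size=(L, L))         # random +-1 start (~racemic mixture)
    for _ in range(100):                                       # tiny run, for illustration only
        mc_sweep(spins, T=0.6 * Tc, p_r=1e-2)
    print("chi after 100 MCS:", abs(spins.sum()) / spins.size)  # Eq. (4)
```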
Segregation of molecular species in combination with the _isomerization_ reaction leads to pattern formation, as pertinent to the individual dynamics associated with the two processes. Typical representative time evolution snapshots at \(T=0.6T_{c}\) are presented in Fig. 1, for different \(p_{r}\). For the highest \(p_{r}=10^{-1}\), the patterns are similar to what is observed for a system with purely nonconserved spin-flip dynamics, i.e., in _phase ordering_[39; 40]. The _isomerization_ reaction seems to finish faster as \(p_{r}\) increases, which rationalizes the difference in the sets of times for which the snapshots are presented for different \(p_{r}\) in Fig. 1. As \(p_{r}\) decreases, the snapshots at intermediate times appear to have more bicontinuous morphologies. In all cases, although at different times, the system eventually approaches a final morphology in which one of the _isomers_ is the majority. For \(p_{r}=10^{-4}\), such a stage is reached at a much longer time, \(\approx 5\times 10^{7}\) MCS, making it computationally expensive. Hence, we refrain from simulating lattices larger than \(L=32\). Nevertheless, for investigating the reaction kinetics this size is sufficient, as will be seen subsequently. For evolution snapshots at other temperatures see Figs. 1 and 2 in the Supporting Information (SI). From there it is apparent that at high temperature (\(T=0.8T_{c}\)) and for smaller \(p_{r}\), the system at first segregates to a slab-like morphology and then evolves further due to the _isomerization_ reaction. However, at low temperature (\(T=0.4T_{c}\)), even for \(p_{r}=10^{-4}\), the morphologies resemble more of what is shown in Fig. 1.
Since the main objective is to study the reaction kinetics, we have to extract the rate constant \(k\). As a first step, we need to monitor the progress of the reaction until it finishes, i.e., when one of the molecular species becomes almost negligible compared to the other. For that we calculate the concentration difference of the two species
\[\chi(t)=\frac{|N_{A_{1}}(t)-N_{A_{2}}(t)|}{N_{A_{1}}(t)+N_{A_{2}}(t)}, \tag{4}\]
where \(N_{A_{1}}(t)\) and \(N_{A_{2}}(t)\) are, respectively, the number of molecules of \(A_{1}\) and \(A_{2}\), at a time \(t\). The denominator in Eq. (4), \(N_{A_{1}}(t)+N_{A_{2}}(t)=L^{2}\) is the total number of molecules present in the system. By construction, at \(t=0\) for a _racemic_ mixture \(\chi(0)=0\), and at large \(t\) when the reaction is completed \(\chi(t)\approx 1\), thus making \(\chi(t)\) an useful parameter to monitor the progress of the reaction. In the main frames of Figs. 2(a)-(d), we present the corresponding data at different \(T\) for four choices of \(p_{r}\), as indicated. The linear-log scale is used to make the initial regimes properly visible. Apparently, for all \(p_{r}\) the data show an initial transient regime when \(\chi(t)\) remains almost constant, followed by a steep increase before finally settling at a value \(\approx 1\) indicating completion of the reaction. However, one could notice that the transient regime broadens as \(p_{r}\) decreases indicating a dominance of segregation over the reaction. Noticeable is also the presence of a third regime where \(\chi(t)\) again attains almost a plateau
Figure 1: **Pattern formation due to isomerization reaction and segregation**. Typical snapshots depicting time evolution of a binary mixture of _isomers_, simultaneously undergoing _isomerization_ reaction and segregation, following a quench from a homogeneous phase above \(T_{c}\) to a temperature \(T=0.6T_{c}\). The results are obtained from simulations on a square lattice of linear size \(L=32\). Different rows are for different values of the reaction probability \(p_{r}\), as indicated. Contrasting colors correspond to different species.
before finally approaching unity. For \(p_{r}=10^{-4}\), at high \(T\) this plateau vanishes (see Figs. 3-6 in the SI for individual plots at different \(T\)). The upper insets of Figs. 2 showing the same data on a double-log scale unravel significant differences in the time dependence of \(\chi(t)\) for different \(p_{r}\). For \(p_{r}=10^{-1}\) and \(10^{-2}\), data for all \(T\) is consistent with a power-law \(\chi(t)\sim t^{1/2}\). In a ferromagnetic system \(\chi(t)\equiv|m(t)|\). There, during _phase ordering_ the absolute magnetization \(|m(t)|\) obeys the same power-law \(|m(t)|\sim t^{1/2}\) in space dimension \(d=2\)[40]. For lower \(p_{r}\), data for \(\chi(t)\) deviate from the \(\sim t^{1/2}\) behavior, particularly at high \(T\). This is owing to the dominance of segregation at low \(p_{r}\), making the growth of \(\chi(t)\) much slower with a power-law exponent of \(1/3\), reminiscent of the Lifshitz-Slyozov exponent [41] observed for the time dependence of the characteristic length during phase segregation [42; 43; 44].
Next, we extract the reaction-completion time \(\tau_{r}\) from \(\chi(t)\), as \(\chi(t=\tau_{r})=h\), where we choose \(h=0.9\)[45]. Histogram of the extracted \(\tau_{r}\) (see Figs. 7-10 in the SI for histograms at different \(T\) for four values of \(p_{r}\)) shows non-uniform localized patterns for \(p_{r}\geq 10^{-2}\) with an exponential behavior at high \(T\) as presented in the lower insets of Figs. 2(a) and (b), respectively, for \(p_{r}=10^{-1}\) and \(10^{-2}\), at \(T=0.8T_{c}\). The dashed lines there represent best fits using \(f(\tau_{r})=100\exp(-\lambda\tau_{r})\), with decay constants \(\lambda=550\) and \(60\), respectively, for \(p_{r}=10^{-1}\) and \(10^{-2}\), implying a slower decay as \(p_{r}\) decreases. For even lower \(p_{r}\), the exponential nature is lost and the histogram appears to flatten out, as shown for \(p_{r}=10^{-4}\) in the lower inset of Fig. 2(d).
From the extracted \(\tau_{r}\) we calculate the rate constant \(k\) of the isomerization reaction as \(k=\tau_{r}^{-1}\). In Figs. 3 we show the temperature dependence of \(k\), by plotting \(-\ln\langle k\rangle\) as a function of \(1/T\). For \(p_{r}\geq 10^{-2}\), the data show a linear nature confirming the Arrhenius behavior depicted in Eq. (2). The dashed lines in Figs. 3(a)-(c) represent respective best fits obtained using the ansatz in Eq. (2). The obtained activation energies are \(E_{a}\in[3.46,3.75]\) with a mean of \(\langle E_{a}\rangle=3.65(18)\). For all \(p_{r}\geq 10^{-2}\), fits using \(E_{a}=3.65\) in Eq. (2) also work reasonably well, indicating possibly a \(p_{r}\)-independent activation energy. For \(p_{r}<10^{-2}\), the data do not appear to be linear anymore, and in fact for \(p_{r}=10^{-4}\) it becomes almost flat. This implies that the dominance of diffusive segregation dynamics disrupts the Arrhenius behavior of the _isomerization_ reaction, even though segregation itself is an Arrhenius process [46].
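The extraction of \(k\) and the Arrhenius fit of Eq. (2) amount to a threshold crossing and a straight-line fit in \(1/T\). A minimal Python sketch is shown below; the helper names are ours, the synthetic data in the example are purely illustrative (generated with \(E_{a}=3.65\), the mean value quoted above, simply to show that the fit recovers it), and \(R=k_{B}=1\) in the units of the paper.

```python
import numpy as np

def reaction_time(chi, h=0.9):
    """First time (in MCS) at which chi(t) crosses the threshold h, i.e. chi(tau_r) = h."""
    chi = np.asarray(chi)
    idx = int(np.argmax(chi >= h))
    return idx if chi[idx] >= h else None          # None if the reaction never completed

def arrhenius_fit(temperatures, rate_constants):
    """Least-squares fit of ln k = ln A - E_a/(R*T), with R = k_B = 1 as in the text."""
    slope, intercept = np.polyfit(1.0 / np.asarray(temperatures),
                                  np.log(np.asarray(rate_constants)), 1)
    return -slope, np.exp(intercept)               # (E_a, A)

if __name__ == "__main__":
    # Synthetic, perfectly Arrhenius data generated with E_a = 3.65 for illustration.
    T = np.array([0.4, 0.5, 0.6, 0.7, 0.8]) * 2.0 / np.log(1.0 + np.sqrt(2.0))
    k = 0.01 * np.exp(-3.65 / T)
    Ea, A = arrhenius_fit(T, k)
    print(f"recovered E_a = {Ea:.2f}, A = {A:.3g}")
```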
Figure 2: **Progress of the reaction**. Linear-log plots for time dependence of the concentration difference \(\chi(t)\) of the two _isomers_, at different temperatures for (a) \(p_{r}=10^{-1}\), (b) \(p_{r}=10^{-2}\), (c) \(p_{r}=10^{-3}\), and (d) \(p_{r}=10^{-4}\). The data presented are averaged over \(80\) independent time evolutions obtained by using different random number seeds in the MC simulations. The upper insets show the same plots on double-log scale. There the dashed black lines represent a power law \(\sim t^{1/2}\). The red dashed line in (d) represents another power-law \(\sim t^{1/3}\). The lower insets show representatives of histograms of the extracted reaction completion time \(\tau_{r}\), at \(T=0.8T_{c}\). In (a) and (b) the dashed lines represent a function of the form \(f(\tau_{r})=100\exp(-\lambda\tau_{r})\), where \(\lambda=550\) and \(60\), respectively.
To investigate the phenomenon of an interplay of two Arrhenius processes leading to a non-Arrhenius behavior, we probe the segregation using the time evolution of the order parameter
\[\psi(t)=\frac{1}{L^{2}}\sum_{i}\frac{|n_{A_{1}}^{\square}-n_{A_{2}}^{\square}|}{n _{A_{1}}^{\square}+n_{A_{2}}^{\square}}, \tag{5}\]
where the \(\sum\) is over all lattice sites and \(n_{A_{1}}^{\square}\) (or \(n_{A_{2}}^{\square}\)) is the number of \(A_{1}\) (or \(A_{2}\)) molecules in a sub lattice of size \(\ell\times\ell\) with \(\ell=1\) around a site \(i\) of the parent square lattice. By construction \(\psi\) for a segregated system is higher (\(\approx 1\)) than a homogeneous one. For a purely segregating system, the time when \(\psi\) approaches \(1\) provides a measure of the associated relaxation time \(\tau_{s}\), and a plot of \(-\ln\langle\tau_{s}^{-1}\rangle\) against \(1/T\) confirms the Arrhenius behavior (see Fig. 11 in the SI).
For a system where reaction is happening along with segregation, time dependence of \(\psi(t)\) alone will capture the effect of both processes. Hence, to understand the interplay between the segregation and the reaction, in Figs. 4 we plot the time dependence of \(\psi(t)\), characterising the segregation, with \(\chi(t)\), reflecting the temporal progress of the reaction, at different \(T\) for four values of \(p_{r}\). For \(p_{r}=10^{-1}\), shown in Fig. 4(a), \(\psi(t)\) increases monotonously with \(\chi(t)\), and the spread of data points over time for different \(T\) appear to be quite condensed, suggesting no trend as a function of \(T\). Typical configurations having \(\psi(t)\approx 0.85\), a value that corresponds to an almost completely segregated state for a purely segregating system, are also shown in Fig. 4(a) for a high and low \(T\). None of them represent a completely segregated morphology, suggesting that both the dynamics affect the system concurrently. However, the reaction has progressed slightly further for \(T=0.8T_{c}\) with \(\chi(t)=0.1\) compared to \(\chi(t)=0.08\) at \(T=0.4T_{c}\). This difference eventually gets manifested in the form of an Arrhenius behavior, expected for a simple _isomerization_ reaction.
As \(p_{r}\) decreases, the data for different \(T\) look more dispersed with a certain \(T\)-dependent trend, as guided by the green arrows in Figs. 4(c) and (d). There, one can notice that at the beginning \(\psi(t)\) increases sharply while no significant change in \(\chi(t)\) is observed, implying that initially during the evolution the segregation dynamics dominate. The effect is more pronounced for \(p_{r}=10^{-4}\) at high \(T\). This could be further appreciated from the almost completely segregated morphology of the configuration representing a system having \(\psi(t)=0.85\) at \(T=0.8T_{c}\), for \(p_{r}=10^{-4}\), shown in Fig. 4(d). The value of \(\chi(t)=0.02\) at this instance indicates that the progress of the reaction is negligible. Since segregation itself is an Arrhenius process, the system reaches such a state much faster at high \(T\). However, after attaining such a morphology not only does the segregation dynamics almost cease, but the activation energy \(E_{a}\) of the reaction also increases, temporarily halting the entire evolution of the system. This is analogous to the phenomenon of dynamic freezing due to the emergence of metastable slab-like configurations during _phase ordering_ of a ferromagnet [39; 47; 48]. On the other hand, at \(T=0.4T_{c}\) the value of \(\chi(t)=0.06\) when \(\psi(t)=0.85\) suggests that the reaction has progressed further compared to \(T=0.8T_{c}\). The corresponding typical configuration at \(T=0.4T_{c}\), shown in Fig. 4(d), also does not represent a segregated morphology. Thus, in this case the reaction can easily proceed even further toward its completion. Hence, at low \(T\) and low \(p_{r}\), although the system always encounters a simultaneous occurrence of segregation and reaction, it never gets trapped in a completely segregated state, making the reaction completion time comparable with the one at high \(T\). Overall, it implies that for low \(p_{r}\) the activation energy \(E_{a}\) of the _isomerization_ reaction is not \(T\) independent; rather, it depends irregularly on \(T\), which in turn gets manifested in the form of a non-Arrhenius behavior of the _isomerization_ reaction.
To summarize, we have presented results on chemical kinetics of a simple _isomerization_ reaction when it competes with a segregation process among the _isomers_. Results from our MC simulations of a model constructed using the nearest neighbor Ising model with two competing dynamics, reveal that the Arrhenius behavior of the reaction gets disrupted as segregation
Figure 3: **Temperature dependence of the reaction rate constant \(k\). Plots of \(-\ln\langle k\rangle\) against \(1/T\) to verify the presence of Arrhenius behavior for different \(p_{r}\). The dashed straight lines in (a)-(c) are fits using Eq. (2), and the dashed lines in (d)-(f) are just connecting the data points. Here, the symbol \(\langle\dots\rangle\) indicates an average over \(80\) independent simulation runs.**
dynamics dominate over the reaction dynamics, even though segregation itself is an Arrhenius process. We have rationalized this observation by virtue of a phenomenological argument that at high temperature and low reaction probability, the segregation of _isomers_ reaches completion leading to an almost completely segregated morphology, and thereby raising the activation energy of the reaction making it difficult for the system to evolve further. These findings shall provoke an experimental verification, which to the best of our understanding can be done without much hassle.
As a next step it would be worth exploring aging and related dynamical scaling [49] in a system with mixed dynamics such as presented here. Note that the results presented here are for solid phase reactions. Thus, as a future endeavor, it would be intriguing to consider similar reactions in solution phase by performing MD simulations of a fluid system [50]. Furthermore, based on the model presented here, one can construct similar models for reactions of higher complexity, and subsequently may also invoke the role of a catalyst. For that use of a multi-species model like the Potts model is required [39].
###### Acknowledgements.
This work was funded by the Science and Engineering Research Board (SERB), Govt. of India for a Ramanujan Fellowship (file no. RJF/2021/000044).
|
2310.17379
|
YOLO-BEV: Generating Bird's-Eye View in the Same Way as 2D Object
Detection
|
Vehicle perception systems strive to achieve comprehensive and rapid visual
interpretation of their surroundings for improved safety and navigation. We
introduce YOLO-BEV, an efficient framework that harnesses a unique surrounding
cameras setup to generate a 2D bird's-eye view of the vehicular environment. By
strategically positioning eight cameras, each at a 45-degree interval, our
system captures and integrates imagery into a coherent 3x3 grid format, leaving
the center blank, providing an enriched spatial representation that facilitates
efficient processing. In our approach, we employ YOLO's detection mechanism,
favoring its inherent advantages of swift response and compact model structure.
Instead of leveraging the conventional YOLO detection head, we augment it with
a custom-designed detection head, translating the panoramically captured data
into a unified bird's-eye view map of ego car. Preliminary results validate the
feasibility of YOLO-BEV in real-time vehicular perception tasks. With its
streamlined architecture and potential for rapid deployment due to minimized
parameters, YOLO-BEV poses as a promising tool that may reshape future
perspectives in autonomous driving systems.
|
Chang Liu, Liguo Zhou, Yanliang Huang, Alois Knoll
|
2023-10-26T13:16:27Z
|
http://arxiv.org/abs/2310.17379v1
|
# YOLO-BEV: Generating Bird's-Eye View in the Same Way as 2D Object Detection
###### Abstract
Vehicle perception systems strive to achieve comprehensive and rapid visual interpretation of their surroundings for improved safety and navigation. We introduce YOLO-BEV, an efficient framework that harnesses a unique surrounding cameras setup to generate a 2D bird's-eye view of the vehicular environment. By strategically positioning eight cameras, each at a 45-degree interval, our system captures and integrates imagery into a coherent 3x3 grid format, leaving the center blank, providing an enriched spatial representation that facilitates efficient processing. In our approach, we employ YOLO's detection mechanism, favoring its inherent advantages of swift response and compact model structure. Instead of leveraging the conventional YOLO detection head, we augment it with a custom-designed detection head, translating the panoramically captured data into a unified bird's-eye view map of the ego car. Preliminary results validate the feasibility of YOLO-BEV in real-time vehicular perception tasks. With its streamlined architecture and potential for rapid deployment due to minimized parameters, YOLO-BEV poses as a promising tool that may reshape future perspectives in autonomous driving systems.
Vehicular Perception, Bird's-Eye View, YOLO, Surrounding Cameras.
## I Introduction
Autonomous driving systems represent a transformative shift in transportation, mobility, and road safety. The primary challenge for these systems lies in their ability to perceive and understand the environment effectively. Presently, mainstream research in the industry focuses on two main types of perception technologies: sensor-fusion solutions that integrate both LiDAR and radar with cameras [1], and vision-only systems that rely solely on cameras [2]. While the fusion of sensor and vision-based technologies can offer robust perception, the approach often comes with increased cost and potential environmental challenges, making it less feasible for large-scale deployments.
In contrast, vision-based systems, which rely solely on camera setups for environmental perception, are emerging as not only a cost-effective alternative but also a method that aligns more closely with sustainable development goals. Consequently, vision-based solutions are increasingly being considered as a possible ultimate direction for the entire autonomous driving sector. One burgeoning research focus within this vision-based paradigm centers on generating a bird's-eye view (BEV) of the surrounding environment as a means of enhancing vehicular perception. Traditional methods of generating such a view often suffer from limited scope due to constrained camera angles, thereby inhibiting the vehicle's spatial awareness capabilities essential for real-time decision-making. As illustrated in Figure 1, which shows a real-world example from the Motional website [3], generating a bird's-eye view (BEV) based purely on vision has increasingly become a focal point of modern research. Such methodologies aim to advance the future of autonomous driving by providing an enriched context for environmental perception, thereby facilitating real-time decision-making in complex scenarios. Following the generation of bird's-eye view maps as depicted in Figure 2, a typical use-case in autonomous driving involves leveraging these BEVs for path planning [4]. Here, the foundational layout of the road, including lane markings and other static elements, is often provided by a pre-measured high-definition map. This static information serves as the substrate upon which dynamic elements--such as cars, pedestrians, and other objects--are overlaid, thereby providing the necessary context for real-time navigational decisions. Our work aims to break free from this limitation by pioneering an approach that establishes a direct spatial correspondence between the vehicle's various camera locations and the BEV map. Specifically, the position of each image in the BEV map is determined by the vantage point from which it was captured; for example, an image taken from the front of the vehicle is mapped to the top part of the BEV, while an image from the rear finds its place at the bottom. In this way, we ensure a coherent and intuitive spatial representation of the environment around the vehicle.
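The camera-to-grid mapping described above can be sketched in a few lines; the camera names, their placement in the grid, the tile size, and the nearest-neighbour resizing below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def make_bev_input(cam_images, tile_hw=(224, 224)):
    """Tile eight surround-view images into a 3x3 grid with an empty center.

    cam_images: dict mapping an assumed camera name -> HxWx3 uint8 array,
    one camera every 45 degrees. The placement convention (front at the top,
    rear at the bottom, etc.) is for illustration only.
    """
    h, w = tile_hw
    grid = np.zeros((3 * h, 3 * w, 3), dtype=np.uint8)   # center tile stays blank
    layout = {
        "front_left": (0, 0), "front": (0, 1), "front_right": (0, 2),
        "left":       (1, 0),                   "right":      (1, 2),
        "rear_left":  (2, 0), "rear":  (2, 1),  "rear_right": (2, 2),
    }
    for name, (r, c) in layout.items():
        img = cam_images[name]
        # naive nearest-neighbour resize to the tile size
        ys = np.linspace(0, img.shape[0] - 1, h).astype(int)
        xs = np.linspace(0, img.shape[1] - 1, w).astype(int)
        grid[r*h:(r+1)*h, c*w:(c+1)*w] = img[ys][:, xs]
    return grid
```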
Leveraging this innovative framework, we introduce YOLO-BEV, a novel perception system specifically engineered to transform this spatially correlated multi-camera data into a
Fig. 1: An example from the Motional website showing a vehicle equipped with multiple cameras capturing the surrounding environment. The visual data, displayed with 3D bounding boxes identifying cars, pedestrians, and other objects, is transformed into a bird’s-eye view (BEV) to enhance perception. Image source: [https://www.youtube.com/watch?v=vQPUa7t2gU](https://www.youtube.com/watch?v=vQPUa7t2gU).
|
2308.15724
|
Background Debiased SAR Target Recognition via Causal Interventional
Regularizer
|
Recent studies have utilized deep learning (DL) techniques to automatically
extract features from synthetic aperture radar (SAR) images, which shows great
promise for enhancing the performance of SAR automatic target recognition
(ATR). However, our research reveals a previously overlooked issue: SAR images
to be recognized include not only the foreground (i.e., the target), but also a
certain size of the background area. When a DL-model is trained exclusively on
foreground data, its recognition performance is significantly superior to a
model trained on original data that includes both foreground and background.
This suggests that the presence of background impedes the ability of the
DL-model to learn additional semantic information about the target. To address
this issue, we construct a structural causal model (SCM) that incorporates the
background as a confounder. Based on the constructed SCM, we propose a causal
intervention based regularization method to eliminate the negative impact of
background on feature semantic learning and achieve background debiased
SAR-ATR. The proposed causal interventional regularizer can be integrated into
any existing DL-based SAR-ATR models to mitigate the impact of background
interference on the feature extraction and recognition accuracy. Experimental
results on the Moving and Stationary Target Acquisition and Recognition (MSTAR)
dataset indicate that the proposed method can enhance the efficiency of
existing DL-based methods in a plug-and-play manner.
|
Hongwei Dong, Fangzhou Han, Lingyu Si, Wenwen Qiang, Lamei Zhang
|
2023-08-30T02:56:55Z
|
http://arxiv.org/abs/2308.15724v1
|
# Background Debiased SAR Target Recognition via Causal Interventional Regularizer
###### Abstract
Recent studies have utilized deep learning (DL) techniques to automatically extract features from synthetic aperture radar (SAR) images, which shows great promise for enhancing the performance of SAR automatic target recognition (ATR). However, our research reveals a previously overlooked issue: SAR images to be recognized include not only the foreground (i.e., the target), but also a certain size of the background area. When a DL-model is trained exclusively on foreground data, its recognition performance is significantly superior to a model trained on original data that includes both foreground and background. This suggests that the presence of background impedes the ability of the DL-model to learn additional semantic information about the target. To address this issue, we construct a structural causal model (SCM) that incorporates the background as a confounder. Based on the constructed SCM, we propose a causal intervention based regularization method to eliminate the negative impact of background on feature semantic
learning and achieve background debiased SAR-ATR. The proposed causal interventional regularizer can be integrated into any existing DL-based SAR-ATR models to mitigate the impact of background interference on the feature extraction and recognition accuracy. Experimental results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset indicate that the proposed method can enhance the efficiency of existing DL-based methods in a plug-and-play manner.
keywords: Deep learning, Causal inference, Synthetic aperture radar, Automatic target recognition, Background debias
## 1 Introduction
Synthetic aperture radar (SAR) is an active remote sensor that enables all-weather and day-and-night detection, making it a crucial component of Earth observation systems (Moreira et al., 2013). One of the primary objectives of SAR image data processing is automatic target recognition (ATR) (El-Darymli et al., 2016). Achieving this goal requires automated and intelligent methods that can accurately recognize targets by extracting discriminative features from SAR images, which remains a challenging task (Kechagias-Stamatis and Aouf, 2021).
Deep learning (DL) overcomes the limitations of traditional manual feature engineering and delivers superior performance in a data-driven manner, making it a powerful solution to address the aforementioned challenge (LeCun et al., 2015). Researchers have made initial forays into DL-based SAR-ATR methods in recent years (Chen et al., 2016), but further refinement is necessary before they can be effectively applied to real-world scenarios due
to their relatively short development time (Zhu et al., 2021).
The general structure of SAR-ATR systems consists of three phases: detection, discrimination, and recognition. In related studies, the first two phases, commonly known as the focus-of-attention (FOA), are employed to locate the region of interest (ROI) that may contain targets, without necessitating any human intervention. Both traditional (El-Darymli et al., 2013) and DL-based approaches (Zou et al., 2022) have demonstrated satisfactory FOA performance.
Using the results of FOA as input, the SAR-ATR method is designed to infer the classes of targets that may be included in the ROI. In this phase, the discriminability of extracted features (i.e., the feature with larger inter-class distance and smaller intra-class distance is more discriminative) is crucial for recognition performance. Various traditional methods have been proposed to improve the feature discriminability, such as hand-crafted feature extractors (Amrani et al., 2018), attributed scattering centers (Li and Du, 2019), low-rank matrix factorization (Zhang et al., 2018). However, DL-based methods offer substantial advantages in terms of performance compared to traditional approaches. Chen et al. (2016) and Ding et al. (2016) conduct pioneering work that utilize DL-models for SAR image feature extraction and target recognition, setting the foundation for further advancements. Subsequently, a significant portion of research has focused on refining the model architecture to enhance the performance of SAR-ATR, including multi-stream (Pei et al., 2017), attention mechanism (Zhang et al., 2020), capsule structure (Ren et al., 2021), vision transformer (Wang et al., 2022), etc. Given that SAR image processing often encounters the issue of speckle noise, some stud
ies have proposed methods for extracting the speckle-noise-invariant features (Kwak et al., 2019; Lei et al., 2021). Another aspect of the research focuses on adversarial learning, which involves using generative models for data augmentation (Sun et al., 2019) or performing adversarial attacks on existing models (Peng et al., 2022; Du and Zhang, 2021) to enhance the robustness of their feature extraction. In addition, research on DL-based methods for environments with a limited number of training samples (Fu et al., 2022; Wen et al., 2021) and edge devices (Wang et al., 2021) is gradually emerging.
The aforementioned studies have advanced the capability of SAR-ATR from a methodological perspective, but have overlooked the issue of background interference. As shown in Fig. 1, the ROI includes not only the foreground (i.e., target), but also a certain size of the background. Therefore, the following question arises:
* _When the foreground and background of two SAR images are different, do the DL-model extracted features belong to the foreground, the background, or a mixture of both?_
Figure 1: Typical targets in SAR images. (a) Ship target (Huang et al., 2018). (b) Vehicle target (AFRL and DARPA, 2020). (c) Aircraft target (Sun et al., 2022).
A satisfactory answer to this question is that the DL-model extracts semantic information solely from the foreground, as the background is irrelevant for target recognition. In this way, the discriminability of extracted features can be maximized. However, in reality, the situation may differ. Firstly, it should be noted that eliminating the background in ROI can be challenging due to the irregular shape and non-standardized size of non-cooperative targets (Huang et al., 2018; AFRL and DARPA, 2020; Sun et al., 2022). Therefore, the input of SAR-ATR typically includes both foreground and background. Then, we conduct a motivating experiment to demonstrate that the features extracted by DL-models are a mixture of foreground and background elements.
Specifically, we employ two DL-based models, i.e., VGG16 (Simonyan and Zisserman, 2014) and ResNet18 (He et al., 2016), and evaluate their performance on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset (AFRL and DARPA, 2020). This dataset is constructed for scientific research purposes and contains only cooperative vehicle targets that possess similar sizes, roughly measuring 40\(\times\)40. Based on this observation, we evaluate the performance of two DL-models under four distinct crop conditions: preserving only the central 40\(\times\)40, 64\(\times\)64, 88\(\times\)88, and 128\(\times\)128 image crops, while masking the remaining areas. The accuracy and feature visualizations of different experimental setups are shown in Fig. 2.
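A minimal sketch of the center-crop-and-mask protocol used in this motivating experiment is given below; the crop sizes follow the text, while masking with zeros and the specific array shapes are assumptions for illustration.

```python
import numpy as np

def center_crop_mask(chip, keep):
    """Zero out everything outside the central keep x keep window.

    chip: 2D array (e.g. a 128x128 MSTAR image chip).
    keep: size of the central window to preserve (40, 64, 88, or 128).
    """
    h, w = chip.shape
    out = np.zeros_like(chip)
    top, left = (h - keep) // 2, (w - keep) // 2
    out[top:top + keep, left:left + keep] = chip[top:top + keep, left:left + keep]
    return out

# Example: the four crop conditions compared in the motivating experiment
chip = np.random.rand(128, 128)          # stand-in for a real SAR chip
variants = {s: center_crop_mask(chip, s) for s in (40, 64, 88, 128)}
```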
As shown in Fig. 2(a), the accuracy of SAR-ATR progressively decreases as the image background intensifies. Moreover, Fig. 2(b)-(d) demonstrate that the inclusion of background reduces the distance between different classes in the latent space, thereby diminishing the discriminability
Figure 2: Results of the motivating experiment for four image crop methods. (a) Recognition accuracy of different center-crops. (b)-(d) Feature visualization of \(64\times 64\), \(88\times 88\) and \(128\times 128\) center-crops using t-SNE (Van der Maaten and Hinton, 2008).
of extracted features. This phenomenon is reasonable from a causal perspective since there is no semantic correlation between the foreground and background. Therefore, the elements belonging to the background in the extracted features cannot have a positive impact on target recognition. Both the experimental and theoretical analyses indicate that the background is likely to interfere with the feature extraction process, which should focus on the foreground, leading to biased feature learning objectives and the deterioration of feature quality.
To address this issue, we utilize a structural causal model (SCM) (Pearl, 2009) to depict the causal relationships among the variables in the SAR-ATR process, including the input, DL-model, foreground, background, and prediction. In the proposed SCM, the background is set as the confounder, which is consistent with the motivating experimental results. By analyzing this SCM, we obtain a theoretical solution for treating background interference. This guides us to represent the theoretical background elimination process as a causal interventional regularizer. As a result, we can achieve background debiased feature learning, support better feature discriminability, and subsequently obtain satisfactory SAR-ATR performance. In summary, the main contributions of this paper are three-fold:
1. We highlight a critical but overlooked issue that the background exerts an adverse influence on the performance of SAR-ATR, and offer a remedy based on the perspective of causal inference.
2. We construct an SCM based on the process of SAR image feature learning to depict the causal relationships among the input, DL-model, foreground, background, and prediction. This provides a theoretical explanation and solution for background interference.
3. Guided by the constructed SCM, we propose a causal intervention based regularization method. This method can be utilized to effectively mitigate the adverse effects of background on feature learning of the DL-model in a plug-and-play manner, leading to significant enhancement in the performance of SAR-ATR.
The remainder of this paper is organized as follows. Section 2 briefly reviews related works. Section 3 describes the proposed method. Experiment results and analyses are presented in Section 4. Finally, Section 5 concludes this paper.
## 2 Related Works
This section provides a comprehensive review of the research literature related to this paper, including DL-based SAR-ATR methods, background related methods, and causality theory.
### General Paradigm of DL-based SAR-ATR
Convolutional neural network (CNN) is one of the most significant DL techniques and has achieved state-of-the-art results in various image processing applications, particularly image recognition. The fundamental prototype of CNN, proposed by LeCun et al. (1989), is characterized by desirable properties such as weight sharing and local connection. Krizhevsky et al. (2012) have made significant contributions to the development of current CNN models by using ReLU activation, local response normalization, and GPU training to enhance model capacity and efficiency. CNN is typically regarded as an
end-to-end methodology for feature extraction and classification, providing automated feature engineering that is more powerful than traditional hand-crafted features and kernel methods.
The seminal studies of DL-based SAR-ATR method are conducted by Chen et al. (2016) and Ding et al. (2016). These studies utilize stacked CNN models composed of convolution, activation, pooling, and fully-connected layers to extract abstract but discriminative deep features from SAR images to achieve promising performance in target recognition. Despite considerable research in this area and some progress being made, the background debiased DL-based SAR-ATR approach has not been explored so far, which is the objective of this paper.
### Background Debias Related Studies in SAR-ATR
Several studies have analyzed the effect of background on SAR-ATR (Papson and Narayanan, 2012; Cui et al., 2005; Lombardo et al., 2001). Empirical evidence provided by Zhou et al. (2019) demonstrates the significant impact of background replacement on the performance of SAR-ATR. Moreover, Belloni et al. (2020) discuss the role of target, shadow, and clutter in recognition to gain insight into the decision-making process of DL-models.
While there is currently no research exploring background bias from a methodological perspective, the impact of this issue has been observed in the implementation of many SAR-ATR methods. In related studies, the negative effects caused by the background are mitigated by employing two main techniques, i.e., image pre-processing and data augmentation. The image pre-processing method (Feng et al., 2022) involves a multi-stage algorithm that begins by detecting the edges of the target, allowing for the segmentation of the target region from the background.
Then, chip recognition is performed to improve performance. On the other hand, the data augmentation method (Ding et al., 2016; Chen et al., 2016) is based on sampling a large number of chips from the original image with the same foreground but slightly different backgrounds. These operations prompt the DL-model to learn background-irrelevant features.
However, both of the techniques mentioned above have obvious drawbacks: Firstly, the image pre-processing method not only risks losing target details but also has a high level of complexity. Secondly, the data augmentation method relies on a strong prior, i.e., all targets to be recognized have a fixed and known size. This approach can be challenging when dealing with the recognition problem of non-cooperative targets. In this paper, we address this issue from a causal inference perspective and propose an end-to-end method with a plug-and-play capability.
### Causality Theory
Causality has become a significant subfield of statistics, with its own conceptual framework, descriptive language, and methodological system. A comprehensive review of the existing literature in (Scholkopf, 2022) highlights how causality theory has been instrumental in the development of machine learning by addressing the deficiencies of current approaches.
SCM (Pearl, 2009) is a crucial tool in causality theory as it can effectively depict the interdependencies between factors involved in a process using a graph structure. By scrutinizing the SCM, every conceivable alteration that may take place within a process and the pluralistic consequences arising from these alterations may be identified. With the help of SCM, the distinction
between correlation and causality can be explored, leading to the current causal inference framework (Glymour et al., 2016). This framework decomposes the problem into three levels: correlation, intervention, and counterfactual, corresponding to observation, action, and imagination, respectively. Intervention, which involves altering part of the variable generation mechanism while preserving the remainder, is a critical operation for determining and quantifying causality. Scientific research commonly employs intervention, such as through randomized controlled experiments, to establish the presence and direction of causality.
## 3 Methodology
This section presents a detailed introduction to the proposed method for causal intervention based regularization. First, we introduce the construction and analyses of the SAR-ATR oriented SCM. Next, we provide a comprehensive overview of the proposed method, including its model architecture and loss function.
In contrast to many variants in this field that prioritize model refinement, the proposed method focuses on a background debiased feature learning process. This approach eliminates interference from the background in DL-based SAR-ATR models, guided by causal intervention. This process significantly enhances the discriminability of the extracted features and results in superior recognition performance.
### Causal Analysis
To illustrate the impact of background in the process of SAR-ATR, we use the SCM as a basis for our analysis. The SCM is a directed acyclic graph in
which nodes represent variables and directed edges indicate causality between them.
As shown in Fig. 3, we refer to the input SAR images as \(\mathbf{X}\), the feature extractor of the DL-based SAR-ATR model as \(\mathbf{X_{s}}\), and the features extracted by \(\mathbf{X_{s}}\) as \((\mathbf{X_{f}};\mathbf{X_{b}})\), where \(\mathbf{X_{f}}\) corresponds to foreground features
Figure 3: Illustration of the constructed SCM. It depicts the causal relationships among the variables in the SAR-ATR process, including the input, DL-model, foreground, background, and prediction. The top part uses examples to demonstrate the conceptual causality. The bottom part is the abstracted SCM for DL-based SAR-ATR.
and \(\mathbf{X_{b}}\) corresponds to background features. The recognition of the label \(\mathbf{Y}\) is supported by both \(\mathbf{X_{f}}\) and \(\mathbf{X_{b}}\). In the following, we provide a detailed description of the proposed SCM and the underlying principles behind its construction.
\(\mathbf{X}\rightarrow\mathbf{X_{s}}\) indicates that the DL-model \(\mathbf{X_{s}}\) is used to extract features from the input SAR images \(\mathbf{X}\). \(\mathbf{X_{s}}\rightarrow\mathbf{X_{b}}\) indicates that background features \(\mathbf{X_{b}}\) is extracted by the model \(\mathbf{X_{s}}\). \(\mathbf{X_{s}}\rightarrow\mathbf{X_{f}}\) indicates that \(\mathbf{X_{f}}\) is extracted by \(\mathbf{X_{s}}\). \(\mathbf{X_{b}}\rightarrow\mathbf{Y}\): This link indicates that recognition of input SAR images relies on the information in the background features \(\mathbf{X_{b}}\), since all DL-model extracted features are used for the prediction of \(\mathbf{Y}\). \(\mathbf{X_{f}}\rightarrow\mathbf{Y}\): This link represents that the label of an input sample is determined by its foreground features, which is the true causality.
As previously analyzed, the ideal DL-model should capture the true causality between \(\mathbf{X_{f}}\) and \(\mathbf{Y}\). Therefore, we expect the prediction of the input SAR image to be driven primarily by the foreground features, rather than the background features. However, as shown in the constructed SCM, the conventional DL-model fails to capture such causality because the prediction \(\mathbf{Y}\) is not only determined by \(\mathbf{X_{f}}\) via the link \(\mathbf{X_{f}}\rightarrow\mathbf{Y}\), but also by the spurious correlation via the link \(\mathbf{X_{f}}\leftarrow\mathbf{X_{s}}\rightarrow\mathbf{X_{b}}\rightarrow\mathbf{Y}\). It means that the DL-based feature extractor \(\mathbf{X_{s}}\) generates the background features \(\mathbf{X_{b}}\) from the background of \(\mathbf{X}\). These features provide part of the semantics that are utilized when recognizing targets.
The above causalities can be reflected in the motivating experiment corresponding to Fig. 2: the performance of DL-based SAR-ATR models trained on input SAR images that include both foreground and background is significantly lower compared to models trained on background-removed data.
To capture the true causality while suppressing the spurious correlation, we perform backdoor adjustment by intervening on \(\mathbf{X_{f}}\) using the _do_\((\cdot)\) operation. This allows us to express the causality between \(\mathbf{X_{f}}\) and \(\mathbf{Y}\) as \(P(\mathbf{Y}|\textit{do}(\mathbf{X_{f}}))\), rather than relying on \(P(\mathbf{Y}|\mathbf{X_{f}})\).
It is widely recognized that the channels of the extracted features from any DL-based feature extractor are closely related to the underlying feature semantics (Zhou et al., 2016). Then we can simply assume that the output of the last network layer of \(\mathbf{X_{s}}\), i.e., \((\mathbf{X_{f}};\mathbf{X_{b}})\), can be mapped into a series of feature semantics, denoted as \((\mathbf{X_{f}};\mathbf{X_{b}})\mapsto\{F_{i}\}_{i=1}^{i=n}\), where \(F_{i}\in\mathbb{R}^{n_{c}}\) represents a stratification of feature semantics. Considering that the confounder \(\mathbf{X_{b}}\) is contained in \((\mathbf{X_{f}};\mathbf{X_{b}})\) and \(F\) is the feature semantics mapped from \((\mathbf{X_{f}};\mathbf{X_{b}})\), it follows that a portion of each \(F_{i}\) (belonging to the foreground) is informative for target recognition, while another portion (belonging to the background) occupies the position that should belong to the target information. Therefore, the goal of backdoor adjustment should be to emphasize the foreground-related portions, and to suppress the background-related portions of \(F\) that are irrelevant and harmful to target recognition. In this way, the true causality between \(\mathbf{X_{f}}\) and \(\mathbf{Y}\) can be excavated. Formally, the backdoor adjustment based _do_\((\cdot)\) operation can be expressed as follows:
\[P(\mathbf{Y}|\textit{do}(\mathbf{X_{f}}))=\sum_{i=1}^{n}P(\mathbf{Y}|\mathbf{X_{f}},F_{i})P(F_ {i}). \tag{1}\]
This equation presents a theoretical solution to eliminate background interference. In the following subsection, we will delve into the specifics of implementing the terms described in (1), and we will also detail our proposed
method guided by this theoretical solution.
### Causal Interventional Regularizer
In this subsection, we embody the theoretical background interference elimination process as a causal interventional regularizer.
Before introducing the proposed method, we note that in conventional DL-based models, the functional implementation of \(P(\mathbf{Y}|\mathbf{X_{f}})\) is provided by a supervised learning process with the cross-entropy loss function, as follows:
\[P(\mathbf{Y}|\mathbf{X_{f}})=-\sum_{k=1}^{K}\mathbb{I}(\mathbf{Y}==k)\log\frac{\exp c_{k} ((\mathbf{X_{f}};\mathbf{X_{b}}))}{\sum_{j=1}^{K}\exp c_{j}((\mathbf{X_{f}};\mathbf{X_{b}}))} \tag{2}\]
where \(k\) represents the \(k\)th class, \(c\) is the mapping: \(\mathbb{R}^{n_{c}}\rightarrow\mathbb{R}^{K}\) from the last network layer to the softmax classification layer, and \(j\) is the \(j\)th dimension of the prediction probability.
To embody the backdoor adjustment process described in (1), the first step is to provide detailed functional implementations for \(P(\mathbf{Y}|\mathbf{X_{f}},F_{i})\) and \(P(F_{i})\). Considering that \(F_{i}\) contains portions belonging to both foreground and background, generating large weights for the foreground-related channels and small weights for the background-related channels of \(F_{i}\) is necessary. By re-weighting \(F_{i}\) based on these weights, we can effectively address the need for background interference elimination.
Based on the above analyses, when a weight vector \(\alpha_{i}=[\alpha_{i,1},\alpha_{i,2},\cdots,\alpha_{i,n_{c}}]^{\text{T}}\) is given, the functional implementation for the first term of (1) can be expressed as:
\[P(\mathbf{Y}|\mathbf{X_{f}},F_{i})=-\sum_{k=1}^{K}\mathbb{I}(\mathbf{Y}==k)\log\frac{\exp c _{k}(F_{i}\odot\alpha_{i})}{\sum_{j=1}^{K}\exp c_{j}(F_{i}\odot\alpha_{i})} \tag{3}\]
where \(F_{i}\odot\alpha_{i}=[F_{i,1}\cdot\alpha_{i,1},F_{i,2}\cdot\alpha_{i,2},\cdots,F_{ i,n_{c}}\cdot\alpha_{i,n_{c}}]\). Therefore, given a weight matrix \(A=[\alpha_{1},\alpha_{2},\cdots,\alpha_{n}]\), and let \(P(F_{i})=1/n\), the overall implementation of the backdoor adjustment based \(\mathit{do}(\cdot)\) operation can be expressed as:
\[P(\mathbf{Y}|\mathit{do}(\mathbf{X_{f}}))=-\sum_{i=1}^{n}(\sum_{k=1}^{K}\mathbb{I}( \mathbf{Y}==k)\log\frac{\exp c_{k}(F_{i}\odot\alpha_{i})\times\frac{1}{n}}{\sum_{j =1}^{K}\exp c_{j}(F_{i}\odot\alpha_{i})}). \tag{4}\]
The aforementioned approach shifts our focus from providing functional implementations for (1) to the task of generating the weight matrix \(A\) required in (4). To accomplish this, we use a learnable network to generate the required weight matrix. This idea inspires us to propose a novel regularization method with plug-and-play capability based on causal intervention. The DL-model that incorporates the proposed causal interventional regularizer is
Figure 4: Illustration of the proposed causal intervention based regularization method. It is comprised of two modules, i.e., the feature extraction module and the semantic activation module. The former is a conventional DL-model for extracting the foreground and background features from the input SAR images. The latter can be seen as a causal interventional regularizer that is used to intervene on the obtained feature semantics, thus achieving the background debiased feature learning. In this figure, the instances and labels of training samples, network architectures, and intermediate outputs are marked by red, blue, and green boxes, respectively.
comprised of two modules, i.e., the feature extraction module and the semantic activation module. The unified framework for this approach is depicted in Fig. 4. In the following, we give detailed descriptions of the two modules and the functional learning objective.
#### 3.2.1 Feature Extraction Module
When integrating the proposed method with DL-based SAR-ATR models, a conventional DL-based feature extractor is utilized as the backbone to extract _task-specific features_ from the SAR images containing both foreground and background. Therefore, the backbone of this module corresponds to \(\mathbf{X_{s}}\) in Fig. 3, while its output corresponds to the feature semantics of \((\mathbf{X_{f}};\mathbf{X_{b}})\).
We denote the training set as \(D_{\text{train}}=\{X^{i},Y^{i}\}_{i=1}^{i=N}\), where \(X^{i}\in\mathbb{R}^{w\times h\times c}\) represents the instance with a size of \(w\times h\times c\), and \(Y^{i}\in\mathbb{R}\) represents the label in this set. The forward propagation of the feature extraction module can be expressed as \(f_{\text{sem}}(X^{i})\), where \(f_{\text{sem}}(\cdot)\) is the mapping of the backbone of this module, parameterized by \(W_{\text{sem}}\). It is used to map the input to the semantic space, i.e., \(f_{\text{sem}}:\mathbb{R}^{w\times h\times c}\mapsto\mathbb{R}^{n\times n_{c}}\).
The proposed method has a plug-and-play capability, as we do not impose any restrictions on the feature extractor. This is evident from the definition of \(f_{\text{sem}}(\cdot)\), allowing any DL-model to be used to build the feature extraction module.
#### 3.2.2 Semantic Activation Module
After obtaining the preliminary feature semantics \(F\) using the feature extraction module, another backbone is utilized by this module to extract the _semantic-activation-specific features_ and generate the weight matrix in
(4). Subsequently, the process of re-weighting from the preliminary to the background debiased feature semantics, i.e., \(F_{i}\odot\alpha_{i}\), can be performed by intervening on the feature semantics.
The forward propagation of the semantic activation module can be expressed as \(f_{\text{sem}}(X^{i})\odot f_{\text{sam}}(X^{i})\), where \(f_{\text{sam}}(\cdot)\) is the mapping of the backbone of this module, parameterized by \(W_{\text{sam}}\). It is used to map the input to the semantic activation space, i.e., \(f_{\text{sam}}:\mathbb{R}^{w\times h\times c}\mapsto\mathbb{R}^{n\times n_{c}}\), which activates the feature semantics belonging to the foreground while suppressing those belonging to the background, thereby excavating the true causality.
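A minimal PyTorch-style sketch of this two-module forward pass is given below. The \(n\times n_{c}\) shape of the feature semantics follows the text, but the concrete backbones, the sigmoid gating of the activation weights, the mean over stratifications used for the plain logits, and the shared linear classifier are illustrative assumptions, not details specified by the paper.

```python
import torch
import torch.nn as nn

class BackgroundDebiasedATR(nn.Module):
    """Feature extraction module + semantic activation module (sketch)."""

    def __init__(self, f_sem: nn.Module, f_sam: nn.Module,
                 n: int, n_c: int, num_classes: int):
        super().__init__()
        self.f_sem = f_sem        # backbone -> (B, n * n_c) task-specific semantics
        self.f_sam = f_sam        # backbone -> (B, n * n_c) activation weights
        self.n, self.n_c = n, n_c
        self.classifier = nn.Linear(n_c, num_classes)   # the mapping c

    def forward(self, x):
        B = x.size(0)
        F = self.f_sem(x).view(B, self.n, self.n_c)                  # feature semantics F_i
        A = torch.sigmoid(self.f_sam(x).view(B, self.n, self.n_c))   # weights alpha_i (assumed in [0, 1])
        logits_plain = self.classifier(F.mean(dim=1))   # (B, K), used for L_ce
        logits_reweighted = self.classifier(F * A)      # (B, n, K), one slice per stratification, used for L_cr
        return logits_plain, logits_reweighted
```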
#### 3.2.3 Overall Learning Objective
The classification layer, parameterized by \(W_{\text{cls}}\), is used to map the preliminary and background debiased feature semantics to prediction probabilities. Then the forward prediction and loss back-propagation of our proposed framework can be supported. Following the definitions in (2) to (4), for the training set \(D_{\text{train}}\), the learning objective for the feature extraction module can be expressed as the following cross-entropy loss:
\[\begin{split}& L_{\text{ce}}(X,Y,W_{\text{sem}},W_{\text{cls}})=\\ &-\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K}\mathbb{I}(Y^{i}==k)\log \frac{\exp c_{k}(f_{\text{sem}}(X^{i}))}{\sum_{j=1}^{K}\exp c_{j}(f_{\text{sem }}(X^{i}))}.\end{split} \tag{5}\]
Besides, the \(t\)th learning objective for the semantic activation module can be expressed as the following causal interventional regularizer loss:
\[\begin{split}& L_{\text{cr}}^{t}(X,Y,W_{\text{sem}},W_{\text{ sam}},W_{\text{cls}})=\\ &-\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K}\mathbb{I}(Y^{i}==k)\log \frac{\exp c_{k}(f_{\text{sem}}^{t}(X^{i})\odot f_{\text{sam}}^{t}(X^{i})) \times\frac{1}{n}}{\sum_{j=1}^{K}\exp c_{j}(f_{\text{sem}}^{t}(X^{i})\odot f _{\text{sam}}^{t}(X^{i}))},\end{split} \tag{6}\]
and the overall learning objective for this module is the sum of the \(n\) terms computed via (6). Based on (5) and (6), we can obtain the overall learning objective of the unified framework shown in Fig. 4:
\[L_{\text{total}}=L_{\text{ce}}+\lambda\sum_{t=1}^{n}L_{\text{cr}}^{t} \tag{7}\]
where \(\lambda\) is the hyperparameter to control the trade-off. The two terms of (7) correspond to the \(P(\mathbf{Y}|\mathbf{X_{f}})\) and \(P(\mathbf{Y}|\textit{do}(\mathbf{X_{f}}))\) in the theoretical causal analysis, which reflects the integration of the conventional DL-model with our proposed causal interventional regularizer.
It is important to note that in a conventional DL-based SAR-ATR model, the loss function only includes the \(L_{\text{ce}}\) term. Therefore, the proposed method can be seen as a way to regularize the conventional model using the causal inference induced \(L_{\text{cr}}\) term. That is why we refer to the proposed method as _causal interventional regularizer_ in this paper.
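The overall objective in (7) can then be sketched as below, reusing the two outputs of the forward pass sketched earlier; \(P(F_{i})=1/n\) follows the text, while dropping the constant \(\log(1/n)\) term of (4) and (6), which only shifts the loss value, is an assumption about the intended implementation.

```python
import torch.nn.functional as F


def total_loss(logits_plain, logits_reweighted, labels, lam=0.1):
    """L_total = L_ce + lambda * sum_t L_cr^t   (sketch of Eq. (7)).

    logits_plain:      (B, K)    from the un-reweighted semantics  -> Eq. (5)
    logits_reweighted: (B, n, K) from F_i * alpha_i, one slice per
                        stratification t                            -> Eq. (6)
    """
    l_ce = F.cross_entropy(logits_plain, labels)

    n = logits_reweighted.size(1)
    l_cr = sum(
        F.cross_entropy(logits_reweighted[:, t, :], labels) for t in range(n)
    )
    return l_ce + lam * l_cr
```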
The training process of the proposed method is outlined as Algorithm 1.
**Algorithm 1** Training the Background Debiased SAR-ATR Model
```
Input: model parameter initializations \(W_{\text{sem}}\), \(W_{\text{sam}}\), and \(W_{\text{cls}}\); number of epochs \(I\); mini-batch size \(B\); learning rate \(\eta\); hyperparameter \(\lambda\); training and validation sets \(D_{\text{train}}\) and \(D_{\text{val}}\)
 1: for epoch in \(I\) do
 2:   for batch in \(B\) do
 3:     Forward propagation by \(f_{\text{sem}}\), \(f_{\text{sam}}\), and \(c\) with trainable parameters \(W_{\text{sem}}\), \(W_{\text{sam}}\), and \(W_{\text{cls}}\)
 4:     # update the classification layer
 5:     \(W_{\text{cls}}\gets W_{\text{cls}}-\eta\nabla_{W_{\text{cls}}}L_{\text{total}}\)
 6:     # update the semantic activation module
 7:     \(W_{\text{sam}}\gets W_{\text{sam}}-\eta\nabla_{W_{\text{sam}}}L_{\text{total}}\)
 8:     # update the feature extraction module
 9:     \(W_{\text{sem}}\gets W_{\text{sem}}-\eta\nabla_{W_{\text{sem}}}L_{\text{total}}\)
10:   end for
11: end for
Return: the DL-model with the highest validation accuracy
```
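A condensed sketch of the training loop in Algorithm 1 follows, assuming the model and `total_loss` helpers sketched earlier; using a single Adam optimizer over all parameters is a simplification of the three separate gradient steps listed in the algorithm.

```python
import torch

@torch.no_grad()
def evaluate(model, loader):
    """Classification accuracy on a validation loader (helper for the sketch)."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        logits_plain, _ = model(x)
        correct += (logits_plain.argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)

def train(model, train_loader, val_loader, epochs=100, lr=0.01, lam=0.1):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    best_acc, best_state = 0.0, None
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            logits_plain, logits_rw = model(x)
            loss = total_loss(logits_plain, logits_rw, y, lam=lam)  # from the loss sketch above
            opt.zero_grad()
            loss.backward()
            opt.step()
        acc = evaluate(model, val_loader)
        if acc > best_acc:
            best_acc = acc
            best_state = {k: v.detach().clone() for k, v in model.state_dict().items()}
    if best_state is not None:
        model.load_state_dict(best_state)   # keep the highest-validation-accuracy model
    return model
```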
ture received funding from the Defense Advanced Research Projects Agency and the Air Force Research Laboratory, as an integral component of the MSTAR program (AFRL and DARPA, 2020). The dataset released to the public comprises ten distinct ground target classes, namely armored personnel carrier types BMP-2, BRDM-2, BTR-60, and BTR-70; tanks T-62 and T-72; rocket launcher 2S1; air defense unit ZSU-234; truck ZIL-131; and bulldozer D7. The data is collected using an X-band SAR sensor, in a spotlight mode with a resolution of 1-ft and full aspect coverage (spanning 0\({}^{\circ}\) to 360\({}^{\circ}\)).
The MSTAR benchmark dataset is widely employed to evaluate the performance of SAR-ATR methods.
Figure 5: Ten types of military targets in the MSTAR dataset in the optical and SAR formats. (a) 2S1. (b) BMP-2. (c) BRDM-2. (d) BTR-60. (e) BTR-70. (f) D7. (g) T-62. (h) T-72. (i) ZIL-131. (j) ZSU-234.
Fig. 5 illustrates the SAR images of the ten target classes contained in the MSTAR dataset, captured at similar aspect angles. In addition, it displays their corresponding optical images. In our experiments, the involved SAR-ATR methods are thoroughly evaluated under standard operating conditions (SOC) as well as extended operating conditions (EOC). Under SOC, the serial numbers and target configurations in the testing set align with those in the training set, although with varying aspect and depression angles. Under EOC, there are substantial differences between the training and testing sets, including notable variations in depression angle, target articulation, and version variants.
### Experimental Settings
The effectiveness of the proposed method is demonstrated through comparisons with notable alternatives, including VGG16 (Simonyan and Zisserman, 2014), ResNet18 (He et al., 2016), and A-ConvNet (Chen et al., 2016). The former two are the predominant DL-models utilized in image recognition, whereas A-ConvNet holds extensive application in the SAR-ATR domain.
In the experiments, we use a VGG16 model as the backbone of the semantic activation module. The patch size of input images is limited to 128\(\times\)128. The size of the feature map obtained from the backbone is 4\(\times\)4\(\times\)512. When integrating the proposed method with the comparison methods, they are used as the feature extractor in the feature extraction module of the proposed framework, and a fully-connected layer is attached after them to generate the feature semantics.
Sophisticated data pre-processing and augmentation techniques are frequently employed in related studies for better recognition performance. Nevertheless, to illustrate the inherent target recognition proficiency of the DL-models, we refrain from carrying out any pre-processing activities, such as de-speckling, on the SAR images to be recognized in the experimental setup. Furthermore, to accentuate the background debias capability of the proposed method, we do not perform any data augmentation for the VGG16 and ResNet18 models, but retain the original data augmentation operation of A-ConvNet, which involves randomly sampling a substantial quantity of \(88\times 88\) patches from the original \(128\times 128\) SAR images. This data augmentation method aims to mitigate the negative impact of background on the feature learning of targets. The aforementioned settings facilitate a straightforward comparison between the effectiveness of the proposed method and the data augmentation method for the issue of background debias.
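A minimal sketch of this random-crop augmentation is given below; the number of crops per chip and the random seed are illustrative assumptions.

```python
import numpy as np

def random_patches(chip, patch=88, num=16, seed=0):
    """Sample `num` random patch x patch crops from a larger SAR chip."""
    rng = np.random.default_rng(seed)
    h, w = chip.shape
    crops = []
    for _ in range(num):
        top = int(rng.integers(0, h - patch + 1))
        left = int(rng.integers(0, w - patch + 1))
        crops.append(chip[top:top + patch, left:left + patch])
    return np.stack(crops)

# Example: augment one 128x128 chip into sixteen 88x88 training chips
chip = np.random.rand(128, 128)
aug = random_patches(chip)          # shape (16, 88, 88)
```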
A mini-batch stochastic gradient descent based method is used for model optimization, with a batch size of 64 and 100 training epochs. To accelerate the optimization process and obtain more precise approximate solutions, the Adam optimizer is employed with a learning rate of 0.01. Upon completion of the training process, the model with the highest validation accuracy is preserved.
The experiments of this study employ an NVIDIA RTX A6000 GPU, and the program is implemented using the PyTorch DL framework.
### Experiments under SOC
In the experimental setup of SOC, which is listed in Table 1, the involved methods are evaluated on the ten-class recognition problem.
It must be noted that the target serial number is identical in both the
training and testing sets, although they vary in their azimuth and depression
| Class | Training Set (\(17^{\circ}\)) | Testing Set (\(15^{\circ}\)) |
| --- | --- | --- |
| 2S1 | 299 | 274 |
| BMP-2 | 233 | 196 |
| BRDM-2 | 298 | 274 |
| BTR-60 | 256 | 195 |
| BTR-70 | 233 | 196 |
| D7 | 299 | 274 |
| T-62 | 299 | 273 |
| T-72 | 232 | 196 |
| ZIL-131 | 299 | 274 |
| ZSU-234 | 299 | 274 |

Table 1: Number of training and testing samples for the SOC experimental setup
| Method | CNN | VGG16 | ResNet18 | A-ConvNet |
| --- | --- | --- | --- | --- |
| Baseline | 91.483 | 92.783 | 95.685 | 98.376 |
| + Ours | 95.637 | 98.557 | 98.215 | 99.228 |

Table 2: The comparison of recognition accuracy (%) under SOC
angles. The training images are acquired at a depression angle of \(17^{\circ}\), whereas the testing images are captured with a depression angle of \(15^{\circ}\).
Table 2 shows the comparison of the testing accuracy of CNN, VGG16, ResNet18, and A-ConvNet before and after adding the proposed causal interventional regularizer. In addition, the comparison of confusion matrices before and after background debias for the VGG16 model is shown in Fig. 6.
As can be seen from the experimental results, the three DL-models without data augmentation also show significant improvements in testing accuracy after adding the proposed causal interventional regularizer. Among them, the background debiased VGG16 model even outperforms the A-ConvNet model, which expands the original training set 1681 times by a random-clipping-based data augmentation method. Specifically, after the causal-driven regularizer is inserted, the four involved DL-models gain 4.154%, 5.774%, 2.530%, and 0.852% in recognition accuracy, respectively.
Figure 6: The comparison of confusion matrices under SOC. (a) Results of VGG16. (b) Results of background debiased VGG16.
The greatly improved SAR-ATR performance shows that even if there are many background areas in the ROI that interfere with target semantic learning, the conventional DL-based models can still effectively focus on the extraction of foreground information after adding the proposed causal interventional regularizer. This demonstrates the satisfactory background debias capability of the proposed method. In addition, since the proposal is a plug-and-play method, there is no need to make any modifications to the original model, which greatly improves the generalizability of various DL-based recognition models in the field of SAR-ATR.
### Experiments under EOC
It is widely acknowledged that SAR images are highly susceptible to variations in depression angles. Therefore, the robustness of SAR-ATR methods to the changes in depression angle is important. To this end, we assess the proposed method in terms of large depression angle changes (indicated by EOC-1).
| Class | Serial No. | Training Depression | Training No. Samples | Testing Depression | Testing No. Samples |
| --- | --- | --- | --- | --- | --- |
| 2S1 | B01 | \(17^{\circ}\) | 299 | \(30^{\circ}\) | 288 |
| BRDM-2 | E71 | \(17^{\circ}\) | 298 | \(30^{\circ}\) | 287 |
| T-72 | A64 | \(17^{\circ}\) | 299 | \(30^{\circ}\) | 288 |
| ZSU-234 | D08 | \(17^{\circ}\) | 299 | \(30^{\circ}\) | 288 |

Table 3: Details of EOC-1 Experimental Setup
Table 3 shows that among the targets in the MSTAR dataset, only four classes (2S1, BRDM-2, T-72, and ZSU-234) possess targets obtained at a depression angle of 30\({}^{\circ}\). Consequently, we evaluate the involved DL-models on these samples, wherein the corresponding training set is the data of these four classes of targets in SOC, i.e., the same targets obtained at a depression angle of 17\({}^{\circ}\). The large disparity in depression angle may result in a dissimilar depiction of the same targets in identical postures, thereby augmenting the difficulty of recognition.
The EOC-1 dataset is classified using two models: the conventional VGG16 model and the VGG16 model incorporating the proposed causal interventional regularizer. The confusion matrix for large depression angle variations is presented in Fig. 7. The experimental results indicate that the VGG16 model achieves an accuracy of 96.351%, while our proposed model reaches a higher accuracy of 98.436%. This suggests that the proposed method is capable of improving the robustness of DL-based SAR-ATR models even under challenging conditions such as large depression angles.
In another EOC testing scenario, i.e., EOC-2, the assessment of SAR-ATR methods revolves around variations in target configuration. The objective of this validation is to evaluate the ability of the model to recognize different variants of the same target. Specifically, the training set encompasses BMP-2, BRDM-2, BTR-70 and T-72 at depression angles of 15\({}^{\circ}\) and 17\({}^{\circ}\), whereas the testing set comprises diversified versions of T-72. The variance in target representation between the two sets makes it arduous for the testing samples to be recognized as T-72 targets, thus intensifying the task challenge. Details of the EOC-2 experimental setup are shown in Table 4.
| Partition | Class | Serial No. | Depression | No. Samples |
| --- | --- | --- | --- | --- |
| Training Set | BMP-2 | C21 | \(17^{\circ}\) | 233 |
| Training Set | BRDM-2 | E71 | \(17^{\circ}\) | 298 |
| Training Set | BTR-70 | C71 | \(17^{\circ}\) | 233 |
| Training Set | T-72 | 132 | \(17^{\circ}\) | 232 |
| Testing Set | T-72 | S7 | \(15^{\circ}/17^{\circ}\) | 419 |
| Testing Set | T-72 | A32 | \(15^{\circ}/17^{\circ}\) | 572 |
| Testing Set | T-72 | A62 | \(15^{\circ}/17^{\circ}\) | 573 |
| Testing Set | T-72 | A63 | \(15^{\circ}/17^{\circ}\) | 573 |
| Testing Set | T-72 | A64 | \(15^{\circ}/17^{\circ}\) | 573 |

Table 4: Details of EOC-2 Experimental Setup
Figure 7: The comparison of confusion matrices under EOC-1. (a) Results of VGG16. (b) Results of background debiased VGG16.
Experimental results under EOC-2 can be seen from Table 5. Similar to the previous experiments, we choose VGG16 as a representative of the conventional DL-models to test its change in recognition accuracy before and after the addition of the proposed causal interventional regularizer.
It is relatively obvious from the experimental results that the incorporation of the proposed method improves the accuracy of the conventional DL-model for the recognition of all variant classes of T-72. In particular, the recognition accuracy for serial number A32 is improved by 7.517%, which is a considerable gain. The above results confirm the effectiveness of the proposed method for background debias when recognizing variants of the same target.
| Serial No. | No. Samples | VGG16 | VGG16+Ours |
| --- | --- | --- | --- |
| A32 | 572 | 88.462 | 95.979 |
| A62 | 573 | 95.637 | 97.906 |
| A63 | 573 | 94.939 | 96.684 |
| A64 | 573 | 92.845 | 95.637 |
| S7 | 419 | 79.714 | 80.191 |
| Total | 2710 | 90.923 | 94.022 |

Table 5: The comparison of Recognition Accuracy (%) under EOC-2
### Hyperparameter Analysis
To comprehensively investigate the impact of the hyperparameter on the SAR-ATR accuracy of the proposed method, we employ the VGG16 model with the proposed regularization method and conduct experiments under SOC. Our primary focus is on assessing the influence of the hyperparameter \(\lambda\) in (7) on recognition accuracy, since it moderates the proportion of the causal interventional regularizer induced loss \(L_{\text{cr}}\) in the overall loss \(L_{\text{total}}\), i.e., the degree to which the proposed method intervenes in the feature extraction of the conventional DL-model. We achieve this goal by setting the value of \(\lambda\) to \(\{0.001,0.01,0.1,0.5,1.0\}\) and conducting comparative experiments. The results are presented in Fig. 8.
As demonstrated in Fig. 8, the selection of hyperparameter \(\lambda\) is critical in promoting a desirable background debias effect. Notably, the highest accuracy is attainable when \(\lambda\) is set to 0.1; deviating from this value, whether by setting it too high or low, can result in a significant decrease in recognition accuracy.
## 5 Conclusion
In this paper, we present the first attempt to investigate the potential of background debias on DL-based SAR-ATR models from a methodological perspective. Specifically, we propose a novel causal intervention based regularization method to eliminate the interference of background on the extraction of target semantic information. The proposed method is developed based on the analysis of the SAR-ATR oriented SCM, which depicts the causal relationships among the input, DL-model, foreground, background,
and prediction.
We model the background in the SAR image to be recognized as a confounder for the feature extraction of DL-models, and eliminate it through backdoor adjustment based causal intervention. This theoretical solution for eliminating background interference is then transformed into a regularization term for conventional DL-models, resulting in background debiased feature learning. This method extracts more target semantics and improves the discriminability of the extracted features, while also being easy to implement, of low complexity, widely adaptable, and plug-and-play.
Experiments on the MSTAR dataset demonstrate that our proposed method
Figure 8: The effect of hyperparameter \(\lambda\) on the recognition accuracy.
offers advantages over conventional approaches, as evidenced by improved recognition performance when combined with any conventional DL-model. Moving forward, we plan to explore the combination of causal inference and diverse tasks such as SAR target detection and change detection. We will also explore other learning pipelines, including self-supervised learning, few-shot learning, and model compression.
## CRediT authorship contribution statement
H.D. and F.H. were responsible for the conceptualization, design and development of the methods, writing programmes, running experiments, analyzing the results, and writing the paper. L.S., W.Q. and L.Z. were responsible for investigation, funding requests and supervision. All authors reviewed the manuscript.
## Conflict of interest
The authors declare that they have no conflict of interest.
## Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (62271172), in part by the China Postdoctoral Science Foundation (2023M733615).
|
2303.17527
|
Diatomic molecules of alkali-metal and alkaline-earth-metal atoms:
interaction potentials, dipole moments, and polarizabilities
|
Ultracold diatomic molecules find application in quantum studies ranging from
controlled chemistry and precision measurement physics to quantum many-body
simulation and potentially quantum computing. Accurate knowledge of molecular
properties is required to guide and explain ongoing experiments. Here, in an
extensive and comparative study, we theoretically investigate the electronic
properties of the ground-state diatomic molecules composed of alkali-metal (Li,
Na, K, Rb, Cs, Fr) and alkaline-earth-metal (Be, Mg, Ca, Sr, Ba, Ra) atoms. We
study 78 hetero- and homonuclear diatomic combinations, including 21
alkali-metal molecules in the $X^1\Sigma^+$ and $a^3\Sigma^+$ electronic
states, 36 alkali-metal--alkaline-earth-metal molecules in the $X^2\Sigma^+$
electronic state, and 21 alkaline-earth-metal molecules in the $X^1\Sigma^+$
electronic state. We calculate potential energy curves, permanent electric
dipole moments, and polarizabilities using the hierarchy of coupled cluster
methods upto CCSDTQ with large Gaussian basis sets and small-core relativistic
energy-consistent pseudopotentials. We collect and analyze corresponding
spectroscopic constants. We estimate computational uncertainties and compare
the present values with previous experimental and theoretical data to establish
a new theoretical benchmark. The presented results should be useful for further
application of the studied molecules in modern ultracold physics and chemistry
experiments.
|
Hela Ladjimi, Michał Tomza
|
2023-03-30T16:49:31Z
|
http://arxiv.org/abs/2303.17527v2
|
Diatomic molecules of alkali-metal and alkaline-earth-metal atoms: interaction potentials, dipole moments, and polarizabilities
###### Abstract
Ultracold diatomic molecules find application in quantum studies ranging from controlled chemistry and precision measurement physics to quantum many-body simulation and potentially quantum computing. Accurate knowledge of molecular properties is required to guide and explain ongoing experiments. Here, in an extensive and comparative study, we theoretically investigate the electronic properties of the ground-state diatomic molecules composed of alkali-metal (Li, Na, K, Rb, Cs, Fr) and alkaline-earth-metal (Be, Mg, Ca, Sr, Ba, Ra) atoms. We study 78 hetero- and homonuclear diatomic combinations, including 21 alkali-metal molecules in the \(X^{1}\Sigma^{+}\) and \(a^{3}\Sigma^{+}\) electronic states, 36 alkali-metal-alkaline-earth-metal molecules in the \(X^{2}\Sigma^{+}\) electronic state, and 21 alkaline-earth-metal molecules in the \(X^{1}\Sigma^{+}\) electronic state. We calculate potential energy curves, permanent electric dipole moments, and polarizabilities using the hierarchy of coupled cluster methods up to CCSDTQ with large Gaussian basis sets and small-core relativistic energy-consistent pseudopotentials. We collect and analyze corresponding spectroscopic constants. We estimate computational uncertainties and compare the present values with previous experimental and theoretical data to establish a new theoretical benchmark. The presented results should be useful for further application of the studied molecules in modern ultracold physics and chemistry experiments.
## I Introduction
Cold and ultracold molecules offer exciting prospects for fundamental quantum physics and physical chemistry studies as well as for new quantum technology developments [1]. Within the past two decades, ultracold polar molecules have been established as great candidates for a plethora of applications ranging from ultracold controlled chemistry [2; 3] and precision measurements of fundamental constants [4; 5] to quantum simulation of many-body systems [6; 7] and quantum computation [8; 9]. One of the unique features of ultracold polar molecules is the possibility of controlling their states and intermolecular dipolar interactions with an external electric field [10]. The achieved exquisite control of both molecular internal quantum states and external motion, enabled by ultralow temperatures [2], propels further intensive theoretical and experimental investigations.
Ultracold molecules can be produced either directly using laser [11; 12; 13], evaporative [14; 15; 16; 17], or sympathetic [18] cooling or indirectly by associating pre-cooled atoms [19; 20]. However, ultracold diatomic molecules of alkali-metal and alkaline-earth-metal atoms have only been produced using indirect methods. Fortunately, alkali-metal and alkaline-earth-metal atoms can be favorably laser cooled to ultralow temperatures. Ultracold gases of alkali-metal molecules such as KRb [21; 22], RbCs [23; 24], NaK [25; 26; 27; 28], NaRb [29], and NaCs [30] in the ground rovibrational level of the singlet electronic state were produced from ultracold atoms by magnetoassociation using magnetic Feshbach resonances [19; 31] followed by an optical stabilization using the Stimulated Raman Adiabatic Passage (STIRAP) [20; 32]. A similar scheme was used to obtain ultracold gases of Rb\({}_{2}\)[33], Cs\({}_{2}\)[34; 35], NaLi [36], and Li\({}_{2}\)[37] molecules in the ground rovibrational level of the lowest triplet electronic state. Alternatively, molecules such as RbCs [38], Cs\({}_{2}\)[39], LiCs [40], and recently Sr\({}_{2}\)[41] were produced in the rovibrational ground state using all-optical photoassociation schemes. Degenerate Fermi gases of polar KRb [42] and NaK [43] molecules were formed. The Bose-Einstein condensation of weakly bound Li\({}_{2}\), K\({}_{2}\), and Cs\({}_{2}\) Feshbach molecules was realized [44; 45; 46], while the condensation of deeply-bound ground-state molecules remains one of the most important goals in the field. First steps toward producing ultracold SrRb [47] and SrLi [48] molecules using narrow magnetic Feshbach resonances were achieved. Finally, the formation of ultracold NaCs [49; 50; 51; 52], Rb\({}_{2}\)[53], and RbCs [54] molecules was possible at the single-molecule level using optical tweezers [55].
Ultracold quantum-controlled chemical reactions were studied in pioneering experiments with KRb molecules [56; 57; 58; 59; 60; 61; 62], followed by investigations involving Rb\({}_{2}\)[63], NaRb [64; 65; 66], NaCs [67; 68], and NaK [66; 69] species. Suppressing chemical reactivity and short-range losses was realized by shielding with electric and microwave fields for KRb [70] and NaK [16] molecules, respectively. Feshbach resonances controlled with a magnetic field were demonstrated in ultracold NaK+K [71; 72], NaLi+Na [73; 74], and NaLi+NaLi [75] collisions and used to create ultracold weakly-bound NaK\({}_{2}\) triatomic molecules from a NaK+K mixture [76; 77]. Ultracold KRb molecules were trapped in a three-dimensional optical lattice, and dipolar spin-exchange interactions between lattice-confined molecules were observed [78]. The first realization of a molecular quantum gas microscope with NaRb molecules [79; 80] was also reported. Long-lived coherence of molecular qubits based on ultracold NaK [81] and RbCs [82] molecules was shown. Ultracold KRb molecules were employed for precision measurements of the variation of the electron-to-proton mass ratio [83], while ultracold Sr\({}_{2}\) molecules in an optical lattice were established as a molecular clock for metrology and probing the fundamental laws of nature [84; 85; 86].
Molecular formation and application described above would not be possible without preceding detailed experimental spectroscopic studies and extensive theoretical _ab initio_ electronic structure calculations of underlying molecular properties. Different applications require data at different levels of accuracy. Accurate measurements can ultimately, in most cases, provide more accurate results than theoretical computations. Nevertheless, _ab initio_ quantum-chemical calculations of potential energy curves, permanent and transition electric dipole moments, and fine and hyperfine couplings, used next in rovibrational and scattering calculations, are often essential to guide and explain experimental efforts.
In this work, we use state-of-the-art _ab initio_ electronic structure methods to calculate the potential energy curves, permanent electric dipole moments, and static electric dipole polarizabilities for all the hetero- and homonuclear diatomic molecules composed of alkali-metal (Li, Na, K, Rb, Cs, Fr) and alkaline-earth-metal (Be, Mg, Ca, Sr, Ba, Ra) atoms. We employ the hierarchy of coupled cluster methods up to CCSDTQ with large Gaussian basis sets and small-core relativistic energy-consistent pseudopotentials. We study a large number of 78 hetero- and homonuclear diatomic combinations, including 21 alkali-metal molecules in the \(X^{1}\Sigma^{+}\) and \(a^{3}\Sigma^{+}\) electronic states, 36 alkali-metal-alkaline-earth-metal molecules in the \(X^{2}\Sigma^{+}\) electronic state, and 21 alkaline-earth-metal molecules in the \(X^{1}\Sigma^{+}\) electronic state. We also analyze the convergence and accuracy of our calculations with the size of the orbital basis sets and the quality of the wave functions. In this way, we establish a new theoretical benchmark.
A significant portion of the molecules under investigation has already been studied experimentally or theoretically. Notably, alkali-metal dimers are one of the most extensively studied classes of molecules. However, alkali-metal-alkaline-earth-metal and alkaline-earth-metal molecules have received less attention, particularly in experimental studies, with a large number of such combinations without any experimental
[Table 2: references to previous experimental and theoretical works on the alkali-metal--alkaline-earth-metal diatomic molecules in their ground electronic states, from LiBe to FrRa.]
data. In the following sections, we will compare our results with the most recent and accurate experimental and theoretical values. Because of the large number of studied molecules, referring to and comparing with all previous theoretical results is not feasible. Therefore, we have collected references to previous experimental and theoretical works on the potential energy curves of the alkali-metal-alkali-metal, alkali-metal-alkaline-earth-metal, and alkaline-earth-metal-alkaline-earth-metal diatomic molecules in their ground electronic states in Tables 1, 2, and 3, respectively.
Experimental works in this field can be categorized into several classes, including: 1) laser photoionization spectroscopy, 2) laser-induced fluorescence spectroscopy, 3) polarisation labeling spectroscopy in hot vapors or beams, 4) spectroscopy of molecules on helium nanodroplets, and 5) highly accurate measurements in ultracold atomic or molecular gases in traps. Previous theoretical works can also be classified into several groups, including: 1) the oldest results, often without including the electron correlation, 2) the results with large-core pseudopotentials and the full configuration interaction method for valence electrons, 3) the results with small-core (scalar-relativistic) pseudopotentials and truncated coupled cluster or multireference configuration interactions methods, and 4) all-electron calculations often with relativistic Hamiltonians.
The main advancements of the present work lie in the following:
* the coherent calculation and comparison of three classes of experimentally relevant molecules at the consistent level of theory,
* the computation of electronic properties of some molecules, such as those containing francium and radium, for the first time,
* the investigation of permanent electric dipole moments of heteronuclear alkaline-earth-metal molecules, for the first time,
* the use of large recently developed Gaussian basis sets [246],
* the inclusion of full triple and quadruple excitations in the coupled cluster method to obtain potential energy curves, resulting in the description of valence electrons at the full configuration interaction level for all molecules,
* the publication of all results in a numerical form in the Supplemental Material [409].
For completeness, it is worth mentioning the existence of the studies of isoelectronic diatomic molecules containing: 1) alkaline-earth-metal-like ytterbium atom [410, 411, 412], including its combinations with alkali-metal [413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426] or alkaline-earth-metal [422, 427] atom, 2) alkaline-earth-metal-like zinc, cadmium, or mercury atom with alkali-metal or alkaline-earth-metal atom [428, 429], and 3) alkali-metal-like copper, silver, or gold atom with alkali-metal or alkaline-earth-metal atom [430, 431, 432]. Finally, diatomic molecular ions of alkali-metal and alkaline-earth-metal atoms have also been investigated extensively (see, e.g., Ref. [433] and references therein). Such systems are, however, out of the scope of this investigation.
The structure of the paper is the following. In Section II, we describe the employed computational methods. In Section III, we present and discuss the obtained results. In Section IV, we provide a summary and outlook.
## II Computational Methods
We adopt and employ the computational scheme based on the composite approach to calculate potential energy curves in the Born-Oppenheimer approximation for the relevant molecular electronic states, which we established and tested in several previous studies on different classes of molecules containing alkali-metal and alkaline-earth-metal atoms [429, 430, 121]. The final interaction energies, \(V_{\text{int}}(R)\), as a function of the internuclear distance \(R\), are calculated as a sum of different contributions
\[V_{\text{int}}(R)=V_{\text{CCSD(T)}}^{\text{apwCV5Z+bf}}(R)+\delta V_{\text{CCSDT}}^{\text{apwCVTZ}}(R)+\delta V_{\text{CCSDTQ,val}}^{\text{apVTZ}}(R)\,, \tag{1}\]
where the leading part, \(V_{\text{CCSD(T)}}^{\text{apwCV5Z+bf}}(R)\), is obtained with the closed-shell (for the \(X^{1}\Sigma^{+}\) states) or spin-restricted open-shell (for the \(X^{2}\Sigma^{+}\) and \(a^{3}\Sigma^{+}\) states) coupled cluster method
[Table 3: references to previous experimental and theoretical works on the alkaline-earth-metal diatomic molecules in their ground electronic states, from Be\({}_{2}\) to Ra\({}_{2}\).]
restricted to single, double, and noniterative triple excitations [CCSD(T)] [434] and the augmented correlation-consistent polarized weighted core-valence quintuple-\(\zeta\) quality basis sets (aug-cc-pwCV5Z) [246, 435]. The atomic basis sets are additionally augmented in these calculations by the set of the \([484p3d3f1g]\) bond functions (bf) [436] to accelerate the convergence toward the complete basis set limit [437].
The next leading electron-correlation correction to the interaction energy, to account for the contribution of the iterative full triple excitations in the coupled cluster method, \(\delta V^{\text{apwCVTZ}}_{\text{CCSDT}}(R)\), is obtained as
\[\delta V^{\text{apwCVTZ}}_{\text{CCSDT}}(R)=V^{\text{apwCVTZ}}_{\text{CCSDT}}(R)-V^{\text{apwCVTZ}}_{\text{CCSD(T)}}(R)\,, \tag{2}\]
where \(V^{\text{apwCVTZ}}_{\text{CCSDT}}(R)\) is the interaction energy calculated with the coupled cluster method restricted to single, double, and triple excitations (CCSDT) and \(V^{\text{apwCVTZ}}_{\text{CCSD(T)}}(R)\) with the CCSD(T) method, both with the same augmented correlation-consistent polarized weighted core-valence triple-\(\zeta\) quality basis sets (aug-cc-pwCVTZ) [246, 435].
Finally, the valence electron-correlation correction to the interaction energy to account for the contribution of the quadruple excitations in the coupled cluster method, \(\delta V^{\text{apVTZ}}_{\text{CCSDTQ,val}}(R)\), is obtained as
\[\delta V^{\text{apVTZ}}_{\text{CCSDTQ,val}}(R)=V^{\text{apVTZ}}_{\text{CCSDTQ,val}}(R)-V^{\text{apVTZ}}_{\text{CCSDT,val}}(R)\,, \tag{3}\]
where \(V^{\text{apVTZ}}_{\text{CCSDTQ,val}}(R)\) is the interaction energy calculated with the coupled cluster method restricted to single, double, triple, and quadruple excitations (CCSDTQ) and \(V^{\text{apVTZ}}_{\text{CCSDT,val}}(R)\) with the CCSDT method, both with the same augmented correlation-consistent polarized valence triple-\(\zeta\) quality basis sets (aug-cc-pVTZ) [246, 435]. In these calculations, only valence electrons are correlated, therefore \(\delta V^{\text{apVTZ}}_{\text{CCSDTQ,val}}(R)\) is identically equal to zero for alkali-metal and alkali-metal-alkaline-earth-metal molecules with two and three valence electrons, respectively.
The interaction energies, \(V^{basis}_{method}(R)\), in Eqs. (1)-(3) are obtained using the super-molecule approach with the basis set superposition error (BSSE) corrected by using the Boys-Bernardi counterpoise correction [438],
\[V^{basis}_{method}(R)=E_{AB}(R)-E_{A}(R)-E_{B}(R)\,, \tag{4}\]
where \(E_{AB}(R)\) is the total energy of the molecule \(AB\), and \(E_{A}(R)\) and \(E_{B}(R)\) are the total energies of the atoms \(A\) and \(B\), all computed with the given \(method\) and diatom \(basis\) set at a distance \(R\).
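To make the bookkeeping of Eqs. (1)-(4) explicit, a minimal Python sketch of the composite scheme is given below. The function and variable names are illustrative assumptions, and the counterpoise-corrected interaction energies would in practice be assembled from total energies produced by the electronic structure codes.

```python
def interaction_energy(e_dimer, e_atom_a, e_atom_b):
    """Eq. (4): Boys-Bernardi counterpoise-corrected interaction energy.
    All three total energies are computed in the full dimer basis set."""
    return e_dimer - e_atom_a - e_atom_b


def composite_interaction_energy(v_ccsd_t_awcv5z_bf, v_ccsdt_awcvtz,
                                 v_ccsd_t_awcvtz, v_ccsdtq_avtz=0.0,
                                 v_ccsdt_avtz=0.0):
    """Eq. (1): composite interaction energy at one internuclear distance R.

    v_ccsd_t_awcv5z_bf -- CCSD(T)/aug-cc-pwCV5Z+bf interaction energy
    v_ccsdt_awcvtz, v_ccsd_t_awcvtz -- CCSDT and CCSD(T) energies, aug-cc-pwCVTZ
    v_ccsdtq_avtz, v_ccsdt_avtz -- valence-only CCSDTQ and CCSDT energies,
        aug-cc-pVTZ (both left at zero when the quadruples correction vanishes)
    """
    delta_ccsdt = v_ccsdt_awcvtz - v_ccsd_t_awcvtz    # Eq. (2)
    delta_ccsdtq = v_ccsdtq_avtz - v_ccsdt_avtz       # Eq. (3)
    return v_ccsd_t_awcv5z_bf + delta_ccsdt + delta_ccsdtq
```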
The employed computational scheme given by Eq. (1) works very well at all internuclear distances for all molecules in the considered doublet \(X^{2}\Sigma^{+}\) and triplet \(a^{3}\Sigma^{+}\) molecular electronic states and alkaline-earth-metal molecules in the \(X^{1}\Sigma^{+}\) states because they are well described at all internuclear distances by single-reference methods. On the other hand, the singlet \(X^{1}\Sigma^{+}\) molecular electronic states of the alkali-metal molecules have single-reference nature at small and intermediate internuclear distances and multireference nature at larger distances, which originates from the open-shell character of the interacting alkali-metal atoms. At larger distances, the single-reference CCSD(T) method gives incorrect results, and the CCSDT correction only partially improves them. Therefore, we compute the \(X^{1}\Sigma^{+}\) states of alkali-metal molecules with the coupled cluster methods in the vicinity of the interaction potential well at short and intermediate distances and smoothly merge them with results obtained with the internally contracted multireference configuration interaction method restricted to single and double excitations with the active space of all valence orbitals and including the Davidson corrections (MRCISD+Q) [439] at larger distances. The MRCISD+Q results are shifted to impose correct asymptotic energies. Depending on the system, the switching distance is around 11-14 bohr.
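The switching between the coupled cluster and shifted MRCISD+Q curves can be illustrated with a smooth sigmoidal blend; the functional form and parameters below are assumptions for illustration only, since the text specifies the switching distance (around 11-14 bohr) but not the exact merging function.

```python
import numpy as np

def merge_curves(r, v_cc, v_mrci_shifted, r_switch=12.0, width=0.5):
    """Blend a short- and intermediate-range coupled cluster curve with a
    shifted long-range MRCISD+Q curve using a smooth switching function.
    r, v_cc, v_mrci_shifted are arrays on a common grid (bohr, hartree);
    r_switch and width (bohr) control where and how fast the switch occurs."""
    f = 1.0 / (1.0 + np.exp((np.asarray(r) - r_switch) / width))
    return f * np.asarray(v_cc) + (1.0 - f) * np.asarray(v_mrci_shifted)
```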
All electrons in the Li, Be, Na, and Mg atoms and outer-shell electrons in other atoms are explicitly described by the selected large atomic augmented correlation-consistent one-electron Gaussian basis sets, while the inner-shell electrons in heavier atoms are replaced by the small-core relativistic energy-consistent pseudopotentials (ECP) [440] to include the scalar relativistic effects. The ECP10MDF, ECP10MDF, ECP28MDF, ECP28MDF, ECP46MDF, ECP46MDF, ECP78MDF, and ECP78MDF pseudopotentials [441, 204] are used for the K, Ca, Rb, Sr, Cs, Ba, Fr, and Ra atoms, respectively. The electrons of the two outermost shells, i.e., \((n-1)s^{2}(n-1)p^{6}ns^{1}\) from alkali-metal and \((n-1)s^{2}(n-1)p^{6}ns^{2}\) from alkaline-earth-metal atoms, are correlated in calculations with the aug-cc-pwCV\(n\)Z basis sets, while only the valence electrons, i.e., \(ns^{1}\) from alkali-metal and \(ns^{2}\) from alkaline-earth-metal atoms, are correlated in calculations with the aug-cc-pV\(n\)Z basis sets.
The composite approach of Eq. (1) relies on the error cancellation observed in high-level molecular electronic structure calculations [106, 121, 357]. Effectively, for the three classes of molecules, we describe their valence electrons at the full configuration interaction (FCI) level because CCSD, CCSDT, and CCSDTQ are equivalent to FCI for two-, three-, and four-electron systems, respectively. In order to evaluate the accuracy of the used approach, additional test calculations are carried out for the exemplary KRb, RbSr, and CaSr molecules. Different basis sets from the aug-cc-pwCV\(n\)Z and aug-cc-pV\(n\)Z families with \(n\)=D, T, Q, 5 are employed, and the complete basis set (CBS) limit is extrapolated using the two-point formula [442]. Additionally, potential energy curves are obtained with a series of different less accurate methods [443]: spin-restricted Hartree-Fock (RHF), second-order Møller-Plesset perturbation theory (MP2), the coupled cluster method restricted to single and double excitations (CCSD), the configuration interaction method restricted to single and double excitations (CISD) and its variant including the Davidson correction (CISD+Q), as well as the multireference variants of the CISD and CISD+Q methods (MRCISD and MRCISD+Q), respectively.
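A short sketch of a two-point CBS extrapolation is given below; it assumes the common inverse-cubic dependence of the correlation energy on the cardinal number \(n\), which is a plausible reading of the two-point formula of Ref. [442] rather than a statement of its exact form.

```python
def cbs_two_point(e_smaller, e_larger, n_larger):
    """Two-point complete-basis-set extrapolation assuming
    E_n = E_CBS + A / n^3 for consecutive cardinal numbers n-1 and n."""
    a = (n_larger - 1) ** 3
    b = n_larger ** 3
    return (b * e_larger - a * e_smaller) / (b - a)

# Illustrative (made-up) interaction energies in cm^-1 from quadruple- (n=4)
# and quintuple-zeta (n=5) basis sets:
print(cbs_two_point(-4150.0, -4185.0, 5))
```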
We interpolate the potential energy curves, \(V_{\text{int}}(R)\), using the cubic spline method to obtain spectroscopic parameters. The equilibrium distance, \(R_{e}\), is defined by
\[\left.\frac{dV_{\text{int}}(R)}{dR}\right|_{R_{e}}=0\,, \tag{5}\]
and the corresponding potential energy well depth, \(D_{e}\), is given by
\[D_{e}=-V_{\text{int}}(R_{e})\,. \tag{6}\]
The harmonic constant, \(\omega_{e}\), of the interaction potential is calculated at the equilibrium distance as
\[\omega_{e}=\sqrt{\frac{1}{\mu}\left.\frac{d^{2}V_{\text{int}}(R)}{dR^{2}} \right|_{R_{e}}}\,, \tag{7}\]
where \(\mu\) represents the reduced mass. The equilibrium rotational constant, \(B_{e}\), is given as
\[B_{e}=\frac{\hbar^{2}}{2\mu R_{e}^{2}}\,. \tag{8}\]
Finally, the first anharmonicity constant, \(\omega_{e}x_{e}\approx-Y_{20}\), is obtained by fitting the Dunham expansion to the lowest vibrational energy levels obtained with the discrete variable representation (DVR).
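The extraction of the spectroscopic constants from a tabulated potential can be sketched as follows (Eqs. (5)-(8) in atomic units, with \(\hbar=1\)); the grid, reduced mass, and function names are assumptions, and the DVR-based fit of the anharmonicity constant is not included.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

HARTREE_TO_INV_CM = 219474.6313632  # hartree -> cm^-1


def spectroscopic_constants(r_bohr, v_hartree, mu_au):
    """R_e, D_e, omega_e, and B_e from a cubic-spline interpolation of a
    tabulated potential energy curve; mu_au is the reduced mass in
    electron masses (atomic units)."""
    spline = CubicSpline(r_bohr, v_hartree)
    r_e = minimize_scalar(spline, bounds=(r_bohr[0], r_bohr[-1]),
                          method="bounded").x          # Eq. (5)
    d_e = -spline(r_e)                                  # Eq. (6)
    k_e = spline(r_e, 2)                                # d^2 V / dR^2 at R_e
    omega_e = np.sqrt(k_e / mu_au)                      # Eq. (7)
    b_e = 1.0 / (2.0 * mu_au * r_e**2)                  # Eq. (8)
    return (r_e, d_e * HARTREE_TO_INV_CM,
            omega_e * HARTREE_TO_INV_CM, b_e * HARTREE_TO_INV_CM)
```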
The permanent electric dipole moments \(d(R)\) and static electric dipole polarizabilities \(\alpha(R)\) are calculated with the finite field approach using the CCSD(T) method and the aug-cc-pwCV5Z+bf basis sets. The \(z\) axis is chosen along the internuclear axis and oriented from the more electronegative atom to the less electronegative one. The value of the external field perturbation equal to \(\pm 0.0005\) a.u. is used.
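A minimal sketch of the corresponding finite-field differentiation is shown below, assuming the expansion \(E(F)\approx E(0)-dF-\tfrac{1}{2}\alpha F^{2}\) of the total energy in the applied field along the chosen axis; the helper name is hypothetical.

```python
def finite_field_dipole_polarizability(e_plus, e_zero, e_minus, field=5.0e-4):
    """Central-difference estimates of the dipole moment and of the
    polarizability component along the applied field (atomic units),
    from total energies at field strengths +F, 0, and -F."""
    dipole = -(e_plus - e_minus) / (2.0 * field)
    polarizability = -(e_plus - 2.0 * e_zero + e_minus) / field**2
    return dipole, polarizability
```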
All electronic structure calculations are performed with the Molpro[444, 445] and MRCC [446] packages of _ab initio_ programs. Atomic masses of the most abundant isotopes are assumed.
## III Results and discussion
The interaction of two open-shell alkali-metal atoms in their ground doublet \({}^{2}S\) electronic state results in the ground molecular electronic state of the singlet \(X^{1}\Sigma^{+}\) symmetry and the first excited electronic state of the triplet \(a^{3}\Sigma^{+}\) symmetry.
Figure 1: Potential energy curves of all the alkali-metal diatomic molecules in the ground \(X^{1}\Sigma^{+}\) electronic state.
The interaction of two closed-shell alkaline-earth-metal atoms in their ground single \({}^{1}S\) electronic state results in the ground molecular electronic state of the singlet \(X^{1}\Sigma^{+}\) symmetry. Finally, the interaction of an alkali-metal atom with an alkaline-earth-metal atom results in the ground molecular electronic state of the doublet \(X^{2}\Sigma^{+}\) symmetry. Additionally, homonuclear dimers have the gerade and ungerade symmetries in the singlet (\(X^{1}\Sigma_{g}^{+}\)) and triplet (\(a^{3}\Sigma_{u}^{+}\)) states, respectively.
### Potential energy curves
The computed potential energy curves for the alkali-metal diatomic molecules in the \(X^{1}\Sigma^{+}\) and \(a^{3}\Sigma^{+}\) electronic states are presented in Fig. 1 and Fig. 2, respectively, for the alkali-metal-alkaline-earth-metal diatomic molecules in the \(X^{2}\Sigma^{+}\) electronic state - in Fig. 3, and for the alkaline-earth-metal diatomic molecules in the \(X^{1}\Sigma^{+}\) state - in Fig. 4. Calculations are performed for all the combinations of the alkali-metal (Li, Na, K, Rb, Cs, Fr) and alkaline-earth-metal (Be, Mg, Ca, Sr, Ba, Ra) atoms. All potential energy curves show a smooth behavior with well-defined minima. The corresponding spectroscopic characteristics such as the equilibrium interatomic distance \(R_{e}\), well depth \(D_{e}\), harmonic constant \(\omega_{e}\), first anharmonicity constant \(\omega_{e}x_{e}\), and rotational constant \(B_{e}\) are collected in Tables 4-7, along with available experimental spectroscopic data.
#### iv.1.1 Alkali-metal diatomic molecules
The alkali-metal diatomic molecules in their ground \(X^{1}\Sigma^{+}\) electronic state are relatively strongly chemically bound. Formally, they are covalently bound with a bond order of one. The calculated well depths systematically decrease with increasing the atomic number of the alkali-metal atoms and range from 3333 cm\({}^{-1}\) for Fr\({}_{2}\) to 8509 cm\({}^{-1}\) for Li\({}_{2}\) (with an average of 3892 cm\({}^{-1}\) calculated over a set of all molecules of their type). The corresponding equilibrium distances systematically increase with increasing the atomic number of the
Figure 2: Potential energy curves of all the alkali-metal diatomic molecules in the \(a^{3}\Sigma^{+}\) electronic state.
alkali-metal atoms and take values between 5.05 bohr for Li\({}_{2}\) and 8.80 bohr for Cs\({}_{2}\) (with an average of 7.31 bohr).
The alkali-metal diatomic molecules in their first excited \(a^{3}\Sigma^{+}\) electronic state with two valence electrons spin-polarized are weakly bound van der Waals complexes. Formally, they are not chemically bound with a bond order of zero and are stabilized by the dispersion interaction. The calculated well depths range from 198 cm\({}^{-1}\) for Fr\({}_{2}\) to 334 cm\({}^{-1}\)
for Li\({}_{2}\) (with an average of 239 cm\({}^{-1}\)), displaying less variation and regularity than in their ground \(X^{1}\Sigma^{+}\) states. The corresponding equilibrium distances systematically increase with increasing the atomic number of the alkali-metal atoms and take values between 7.88 bohr for Li\({}_{2}\) and 12.51 bohr for Fr\({}_{2}\) (with an average of 10.78 bohr). Thus, the alkali-metal diatomic molecules in the \(a^{3}\Sigma^{+}\) state have well depths more than an order of magnitude smaller and equilibrium distances almost twice as large as those in their \(X^{1}\Sigma^{+}\) states.
All the alkali-metal diatomic molecules in the \(X^{1}\Sigma^{+}\) and \(a^{3}\Sigma^{+}\) electronic states consisting of stable isotopes (15 atomic combinations) have already been studied experimentally (see Table 1). Experimental high-resolution spectroscopic studies typically provide potential energy curves with accuracy better than 1 cm\({}^{-1}\). To evaluate the accuracy of our results, we compare our spectroscopic constants with the available experimental values in Tables 4 and 5 and calculate the root-mean-square error (RMSE) between accurate experimental and present well depths and equilibrium distances.
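The RMSE values quoted below can be reproduced with a few lines of Python; treating the percentages as relative RMSEs is an assumption about the convention rather than a statement of it.

```python
import numpy as np

def rmse(calculated, experimental):
    """Absolute and relative (percent) root-mean-square errors between
    calculated and experimental spectroscopic constants."""
    calc = np.asarray(calculated, dtype=float)
    expt = np.asarray(experimental, dtype=float)
    diff = calc - expt
    absolute = np.sqrt(np.mean(diff**2))
    relative = 100.0 * np.sqrt(np.mean((diff / expt) ** 2))
    return absolute, relative
```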
For the \(X^{1}\Sigma^{+}\) state, the RMSE of the well depths is 54 cm\({}^{-1}\) (1.3 %). The smallest error of 8 cm\({}^{-1}\) (0.1 %) is for Li\({}_{2}\) and the largest error of 112 cm\({}^{-1}\) (3 %) is for Cs\({}_{2}\). Only for the heaviest Cs\({}_{2}\) dimer, the error is larger than 100 cm\({}^{-1}\), and only for RbCs and Cs\({}_{2}\), the error is larger than 2 %. For most alkali-metal molecules in the \(X^{1}\Sigma^{+}\) state, the error is smaller than 1 %. The RMSE of the equilibrium distances is 0.012 bohr (0.16 %). The smallest error of 0.001 bohr (0.03 %) is for Li\({}_{2}\) and the largest error of 0.02 bohr (0.25 %) is for NaRb and Rb\({}_{2}\). The theoretical and experimental harmonic and anharmonicity constants agree with each other very well, mostly within 0.1-0.2 cm\({}^{-1}\).
For the \(a^{3}\Sigma^{+}\) state, the RMSE of the well depths is 8.5 cm\({}^{-1}\) (3.5 %). The smallest error of \(<1\) cm\({}^{-1}\) (\(<0.3\) %) is for Li\({}_{2}\) and the largest error of 16 cm\({}^{-1}\) (5.7 %) is for Cs\({}_{2}\). The RMSE of the equilibrium distances is 0.07 bohr (0.6 %). The smallest error of \(<0.001\) bohr (\(<0.01\) %) is for Li\({}_{2}\) and the largest error of 0.22 bohr (1.9 %) is for Cs\({}_{2}\). Molecules containing Cs have the largest errors, but the error is larger than 0.1 bohr for the heaviest Cs\({}_{2}\) dimer only. The agreement for other spectroscopic constants is also good.
Figure 4: Potential energy curves of all the alkaline-earth-metal diatomic molecules in the ground \(X^{1}\Sigma^{+}\) electronic state.
All the alkali-metal diatomic molecules have already been studied theoretically (see Table 1). The results for the \(X^{1}\Sigma^{+}\) states, reported within the last two decades, typically agree with the experimental values within 200 cm\({}^{-1}\) (5 %), with some of them having errors smaller than 100 cm\({}^{-1}\) (3 %). For example, the calculations for 10 heteronuclear molecules in the \(X^{1}\Sigma^{+}\) state reported in Ref. [116] reached the RMSE of the well depths of 84 cm\({}^{-1}\) and the RMSE of the equilibrium distances of 0.014 bohr, which are by 55 % and 15 % larger, respectively, than the RMSEs reported in this study. The results for the \(a^{3}\Sigma^{+}\) states, reported within the last two decades, typically agree with the experimental values within 50 cm\({}^{-1}\) (20 %), with some of them having errors smaller than 10 cm\({}^{-1}\) (5 %). For example, the highly-accurate calculations for the lightest Li\({}_{2}\)[106] and NaLi [121] molecules in the \(a^{3}\Sigma^{+}\) state reached recently the spectroscopic accuracy (\(<1\)cm\({}^{-1}\)). The observed overall agreement of our results with the experimental values for the alkali-metal molecules confirms the accuracy
\begin{table}
\begin{tabular}{l r r r r r r r r r} Molecule & \(R_{e}\) (bohr) & \(D_{e}\) (cm\({}^{-1}\)) & \(\omega_{e}\) (cm\({}^{-1}\)) & \(\omega_{e}\chi_{e}\) (cm\({}^{-1}\)) & \(B_{e}\) (cm\({}^{-1}\)) & \(d_{e}\) (D) & \(\alpha_{e}^{\parallel}\) (a.u.) & \(\alpha_{e}^{\perp}\) (a.u.) & Ref. \\ \hline Li\({}_{2}\) & 5.053 & 8509 & 351.2 & 2.59 & 0.6722 & 0 & 296.2 & 165.7 & This work \\ & 5.051225(4) & 8516.774(4) & 351.4 & 2.58 & 0.6726 & 0 & & & [87, 90] \\ LiNa & 5.464 & 7085 & 256.4 & 1.69 & 0.3751 & 0.47 & 342.3 & 185.6 & This work \\ & 5.45859 & 7103.5(1.0) & 256.46 & 1.58 & 0.3758 & 0.47(3) & & & [111, 112, 447] \\ LiK & 6.269 & 6197 & 211.9 & 1.21 & 0.2576 & 3.36 & 480.3 & 245.5 & This work \\ & 6.2722(1) & 6216.9(1) & 211.91(2) & 1.224(6) & 0.2576(2) & 3.45(1) & & & [126, 448, 449] \\ LiRb & 6.558 & 5902 & 195.0 & 1.14 & 0.2160 & 4.00 & 531.5 & 261.6 & This work \\ & 6.5501(1) & 5928(4) & 195.18 & 1.096 & 0.2165 & 4.00(1) & & & [130, 449] \\ LiCs & 6.943 & 5817 & 184.4 & 1.05 & 0.1874 & 5.28 & 605.7 & 284.6 & This work \\ & 6.9317 & 5875.5(1) & 184.70 & 1.000 & 0.1880 & 5.5(2) & & & [40, 134] \\ LiFr & 6.976 & 5460 & 180.7 & 1.79 & 0.1819 & 4.24 & 605.1 & 276.5 & This work \\ Na\({}_{2}\) & 5.827 & 5993 & 159.5 & 0.90 & 0.1542 & 0 & 378.8 & 204.7 & This work \\ & 5.8191 & 6022.029(5) & 159.18 & 0.760 & 0.1547 & 0 & & & [140, 143] \\ NaK & 6.629 & 5248 & 120.2 & 0.20 & 0.0947 & 2.71 & 521.8 & 274.7 & This work \\ & 6.6122 & 5273.6(1) & 124.03 & 0.496 & 0.0952 & 2.72(6) & & & [163, 164, 449] \\ NaRb & 6.902 & 4993 & 105.7 & 0.16 & 0.0700 & 3.30 & 574.3 & 293.4 & This work \\ & 6.8850 & 5030.50(5) & 106.85 & 0.380 & 0.0702 & 3.2(1) & & & [29, 175, 177] \\ NaCs & 7.295 & 4897 & 100.0 & 0.39 & 0.0577 & 4.52 & 665.9 & 323.7 & This work \\ & 7.2766 & 4954.24(1) & 98.88 & 0.321 & 0.0580 & 4.8(2) & & & [186, 187, 449] \\ NaFr & 7.301 & 4599 & 95.8 & 0.20 & 0.0542 & 3.51 & 646.7 & 309.7 & This work \\ K\({}_{2}\) & 7.405 & 4415 & 92.6 & 0.32 & 0.0564 & 0 & 705.7 & 372.0 & This work \\ & 7.415(2) & 4450.67(7) & 92.40 & 0.324 & 0.0562 & 0 & & & [196, 197] \\ KRb & 7.687 & 4171 & 76.2 & 0.28 & 0.0382 & 0.63 & 762.4 & 398.6 & This work \\ & 7.6868(1) & 4217.82(1) & 75.85 & 0.230 & 0.0381 & 0.57(2) & & & [29, 208, 209] \\ KCs & 8.097 & 3996 & 68.5 & 0.20 & 0.0305 & 1.86 & 879.0 & 445.2 & This work \\ & 8.096(13) & 4069.3(1.5) & 68.394(3) & 0.193(1) & 0.03048(5) & & & & [222] \\ KFr & 8.095 & 3838 & 65.2 & 0.16 & 0.0277 & 0.91 & 833.3 & 421.6 & This work \\ Rb\({}_{2}\) & 7.962 & 3933 & 57.5 & 0.01 & 0.0223 & 0 & 821.6 & 428.1 & This work \\ & 7.9556(1) & 3993.593(3) & 57.75 & 0.125 & 0.0224 & 0 & & & [233, 236] \\ RbCs & 8.379 & 3750 & 50.1 & 0.11 & 0.0166 & 1.21 & 939.7 & 479.2 & This Work \\ & 8.3660 & 3836.37(0) & 50.01 & 0.110 & 0.0166 & 1.23(1) & & & [24, 249, 250] \\ RbFr & 8.370 & 3619 & 46.0 & 0.14 & 0.0140 & 0.29 & 891.8 & 452.8 & This work \\ Cs\({}_{2}\) & 8.801 & 3538 & 42.0 & 0.081 & 0.0117 & 0 & 1061.0 & 538.4 & This work \\ & 8.7781 & 3649.9(5) & 42.021 & 0.082 & 0.0117 & 0 & & & [258] \\ CsFr & 8.779 & 3440 & 37.7 & 0.077 & 0.0094 & 0.88 & 1008.5 & 507.8 & This work \\ Fr\({}_{2}\) & 8.768 & 3333 & 32.8 & 0.04 & 0.0070 & 0 & 953.9 & 477.6 & This work \\ \end{tabular}
\end{table}
Table 4: Spectroscopic constants of the alkali-metal diatomic molecules in the ground \(X^{1}\Sigma^{+}\) electronic state: equilibrium interatomic distance \(R_{e}\), well depth \(D_{e}\), harmonic constant \(\omega_{e}\), first anharmonicity constant \(\omega_{e}x_{e}\), rotational constant \(B_{e}\), permanent electric dipole moments \(d_{e}\), and parallel \(\alpha_{e}^{\parallel}\) and perpendicular \(\alpha_{e}^{\perp}\) components of the static electric dipole polarizability. Available experimental values are also collected.
and sufficiency of the employed methodology for describing the electronic structure of such molecules, is better on average than in previous theoretical studies, cross-validates the accuracy of the different approaches, and allows us to establish the present results as a benchmark for future, more accurate calculations.
#### iv.2.2 Alkali-metal-alkaline-earth-metal diatomic molecules
The alkali-metal-alkaline-earth-metal diatomic molecules in their ground \(X^{2}\Sigma^{+}\) electronic state are weakly to moderately bound and have the van der Waals character. Formally, they are chemically bound with a bond order of half. They are radicals because of their single unpaired valence electron. The calculated well depths range from 569 cm\({}^{-1}\) for FrMg to 3335 cm\({}^{-1}\) for LiBa (with an average of 1311 cm\({}^{-1}\)). They
\begin{table}
\begin{tabular}{l r r r r r r r r r r} Molecule & \(R_{e}\) (bohr) & \(D_{e}\) (cm\({}^{-1}\)) & \(T_{e}\) (cm\({}^{-1}\)) & \(\omega_{e}\) (cm\({}^{-1}\)) & \(\omega_{e}\) (cm\({}^{-1}\)) & \(\omega_{e}\) (cm\({}^{-1}\)) & \(B_{e}\) (cm\({}^{-1}\)) & \(d_{e}\) (D) & \(\alpha_{e}^{\perp}\) (a.u.) & \(\alpha_{e}^{\perp}\) (a.u.) & Ref \\ \hline Li\({}_{2}\) & 7.880 & 333.6 & 8175 & 65.1 & 3.31 & 0.2768 & 0 & 696.0 & 252.4 & This work \\ & 7.88017(6) & 333.780(6) & 8183.0 & 65.2(2) & 3.19(4) & 0.2764 & 0 & & & [88, 96] \\ LiNa & 8.910 & 238 & 6847 & 42.12 & 1.80 & 0.1413 & 0.18 & 560.7 & 270.8 & This work \\ & & 229.753 & 6874 & & & & & & & [113] \\ LiK & 9.402 & 283 & 5914 & 43.5 & 1.70 & 0.1146 & 0.32 & 817.8 & 369.9 & This work \\ & 9.43(17) & 287(4) & 5930(4) & 44.2(1.5) & & & & & & [125] \\ LiRb & 9.718 & 276 & 5626 & 41.0 & 1.60 & 0.0984 & 0.37 & 845.2 & 399.1 & This work \\ & 9.71(2) & 278(4) & 5650(8) & & & & & & & [130] \\ LiCs & 9.991 & 295 & 5522 & 40.7 & 1.30 & 0.0905 & 0.45 & 985.2 & 461.5 & This work \\ & 9.916 & 309(10) & 5566(10) & 44.51 & 1.25 & & & & & [134] \\ LiFr & 10.239 & 250 & 5209 & 37.7 & 1.5 & 0.0844 & 0.21 & 813.0 & 415.7 & This work \\ Na\({}_{2}\) & 9.736 & 185 & 5808 & 25.2 & 0.88 & 0.0552 & 0 & 491.6 & 282.0 & This work \\ & 9.719(2) & 172.91(4) & 5849.1 & 23.79 & 0.643 & 0.0565 & 0 & & & [142, 148] \\ NaK & 10.301 & 206 & 5042 & 23.5 & 0.97 & 0.0392 & 0.03 & 697.9 & 387.3 & This work \\ & 10.280 & 207.86(2) & 5065.8 & 23.01 & 0.622 & 0.0394 & & & & [163] \\ NaRb & 10.615 & 201 & 4792 & 19.9 & 0.47 & 0.0295 & 0.06 & 728.0 & 416.0 & This work \\ & 10.583 & 203.36(5) & 4827.1 & 18.86 & & 0.0282 & & & [177, 178] \\ NaCs & 10.902 & 212 & 4686 & 19.30 & 0.44 & 0.0259 & 0.09 & 837.71 & 482.0 & This work \\ & 10.856 & 217.17(1) & 4737.1 & 19.57 & 0.44 & 0.0261 & & & & [187] \\ NaFr & 11.092 & 186 & 4413 & 17.75 & 0.40 & 0.0235 & 0.10 & 710.4 & 430.6 & This work \\ K\({}_{2}\) & 10.853 & 248 & 4167 & 21.0 & 0.44 & 0.0262 & 0 & 946.3 & 483.8 & This work \\ & 10.8364(2) & 255.02(5) & 4195.7 & 21.63(4) & 0.470 & 0.0260 & 0 & & & [193, 200] \\ KRb & 11.211 & 242 & 3928 & 19.32 & 0.39 & 0.0180 & 0.06 & 972.9 & 512.5 & This work \\ & 11.1549(2) & 249.03(1) & 3968.8 & 18.79 & 0.98 & 0.0181 & & & & [209] \\ KCs & 11.475 & 256 & 3740 & 16.5 & 0.24 & 0.0152 & 0.10 & 1096.1 & 576.0 & This work \\ & 11.4355(2) & 267.14(2) & 3803.0 & 17.52 & 0.306 & 0.0158 & & & [222, 223] \\ KFr & 11.685 & 220 & 3617 & 14.8 & 0.26 & 0.0133 & 0.13 & 935.5 & 530.4 & This work \\ Rb\({}_{2}\) & 11.491 & 235 & 3698 & 13.4 & 0.16 & 0.0107 & 0 & 999.0 & 541.3 & This work \\ & 11.5160(2) & 241.503(3) & 3752.1 & 13.50 & 0.171 & 0.0108 & 0 & & & [235, 236] \\ RbCs & 11.810 & 249 & 3501 & 12.29 & 0.15 & 0.0083 & 0.03 & 1118.2 & 604.9 & This work \\ & 11.7528(2) & 259.34(3) & 3577.0 & 12.55 & 0.15 & 0.00841 & & & & [250] \\ RbFr & 12.010 & 215 & 3535 & 10.3 & 0.03 & 0.0068 & 0.07 & 961.3 & 558.8 & This work \\ Cs\({}_{2}\) & 12.143 & 263 & 3275 & 10.9 & 0.082 & 0.0062 & 0 & 1237.4 & 668.2 & This work \\ & 11.915(2) & 278.58(4) & 3370.7 & 11.37 & 0.184 & 0.00652 & 0 & & [147, 259] \\ CsFr & 12.344 & 226 & 3215 & 9.20 & 0.11 & 0.0048 & 0.07 & 1073.6 & 626.8 & This work \\ Fr\({}_{2}\) & 12.502 & 198 & 3135 & 7.37 & 0.04 & 0.0035 & 0 & 929.3 & 576.0 & This work \\ \end{tabular}
\end{table}
Table 5: Spectroscopic constants of the alkali-metal diatomic molecules in the first triplet \(a^{3}\Sigma^{+}\) electronic state: equilibrium interatomic distance \(R_{e}\), well depth \(D_{e}\), transition energies \(T_{e}\), harmonic constant \(\omega_{e}\), first anharmonicity constant \(\omega_{e}x_{e}\), rotational constant \(B_{e}\), permanent electric dipole moments \(d_{e}\), and parallel \(\alpha_{e}^{\parallel}\) and perpendicular \(\alpha_{e}^{\perp}\) components of the static electric dipole polarizability. Available experimental values are also collected.
systematically decrease with increasing the atomic number of the alkali-metal atoms for a given alkaline-earth-metal atom,
as clearly visible in Fig. 3, but the trend is less regular with changing alkaline-earth-metal atoms. The molecules containing Li are significantly stronger bound than others. The corresponding equilibrium distances systematically increase with increasing the atomic numbers of both the alkali-metal and alkaline-earth-metal atoms and take moderate values between 4.86 bohr for LiBe and 9.98 bohr for RaFr (with an average of 7.94 bohr). The alkali-metal-alkaline-earth-metal molecules have two-to-four times smaller (larger) well depth than the alkali-metal molecules in the \(X^{1}\Sigma^{+}\) (\(a^{3}\Sigma^{+}\)) electronic state, but their equilibrium distances are only slightly larger than those of the ground-state alkali-metal dimers.
Spectroscopic measurements of the rovibrational levels of the ground electronic state were recorded for the LiBe, LiMg, LiCa, KCa, LiSr, KSr, RbSr, and LiBa molecules (see Table 2). Our spectroscopic constants are compared with the available experimental values in Table 6. However, the experimental data did not allow for accurate determination of well depths of corresponding potential energy curves. Nevertheless, our well depths agree within 10 % with the rough experimental estimates for LiBa [325] and LiMg [291] obtained from the extrapolations of the Morse vibrational constants, and within 2 % and 5 % with the experimental estimates guided by previous theoretical calculations for LiCa [302] and RbSr [320], respectively. The theoretical and experimental vibrational and rotational constants agree even better, mostly with differences below 3 %. Therefore, we estimate the uncertainty of the calculated potential energy curves for the alkali-metal-alkaline-earth-metal diatomic molecules to be around 3-6 %.
Most of the alkali-metal-alkaline-earth-metal diatomic molecules have already been studied theoretically (see Table 2). Our results agree well with previous accurate calculations. For example, for the family of the LiSr, NaSr, KSr, RbSr, and CsSr molecules studied in Ref. [314], the average absolute difference between the present and previous well depths is 49 cm\({}^{-1}\) (3.1 %) and between equilibrium distances 0.11 bohr (1.4 %), with all equilibrium distances sys
\begin{table}
\begin{tabular}{l r r r r r r r r r} Molecule & \(R_{e}\) (bohr) & \(D_{e}\) (cm\({}^{-1}\)) & \(\omega_{e}\) (cm\({}^{-1}\)) & \(\omega_{e}\chi_{e}\) (cm\({}^{-1}\)) & \(B_{e}\) (cm\({}^{-1}\)) & \(d_{e}\) (D) & \(\alpha_{e}^{\parallel}\) (a.u.) & \(\alpha_{e}^{\perp}\) (a.u.) & Ref \\ \hline Be\({}_{2}\) & 4.62 & 919 & 270.3 & 25.2 & 0.6255 & 0 & 136.2 & 61.1 & This work \\ & 4.620(1) & 934.8(3) & 270.7 & 26.0 & 0.623 & 0 & & [336, 337] \\ BeMg & 6.20 & 435 & 80.2 & 5.73 & 0.2390 & 0.17 & 173.3 & 94.9 & This work \\ BeCa & 6.53 & 783 & 113.0 & 5.0 & 0.1918 & 0.39 & 320.7 & 169.2 & This work \\ BeSr & 6.93 & 756 & 101.0 & 4.5 & 0.1535 & 0.44 & 374.4 & 208.6 & This work \\ BeBa & 7.16 & 868 & 105.4 & 4.1 & 0.1387 & 0.60 & 489.8 & 279.0 & This work \\ BeRa & 7.62 & 632 & 81.6 & 3.4 & 0.1198 & 0.46 & 427.7 & 259.2 & This work \\ Mg\({}_{2}\) & 7.35 & 434 & 51.9 & 1.8 & 0.0930 & 0 & 214.6 & 124.9 & This work \\ & 7.352 & 430.3(5) & 51.12(4) & 1.64(1) & 0.0929(1) & 0 & & & [368, 370] \\ MgCa & 7.67 & 673 & 60.3 & 1.87 & 0.0683 & 0.07 & 375.5 & 195.7 & This work \\ & 7.632 & 691.5(5) & 60.26(2) & 1.652(6) & 0.06896(2) & & & & [380] \\ MgSr & 8.01 & 676 & 50.2 & 0.57 & 0.0498 & 0.009 & 426.3 & 234.9 & This work \\ MgBa & 8.25 & 758 & 51.4 & 0.87 & 0.0433 & 0.002 & 547.0 & 303.9 & This work \\ MgRa & 8.63 & 614 & 43.8 & 0.88 & 0.0373 & 0.13 & 482.6 & 284.7 & This work \\ Ca\({}_{2}\) & 8.13 & 1050 & 63.4 & 1.16 & 0.0456 & 0 & 574.1 & 260.0 & This work \\ & 8.083 & 1102.08(9) & 64.93(1) & 1.07(3) & 0.0461(1) & 0 & & & [383, 386] \\ CaSr & 8.50 & 1046 & 50.8 & 0.24 & 0.0303 & 0.15 & 645.4 & 295.5 & This work \\ CaBa & 8.76 & 1196 & 51.0 & 0.53 & 0.0253 & 0.21 & 804.1 & 359.6 & This work \\ CaRa & 9.12 & 943 & 42.0 & 0.47 & 0.0213 & 0.43 & 711.1 & 343.1 & This work \\ Sr\({}_{2}\) & 8.88 & 1046 & 39.6 & 0.43 & 0.0174 & 0 & 718.6 & 330.2 & This work \\ & 8.8288(2) & 1081.64(2) & 40.328 & 0.399 & 0.01758 & 0 & & [396, 397] \\ SrBa & 9.15 & 1191 & 37.4 & 0.34 & 0.0134 & 0.007 & 886.1 & 392.9 & This work \\ SrRa & 9.50 & 945 & 28.4 & 0.20 & 0.0106 & 0.31 & 783.5 & 377.3 & This work \\ Ba\({}_{2}\) & 9.43 & 1366 & 34.3 & 0.20 & 0.0098 & 0 & 1080.4 & 453.5 & This work \\ & & 33.2(2) & 0.5(2) & & 0 & & & [406] \\ BaRa & 9.78 & 1068 & 26.7 & 0.21 & 0.0074 & 0.43 & 952.3 & 439.2 & This work \\ Ra\({}_{2}\) & 10.13 & 858 & 20.5 & 0.14 & 0.0052 & 0 & 842.3 & 424.6 & This work \\ \end{tabular}
\end{table}
Table 7: Spectroscopic constants of the alkaline-earth-metal diatomic molecules in the ground \(X^{1}\Sigma^{+}\) electronic state: equilibrium interatomic distance \(R_{e}\), well depth \(D_{e}\), harmonic constant \(\omega_{e}\), first anharmonicity constant \(\omega_{e}x_{e}\), rotational constant \(B_{e}\), permanent electric dipole moments \(d_{e}\), and parallel \(\alpha_{e}^{\parallel}\) and perpendicular \(\alpha_{e}^{\perp}\) components of the static electric dipole polarizability. Available experimental values are also collected.
systematically shorter in Ref. [314]. Slightly worse agreement with the average absolute differences of 90 cm\({}^{-1}\) (6.7 %) for well depths and 0.07 bohr (0.9 %) for equilibrium distances is found when our results are compared with another systematic study of 16 alkali-metal-alkaline-earth-metal combinations presented in Ref. [283]. It is worth mentioning that different methods, basis sets, and pseudopotentials were used in the present work and Refs. [283, 314], thus the observed overall agreement additionally cross-validates the accuracy of different approaches. The differences are a bit larger between the present and some older results, which used smaller basis sets and less accurate wave functions.
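For orientation, harmonic-level spectroscopic constants of the kind compared above follow from a computed potential energy curve in a straightforward way. The Python sketch below (not part of the original work) extracts \(R_{e}\), \(D_{e}\), the harmonic constant \(\omega_{e}\), and the rotational constant \(B_{e}\) from a sampled curve; the Morse curve and reduced mass used as input are made-up stand-ins, not data from this study.

```python
import numpy as np

HARTREE_TO_CM = 219474.6313632   # 1 hartree in cm^-1
AMU_TO_ME = 1822.888486          # 1 atomic mass unit in electron masses

def spectroscopic_constants(R, V, mu_amu):
    """Harmonic-level constants from a potential curve V(R) (hartree, bohr),
    with V -> 0 at dissociation and mu_amu the reduced mass in amu."""
    mu = mu_amu * AMU_TO_ME
    i0 = int(np.argmin(V))
    # parabola through the three points around the minimum
    a, b, c = np.polyfit(R[i0-1:i0+2], V[i0-1:i0+2], 2)
    Re = -b / (2.0*a)
    De = -(a*Re**2 + b*Re + c) * HARTREE_TO_CM      # well depth, cm^-1
    omega_e = np.sqrt(2.0*a / mu) * HARTREE_TO_CM   # sqrt(k/mu) with k = 2a, cm^-1
    Be = HARTREE_TO_CM / (2.0*mu*Re**2)             # rotational constant, cm^-1
    return Re, De, omega_e, Be

# Stand-in Morse curve with made-up parameters (very roughly RbSr-like scale).
Re0, De0, a0, mu_amu = 7.8, 1600.0/HARTREE_TO_CM, 0.55, 43.3
R = np.linspace(6.0, 20.0, 400)
V = De0*(1.0 - np.exp(-a0*(R - Re0)))**2 - De0
print(spectroscopic_constants(R, V, mu_amu))
```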
#### iv.1.3 Alkaline-earth-metal diatomic molecules
The alkaline-earth-metal diatomic molecules in their ground \(X^{1}\Sigma^{+}\) electronic state are weakly bound van der Waals complexes. Formally, they are not chemically bound, having a bond order of zero, and are stabilized by the dispersion interaction. The calculated well depths range from 434 cm\({}^{-1}\) for Mg\({}_{2}\) to 1366 cm\({}^{-1}\) for Ba\({}_{2}\) (with an average of 869 cm\({}^{-1}\)). The trends are not regular with changing the atomic number of the alkaline-earth-metal atoms (see Fig. 4), but generally heavier atoms are stronger bound due to their larger polarizability, as expected for van der Waals systems, with an exception for the unusually strongly bound Be\({}_{2}\) dimer [334]. The corresponding equilibrium distances systematically increase with increasing the atomic number of the alkaline-earth-metal atoms and take moderate values between 4.62 bohr for Be\({}_{2}\) and 10.13 bohr for Ra\({}_{2}\) (with an average of 8.11 bohr). Thus, the alkaline-earth-metal molecules have slightly smaller well depths and slightly larger equilibrium distances than the corresponding alkali-metal-alkaline-earth-metal molecules.
Spectroscopic measurements of the rovibrational levels of the ground electronic state were recorded for the Be\({}_{2}\), Mg\({}_{2}\), MgCa, Ca\({}_{2}\), Sr\({}_{2}\), and Ba\({}_{2}\) molecules (see Table 3), and the accurate dissociation energies were determined for the listed molecules except for Ba\({}_{2}\). Our spectroscopic constants are compared with the available experimental values in Table 7. The RMSE of the well depths is 30 cm\({}^{-1}\) (3.0%). The smallest error of 4 cm\({}^{-1}\) (0.9 %) is for Mg\({}_{2}\) and the largest error of 52 cm\({}^{-1}\) (4.7 %) is for Ca\({}_{2}\). The RMSE of the equilibrium distances is 0.03 bohr (0.4 %). The theoretical and experimental vibrational and rotational constants also mostly agree within 3%. Therefore, we estimate the uncertainty of the calculated potential energy curves for the alkaline-earth-metal diatomic molecules to be around 3-6% with possibly larger values for heavier molecules.
Most of the alkaline-earth-metal diatomic molecules have already been studied theoretically (see Table 3), but detailed or accurate calculations have been reported only for the homonuclear Be\({}_{2}\), Mg\({}_{2}\), Ca\({}_{2}\), Sr\({}_{2}\), and Ba\({}_{2}\) dimers, while heteronuclear combinations were studied using small basis sets and low-level methods. Highly accurate calculations reaching spectroscopic accuracy (\(<1\) cm\({}^{-1}\)) were presented for the lightest Be\({}_{2}\) [357, 343] and recently for Mg\({}_{2}\) [378]. Our results agree within 1 % with these more accurate approaches and establish the most accurate reference for all other alkaline-earth-metal molecules.
### Convergence and accuracy
Estimating the theoretical uncertainties of molecular electronic structure calculations is challenging. A prerequisite is an accurate description of the involved atoms. The performance of the CCSD(T) method with the aug-cc-pwCV5Z basis sets for obtaining properties of the alkali-metal and alkaline-earth-metal atoms was carefully tested in Refs. [246, 430] and demonstrated to reproduce well the most accurate available theoretical or experimental data. For example, the calculated atomic static electric dipole polarizabilities coincide with previous accurate values with the RMSE of 3 \(e^{2}a_{0}^{2}/E_{\text{h}}\) (1%), and the atomic ionization potentials and the lowest \(S\)-\(P\) excitation energies agree with experimental data with the RMSE of 172 cm\({}^{-1}\) (0.5 %) and 109 cm\({}^{-1}\) (0.8 %), respectively.
In this subsection, we analyze the convergence of the interatomic interaction energy calculations with the size of the employed atomic basis sets and the quality of the employed wave-function representation for the selected exemplary KRb, RbSr, and CaSr molecules.
Figure 5 presents the potential energy curves of the KRb molecule in the \(X^{1}\Sigma^{+}\) and \(a^{3}\Sigma^{+}\) electronic states, the RbSr molecule in the \(X^{2}\Sigma^{+}\) electronic state, and the CaSr molecule in the \(X^{1}\Sigma^{+}\) electronic state calculated with several different basis sets and the CCSD(T) method. The family of the aug-cc-pwCV\(n\)Z basis sets is employed, including the largest aug-cc-pwCV5Z basis sets augmented by bond functions (aug-cc-pwCV5Z+bf). We also show the interaction energies obtained with the two largest basis sets, with \(n\)=Q and 5 extrapolated to the complete basis set limit (CBS) using the \(1/n^{3}\) two-point formula [442]. Additionally, the aug-cc-pV5Z basis sets with all electrons correlated (aug-cc-pV5Z/all) and with only valence electrons correlated (aug-cc-pV5Z/val) are used.
The smooth convergence toward the complete basis set limit can be observed, with the aug-pwCVTZ basis sets performing reasonably well and the aug-pwCVQZ basis sets being less than 5 % from the CBS limit. To reach an accuracy better than 5 %, the largest basis sets have to be used. The CBS limit can be approached using either the extrapolation scheme or bond functions. In our test calculations, the extrapolated potential energy curves have slightly larger (by 0.5-2 %) well depths than the curves obtained with the basis sets augmented by bond functions, with bond functions performing best for the alkaline-earth-metal dimer. Additionally, we studied the convergence of the CCSDT correction of Eq. (2) with the basis set size and found the opposite trend. In the final calculations, we decided to use the bond functions to accelerate the convergence toward the CBS limit because this approach is simpler and works very well for weakly-bound van der Waals complexes [450], including other metal-atom molecules [429].
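As an illustration of the extrapolation step, the generic two-point \(1/n^{3}\) formula can be written as a one-line function. The energies below are hypothetical placeholders; whether the formula is applied to the total or to the correlation-only interaction energy follows Ref. [442] as cited in the text.

```python
def cbs_two_point(e_q, e_5, n_q=4, n_5=5):
    """1/n^3 two-point extrapolation to the complete-basis-set limit."""
    return (n_5**3 * e_5 - n_q**3 * e_q) / (n_5**3 - n_q**3)

# Hypothetical well depths (cm^-1) with the n=Q and n=5 basis sets.
print(cbs_two_point(-4150.0, -4200.0))   # CBS estimate, slightly below the n=5 value
```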
The importance of including the inner-shell electron correlation is evident when the potential energy curves obtained
using the aug-cc-pV5Z basis sets with and without correlating \((n-1)s^{2}(n-1)p^{6}\) electrons are compared. The equilibrium distances are larger by around 0.5 bohr for all the studied molecules, and the well depths are significantly larger for all the molecules except ground-state alkali-metal dimers when only valence electrons are correlated. Thus, including the inner-shell electron correlation (core-core and core-valence contributions) is crucial for accurately describing the interatomic interactions in the studied systems. This correlation can be included directly, as in the present work, or effectively using core polarization potentials in calculations with large-core pseudopotentials, which provides results in good agreement with the present values [115].
Figure 6 presents the potential energy curves of the KRb molecule in the \(X^{1}\Sigma^{+}\) and \(a^{3}\Sigma^{+}\) electronic states, the RbSr molecule in the \(X^{2}\Sigma^{+}\) electronic state, and the CaSr molecule in the \(X^{1}\Sigma^{+}\) electronic state calculated at several different levels of theory: RHF, MP2, CISD, CISD+Q, MRCISD, MRCISD+Q, MRCISD+\(p\), MRCISD+Q+\(p\), CCSD, CCSD(T), CCSD(T)+\(\Delta\)T, and CCSD(T)+\(\Delta\)T+\(\Delta\)Q, as introduced in Sec. II. The aug-cc-pwCV5Z+bf basis sets are used in all calculations except for the CCSD(T)+\(\Delta\)T and CCSD(T)+\(\Delta\)T+\(\Delta\)Q curves, which are obtained with Eq. (1), including corrections given by Eqs. (2) and (3).
The mean-field spin-restricted Hartree-Fock method (RHF) does not properly describe the studied molecules because the correlation energy is crucial for binding in all the systems. Low-level methods such as the second-order perturbation theory (MP2) or the configuration interaction method, including single and double excitations (CISD) are presented
Figure 5: Potential energy curves of (a) the KRb molecule in the \(X^{1}\Sigma^{+}\) electronic state, (b) the KRb molecule in the \(a^{3}\Sigma^{+}\) electronic state, (c) the RbSr molecule in the \(X^{2}\Sigma^{+}\) electronic state, and (d) the CaSr molecule in the \(X^{1}\Sigma^{+}\) electronic state computed using different basis sets and the CCSD(T) method. See the text for details.
for completeness and poorly reproduce the correlation energy, although the curve at the MP2 level is accidentally close to the final result for the alkaline-earth-metal dimer. The multireference version of the configuration interaction method with the active space of valence orbitals (\(ns+ns\) in MRCISD) better describes the ground-state alkali-metal and alkali-metal-alkaline-earth-metal molecules but does not improve the CISD results for other molecules, where only a single spin configuration can be constructed from the valence orbitals. Increasing the size of the active space in the configuration interaction approach by including the lowest unoccupied \(p\) orbitals (\(nsnp+nsnp\) in MRCISD+\(p\)) noticeably improves its accuracy. The inclusion of the Davidson correction to the CISD and MRCISD results (CISD+Q and MRCISD+Q, respectively) brings them closer to the most accurate coupled cluster values. However, the MRCI approach is size-inconsistent, and its convergence toward the full configuration interaction results by increasing the CI active space is hard to control and limited by available computing power.
The coupled cluster (CC) method provides the most accurate interaction energies with a well-controlled convergence toward the full configuration interaction results by including higher and higher excitations in the CC wave function in a systematic way. It is also size-consistent. The inclusion of triple excitations is important for all the studied molecules. The so-called gold standard of quantum chemistry, the CCSD(T) method, which provides a good estimate of connected triple excitations perturbatively, performs very well for the alkali-metal molecules. For these molecules, the inclusion of full triple excitations beyond the CCSD(T)
Figure 6: Potential energy curves of (a) the KRb molecule in the \(X^{1}\Sigma^{+}\) electronic state, (b) the KRb molecule in the \(a^{3}\Sigma^{+}\) electronic state, (c) the RbSr molecule in the \(X^{2}\Sigma^{+}\) electronic state, and (d) the CaSr molecule in the \(X^{1}\Sigma^{+}\) electronic state computed with the different electronic-structure methods and the aug-cc-pwCV5Z basis sets. See the text for details.
results (CCSD(T)+\(\Delta\)T) changes the potential well depth by less than \(0.5\,\%\) and is more important at larger distances. In contrast, the CCSDT corrections increase the well depths for the alkali-metal-alkaline-earth-metal and alkaline-earth-metal molecules by more than \(5\,\%\). The importance of full triple excitations for the molecules containing alkaline-earth-metal atoms is not surprising since the CCSDT and CCSDTQ methods are needed to describe the alkali-metal-alkaline-earth-metal and alkaline-earth-metal molecules at the valence full configuration interaction level, respectively. Therefore, the CCSDTQ corrections increase the well depths for the alkaline-earth-metal molecules by an additional \(5\,\%\).
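Schematically, the composite energies discussed here are assembled additively. A minimal sketch is given below, assuming the \(\Delta\)T and \(\Delta\)Q corrections are defined as differences between successive coupled cluster levels; the numbers are invented for illustration, and the basis sets used for each term follow Eqs. (1)-(3) of the paper, which are not reproduced here.

```python
def delta_correction(e_higher_level, e_lower_level):
    """Correction term such as DeltaT = E(CCSDT) - E(CCSD(T))."""
    return e_higher_level - e_lower_level

# Hypothetical single-distance interaction energies in cm^-1 (illustration only).
e_ccsd_t = -1150.0                               # CCSD(T) in the largest basis
dT = delta_correction(-1090.0, -1005.0)          # CCSDT minus CCSD(T), smaller basis
dQ = delta_correction(-1095.0, -1090.0)          # CCSDTQ minus CCSDT, smaller basis
print(e_ccsd_t + dT + dQ)                        # composite CCSD(T)+DeltaT+DeltaQ value
```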
We can assess the importance of higher excitations in the coupled cluster calculations based on statistics for all the studied molecules. The inclusion of the CCSDT corrections of Eq. (2) increases on average the well depths for the alkali-metal molecules in the \(X^{1}\Sigma^{+}\) state by 10 cm\({}^{-1}\) (0.3 %) (except Cs\({}_{2}\), CsFr, Fr\({}_{2}\) with the opposite effect of the same magnitude), for the alkali-metal-alkaline-earth-metal molecules in the \(X^{2}\Sigma^{+}\) state by 88 cm\({}^{-1}\) (7.5 %), and for the alkaline-earth-metal molecules in the \(X^{1}\Sigma^{+}\) state by 52 cm\({}^{-1}\) (8.1 %). For the alkali-metal molecules in the \(a^{3}\Sigma^{+}\) state, the well depths change on average by 1.1 cm\({}^{-1}\) (0.4 %) with an increase (decrease) of binding energy for lighter (heavier) molecules. The inclusion of the CCSDT corrections also increases, on average, the equilibrium distances for the alkali-metal molecules in the \(X^{1}\Sigma^{+}\) state by 0.017 bohr (0.2 %) and decreases on average the equilibrium distances for the alkali-metal molecules in the \(a^{3}\Sigma^{+}\) state by \(<0.01\) bohr (\(<0.1\) %), for the alkali-metal-alkaline-earth-metal molecules in the \(X^{2}\Sigma^{+}\) by 0.04 bohr (0.5 %), and for the alkaline-earth-metal molecules in the \(X^{1}\Sigma^{+}\) state by 0.1 bohr (2.0 %). Additionally, the CCSDTQ corrections of Eq. (3) increase on average the well depths for the alkaline-earth-metal molecules in the \(X^{1}\Sigma^{+}\) state by 44 cm\({}^{-1}\) (5.1 %) and decrease on average their equilibrium distances by 0.03 bohr (0.4 %).
Based on the convergence analysis in this subsection and the comparison with available experimental data in the previous subsection, where a good agreement was observed, we can conclude that the employed electronic structure methods, basis sets, and energy-consistent pseudopotentials properly treat the relativistic effects and reproduce the correlation energy while being close to being converged in the size of the basis function set. We can also estimate that the uncertainties of the employed computational approach of Eq. (1) for calculating potential energy curves are around:
* 25-100 cm\({}^{-1}\) (0.5-2 %) of \(D_{e}\) and 0.005-0.02 bohr (0.05-0.2 %) of \(R_{e}\) for the alkali-metal molecules in the \(X^{1}\Sigma^{+}\) state,
* 5-15 cm\({}^{-1}\) (2-6 %) of \(D_{e}\) and 0.05-0.2 bohr (0.5-2 %) of \(R_{e}\) for the alkali-metal molecules in the \(a^{3}\Sigma^{+}\) state,
* 50-100 cm\({}^{-1}\) (3-6 %) of \(D_{e}\) and 0.01-0.04 bohr (0.1-0.5 %) of \(R_{e}\) for the alkali-metal-alkaline-earth-metal molecules in the \(X^{2}\Sigma^{+}\) state,
* 50-100 cm\({}^{-1}\) (3-6 %) of \(D_{e}\) and 0.01-0.05 bohr (0.1-0.6 %) of \(R_{e}\) for the alkaline-earth-metal molecules in the \(X^{1}\Sigma^{+}\) state.
### Permanent electric dipole moments
The computed permanent electric dipole moments as functions of the internuclear distance for the alkali-metal diatomic molecules in the \(X^{1}\Sigma^{+}\) and \(a^{3}\Sigma^{+}\) electronic states are presented in Fig. 7 and Fig. 8, respectively, for the alkali-metal-alkaline-earth-metal diatomic molecules in the \(X^{2}\Sigma^{+}\) electronic state - in Fig. 9, and for the alkaline-earth-metal diatomic molecules in the \(X^{1}\Sigma^{+}\) state - in Fig. 10. Calculations are performed for all the combinations of the alkali-metal (Li, Na, K, Rb, Cs, Fr) and alkaline-earth-metal (Be, Mg, Ca, Sr, Ba, Ra) atoms. All curves exhibit smooth behavior, and different classes of molecules share similar characteristics. The corresponding values for the equilibrium distances, \(d_{e}\equiv d(R_{e})\), are indicated in the plots and collected in Tables 4-7, along with available experimental data.
The permanent electric dipole moments originate from the uneven distribution of charge in heteronuclear molecules. They describe the response of polar molecules to external static electric fields, resulting in molecular polarization [451]. The dipole-dipole interactions also dominate long-range intermolecular interactions and ultracold collisions between polar molecules and are crucial for their applications [10].
The alkali-metal diatomic molecules in their ground \(X^{1}\Sigma^{+}\) electronic state possess the largest permanent electric dipole moments ranging from 0.3 D for RbFr to 5.3 D for LiCs (with an average of 2.5 D) at their equilibrium distances. Molecules containing Li are the most polar, followed by those containing Na. In contrast, the alkali-metal diatomic molecules in the \(a^{3}\Sigma^{+}\) electronic state have the smallest permanent electric dipole moments ranging from 0.03 D for NaK and RbCs to 0.45 D for LiCs (with an average of 0.15 D) at their equilibrium distances. Triplet-state molecules containing Li are also the most polar.
The alkali-metal-alkaline-earth-metal diatomic molecules in their ground \(X^{2}\Sigma^{+}\) electronic state exhibit intermediate permanent electric dipole moments ranging from 0.08 D for NaBa to 3.6 D for LiBe (with an average of 1.3 D) at their equilibrium distances. Among them, molecules containing Be or Cs are the most polar. Finally, the alkaline-earth-metal diatomic molecules in their ground \(X^{1}\Sigma^{+}\) electronic state have small permanent electric dipole moments ranging from almost zero for combinations containing Mg to 0.6 D for BeBa (with an average of 0.25 D) at their equilibrium distances. In this group, molecules containing Be or Ra are the most polar.
The magnitude and orientation of the permanent electric dipole moments at equilibrium and large interatomic distances correlate with the difference in atomic electronegativities for all the molecules, except for singlet-state CsFr. Electronegativity is a measure of an atom's ability to attract shared electrons to itself. Thus, the permanent electric dipole moments are oriented from more electronegative atoms to less electronegative ones.
The permanent electric dipole moments have been measured for almost all the alkali-metal diatomic molecules consisting of stable isotopes (except KCs) in their ground vibrational level of the ground \(X^{1}\Sigma^{+}\) electronic state [21; 24; 29; 40; 447; 449; 452]. The RMSE of our calculated values is 0.13 D (4.5 %). No experimental measurements of permanent electric dipole moments have been reported for other classes of molecules studied in this work.
A part of theoretical studies collected in Tables 1-3 reported calculations of permanent electric dipole moments alongside potential energy curves at different levels of theory, generally in agreement with the present results. Here, we compare our values with the previous most accurate and systematic studies. For the 10 ground-state alkali-metal molecules consisting of stable isotopes, the average absolute differences between the present and previous values are 0.1 D (4.8 %) [115], 0.04 D (2.5 %) [116], and 0.09 D (4.7 %) [453]. For the 16 lightest ground-state alkali-metal-alkaline-earth molecules, the average absolute differences between the present and previous values are 0.15 D (13 %) [283] and 0.08 D (6.6 %) [454]. The average absolute differences between the present results and values for LiSr, NaSr, KSr, RbSr, and CsSr reported in Ref. [314] are 0.08 D (7.9 %). It is worth mentioning that different methods, basis sets, and pseudopotentials were used in the present work and Refs. [115; 283; 314]. Furthermore, the authors of Refs. [453; 454] included the relativistic effects directly in all-electron calculations in contrast to our scalar relativistic pseudopotentials. Thus the observed overall agreement additionally cross-validates the accuracy of different approaches. To the best of our knowledge, the calculations of permanent electric dipole moments have not been previously reported for alkaline-earth molecules. We estimate the uncertainty of our permanent electric dipole moments to be around 5 %.
Figure 7: Permanent electric dipole moment curves of the alkali-metal diatomic molecules in the \(X^{1}\Sigma^{+}\) electronic state. The points indicate values for equilibrium distances.
### Static electric dipole polarizabilities
The polarizability tensor of \(\Sigma\)-state molecules has two independent components: parallel \(\alpha^{\parallel}(R)\equiv\alpha^{zz}(R)\) and perpendicular \(\alpha^{\perp}(R)\equiv\alpha^{xx}(R)=\alpha^{yy}(R)\) ones with \(z\) axis chosen along the internuclear axis in a molecule-fixed reference frame. The calculated static electric dipole polarizabilities as functions of the internuclear distance for the alkali-metal diatomic molecules in the \(X^{1}\Sigma^{+}\) and \(a^{3}\Sigma^{+}\) electronic states, the alkali-metal-alkaline-earth-metal diatomic molecules in the \(X^{2}\Sigma^{+}\) electronic state, and the alkaline-earth-metal diatomic molecules in the \(X^{1}\Sigma^{+}\) state are provided in the Supplemental Material [409]. Calculations are performed for all the combinations of the alkali-metal (Li, Na, K, Rb, Cs, Fr) and alkaline-earth-metal (Be, Mg, Ca, Sr, Ba, Ra) atoms. The corresponding values for the equilibrium distances, \(\alpha^{\parallel}_{e}\equiv\alpha^{\parallel}(R_{e})\) and \(\alpha^{\perp}_{e}\equiv\alpha^{\perp}(R_{e})\), are collected in Tables 4-7. The polarizabilities are reported in atomic units of \(e^{2}a_{0}^{2}/E_{\text{h}}\) throughout this paper.
At large internuclear distances, the polarizabilities approach their asymptotic behavior given by the atomic polarizabilities \(\alpha_{A}\) and \(\alpha_{B}\)[455]
\[\begin{split}\alpha^{\parallel}(R)&\approx \alpha_{A}+\alpha_{B}+\frac{4\alpha_{A}\alpha_{B}}{R^{3}}+\frac{4(\alpha_{A}+ \alpha_{B})\alpha_{A}\alpha_{B}}{R^{6}}\,,\\ \alpha^{\perp}(R)&\approx\alpha_{A}+\alpha_{B}- \frac{2\alpha_{A}\alpha_{B}}{R^{3}}+\frac{(\alpha_{A}+\alpha_{B})\alpha_{A} \alpha_{B}}{R^{6}}\,.\end{split} \tag{9}\]
Two independent polarizability tensor components can also be transformed to the isotropic \(\bar{\alpha}(R)\) and anisotropic \(\Delta\alpha(R)\) ones using
\[\begin{split}\bar{\alpha}(R)&=\frac{2\alpha^{\perp }(R)+\alpha^{\parallel}(R)}{3}\,,\\ \Delta\alpha(R)&=\alpha^{\parallel}(R)-\alpha^{ \perp}(R)\,.\end{split} \tag{10}\]
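A short numerical sketch of Eqs. (9) and (10) is given below; the atomic polarizabilities entering it are rough placeholder values (of the order of the K and Rb atomic polarizabilities) and serve only to illustrate the long-range behaviour.

```python
import numpy as np

def alpha_parallel(R, aA, aB):
    # Eq. (9), parallel component at large internuclear distances (atomic units)
    return aA + aB + 4.0*aA*aB/R**3 + 4.0*(aA + aB)*aA*aB/R**6

def alpha_perpendicular(R, aA, aB):
    # Eq. (9), perpendicular component
    return aA + aB - 2.0*aA*aB/R**3 + (aA + aB)*aA*aB/R**6

def iso_aniso(a_par, a_perp):
    # Eq. (10): isotropic and anisotropic combinations
    return (2.0*a_perp + a_par)/3.0, a_par - a_perp

# Placeholder atomic polarizabilities (a.u.), roughly K- and Rb-like in size.
aA, aB = 290.0, 320.0
R = np.linspace(10.0, 40.0, 4)          # internuclear distances in bohr
print(iso_aniso(alpha_parallel(R, aA, aB), alpha_perpendicular(R, aA, aB)))
```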
The polarizability describes the molecular response to an electric field in the second order of perturbation theory. For
Figure 8: Permanent electric dipole moment curves of the alkali-metal diatomic molecules in the \(a^{3}\Sigma^{+}\) electronic state. The points indicate values for equilibrium distances.
Figure 9: Permanent electric dipole moment curves of the alkali-metal–alkaline-earth-metal diatomic molecules in the \(X^{2}\Sigma^{+}\) electronic state. The points indicate values for equilibrium distances.
example, optical dipole trapping is governed by the isotropic polarizability \(\bar{\alpha}\), while the laser-induced molecular alignment is controlled by its anisotropic component \(\Delta\alpha\)[451]. Molecular polarizabilities are also useful in evaluating long-range intermolecular interactions [456].
The calculated polarizabilities for all the studied molecules are well described by the formulas in Eq. (9) at intermediate and large internuclear distances. For the most weakly-bound alkali-metal dimers in the \(a^{3}\Sigma^{+}\) state, these formulas also apply around their equilibrium distances, while the more strongly-bound alkali-metal dimers in the \(X^{1}\Sigma^{+}\) state have smaller polarizabilities at equilibrium distances, influenced by stronger chemical bonding of a polarized-covalent nature. In general, the magnitude of the molecular polarizabilities correlates with the atomic polarizabilities, which increase with increasing the atomic number of the alkali-metal and alkaline-earth-metal atoms, with an exception for Fr and Ra due to the contraction of electronic orbitals by the relativistic effects. The parallel components at equilibrium distances are around twice larger than the perpendicular ones for all atomic combinations.
The polarizabilities have been measured only for a few ground-state alkali-metal molecules [447, 449, 457, 460, 461, 462]. Our results agree with the experimental values within their uncertainties. For example, for LiNa, the calculated isotropic and anisotropic polarizabilities of \(\bar{\alpha}_{e}=237.8\) and \(\Delta\alpha_{e}=156.6\) can be compared with the experimental ones of \(\bar{\alpha}_{0}=270(34)\) and \(\Delta\alpha_{0}=162(13)\) from Ref. []. Our isotropic values \(\bar{\alpha}_{e}\) of 209.2, 262.7, 483.2, 559.2, 712.6, 357.1, and 589.8 for Li\({}_{2}\), Na\({}_{2}\), K\({}_{2}\), Rb\({}_{2}\), Cs\({}_{2}\), NaK, and KCs, respectively, agree with the corresponding experimental estimates of 216(20), 256(20), 499(40), 533(40), 702(54), 344(20), and 601(34) from Ref. [459]. The experimental values for the molecules containing the alkaline-earth-metal atoms are even more scarce. For the Ba\({}_{2}\) dimer, the ratio of the polarizabilities of the ground state molecules and separate atoms was measured to be 1.30(13) [463], which agrees with our value of 1.21.
Figure 10: Permanent electric dipole moment curves of the alkaline-earth-metal diatomic molecules in the \(X^{1}\Sigma^{+}\) electronic state. The points indicate values for equilibrium distances.
The present polarizabilities agree well, within a few percent, with previous systematic computational studies employing different methods, basis sets, and pseudopotentials presented for the alkali-metal molecules in Refs. [464, 465, 453] and the alkali-metal-alkaline-earth-metal molecules in Refs. [283, 307, 454]. The agreement is also satisfactory with other calculations reported in some older works, which used smaller basis sets and less accurate wave functions [466, 467, 468, 379].
## IV Summary and conclusions
Motivated by the growing experimental interest in producing and using ultracold gases of molecules that contain different alkali-metal or alkaline-earth-metal atoms, we have conducted a comprehensive theoretical study on the ground-state electronic properties of such molecules. We have calculated the interaction energies, permanent electric dipole moments, and static electric dipole polarizabilities as functions of the internuclear distance for all 78 possible homonuclear and heteronuclear diatomic combinations of alkali-metal (Li, Na, K, Rb, Cs, Fr) and alkaline-earth-metal (Be, Mg, Ca, Sr, Ba, Ra) atoms. We have employed the hierarchy of coupled cluster methods up to CCSDTQ with large Gaussian basis sets and small-core relativistic energy-consistent pseudopotentials. The inclusion of full triple and quadruple excitations in the coupled cluster method to obtain potential energy curves has allowed for the description of valence electrons at the full configuration interaction level for all molecules. Thus, results for three classes of experimentally relevant molecules have been presented at a consistent level of theory. Corresponding spectroscopic constants have been collected. We have computed the electronic properties of some molecules, such as those containing francium and radium, for the first time. The permanent electric dipole moments of heteronuclear alkaline-earth molecules have also been presented for the first time. We have analyzed the convergence, estimated computational uncertainties, and compared previous experimental and theoretical data with the present values.
The presented results can serve as a reference and theoretical benchmark for future potentially more accurate electronic-structure computations [121], including both ground and excited electronic states. The potential energy curves can be used to obtain rovibrational levels to guide and explain the formation and new spectroscopic measurements for experimentally unexplored atomic combinations. Our interaction potentials can also serve as a starting point for fitting curves to future accurate experimental rovibrational energy levels for molecules containing alkaline-earth-metal atoms [320].
The calculated permanent electric dipole moments and static electric dipole polarizabilities can be employed to construct long-range intermolecular interaction potentials [469]. They can also be used to describe a molecular response to static electric and off-resonant laser fields [451], respectively, and, in general, to design and guide new formation, control, and manipulation schemes [470, 471].
Finally, the determined well depths and related dissociation energies can allow for assessing the energetics of chemical reactions between ground-state molecules and in atom-molecule collisions. The chemical reactivity of the alkali-metal diatomic molecules is well understood [472, 473, 474]. On the other hand, the reactivity of the alkali-metal-alkaline-earth-metal (except RbSr [475]) and alkaline-earth-metal molecules has not been theoretically studied, while fast chemical reactions between ground-state Sr\({}_{2}\) molecules were recently observed [41]. The presented data can directly provide the energy changes for atom-exchange chemical reactions in molecular gases and atom-molecule mixtures. They can also be used to evaluate the energy changes for trimer formation chemical reactions if binding energies of triatomic molecules are calculated.
Full potential energy curves, permanent electric dipole moments, and static electric dipole polarizabilities as functions of the interatomic distance in a numerical form are collected in the Supplemental Material [409].
###### Acknowledgements.
We gratefully acknowledge the National Science Centre Poland (grant no. 2020/38/E/ST2/00564) for financial support and Poland's high-performance computing infrastructure PLGrid (HPC Centers: ACK Cyfronet AGH) for providing computer facilities and support within computational grant no. PLG/2021/015237.
|
2308.15042
|
Quantum phases of the biased two-chain-coupled Bose-Hubbard Ladder
|
We investigate the quantum phases of bosons in a two-chain-coupled ladder.
This bosonic ladder is generally in a biased configuration, meaning that the
two chains of the ladder can have dramatically different on-site interactions
and potential energies. Adopting the numerical density-matrix
renormalization-group method, we analyze the phase transitions in various
parameter spaces. We find signatures of both insulating-to-superfluid and
superfluid-to-insulating quantum phase transitions as the interchain tunnelling
is increased. Interestingly, tuning the interaction to some intermediate
values, the system can exhibit a reentrant quantum phase transition between
insulating and superfluid phases. We show that for infinite interaction bias,
the model is amenable to some analytical treatments, whose prediction about the
phase boundary is in great agreement with the numerical results. We finally
clarify some critical parameters which separate the system into regimes with
distinct phase behaviours, and briefly compare typical properties of the biased
and unbiased bosonic ladder systems. Our work enriches the Bose-Hubbard
physics.
|
Jingtao Fan, Xiaofan Zhou, Suotang Jia
|
2023-08-29T05:52:02Z
|
http://arxiv.org/abs/2308.15042v1
|
# Quantum phases of the biased two-chain-coupled Bose-Hubbard Ladder
###### Abstract
We investigate the quantum phases of bosons in a two-chain-coupled ladder. This bosonic ladder is generally in a biased configuration, meaning that the two chains of the ladder can have dramatically different on-site interactions and potential energies. Adopting the numerical density-matrix renormalization-group method, we analyze the phase transitions in various parameter spaces. We find signatures of both insulating-to-superfluid and superfluid-to-insulating quantum phase transitions as the interchain tunnelling is increased. Interestingly, tuning the interaction to some intermediate values, the system can exhibit a reentrant quantum phase transition between insulating and superfluid phases. We show that for infinite interaction bias, the model is amenable to some analytical treatments, whose prediction about the phase boundary is in great agreement with the numerical results. We finally clarify some critical parameters which separate the system into regimes with distinct phase behaviours, and briefly compare typical properties of the biased and unbiased bosonic ladder systems. Our work enriches the Bose-Hubbard physics.
## I Introduction
Strongly correlated bosons, especially those moving in the periodic potentials, have always been a research interest for both experimentalists and theorists, as they are related to a variety of quantum phenomena [1; 2]. The simplest model describing such systems is the Bose-Hubbard (BH) model, which incorporates the contributions from the kinetic energy of individual atoms and the repulsive interactions between them [3; 4; 5; 6; 7; 8]. Although originally developed in the context of \({}^{4}\)He liquid [3], it has been demonstrated that the BH model can be feasibly implemented with ultracold atoms trapped in optical lattices [9; 10; 11; 12]. Utilizing the unprecedented degree of controllability of the laser fields, all the characteristic parameters of the BH model can be tuned in the optical lattice with high precision [11; 12]. Relying on this, the quantum phase transition from a superfluid (SF) to a Mott insulator (MI), which is the most important prediction of the BH model, has been experimentally realized in one [13], two [14] and three dimensions [15]. Since then, lots of related studies have been performed on the extensions of the BH model, by considering, for example, diverse forms of interactions [16; 17; 18] and gauge fields [19]. These extensions stimulate the development of new directions bridging condensed matter physics, stochastic physics and quantum optics.
In this context, the bosons confined to low-dimensional lattices merit special attention, since the correlations built up in these systems are considerably enhanced by the interactions between atoms [20]. Among various low-dimensional lattice models, the two-chain-coupled BH ladder is of particular importance [21; 22], since it serves as an intermediate geometry between one- and two-dimensional lattice systems [23]. This provides beneficial insights into the characteristics of the SF-to-MI transition in going from one to two dimensions. As a matter of fact, the BH ladder has been experimentally simulated in different artificial systems [24; 25; 26], stimulating immense interests of research towards various aspects of this model, such as the chiral currents [27; 28; 29; 30; 31; 32], the quantum magnetism [33; 34; 35; 36; 37], and the topological states [38; 39; 40; 41; 42; 43]. The BH ladders considered in these studies, however, are mostly limited to the symmetric case where both the on-site interactions and potential energies are identical for the two chains. Notice that general ladder systems should also involve the configurations where the two chains have distinctly different system parameters. Actually, making the two chains of the ladder asymmetric, with respect to either the interactions or potential energies, may have a major impact on various properties of the system [44; 45; 46; 47; 48]. More importantly, this "biased" ladder structure also underlies the physics behind a large class of systems, such as the dressed dipolar molecules [49; 50] or Rydberg gases [51; 52; 53] in optical lattices and the low-dimensional magnetic materials under external fields [54; 55].
In this work, we investigate the ground-state properties of a biased bosonic ladder at half filling, using state-of-the-art density-matrix renormalization-group (DMRG) numerical methods [56; 57]. By saying "biased", we mean that the two chains constituting the whole ladder can have dramatically different on-site interactions and potential energies. We first provide an analysis of the quantum phases in the limit of infinite interaction bias, where the on-site interaction is infinite for one chain and finite for the other. It is found that, as the interchain tunnelling is increased, either the MI-to-SF or the SF-to-MI quantum phase transition can occur depending on the value of interactions. More interestingly, tuning the finite interaction to some intermediate values, the system may even exhibit a reentrant quantum phase transition between MI and SF. By mapping the finite interaction into an effective canonical Kerr nonlinear form, we analytically derive the phase boundary between MI and SF,
which agrees well with the numerical results. With the knowledge of the system under infinite interaction bias, we then discuss the more general parameter regime where the interactions of both chains of the ladder are finite. We map out the ground-state phase diagrams in various parameter spaces, and characterize several critical parameters which separate the system into regimes with distinct phase behaviours. Finally, we briefly compare the typical properties of the biased and unbiased bosonic ladder systems.
## II Model and method
As illustrated in Fig. 1(a), the system in consideration is a bosonic ladder with two coupled chains, which we denote as spin-up and spin-down, respectively. The interspin tunnelling is allowed along the rung. We assume the bosonic ladder is typically biased, i.e., atoms with different spins experience different potential energies and local interactions. Such a scenario can be effectively engineered in spin-dependent optical lattices [58] or optical superlattices [59; 29], where the spin index distinguishing different chains can be represented by either the hyperfine sublevels or optical wells, according to different experiment implementations [see Fig. 1(b) for illustration]. The Hamiltonian describing this system reads
\[\hat{H} = -t\sum_{\langle i,j\rangle,\sigma}\hat{b}_{i,\sigma}^{\dagger} \hat{b}_{j,\sigma}-h\sum_{j}(\hat{b}_{j,\uparrow}^{\dagger}\hat{b}_{j, \downarrow}+\text{H.c.}) \tag{1}\] \[+\Delta\sum_{j}(\hat{n}_{j,\uparrow}-\hat{n}_{j,\downarrow})+ \sum_{j,\sigma}\frac{U_{\sigma}}{2}\hat{n}_{j,\sigma}\left(\hat{n}_{j,\sigma}-1\right)\]
where the field operator \(\hat{b}_{j,\sigma}\) (\(\hat{b}_{j,\sigma}^{\dagger}\)) annihilates (creates) a bosonic atom with spin \(\sigma\) (=\(\uparrow,\downarrow\)) at the lattice site \(j\). While atoms with the same spin can hop between adjacent sites \(\langle i,j\rangle\) with the intraspin hopping rate \(t\), an interspin field along the rung of the ladder couples atoms with different spins at rate \(h\). The energy bias \(\Delta\), which we assume to be positive in this work, tends to polarize the atoms along the rung of the ladder and \(U_{\sigma}\) denotes the on-site repulsive interaction of atoms with spin \(\sigma\) (=\(\uparrow,\downarrow\)). In this work, we focus on the commensurate ladder with total atomic density to be \(\rho=N/2L=1/2\). Here, \(N=N_{\uparrow}+N_{\downarrow}\) is the total number of bosons on the two-chain ladder, each of which has length \(L\). This amounts to setting the total system size to be \(2L\). In the following discussion, we set the energy scale by taking \(t=1\), and we also take \(\Delta=10\) unless otherwise specified.
The Hamiltonian (1) can be viewed as a natural extension of the single-component BH model incorporating the spin degree of freedom, which is controlled by both the transverse and longitudinal magnetic fields. Without the interspin tunnelling \(h\), the ladder decouples and reduces to two independent BH chains. A finite energy bias \(\Delta\) (\(>0\)) then polarizes the bosons to the spin-down chain with commensurability of one boson per site, leaving the spin-up chain empty. In this case, the physics is entirely governed by the one-dimensional BH model, which has been extensively explored [3; 4; 5; 6; 7; 8]. With a non-zero interspin tunnelling \(h\), however, the two BH chains are coupled together, meaning that the characteristic parameters of each individual chain may impact the ground-state properties of the composite system in a collective manner. This becomes especially interesting if the on-site interactions of the two chains are tuned to be quite different. Without loss of generality, let us assume \(U_{\uparrow}>U_{\downarrow}\) and first analyze the behavior of the spin-down BH chain. In this case, the on-site interaction may be effectively enhanced through high-order tunnelling processes triggered by \(h\) [60], favoring the formation of MI, whereas the particle filling factor may deviate from unity at the same time, which in turn promotes the SF character. The seemingly opposite tendency of the ground-state property makes the roles of the parameters in the biased bosonic ladder less intuitive and in need of quantitative clarification.
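To make the bookkeeping of Hamiltonian (1) concrete, the sketch below builds it for a toy ladder by plain exact diagonalization and restricts to the half-filling number sector. It is only an illustration: the results in this work are obtained with DMRG for far larger systems, and the system size, cutoff, and couplings chosen here are arbitrary.

```python
import numpy as np
from functools import reduce

# Toy exact diagonalization of Hamiltonian (1); all numbers are illustrative.
n_max, L = 2, 2                    # local cutoff and number of rungs (tiny on purpose)
t, h, Delta = 1.0, 2.0, 10.0
U_up, U_dn = 50.0, 3.6

d = n_max + 1
b = np.diag(np.sqrt(np.arange(1, d)), k=1)    # single-site annihilation operator
nop = b.T @ b
I = np.eye(d)
n_sites = 2*L
idx = lambda j, s: 2*j + s                    # s = 0 spin-up, s = 1 spin-down

def site_op(op, site):
    """Embed a single-site operator into the full Hilbert space."""
    ops = [I]*n_sites
    ops[site] = op
    return reduce(np.kron, ops)

H = np.zeros((d**n_sites, d**n_sites))
for j in range(L):
    up, dn = idx(j, 0), idx(j, 1)
    rung = site_op(b.T, up) @ site_op(b, dn)
    H += -h*(rung + rung.T)                                   # interspin tunnelling
    H += Delta*(site_op(nop, up) - site_op(nop, dn))          # energy bias
    H += 0.5*U_up*site_op(nop @ (nop - I), up)                # on-site interactions
    H += 0.5*U_dn*site_op(nop @ (nop - I), dn)
    if j + 1 < L:                                             # intraspin hopping
        for s in (0, 1):
            hop = site_op(b.T, idx(j, s)) @ site_op(b, idx(j+1, s))
            H += -t*(hop + hop.T)

# Restrict to the half-filling sector N = rho*2L = L before diagonalizing.
N_diag = np.diag(sum(site_op(nop, s) for s in range(n_sites)))
keep = np.isclose(N_diag, L)
E0 = np.linalg.eigvalsh(H[np.ix_(keep, keep)])[0]
print("toy ground-state energy:", E0)
```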
The MI and SF phases can be directly identified by calculating the charge gap \(\delta_{L}\), defined as the difference between the energies needed to add and remove one particle from the system, i.e.,
\[\delta_{L}=\mu_{L}^{+}-\mu_{L}^{-}, \tag{2}\]
\[\mu_{L}^{+}=E_{L}(N+1)-E_{L}(N), \tag{3}\]
\[\mu_{L}^{-}=E_{L}(N)-E_{L}(N-1), \tag{4}\]
where \(E_{L}(N)\) is the ground-state energy for \(L\) sites and \(N\) particles, and the chemical potentials \(\mu_{L}^{+}\) and \(\mu_{L}^{-}\) characterize the energy cost to add and remove one particle, respectively. The insulating phase is signaled by the opening up of \(\delta_{L}\) in the thermodynamical limit \(N,L\rightarrow\infty\)
Figure 1: (a) Schematic picture of the ladder system. The two chains constituting the ladder are designated as spin-up and spin-down, respectively. The interspin and intraspin hopping amplitudes are denoted, respectively, as \(t\) and \(h\). (b) Possible implementation of the bosonic ladder in optical superlattices. The optical double well is generally tilted by an energy difference \(\Delta\). The boson tunnelling rates along different directions simulate the hopping rates \(t\) and \(h\).
with fixed density \(\rho\), consistent with the zero compressibility \(\kappa\) (\(=\partial\rho/\partial\mu\)) of an insulator [61]. In the SF phase, however, the charge gap \(\delta_{L}\) closes and the system becomes compressible in the thermodynamical limit. Since \(\delta_{L}\) remains finite for any finite system, in order to pinpoint the MI-to-SF transition in the parameter space, we should extrapolate to the \(L\to\infty\) limit by utilizing the standard finite-size scaling. Here, we perform state-of-the-art DMRG calculations to compute the many-body ground state of the system, with which various physical observable can be obtained. In our numerical simulations, we set the cutoff of the single-site atom number as \(n_{\rm cutoff}=6\). We use lattice sizes up to \(L=40\), for which we retain 600 truncated states per DMRG block and perform 20 sweeps with a maximum truncation error of \(\sim 10^{-9}\).
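The charge-gap diagnostics of Eqs. (2)-(4) and the finite-size extrapolation can be summarized in a few lines; in the sketch below the gap values are invented placeholders standing in for DMRG output, and both linear and quadratic fits in \(1/L\) are shown, as used for Fig. 3.

```python
import numpy as np

def charge_gap(E_N, E_Np1, E_Nm1):
    """delta_L of Eq. (2): [E_L(N+1) - E_L(N)] - [E_L(N) - E_L(N-1)]."""
    return (E_Np1 - E_N) - (E_N - E_Nm1)

# Hypothetical gaps for several ladder lengths (placeholders, not actual data).
L_list = np.array([10.0, 20.0, 30.0, 40.0])
gaps = np.array([0.92, 0.55, 0.42, 0.36])

x = 1.0 / L_list                          # finite-size scaling variable
gap_lin = np.polyfit(x, gaps, 1)[-1]      # intercept of a linear fit
gap_quad = np.polyfit(x, gaps, 2)[-1]     # intercept of a quadratic fit
print(gap_lin, gap_quad)                  # estimates of the L -> infinity gap
```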
## III Results
In the following, we systematically study the ground-state properties of the biased bosonic ladder. Before presenting the results for general parameters, we first set the interaction bias to infinity, i.e., we consider the bosonic ladder consisting of one chain with infinite on-site interaction and the other with finite on-site interaction. The physics in this limit serves as a beneficial starting point for understanding the essential mechanisms behind the various phase behaviours.
### Infinite interaction bias: \(U_{\uparrow}-U_{\downarrow}=\infty\)
Without loss of generality, we fix the on-site interaction of the spin-up BH chain to be infinity, \(U_{\uparrow}\to\infty\), and that of the spin-down chain to be finite. This amounts to imposing a hard-core constraint on the spin-up chain, such that each of its sites can be occupied by at most one boson.
Before showing the full phase diagram, we can gain some useful insights into the system by inspecting certain limiting cases. As mentioned in Sec. II, the simplest limit is the zero interspin tunnelling \(h=0\), under which the ground state is fully described by the one-dimensional BH model with unit filling. It is well known that the BH model with unit filling shows a SF-to-MI transition at \(U_{\rm c}/t\sim 3.3\)[6; 7; 8]. The physics becomes richer when \(h\) is turned on. In this case, if we further set the on-site interaction \(U_{\downarrow}\) to be zero, the system closely resembles that of two-level atoms inside cavity arrays, for which the Jaynes-Cummings-Hubbard (JCH) model works [62; 63; 64; 65]. This can be seen clearly if we map the field operators of the hardcore bosons to those of quantum spins by \(\hat{b}_{j,\uparrow}\longrightarrow\sigma_{-}\) and \(\hat{b}^{\dagger}_{j,\uparrow}\longrightarrow\sigma_{+}\), where \(\sigma_{-}\) and \(\sigma_{+}\) are spin-\(1/2\) lowering and raising operators, respectively. Therefore, the Hamiltonian describing the tunnelling process between the two BH chains becomes Jaynes-Cummings type, in which the spin-down bosons act as interaction-free photons bridging adjacent lattice sites. It follows directly from the JCH physics that, by increasing \(h\), the spin-down bosons become more localized and consequently undergo a phase transition from SF to MI at some critical tunnelling strength \(h_{\rm c}\)[62; 63].
at \(h\approx 5.1\), indicating that the phase transition appears twice. That is, the system starts in the MI, subsequently traverses the SF phase, and eventually ends up in the MI again. Figures 3(a) and 3(b) show finite-size scaling of the DMRG data of the charge gap, by linear and quadratic fittings, for two representative points located in the SF and MI phases, respectively. This reentrant MI phase transition, induced by the interspin tunnelling strength \(h\), does not exist in the symmetric bosonic ladder with \(U_{\uparrow}=U_{\downarrow}\) and \(\Delta=0\), and is thus exclusive to the biased ladder considered here.
With the understanding above, we map out the phase diagram in the \(U_{\downarrow}-h\) plane in Fig. 4. The phase boundary has been extrapolated to the \(L\rightarrow\infty\) limit by the finite-size scaling. It can be clearly seen that, while increasing the on-site interaction \(U_{\downarrow}\) always drives the system to the MI phase, the role of the interspin tunnelling \(h\) can be twofold, i.e., it can trigger either the MI or the SF phase, depending on the value of \(U_{\downarrow}\). Notice that the MI in the BH model is essentially stabilized by the direct interaction between bosons, whereas bosons with different spins are dressed together here forming composite polaritons. We therefore expect that some effective interaction between polaritons, which plays the key role in inducing different behaviours of the phase transition, may emerge.
To see this clearly, we map the local contribution of the Hamiltonian (1) into an effective Kerr nonlinearity by a simple energy mismatch argument. As detailed in Appendix A, the local energy of Hamiltonian (1) consists of two polaritonic modes whose eigenenergies are
\[\omega_{n}^{\pm} = \frac{U_{\downarrow}}{2}n(n-2)+\frac{1}{2}(\Delta+U_{\downarrow}) \tag{5}\] \[\pm\frac{1}{2}\sqrt{(nU_{\downarrow}-U_{\downarrow}-\Delta)^{2}+4 nh^{2}}\]
where \(n\) is the excitation number. Since we are only interested in the low-energy physics, the focus in the following will be on the lower branch \(\omega_{n}^{-}\). We define the effective Hubbard interaction \(U_{\rm eff}\) as the energy cost incurred by forming a two-particle polaritonic excitation (with energy \(\omega_{2}^{-}\)) from two single-particle polaritonic excitations (with energy \(2\omega_{1}^{-}\)) in neighboring lattice sites [64; 65], i.e.,
\[U_{\rm eff} = \omega_{2}^{-}-2\omega_{1}^{-} \tag{6}\] \[= \frac{1}{2}(U_{\downarrow}-\Delta)+\sqrt{\Delta^{2}+4h^{2}}\] \[-\frac{\sqrt{(U_{\downarrow}-\Delta)^{2}+8h^{2}}}{2}\]
With this understanding, we can obtain an analytical expression of the phase boundary between the MI and SF phases by equating the effective interaction \(U_{\rm eff}\) with the critical interaction strength of the BH model with unit filling, namely
\[U_{\rm eff}=U_{\rm c}\approx 3.3 \tag{7}\]
As shown in Fig. 4, the curve defined by Eq. (7), agrees well with the numerical results obtained by the DMRG calculation. It should be emphasized that, in deriving Eq. (6), we have implicitly assumed that the ground-state property of the whole lattice system is mainly governed by its low-energy local physics. This requires that (i) the energy scale owned by each local lattice sites is considerably larger than the kinetic energy of bosons, namely at least \(\Delta\gg 1\) or \(h\gg 1\), and (ii) the density fluctuations are weak enough so that only the lowest-lying excitations of individual lattice sites need to be taken into consideration. This guarantees the effectiveness of Eq. (7) in predicting the MI-to-SF phase boundary, since the density fluctuations are extremely suppressed in the MI. Eq. (6) provides further guidance to the driving force inducing different phase transitions. An interesting finding is that \(U_{\rm eff}\) exhibits nonmonotonic behaviour as \(h\) increases from
Figure 4: The phase diagram in the \(U_{\downarrow}-h\) plane for \(\Delta=10\) and \(U_{\uparrow}=\infty\). The red solid line with square symbol (black solid line) denotes the MI-SF phase boundary obtained by the DMRG calculation [Eq. (7)]. The values of \(h_{\rm m}\) for each \(U_{\downarrow}\) are also pinpointed in the phase diagram by the blue solid line with circle symbol. For comparison, location of the minimum of \(U_{\rm eff}\), determined by Eq. (8), is plotted by the green dashed line. The shaded area, bounded by critical interactions \(U_{\rm c}\approx 3.3\) and \(U_{\downarrow}\approx 3.9\), characterizes the parameter region where the MI-to-SF-to-MI transition can occur.
zero. As illustrated in Fig. 5(a), with the increase of \(h\), the effective interaction \(U_{\rm eff}\) decreases first to a minimum and then increases monotonically [see blue line], which explains the MI-to-SF-to-MI transition found in Fig. 4. The location of the minimum of \(U_{\rm eff}\) can be easily deduced by requiring \(\partial U_{\rm eff}/\partial h=0\), yielding a trivial solution \(h=0\) and a nontrivial solution,
\[h=\frac{\sqrt{\Delta^{2}-(\Delta-U_{\downarrow})^{2}}}{2}. \tag{8}\]
It is straightforward to show that Eq. (8), within its range of values, minimizes \(U_{\rm eff}\). The curve obtained from Eq. (8) is depicted in Fig. 4. Notice that, whereas \(U_{\rm eff}\) is minimized by \(h=0\) when \(U_{\downarrow}=0\), consistent with the JCH physics [62; 63], a nonzero \(U_{\downarrow}\) shifts the location of the interaction minimum from \(h=0\) to some finite value. Within this picture, an upper bound of \(U_{\downarrow}\), beyond which no SF phase would exist, can be obtained. This is immediately achieved by substituting Eq. (8) into Eq. (7), which is then solved by
\[U_{\downarrow}=U_{\rm c1}\equiv\Delta+U_{\rm c}-\sqrt{\Delta^{2}-U_{\rm c}^{2 }}. \tag{9}\]
It becomes clear that the parameter region of \(U_{\downarrow}\) within which the MI-to-SF-to-MI transition can occur is \(U_{\rm c}<U_{\downarrow}<U_{\rm c1}\).
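The analytic structure of Eqs. (6)-(9) is easy to verify numerically. The sketch below evaluates \(U_{\rm eff}(h)\), locates its minimum, and recovers \(U_{\rm c1}\), using the same \(\Delta=10\) and \(U_{\rm c}\approx 3.3\) as in the text; the choice \(U_{\downarrow}=3.6\) is one representative value inside the reentrant window.

```python
import numpy as np

Delta, U_c, U_dn = 10.0, 3.3, 3.6

def U_eff(h, U_dn=U_dn, Delta=Delta):
    """Effective polariton interaction of Eq. (6)."""
    return (0.5*(U_dn - Delta) + np.sqrt(Delta**2 + 4.0*h**2)
            - 0.5*np.sqrt((U_dn - Delta)**2 + 8.0*h**2))

h = np.linspace(0.0, 8.0, 2001)
u = U_eff(h)

h_min_num = h[np.argmin(u)]                                   # numerical minimum
h_min_eq8 = 0.5*np.sqrt(Delta**2 - (Delta - U_dn)**2)         # Eq. (8)
U_c1 = Delta + U_c - np.sqrt(Delta**2 - U_c**2)               # Eq. (9), about 3.9

# Eq. (7): boundary points where U_eff crosses U_c (two crossings -> reentrance)
crossings = h[:-1][np.diff(np.sign(u - U_c)) != 0]

print(h_min_num, h_min_eq8, U_c1, np.round(crossings, 2))
```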
An experimentally measurable quantity that is able to mirror the effective interaction is the condensate fraction (CF), defined as the number of bosons in the condensate with respect to the total number of bosons [61; 67]. It has been shown that the condensate fraction of Bose gases monotonically decreases as the local interaction increases [61]. For the bosonic ladder considered here, the CF is defined as the largest eigenvalue of the matrix \(\left\langle\hat{b}_{i,\sigma}^{\dagger}\hat{b}_{j,\sigma^{\prime}}\right\rangle\) divided by the total number of bosons [67]. Figure 5(b) shows the CF as a function of \(h\) for \(U_{\downarrow}=3.6\) and different system sizes. It is demonstrated that the CF increases first, reaching its maximum at \(h\approx 3.4\), and then decreases. The location of the maximum of CF, designated as \(h_{\rm m}\), depends sensitively on the value of \(U_{\downarrow}\). As shown in Fig. 4, we plot \(h_{\rm m}\) for varying \(U_{\downarrow}\), which exhibits the same behaviour as that obtained from Eq. (8). The agreement between \(h_{\rm m}\) and Eq. (8) signals that the picture of the effective interaction \(U_{\rm eff}\) works in a wide range of parameters, even inside the SF phase where the density fluctuation is somewhat enhanced.
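For completeness, the condensate fraction defined above reduces to a single eigenvalue computation. In the sketch below the one-body correlation matrix is replaced by a random positive-semidefinite stand-in with the correct normalization, since the actual matrix would be measured on the DMRG ground state.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, N = 2*40, 40                     # 2L sites at half filling

# Stand-in one-body correlation matrix <b^dag_i b_j>: random PSD, trace = N.
A = rng.normal(size=(n_sites, n_sites))
rho1 = A @ A.T
rho1 *= N / np.trace(rho1)

condensate_fraction = np.linalg.eigvalsh(rho1)[-1] / N
print(condensate_fraction)
```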
### Finite interaction bias: \(U_{\uparrow}-U_{\downarrow}<\infty\)
Having understood the physics of bosonic ladder under the infinite interaction bias, we are now in a position to explore the more general parameter regime where the interactions of both chains of the ladder are finite. Here we are particularly interested in the influence of finite spin-up interaction on various quantum phases. By calculating the charge gap \(\delta_{L}\) with extrapolation to the thermodynamic limit, we obtain the phase diagrams in the \(U_{\uparrow}-h\) plane in Figs. 6(a)-(d) with different \(U_{\downarrow}\). As shown in Fig. 6(a), in which the spin-down interaction is fixed as \(U_{\downarrow}=3.0\) (\(<U_{\rm c}\)), the SF region is confined by a smooth phase boundary, which extends up to \(U_{\uparrow}\rightarrow\infty\) and \(h\rightarrow\infty.\) A phase transition from the SF to MI may occur when increasing \(U_{\uparrow}\) (\(h\)) for some fixed \(h\) (\(U_{\uparrow}\)). Increasing the spin-down interaction slightly larger than \(U_{\rm c}\), for example \(U_{\downarrow}=3.6\), the MI can emerge for small \(h\), penetrating the SF region, as illustrated in Fig. 6(b). Importantly, as \(h\) approaches infinity, the spin-up interaction delimiting different quantum phases decreases and saturates to some critical value \(U_{\rm c2}\).
In fact, through an analysis of the polaritonic modes, the MI-to-SF phase boundary in the \(h\rightarrow\infty\) limit can be derived as \(U_{\uparrow}+U_{\downarrow}=4U_{\rm c}\approx 13.2\) (see Appendix B for details). Setting \(U_{\uparrow}=U_{\downarrow}=U\), we immediately reproduce the result of the symmetric case, i.e., \(U=2U_{\rm c}\approx 6.6\), obtained by the bosonization method [21]. Under this framework, the critical interaction \(U_{\rm c2}\) is straightforwardly written as
\[U_{\rm c2}=4U_{\rm c}-U_{\downarrow}. \tag{10}\]
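As a quick numerical consistency check (our own illustration, not part of the derivation), Eqs. (9) and (10) can be evaluated with the parameters used in the figures, \(\Delta=10\) and \(U_{\rm c}\approx 3.3\):

```python
import numpy as np

# Parameters used in the phase diagrams: energy bias and the SF-to-MI
# critical interaction of the one-dimensional Bose-Hubbard chain.
Delta, Uc = 10.0, 3.3

# Upper bound of U_down for the reentrant MI-SF-MI transition, Eq. (9).
Uc1 = Delta + Uc - np.sqrt(Delta**2 - Uc**2)
print(f"U_c1 = {Uc1:.2f}")                  # ~3.86, quoted as 3.9 in the text

# Critical spin-up interaction in the h -> infinity limit, Eq. (10).
# A negative U_c2 for U_down > 4*Uc signals the absence of the SF phase.
for U_down in (3.0, 3.6, 5.0, 15.0):
    print(f"U_down = {U_down:4.1f} -> U_c2 = {4 * Uc - U_down:5.2f}")
```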
As marked in Fig. 6(b) by the blue dashed line, the critical interaction \(U_{\rm c2}\) defined above separates the phase diagram
Figure 5: (a) The effective Hubbard interaction \(U_{\rm eff}\) as a function of \(h\) for different \(U_{\downarrow}\) with \(\Delta=10\) and \(U_{\uparrow}=\infty\). (b) The condensate fraction calculated for different system sizes. Note that the result of \(L\rightarrow\infty\) is obtained by extrapolation using the finite-size scaling. \(h_{\rm m}\) specifies the location of the maximum of the condensate fraction. The other parameters are \(\Delta=10\), \(U_{\downarrow}=3.6\) and \(U_{\uparrow}=\infty\).
into two distinct parameter regimes. For the \(U_{\uparrow}>U_{\text{c2}}\) side, there exists the interesting MI-to-SF-to-MI phase transition we explored in Subsection III.1, whereas for \(U_{\uparrow}<U_{\text{c2}}\), the MI-to-SF phase transition can appear only once by monotonically varying \(h\).
Adopting the description of the effective interaction introduced in Subsection III.1, we anticipate that if \(U_{\downarrow}\) is increased to be larger than \(U_{\text{c1}}\), a finite upper bound of \(U_{\uparrow}\), beyond which the SF phase disappears, can emerge. This is confirmed by the phase diagram in Fig. 6(c), where we take \(U_{\downarrow}=5\) (\(>U_{\text{c1}}=3.9\)). As expected, the SF phase is destroyed for \(U_{\uparrow}\gtrsim 11\), in contrast to the behaviour shown in Figs. 6(a) and 6(b). Increasing \(U_{\downarrow}\) further such that \(U_{\downarrow}>4U_{\text{c}}\), the critical interaction \(U_{\text{c2}}\) reaches zero, meaning that the SF disappears, at least when \(h\) is sufficiently large. The phase diagram with \(U_{\downarrow}=15\) (\(>4U_{\text{c}}\)) is plotted in Fig. 6(d), from which we find that the SF area completely vanishes.
Up to now, our focus has mainly been on the parameter regime where both the interactions and the potential energies of the two chains are asymmetric. The individual effect of each of the two asymmetric ingredients, i.e., either the interaction asymmetry or the potential-energy asymmetry, has not been elucidated. Here we complement this study by plotting two additional phase diagrams, each of which has only one asymmetric ingredient. The phase diagram in the \(U_{\uparrow}-h\) plane with zero energy bias (\(\Delta=0\)) and fixed spin-down interaction (\(U_{\downarrow}=3.6\)) is plotted in Fig. 7(a). The phase diagram in this case shares the same structure as that in Fig. 6(a), albeit with a shrunken SF area. It is also understood that no MI phase can be found for sufficiently small \(h\), in contrast to the behaviour in Fig. 6(b), since a zero \(\Delta\) always closes the charge gap for lattices with non-integer filling at \(h=0\). As shown in Fig. 7(b), by requiring the interactions of the two chains to be equal, say \(U=U_{\uparrow}=U_{\downarrow}\), we map out the phase diagram in the \(U-h\) plane with finite energy bias \(\Delta=10\). With the increase of \(h\), the critical interaction of the SF-to-MI transition monotonically increases, asymptotically approaching \(U_{\text{c2}}\approx 6.6\), showing a distinct behaviour compared to the cases with asymmetric interactions.
## IV Discussion and Conclusion
As mentioned in Sec. II, the considered model can be directly implemented with ultracold atoms in optical lattices under various experimental designs. For example, the bosonic ladder can be prepared by growing optical superlattices, which form a double-well structure along one direction [29; 59]. The tunnelling strength \(h\), the on-site interactions \(U_{\uparrow/\downarrow}\) and the energy bias \(\Delta\) can be independently controlled by properly tuning the geometry of the optical double well. Alternatively, one can employ spin-dependent optical lattices [58], where atoms in different hyperfine states experience different lattice potentials. In this scenario, \(\Delta\) and \(h\) are respectively controlled by the detuning and Rabi frequency of an additional coupling laser, and the on-site interactions \(U_{\uparrow/\downarrow}\) can be tuned via Feshbach resonances or the lattice depths experienced by atoms with different spins. Beyond the interest of the model in its own right, our results offer useful insights into engineering effective interactions on demand by dressing different atomic internal states [51; 60]. That said, our model captures only a subset of the rich physics of the BH ladder with coupled chains, and many interesting extensions remain to be explored. For example, with a ladder structure, the hopping process of atoms may carry non-trivial Peierls phases, giving rise to synthetic gauge fields [68]. These gauge fields may not only affect the MI-to-SF transitions dramatically [19] but also induce various chiral currents [27; 28; 29; 30; 31; 32]. Another direction is to place the system in the grand-canonical description by introducing a tunable chemical potential [3; 44]. This may provide new perspectives on the magnetic or charge correlations in Mott lobes with different filling factors.
In conclusion, we have theoretically studied the ground-state properties of the BH ladder at half filling in a biased configuration by using state-of-the-art DMRG numerical methods. It is found that the interchain tunnelling can drive both the MI-to-SF and SF-to-MI quantum phase transitions, depending on the value of the interactions. A reentrant quantum phase transition between the MI and the SF has also been predicted by setting the on-site interactions to intermediate values. Under appropriate conditions, the model is shown to be amenable to analytical treatment, whose predictions for the phase boundary are in good agreement with the numerical results. Armed with this knowledge, we have mapped out the full phase diagram and characterized the critical parameters separating the system into regimes with distinct phase behaviours.
###### Acknowledgements.
This work is supported by the National Key R&D Program of China under Grant No. 2022YFA1404003, the National Natural Science Foundation of China (NSFC) under Grant No. 12004230, 12174233 and 12034012, the Research Project Supported by Shanxi Scholarship Council of China and Shanxi '1331KSC'.
## Appendix A Polaritonic modes of the local Hamiltonian
In this Appendix, we derive Eq. (5) in the main text. To that end, we rearrange the Hamiltonian (1) as
\[\hat{H}=\sum_{j}\hat{H}_{L}^{(j)}-t\sum_{\langle i,j\rangle,\sigma}\hat{b}_{i,\sigma}^{\dagger}\hat{b}_{j,\sigma}, \tag{A1}\]
where
\[\hat{H}_{L}^{(j)}=-h(\hat{b}_{\uparrow}^{\dagger}\hat{b}_{\downarrow}+\text{H.c.})+\Delta(\hat{n}_{\uparrow}-\hat{n}_{\downarrow})+\sum_{\sigma}\frac{U_{\sigma}}{2}\hat{n}_{\sigma}\left(\hat{n}_{\sigma}-1\right) \tag{A2}\]
describes the local physics at lattice site \(j\). Note that we have omitted the subscript \(j\) on the right-hand side of Eq. (A2) for simplicity. The Hamiltonian (A2) can be represented in the Fock basis \(\left|n_{\downarrow},n_{\uparrow}\right\rangle\), where \(n_{\sigma}\) is the occupation number for bosons with spin \(\sigma(=\uparrow,\downarrow)\). In the \(U_{\uparrow}\rightarrow\infty\) limit, a hardcore constraint on the spin-up bosons can be imposed, meaning that we only need to retain the states \(\left|n,0\right\rangle\) and \(\left|n-1,1\right\rangle\) with the total occupation \(n=n_{\downarrow}+n_{\uparrow}\). The polaritonic modes of Hamiltonian (A2) are therefore admixtures of \(\left|n,0\right\rangle\) and \(\left|n-1,1\right\rangle\), and the eigenenergies readily follow as
\[\omega_{n}^{\pm}=\frac{U_{\downarrow}}{2}n(n-2)+\frac{1}{2}(\Delta+U_{\downarrow})\pm\frac{1}{2}\sqrt{(nU_{\downarrow}-U_{\downarrow}-\Delta)^{2}+4nh^{2}}. \tag{A3}\]
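As a sanity check of this step (our own sketch, not part of the derivation), the closed form above can be compared with a direct diagonalization of the \(2\times 2\) block; the diagonal Fock-state energies are left as inputs, since their explicit form depends on the convention adopted for the bias term, while the off-diagonal coupling \(-h\sqrt{n}\) follows from the tunnelling term in Eq. (A2):

```python
import numpy as np

def polariton_energies(e_n0, e_nm1_1, h, n):
    """Eigenvalues of the 2x2 block spanned by |n,0> and |n-1,1>.

    e_n0 and e_nm1_1 are the diagonal Fock-state energies; the off-diagonal
    coupling -h*sqrt(n) comes from -h(b_up^dag b_down + H.c.).
    """
    block = np.array([[e_n0, -h * np.sqrt(n)],
                      [-h * np.sqrt(n), e_nm1_1]])
    return np.linalg.eigvalsh(block)

# The closed form (e0 + e1)/2 -/+ sqrt((e0 - e1)^2/4 + n*h^2) has the same
# structure as Eq. (A3) and agrees with direct diagonalization:
e0, e1, h, n = -1.3, 2.1, 3.4, 2
lo, hi = polariton_energies(e0, e1, h, n)
avg, gap = (e0 + e1) / 2, np.sqrt((e0 - e1) ** 2 / 4 + n * h ** 2)
assert np.allclose([lo, hi], [avg - gap, avg + gap])
print(lo, hi)
```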
## Appendix B Effective low-energy description in the \(h\rightarrow\infty\) limit
Here we provide an effective low-energy description of the model in the \(h\rightarrow\infty\) limit. We first introduce two branches of quasi-modes \(\hat{b}_{j,+}\) and \(\hat{b}_{j,-}\) defined as
\[\hat{b}_{j,+}=\frac{1}{\sqrt{2}}(\hat{b}_{j,\uparrow}+\hat{b}_{j,\downarrow}), \tag{B1}\]
\[\hat{b}_{j,-}=\frac{1}{\sqrt{2}}(\hat{b}_{j,\uparrow}-\hat{b}_{j,\downarrow}). \tag{B2}\]
Under the transformations of Eqs. (B1) and (B2), the Hamiltonian (1) is rewritten as
\[\hat{H}=\sum_{j}\hat{H}_{L}^{(j)}-t\sum_{\langle i,j\rangle}\left(\hat{b}_{i,+}^{\dagger}\hat{b}_{j,+}+\hat{b}_{i,-}^{\dagger}\hat{b}_{j,-}\right), \tag{B3}\]
where the local Hamiltonian reads
\[\begin{split}\hat{H}_{L}^{(j)}&=\left(\frac{U_{\downarrow}}{8}+\frac{U_{\uparrow}}{8}\right)\left[(\hat{n}_{j,+}+\hat{n}_{j,-})^{2}+(\hat{b}_{j,+}^{\dagger}\hat{b}_{j,-}+\hat{b}_{j,-}^{\dagger}\hat{b}_{j,+})^{2}-2(\hat{n}_{j,+}+\hat{n}_{j,-})\right]\\&\quad+\left(\frac{U_{\downarrow}}{4}-\frac{U_{\uparrow}}{4}\right)\left[(\hat{b}_{j,+}^{\dagger}\hat{b}_{j,-}+\hat{b}_{j,-}^{\dagger}\hat{b}_{j,+})(\hat{n}_{j,+}+\hat{n}_{j,-}-1)\right]\\&\quad+\frac{\Delta}{2}(\hat{n}_{j,+}+\hat{n}_{j,-}-\hat{b}_{j,+}^{\dagger}\hat{b}_{j,-}-\hat{b}_{j,-}^{\dagger}\hat{b}_{j,+})+h(\hat{n}_{j,+}-\hat{n}_{j,-}).\end{split} \tag{B4}\]
It follows from Eq. (B4) that the low-energy physics is dominated by bosons on the "\(-\)" polaritonic branch in the \(h\rightarrow\infty\) limit. We thus anticipate an effective low-energy theory which is purely described by field operators of the "\(-\)" polaritonic branch. The simplest way to achieve this is to average the Hamiltonian (B3) with respect to the vacuum state of the "\(+\)" polaritonic branch, yielding
\[\hat{H}_{\rm eff}=\sum_{j}\left[\left(\frac{\Delta}{2}-h\right)\hat{n}_{j,-}+\frac{\tilde{U}}{2}\hat{n}_{j,-}(\hat{n}_{j,-}-1)\right]-t\sum_{\langle i,j\rangle}\hat{b}^{\dagger}_{i,-}\hat{b}_{j,-}, \tag{B5}\]
where \(\tilde{U}=(U_{\downarrow}+U_{\uparrow})/4\). Notice that the effective description in the Hamiltonian (B5) becomes accurate when \(h\) approaches infinity. More importantly, the Hamiltonian (B5) is written in the same form as the one-dimensional BH model with effective on-site interaction \(\tilde{U}\). It follows that the physics of our ladder system in this limit can be effectively described by the one-dimensional BH model with a simple substitution of system parameters. Given this, the SF-to-MI phase boundary is readily obtained as \(\tilde{U}=U_{\rm c}\approx 3.3\).
|
2306.12281
|
Quantum Fluctuation Theorem for Arbitrary Measurement and Feedback
Schemes
|
Fluctuation theorems and the second law of thermodynamics are powerful
relations constraining the behavior of out-of-equilibrium systems. While there
exist generalizations of these relations to feedback controlled quantum
systems, their applicability is limited, in particular when considering strong
and continuous measurements. In this letter, we overcome this shortcoming by
deriving a novel fluctuation theorem, and the associated second law of
information thermodynamics, which remain applicable in arbitrary feedback
control scenarios. In our second law, the entropy production is bounded by the
coarse-grained entropy production which is inferrable from the measurement
outcomes, an experimentally accessible quantity that does not diverge even
under strong continuous measurements. We illustrate our results by a qubit
undergoing discrete and continuous measurement, where our approach provides a
useful bound on the entropy production for all measurement strengths.
|
Kacper Prech, Patrick P. Potts
|
2023-06-21T14:09:30Z
|
http://arxiv.org/abs/2306.12281v2
|
# Quantum Fluctuation Theorem for Arbitrary Measurement and Feedback Schemes
###### Abstract
Fluctuation theorems and the second law of thermodynamics are powerful relations constraining the behavior of out-of-equilibrium systems. While there exist generalizations of these relations to feedback controlled quantum systems, their applicability is limited, in particular when considering strong and continuous measurements. In this letter, we overcome this shortcoming by deriving a novel fluctuation theorem, and the associated second law of information thermodynamics, which remain applicable in arbitrary feedback control scenarios. In our second law, the entropy production is bounded by the coarse-grained entropy production which is inferable from the measurement outcomes, an experimentally accessible quantity that does not diverge even under strong continuous measurements. We illustrate our results by a qubit undergoing discrete and continuous measurement, where our approach provides a useful bound on the entropy production for all measurement strengths.
_Introduction._ Stochastic thermodynamics [1; 2; 3; 4; 5; 6; 7; 8] aims to understand the thermodynamic behavior of nanoscale classical and quantum systems out of equilibrium. In this framework, thermodynamic quantities, such as entropy, work, and heat are fluctuating quantities defined at a trajectory level. A crucial result in this field is provided by the Fluctuation Theorem (FT), \(\langle e^{-\sigma}\rangle=1\)[9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23], where \(\sigma\) denotes the entropy production and \(\langle\cdot\rangle\) the ensemble average. This relation implies the second law of thermodynamics, \(\langle\sigma\rangle\geq 0\), and therefore, it can be considered a generalization thereof.
The idea of employing measurement and feedback in thermodynamic processes dates back to the thought experiments of Maxwell and Szilard [24; 25; 26], where knowledge about the microscopic degrees of freedom of a system could be utilized to seemingly overcome the second law of thermodynamics. The FT and the second law can be generalized to feedback controlled processes by treating information on an equal footing with entropy production [27], thus resolving this paradox. Indeed, for many scenarios modified FTs and a corresponding second law of information thermodynamics may be derived by including a stochastic information term, \(I\), resulting in [28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49]
\[\langle e^{-\sigma-I}\rangle=1\implies\langle\sigma\rangle\geq-\langle I\rangle, \tag{1}\]
which allows \(\langle\sigma\rangle\) to become negative.
Crucially, for a given feedback control protocol there is no unique way to define the information term \(I\)[34]. For classical feedback control scenarios, at least two approaches have proven to be useful. In the first approach, the information term \(I\) is given by the transfer entropy (see Refs. [30; 32]). This quantity is applicable to multiple as well as continuous measurements, and reduces to the mutual information [29] for a single measurement. This approach was extended to quantum systems for single [28] as well as, recently, for continuous measurement [42].
While the resulting FT is a powerful relation highlighting the connection between information and thermodynamics, it also has significant shortcomings, both in the classical as well as in the quantum regime: i) The information term \(I\) accumulates with consecutive measurements. As a result, \(\langle I\rangle\) can significantly outgrow the entropy production in the case of many measurements and may even diverge for continuous measurements [48; 50], causing the second law of information thermodynamics to become trivial. ii) The information term \(I\) for a given trajectory explicitly depends on the actual trajectory of the system. For this reason, it cannot be directly accessed from the measurement outcomes alone [31; 43].
In the second approach developed for classical systems [34], the stochastic information is given by the coarse-grained entropy \(I=-\sigma_{\rm cg}\)[30; 51]. This approach circumvents the shortcomings mentioned above. i) The ensemble average \(\langle\sigma_{\rm cg}\rangle\) does not diverge even for continuous measurements (as long as the entropy production itself remains finite). ii) The coarse-grained entropy only depends on the measurement outcomes, making it accessible from the measured data. While this approach is particularly useful for continuous and potentially strong measurements, it has not yet been extended to quantum systems.
Here we fill this gap by deriving a detailed FT that holds for open quantum systems undergoing measurement and feedback
\[\frac{P_{\rm B}[\bar{\Gamma}]}{P[\Gamma]}=e^{-(\sigma[\Gamma]-\sigma_{\rm cg} [Y])}, \tag{2}\]
where \(\Gamma\) denotes a trajectory that includes both the measurement outcomes \(Y\) as well as additional quantities specifying the time-evolution of the system. The probability for such a trajectory is denoted by \(P[\Gamma]\), \(P_{\rm B}[\bar{\Gamma}]\) is the probability of the reversed trajectory \(\bar{\Gamma}\) in a corresponding backward experiment, and \(\sigma[\Gamma]\) is the entropy production associated with the trajectory. All these quantities will be introduced in detail below. From Eq. (2), an integral FT
\[\langle e^{-(\sigma-\sigma_{\rm cg})}\rangle=1, \tag{3}\]
as well as a generalized second law
\[\langle\sigma\rangle\geq\langle\sigma_{\rm cg}\rangle \tag{4}\]
follow. To illustrate our results, we numerically investigate a continuous measurement of a qubit and we demonstrate that as the strength of the continuous measurement increases, \(\langle\sigma_{\rm cg}\rangle\) provides a useful bound on \(\langle\sigma\rangle\), while the generalization of the transfer entropy diverges [42]. We note that in contrast to Ref. [42], Eqs. (3) and (4) do not rely on Markovian dynamics of the reduced system. Our results thus provide useful FTs and second laws for arbitrary measurement and feedback schemes.
_General measurement and feedback scheme._ We consider an open quantum system that may exchange energy with a reservoir at an inverse temperature \(\beta\). At times \(t_{n}\), the system is being measured and the outcome \(y_{n}\) is obtained. The system and reservoir together are described by a density matrix \(\hat{\varrho}_{t}^{Y_{n}}\), conditioned on all previous measurement outcomes \(Y_{n}\equiv\{y_{1},y_{2},...,y_{n}\}\), with \(t_{n}<t<t_{n+1}\). The measurements are modeled by POVMs [52; 53; 54] and update the state according to
\[\hat{\varrho}_{t_{n}}^{Y_{n}}=\frac{\hat{M}_{n}(y_{n})\hat{\varrho}_{t_{n}}^{ Y_{n-1}}\hat{M}_{n}^{\dagger}(y_{n})}{P[y_{n}|Y_{n-1}]}, \tag{5}\]
where \(P[y_{n}|Y_{n-1}]={\rm Tr}\{\hat{M}_{n}^{\dagger}(y_{n})\hat{M}_{n}(y_{n})\hat {\varrho}_{t_{n}}^{Y_{n-1}}\}\) and \(\hat{M}_{n}(y_{n})\) is the Kraus operator corresponding to obtaining outcome \(y_{n}\) in the measurement at \(t_{n}\). The set of Kraus operators fulfils the relation \(\sum_{y}\hat{M}_{n}^{\dagger}(y)\hat{M}_{n}(y)=\tilde{I}\), where \(\tilde{I}\) is the identity operator.
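For illustration, the update rule of Eq. (5) can be implemented in a few lines. The sketch below (our own example; the helper name is hypothetical) samples an outcome and updates the density matrix, using the error-prone qubit measurement introduced later in this letter:

```python
import numpy as np

def povm_update(rho, kraus_ops, rng):
    """Sample an outcome and update the state according to Eq. (5).

    kraus_ops is a list of Kraus operators M(y) with sum_y M(y)^dag M(y) = 1.
    """
    probs = np.array([np.real(np.trace(M.conj().T @ M @ rho)) for M in kraus_ops])
    y = rng.choice(len(kraus_ops), p=probs / probs.sum())
    M = kraus_ops[y]
    return y, M @ rho @ M.conj().T / probs[y]

# Error-prone qubit measurement: M_0 = sqrt(1-eps)|0><0| + sqrt(eps)|1><1|,
# M_1 = sqrt(1-eps)|1><1| + sqrt(eps)|0><0|.
eps = 0.1
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
M0 = np.sqrt(1 - eps) * P0 + np.sqrt(eps) * P1
M1 = np.sqrt(1 - eps) * P1 + np.sqrt(eps) * P0
assert np.allclose(M0.conj().T @ M0 + M1.conj().T @ M1, np.eye(2))

rng = np.random.default_rng(0)
rho = np.diag([0.7, 0.3])
print(povm_update(rho, [M0, M1], rng))
```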
In between measurements, the time-evolution of the system and reservoir together is unitary and determined by the total Hamiltonian \(\hat{H}_{\rm tot}(\lambda_{t}^{Y_{n}})=\hat{H}(\lambda_{t}^{Y_{n}})+\hat{V}( \lambda_{t}^{Y_{n}})+\hat{H}_{\rm R}\), where the first term denotes the Hamiltonian of the system, the second term the system-reservoir coupling, and the last term the Hamiltonian of the reservoir. The Hamiltonian depends on a control parameter \(\lambda_{t}^{Y_{n}}\), which may depend on both time, as well as the set of previous measurement outcomes.
At the initial time \(t=0\), the joint quantum state is given by \(\hat{\varrho}_{0}=\hat{\rho}_{0}\otimes\hat{\tau}_{\rm R}\), where \(\hat{\rho}_{0}=\sum_{a}p_{a}|a\rangle\langle a|\) is the initial state of the system, and \(\hat{\tau}_{\rm R}=e^{-\beta\hat{H}_{\rm R}}/Z_{\rm R}\) is the thermal state of the reservoir with the inverse temperature \(\beta\) and the partition function \(Z_{\rm R}={\rm Tr}\{e^{-\beta\hat{H}_{\rm R}}\}\). At all times, the reduced state of the system may be obtained by tracing out the reservoir \(\hat{\rho}_{t}^{Y_{n}}={\rm Tr}_{\rm R}\{\hat{\varrho}_{t}^{Y_{n}}\}\). For simplicity, we assume that at the final time \(t=\tau\) the control parameter takes the same value \(\lambda_{\tau}\) for all \(Y\equiv Y_{M}\), where \(M\) denotes the total number of measurements. However, our approach can readily be generalized (see [55]).
_Trajectory thermodynamics._ The evolution of the quantum state of the system and the reservoir can be unravelled into the trajectories \(\Gamma=\{Y,\gamma\}\) containing the set of the measurement outcomes \(Y\) and information about the system and the reservoir \(\gamma=\{a,E_{0},f,E_{\tau}\}\) at the beginning and the end of the protocol. Here \(E_{0}\) and \(E_{\tau}\) are eigenvalues of \(\hat{H}_{\rm R}\) and refer to the energies of the reservoir at the beginning and at the end of the trajectory. Similarly, \(a\) and \(f\) denote the system state at the beginning and end of the trajectory. While \(a\) is an eigenvalue of the initial state \(\hat{\rho}_{0}\), \(f\) is an eigenvalue of the _reference state_\(\hat{\rho}_{\rm r}=\sum_{f}p_{f}|f\rangle\langle f|\), which in general may be chosen arbitrarily. As we will see below, it specifies the initial quantum state of the system in the backward experiment. Here we restrict ourselves to the case where \(\hat{\rho}_{\rm r}\) is independent of \(Y\) (see [55] for a generalization).
The entropy production corresponding to the trajectory \(\Gamma\) is defined as [4; 21; 41; 56; 57]
\[\sigma[\Gamma]=-\ln p_{f}+\ln p_{a}+\beta Q[\Gamma], \tag{6}\]
where \(Q[\Gamma]=E_{\tau}-E_{0}\) is the change of the energy of the reservoir, which we identify as heat. We denote the probability associated to a trajectory by \(P[\Gamma]\). It can be computed by choosing \(|a\rangle\otimes|E_{0}\rangle\) as the initial state, computing the total density matrix conditioned on the measurement outcomes \(Y\), and projecting onto the state \(|f\rangle\otimes|E_{\tau}\rangle\)[55].
For the reference state, two choices are popular. First, one may choose the average final state \(\hat{\rho}_{\rm r}=\sum_{Y}P[Y]\hat{\rho}_{\tau}^{Y}\), such that the average of \(\sigma[\Gamma]\) reduces to the average entropy production in the measurement and feedback experiment, see for instance [41; 42]. Second, one may choose a thermal state \(\hat{\rho}_{\rm r}=e^{-\beta\hat{H}(\lambda_{\tau})}/{\rm Tr}\{e^{-\beta\hat{ H}(\lambda_{\tau})}\}\), such that, when the initial state is also thermal, the FT results in a generalization of the Jarzynski relation, see for instance [38; 41].
_FTs and the second law._ In order to derive the detailed FT [see Eq. (2)], we need to introduce the backward experiment. In the backward experiment, measurements with corresponding Kraus operators \(\hat{\Theta}M_{n}^{\dagger}(y)\hat{\Theta}^{-1}\), where \(\hat{\Theta}\) is the time-reversal operator [4], are performed at times \(\tau-t_{n}\). Note that these Kraus operators always define elements of POVMs and thus correspond to physical measurements [55]. In between measurements, the time-evolution is determined by the Hamiltonian \(\hat{\Theta}\hat{H}_{\rm tot}(\xi_{t})\hat{\Theta}^{-1}\). In contrast to the forward experiment, no feedback is performed in the backward experiment. Instead, a fixed protocol \(\{\xi_{t}\}\) is applied. The initial state for the backward experiment is given by the time-reversed reference state \(\hat{\Theta}\hat{\rho}_{\rm r}\otimes\hat{\tau}_{\rm R}\hat{\Theta}^{-1}\). We denote the probability for a trajectory \(\Gamma\) given a fixed protocol as \(P_{\rm tr}[\Gamma|\{\xi_{t}\}]\), where the subscript stands for "time-reversed". Before introducing the full backward experiment, we note that the following FT holds
\[e^{-\sigma[\Gamma]}=\frac{P_{\rm tr}[\bar{\Gamma}|\{\lambda_{\tau-t}^{Y}\}]}{P[\Gamma]}, \tag{7}\]
where the overbar signifies time-reversal, i.e., \(\bar{\Gamma}=\{\bar{Y},\bar{\gamma}\}\), with \(\bar{\gamma}=\{f,E_{\tau},a,E_{0}\}\) and \(\bar{Y}=\{y_{M},...,y_{1}\}\). This is in fact the standard FT [4; 21; 41; 56], since \(P[\Gamma]\) also
provides the probability for the trajectory \(\Gamma\) in an experiment where the protocol \(\{\lambda_{t}^{Y}\}\) is determined in advance rather than by feedback. In addition, it is known that adding measurements (without feedback) does not alter the FT [58; 59; 60].
Similar to the classical case in Ref. [34], the full backward experiment is then described by the distribution
\[P_{\rm B}[\bar{\Gamma}]=\frac{P_{\rm tr}[\bar{\Gamma}|\{\lambda_{\tau-t}^{Y}\}]P[Y]}{P_{\rm tr}[\bar{Y}|\{\lambda_{\tau-t}^{Y}\}]}, \tag{8}\]
where \(P[Y]=\sum_{\gamma}P[\Gamma]\) is the probability that the measurement outcomes \(Y\) are obtained in the forward experiment and \(P_{\rm tr}[\bar{Y}|\{\lambda_{\tau-t}^{Y}\}]=\sum_{\gamma}P_{\rm tr}[\bar{\Gamma}|\{\lambda_{\tau-t}^{Y}\}]\) is the probability of obtaining \(\bar{Y}\) in the time-reversed scenario. The backward experiment determined by Eq. (8) has a clear experimental interpretation: 1. \(Y\) is sampled from the distribution \(P[Y]\). 2. The protocol \(\{\lambda_{\tau-t}^{Y}\}\) is implemented, together with the time-reversed measurements. 3. A postselection is performed: if the measurement outcomes coincide with \(\bar{Y}\), the experiment is a success, otherwise, the data is discarded and the experiment repeated starting from step 2.
We then find the FTs given in Eqs. (2) and (3), our first main result, by noting that (see [55] for a derivation)
\[e^{-\sigma_{\rm cg}[Y]}=\frac{P_{\rm tr}[\bar{Y}|\{\lambda_{\tau-t}^{Y}\}]}{P[Y]}, \tag{9}\]
where \(\sigma_{\rm cg}[Y]\) denotes the entropy production, coarse-grained over \(\gamma\), the experimentally inaccessible part of the trajectory [30; 34; 51]. By comparing the last equation with Eq. (7), we may interpret \(\sigma_{\rm cg}[Y]\) as the entropy production inferable from the measurement outcomes alone. Using Jensen's inequality \(f(\langle X\rangle)\leq\langle f(X)\rangle\) for a convex function \(f(X)\), we obtain the second law of information thermodynamics given in Eq. (4), which constitutes our second main result.
_Lindblad master equation._ Since open quantum systems are very often modeled using Lindblad master equations, we now adapt the results of the previous section to scenarios where the dynamics of the reduced density matrix in between the measurements (\(t_{n}<t<t_{n+1}\)) is described by [61; 62; 63; 64] (we set \(\hbar=1\) hereafter)
\[\frac{d}{dt}\hat{\rho}_{t}^{Y_{n}}=-i[\hat{H}(\lambda_{t}^{Y_{n}}),\hat{\rho} _{t}^{Y_{n}}]+\sum_{j}\mathcal{D}[\hat{L}_{j}(\lambda_{t}^{Y_{n}})]\hat{\rho} _{t}^{Y_{n}} \tag{10}\]
where \(\mathcal{D}[\hat{O}]\hat{\rho}=\hat{O}\hat{\rho}\hat{O}^{\dagger}-\frac{1}{2}\{ \hat{\rho},\hat{O}^{\dagger}\hat{O}\}\). The POVM measurements update the density matrix \(\hat{\rho}_{t_{n}}^{Y_{n}}\) according to the rule presented in Eq. (5). The jump operators obey a local detailed balance, i.e., for each \(j\) there is a \(\tilde{j}\) such that \(\hat{L}_{\tilde{j}}=\hat{L}_{j}^{\dagger}e^{-\beta q_{j}/2}\), with \(q_{j}=-q_{\tilde{j}}\).
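A minimal numerical sketch of Eq. (10) is given below (a single explicit Euler step, adequate for illustration only); it uses thermal jump operators obeying the local detailed balance stated above and checks that the trace is preserved:

```python
import numpy as np

def dissipator(L, rho):
    """Lindblad dissipator D[L]rho = L rho L^dag - (1/2){rho, L^dag L}."""
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (rho @ LdL + LdL @ rho)

def lindblad_euler_step(rho, H, jump_ops, dt):
    """One explicit Euler step of the Lindblad master equation."""
    drho = -1j * (H @ rho - rho @ H)
    for L in jump_ops:
        drho = drho + dissipator(L, rho)
    return rho + dt * drho

# Qubit coupled to a thermal reservoir, with L_+ = L_-^dag * exp(-beta*omega/2).
omega, beta, kappa, dt = 1.0, 1.0, 0.5, 1e-3
H = np.diag([0.0, omega])                  # |0> ground, |1> excited (up to a constant shift)
sp = np.array([[0.0, 0.0], [1.0, 0.0]])    # raising operator |1><0|
L_minus = np.sqrt(kappa) * sp.conj().T
L_plus = np.sqrt(kappa) * np.exp(-beta * omega / 2) * sp

rho = np.array([[0.8, 0.1], [0.1, 0.2]], dtype=complex)
rho = lindblad_euler_step(rho, H, [L_minus, L_plus], dt)
print(np.trace(rho).real)                  # trace is preserved: ~ 1.0
```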
In this scenario, a trajectory is defined as \(\Gamma=\{Y,\gamma\}\), where \(\gamma=\{a,(s_{1},j_{1}),...,(s_{k},j_{k}),...,(s_{K},j_{K}),f\}\). Here \((s_{k},j_{k})\) implies that at time \(s_{k}\), a jump mediated by the operator \(\hat{L}_{j_{k}}\) occurred. The remaining symbols retain their meaning. On the trajectory level, the heat released to the reservoir reads \(Q[\Gamma]=\sum_{k}q_{j_{k}}(\lambda_{s_{k}}^{Y})\). The expression for \(\sigma[\Gamma]\) given in Eq. (6), as well as the associated ensemble averages continue to hold [65; 66; 21; 38; 42; 21].
The backward experiment and the probability \(P_{\rm tr}[\Gamma|\{\xi_{t}\}]\) are defined analogously to the unitary case (see [55] for details). In the quantum jump unravelling, the reversed version of \(\Gamma\) is given by \(\bar{\Gamma}=\{\bar{Y},\bar{\gamma}\}\), where \(\bar{\gamma}=\{f,(\tau-s_{K},\tilde{j}_{K}),...,(\tau-s_{k},\tilde{j}_{k}),...,(\tau-s_{1},\tilde{j}_{1}),a\}\), which results in Eq. (7). By defining the full backward experiment with Eq. (8), we recover our main results, Eqs. (2), (3), and (4), for systems described by Lindblad master equations.
_Qubit model._ We illustrate our results by investigating a feedback controlled qubit with a Hamiltonian \(\hat{H}=\frac{\omega}{2}\hat{\sigma}_{z}\), where \(\hat{\sigma}_{x,y,z}\) are Pauli matrices in the basis \(|0\rangle\) (ground) and \(|1\rangle\) (excited). The system is initially in a thermal state with inverse temperature \(\beta\), \(\hat{\rho}_{0}=e^{-\beta\hat{H}}/{\rm Tr}\{e^{-\beta\hat{H}}\}\). We consider multiple feedback
Figure 1: The second law for the qubit model. Entropy production \(\langle\sigma\rangle\) (black), coarse-grained \(\langle\sigma_{\rm cg}\rangle_{\rm c/q}\) (blue/red), (quantum-classical) mutual information [panel a)] or transfer entropy [panels b) and c)] \(\langle I_{\rm m/te}\rangle_{\rm c/q}\) (blue-dashed/red-dashed), and extracted work \(\langle W\rangle_{\rm q}\) (green) as a function of the measurement error \(\epsilon\). The subscripts c and q correspond to the classical and quantum protocols, respectively. The entropy production \(\langle\sigma\rangle\) is equal for both cases but only for the classical protocol can it be related to the extracted work \(\beta\langle W\rangle_{\rm c}=-\langle\sigma\rangle\). a) Single measurement. b) Two measurements with \(\kappa\Delta t=1\). c) Two measurements with \(\kappa\Delta t=0.2\). In all panels, \(\beta\omega=1\).
protocols that aim at extracting work from the qubit. First, we consider a single measurement with Kraus operators that commute with the Hamiltonian: \(\hat{M}_{0}=\sqrt{1-\epsilon}|0\rangle\langle 0|+\sqrt{\epsilon}|1\rangle\langle 1|\) and \(\hat{M}_{1}=\sqrt{1-\epsilon}|1\rangle\langle 1|+\sqrt{\epsilon}|0\rangle \langle 0|\). If the measurement outcome is \(0\), the system is assumed to be in the ground state and no feedback is performed. If the measurement outcome is equal to \(1\), the system is assumed to be in the excited state and the unitary transformation \(\hat{U}_{1}=|0\rangle\langle 1|+|1\rangle\langle 0|\) is applied. This transformation can extract work by transforming the excited state to the ground state. In this protocol, heat from the bath is turned into work using information. Since only the populations of the qubit are relevant, we label this protocol as _classical_. Considering as a reference state the thermal state, the first law of thermodynamics implies that the entropy production is determined by the extracted work \(\langle\sigma\rangle=-\beta\langle W\rangle_{\rm c}\), where the subscript stands for classical. Equation (4) then provides a bound on the extracted work. This is illustrated in Fig. 1 a), where we compare our bound to the mutual information \(\langle I_{\rm mi}\rangle_{\rm c}\). Notably, for small measurement errors, our bound is tighter.
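Because the measurement operators commute with the Hamiltonian and the feedback simply swaps the energy eigenstates, only the populations in the energy basis matter, and the protocol can be simulated with elementary probability theory. The sketch below (our own illustration, with the extracted work identified as the energy released by the qubit during the feedback flip) reproduces the bound \(\beta\langle W\rangle_{\rm c}\leq\langle I_{\rm mi}\rangle_{\rm c}\) implied by Eq. (1):

```python
import numpy as np

beta, omega, eps = 1.0, 1.0, 0.05
E = np.array([0.0, omega])                     # energies of |0> (ground) and |1> (excited)
p = np.exp(-beta * E); p /= p.sum()            # initial thermal populations

# Outcome probabilities p(y|a) of the error-prone measurement M_0, M_1.
p_y_given_a = np.array([[1 - eps, eps],        # a = 0
                        [eps, 1 - eps]])       # a = 1
p_ay = p[:, None] * p_y_given_a                # joint distribution p(a, y)
p_y = p_ay.sum(axis=0)

# Feedback: flip the qubit when y = 1, do nothing when y = 0; the extracted
# work is the average energy released by the qubit during the flip.
p_a_given_y1 = p_ay[:, 1] / p_y[1]
W = p_y[1] * (p_a_given_y1 @ E - p_a_given_y1 @ E[::-1])

# Mutual information between the premeasurement state and the outcome (in nats).
I_mi = sum(p_ay[a, y] * np.log(p_ay[a, y] / (p[a] * p_y[y]))
           for a in range(2) for y in range(2))

print(f"beta*<W> = {beta * W:.3f} <= <I_mi> = {I_mi:.3f}")
```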
In addition to the classical protocol, we consider a _quantum_ protocol where the measurement operators do not commute with the Hamiltonian: \(\hat{M}_{+}=\sqrt{1-\epsilon}|+\rangle\langle+|+\sqrt{\epsilon}|-\rangle\langle-|\) and \(\hat{M}_{-}=\sqrt{1-\epsilon}|-\rangle\langle-|+\sqrt{\epsilon}|+\rangle\langle+|\), where \(|\pm\rangle=(|0\rangle+|1\rangle)/\sqrt{2}\). The corresponding unitary transformations employed to extract work are \(\hat{U}_{+}=|0\rangle\langle+|+|1\rangle\langle-|\) and \(\hat{U}_{-}=|0\rangle\langle-|+|1\rangle\langle+|\). In this protocol, the measurement provides the energy which is then extracted in the form of work, in analogy to Ref. [67; 68]. This implies that the extracted work can no longer be related to the heat exchanged with the bath and generally differs from the entropy production. As illustrated in Fig. 1 a), more work can be extracted with the measurement providing an additional source of energy. Due to the similarity of the classical and the quantum protocol, we find that the entropy production is equal in the two cases. However, in the quantum protocol our bound becomes slightly less tight because the measurement outcome provides less information on heat, which fully determines the entropy production. The bound provided by the quantum-classical mutual information \(\langle I_{\rm mi}\rangle_{\rm q}\), which generalizes the mutual information to the quantum regime [28], hardly changes compared to the classical protocol.
Similar observations can be made when measurement and feedback is applied twice, with a time delay \(\Delta t\) in between the measurements. During this intermediate stage the interaction with the reservoir is described with the Lindblad jump operators \(\hat{L}_{-}=\sqrt{\kappa}\hat{\sigma}_{-}\) and \(\hat{L}_{+}=\sqrt{\kappa}e^{-\beta\omega/2}\hat{\sigma}_{+}\), where \(\hat{\sigma}_{\pm}=\hat{\sigma}_{x}\pm i\hat{\sigma}_{y}\). The second law is shown in Fig. 1 b) and c) for \(\kappa\Delta t=1\) and \(\kappa\Delta t=0.2\), respectively. In this case, \(\langle I_{\rm te}\rangle_{\rm c}\) denotes the transfer entropy, and \(\langle I_{\rm te}\rangle_{\rm q}\) denotes the quantum classical transfer entropy [42]. For large \(\Delta t\), this scenario corresponds to performing the single-measurement scenario discussed above twice. For small \(\Delta t\), the classical protocol does not benefit from the second measurement. In contrast, the extracted work in the quantum protocol can be increased already for small \(\Delta t\) since the energy source is the measurement rather than the environment.
_Continuous feedback control._ Finally, we illustrate our FT and the second law in a continuous feedback protocol. We consider a qubit with a Hamiltonian \(\hat{H}(t)=\frac{\omega}{2}\hat{\sigma}_{z}+\chi\hat{\sigma}_{x}\cos\left( \Omega t\right)\). The effect of the environment is characterized by the same jump operators as above, \(\hat{L}_{\pm}\). The continuous quantum measurement is realised with the measurement operator \(\hat{M}_{1}\) from the classical scenario at a rate \(\kappa_{\rm m}\). In each infinitesimal time-step of duration \(\delta t\), the measurement operator is applied with probability \(\kappa_{\rm m}\delta t\). The time-evolution will thus be interrupted by measurements of the excited state occurring with rate \(\kappa_{\rm m}\) and error probability \(\epsilon\). Upon the detection of the
|
2303.08407
|
Entanglement quantification via nonlocality
|
Nonlocality, manifested by the violation of Bell inequalities, indicates
quantum entanglement in the underlying system. A natural question that arises
is how much entanglement is required for a given nonlocal behavior. In this
paper, we explore this question by quantifying entanglement using a family of
generalized Clauser-Horne-Shimony-Holt-type Bell inequalities. We focus on two
entanglement measures, entanglement of formation and one-way distillable
entanglement, which are related to entanglement dilution and distillation,
respectively. We also study the interplay among nonlocality, entanglement, and
measurement incompatibility. The result reveals that the relationship between
entanglement and measurement incompatibility is not simply a trade-off under a
fixed nonlocal behavior. In addition, we consider two realistic scenarios
non-maximally entangled states and Werner states and apply our entanglement
quantification results. By optimizing the Bell inequality for entanglement
estimation, we derive analytical results for the entanglement of formation.
|
Yuwei Zhu, Xingjian Zhang, Xiongfeng Ma
|
2023-03-15T07:16:46Z
|
http://arxiv.org/abs/2303.08407v1
|
# Entanglement quantification via nonlocality
###### Abstract
Nonlocality, manifested by the violation of Bell inequalities, indicates quantum entanglement in the underlying system. A natural question that arises is how much entanglement is required for a given nonlocal behavior. In this paper, we explore this question by quantifying entanglement using a family of generalized Clauser-Horne-Shimony-Holt-type Bell inequalities. We focus on two entanglement measures, entanglement of formation and one-way distillable entanglement, which are related to entanglement dilution and distillation, respectively. We also study the interplay among nonlocality, entanglement, and measurement incompatibility. The result reveals that the relationship between entanglement and measurement incompatibility is not simply a trade-off under a fixed nonlocal behavior. In addition, we consider two realistic scenarios non-maximally entangled states and Werner states and apply our entanglement quantification results. By optimizing the Bell inequality for entanglement estimation, we derive analytical results for the entanglement of formation.
## I Introduction
In the early development of quantum mechanics, Einstein, Podolsky and Rosen noticed that the new physical theory leads to a "spooky action" between separated observers that is beyond any possible classical correlation [1]. Later, Bell formalized such a quantum correlation via an experimentally feasible test that is now named after him [2]. In one of the simplest settings, the Clauser-Horne-Shimony-Holt (CHSH) Bell test [3], two distant experimentalists, Alice and Bob, each hold a measurement device and share a pair of particles. While they may not know their devices and physical system _a priori_, they can each take random measurements and later evaluate the Bell expression,
\[S=\sum_{a,b,x,y}ab(-1)^{xy}p(a,b|x,y)=\sum_{x,y}(-1)^{xy}\mathbb{E}(ab|x,y), \tag{1}\]
where \(x,y\in\{0,1\}\) represent their random choices of measurement settings and \(a,b\in\{+1,-1\}\) denote their measurement results, as shown in Fig. 1. If Alice and Bob observe a value of \(S>2\), then they cannot explain the observed correlation using any physical theory that follows local realism. This phenomenon is called Bell nonlocality. To demonstrate such a nonlocal behavior, the physical systems must exhibit a non-classical feature, wherein quantum theory, entanglement is such an ingredient [4]. The CHSH expression has a maximal value of \(2\sqrt{2}\), which requires the maximally entangled state in a pair of qubits, \(|\Phi^{+}\rangle=(|00\rangle+|11\rangle)/\sqrt{2}\)[5]. We also term this state the Bell state.
Entanglement characterizes a joint physical state among multiple parties that cannot be generated through local operations and classical communication (LOCC) [6; 7]. Beyond its role in understanding quantum foundations, entanglement is a useful resource in a variety of quantum information processing tasks, including quantum communication [8], quantum computation [9], and quantum metrology [10]. With a resource-theoretic perspective, a large class of information processing operations can be interpreted as entanglement conversion processes under LOCC, wherein one may quantify the participation of entanglement using appropriate measures [11; 12; 13; 14]. Therefore, the fundamental
Figure 1: A diagram of the CHSH Bell test. Two space-like separated users, Alice and Bob, share an unknown quantum state and own untrusted devices. In each round of the CHSH Bell test, Alice applies the measurement determined by the random input \(x\in\{0,1\}\) and outputs her measurement result, \(a\in\{\pm 1\}\). The measurement process is similar on Bob’s side, with input \(y\) and output \(b\). As the round of tests accumulated, the CHSH Bell value \(S\) in Eq. (1) can be evaluated.
question is to detect and quantify entanglement in a system. While state tomography fully reconstructs a quantum state and hence its entanglement properties [15; 16; 17], detection loss and environmental noise are inevitable in practice, so the tomography results may suffer from precision problems. Notably, the measurement devices may not even be trusted in extreme adversarial scenarios.
Fortunately, quantum nonlocality provides a way to bypass the problem. Note that in the Bell test, one does not need to characterize the quantum devices _a priori_, and thus, the indication of entanglement from Bell nonlocality is a device-independent conclusion [18; 19]. This observation leads to the question of what the minimum amount of entanglement is necessary for a given nonlocal behavior. In other words, Bell tests can serve as a device-independent entanglement quantification tool. In the literature, there are already endeavors into the question [20; 21; 22]. The quantitative results provide us with tools for devising novel quantum information processing tasks. A notable investigation is the analysis of device-independent quantum key distribution [18; 19; 23]. With the link among nonlocality, entanglement, and secure communication, we can quantify the key privacy solely from Bell nonlocality [24; 25; 26].
Despite the physical intuition for entanglement quantification via Bell nonlocality, the quantitative relation between entanglement and nonlocality can be subtle [27]. Above all, while the notion of Bell nonlocality arises from the observation of correlations, entanglement is defined by the opposite of a restricted state preparation process. In fact, the Bell nonlocality is a stronger notion than entanglement. Though a nonlocal behavior necessarily requires the presence of entanglement, not all entangled states can unveil a nonlocal correlation [28]. The conceptual difference even leads to some counter-intuitive results, where a series of works aimed at characterizing their exact relation, such as the discussions on the Peres conjecture -- whether Bell nonlocality is equivalent to distillability of entanglement [29; 30].
In addition, there are different entanglement measures that enjoy distinct operational meanings. As entanglement is not a reversible theory, these measures are generally not identical with each other [31]. An interesting yet vague question is whether different entanglement measures from nonlocal behavior can be the same.
In this work, we systematically study entanglement quantification via a family of generalized CHSH-type Bell inequalities. We treat the measurement devices as black boxes. Different implementations can lead to the same observed nonlocal behavior. As depicted in Fig. 2, a nonlocal behavior necessarily needs both entanglement and incompatible local measurements. In other words, a system with separable states or compatible local measurements definitely fails to observe nonlocality. One may expect a trade-off relationship between state entanglement and measurement incompatibility for a given nonlocal behavior. Hence, we explore the interplay among entanglement, nonlocality, and measurement incompatibility with different entanglement measures.
The rest of the paper is organized as follows. In Sec. II, we review the necessary concepts in nonlocality and entanglement theories. In Sec. III, we present the general framework for estimating entanglement in the underlying system using the set of generalized CHSH-type Bell inequalities. Then, we consider two special entanglement measures, the entanglement of formation and the one-way distillable entanglement. In Sec. IV, we utilize the entanglement quantification results and investigate the interplay among nonlocality, entanglement, and measurement incompatibility. From a practical perspective, in Sec. V, we also simulate statistics that arise from pure entangled states and Werner states and examine the performance of our results.
Figure 2: The interplay among nonlocality, entanglement, and measurement incompatibility. A nonlocal behavior necessarily indicates both entanglement and incompatible local measurements. A system with separable states or compatible local measurements fails in exhibiting nonlocality. Intuitively, under a given nonlocal behavior, one may expect a trade-off relationship between entanglement and measurement incompatibility. In this work, we start from the entanglement quantification via nonlocality, from which we realize that the relation between entanglement and measurement incompatibility is subtler than a simple trade-off. We study the interplay among nonlocality, entanglement, and measurement incompatibility in detail.
## II Preliminary
### General CHSH-type Bell tests
In this work, we consider the family of generalized CHSH-type Bell tests. Under quantum mechanics, the Bell expression is given by [27]
\[\begin{split} S&=\text{Tr}\Big{[}\rho_{AB}\left( \alpha\hat{A}_{0}\otimes\hat{B}_{0}+\alpha\hat{A}_{0}\otimes\hat{B}_{1}+\hat{A }_{1}\otimes\hat{B}_{0}-\hat{A}_{1}\otimes\hat{B}_{1}\right)\Big{]}\\ &=\text{Tr}\Big{(}\rho_{AB}\hat{S}_{\alpha}\Big{)},\end{split} \tag{2}\]
where \(\rho_{AB}\) is the underlying bipartite quantum state, and \(\hat{A}_{x}\) and \(\hat{B}_{y}\) are the observables measured by Alice and Bob according to their measurement choices \(x,y\in\{0,1\}\), respectively. The family of Bell expressions is parameterized by \(\alpha\geq 1\), which tilts the contributions of \(\hat{A}_{0}\otimes(\hat{B}_{0}+\hat{B}_{1})\) and \(\hat{A}_{1}\otimes(\hat{B}_{0}-\hat{B}_{1})\) to the Bell value. When \(\alpha=1\), Eq. (2) degenerates to the original CHSH expression defined by Eq. (1) [3]. For simplicity, we call the expression with a fixed parameter \(\alpha\) the \(\alpha\)-CHSH expression and \(\hat{S}_{\alpha}\) the \(\alpha\)-CHSH operator. If the underlying quantum state is separable or the local measurement observables are compatible, the \(\alpha\)-CHSH expression is upper bounded by \(S(\alpha)\leq 2\alpha\), which admits a local hidden variable model reproducing the correlation. Observation of a larger value, termed Bell inequality violation, necessarily implies the existence of entanglement and measurement incompatibility between the local measurement observables. In quantum theory, the largest value of the \(\alpha\)-CHSH expression is \(2\sqrt{\alpha^{2}+1}\).
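As a numerical illustration (our own sketch, not part of the later derivations), the quantum bound can be verified by evaluating the \(\alpha\)-CHSH operator on the Bell state with suitably rotated observables; the angle used below anticipates the optimal measurements derived later in Eq. (12):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def chsh_alpha_operator(alpha, A0, A1, B0, B1):
    """The alpha-CHSH Bell operator of Eq. (2)."""
    return (alpha * np.kron(A0, B0) + alpha * np.kron(A0, B1)
            + np.kron(A1, B0) - np.kron(A1, B1))

alpha = 1.5
theta = np.arctan(1 / alpha)          # measurement angle that is optimal for the Bell state
B0 = np.cos(theta) * sz + np.sin(theta) * sx
B1 = np.cos(theta) * sz - np.sin(theta) * sx
S_op = chsh_alpha_operator(alpha, sz, sx, B0, B1)

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(phi_plus, phi_plus.conj())
print(np.real(np.trace(rho @ S_op)), 2 * np.sqrt(alpha**2 + 1))   # both ~ 3.606
```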
In the study of Bell nonlocality, we do not put prior trust in the underlying physical systems. In particular, we do not assume a bounded system dimension. Nevertheless, the simplicity of CHSH-type Bell expressions allows us to apply Jordan's lemma to effectively reduce the system to a mixture of qubit pairs [19]. We shall explain how to apply this result when we come to the part of main results.
### Entanglement measures
As our discussion is effectively restricted to a pair of qubits, we shall focus on a few entanglement measures that enjoy a closed form in such a case. We consider \(\rho_{AB}\) as a pair of qubits here. The first measure we consider is the entanglement of formation, \(E_{\text{F}}(\rho_{AB})\)[14]. Operationally, this measure provides a computable bound on the entanglement cost, which quantifies the optimal state conversion rate of diluting maximally entangled states into the desired states of \(\rho_{AB}\) under LOCC [14]. In the case of two-qubit states, we can calculate the entanglement of formation with the following expression,
\[E_{\text{F}}(\rho_{AB})=h\left(\frac{1+\sqrt{1-C^{2}(\rho_{AB})}}{2}\right), \tag{3}\]
where \(h(p)=-p\log p-(1-p)\log(1-p)\) is the binary entropy function, and \(C(\rho_{AB})\) is the concurrence of \(\rho_{AB}\), a useful entanglement monotone [32; 33]. Given a general two-qubit quantum state, \(\rho_{AB}\), its concurrence is analytically given by
\[C(\rho_{AB})=\max\{0,\lambda_{1}-\lambda_{2}-\lambda_{3}-\lambda_{4}\}, \tag{4}\]
where values \(\lambda_{i}\) are the decreasingly ordered square roots of the eigenvalues of the matrix
\[X(\rho_{AB})=\sqrt{\rho_{AB}}(\sigma_{y}\otimes\sigma_{y})\rho_{AB}^{*}( \sigma_{y}\otimes\sigma_{y})\sqrt{\rho_{AB}}. \tag{5}\]
Here, the density matrix of \(\rho_{AB}\) is written on the computational basis of \(\{\ket{00},\ket{01},\ket{10},\ket{11}\}\), where \(\ket{0}\) and \(\ket{1}\) are the eigenstates of \(\sigma_{z}\), and \(\rho_{AB}^{*}\) is the complex conjugate of \(\rho_{AB}\).
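For reference, Eqs. (3)-(5) can be evaluated numerically as in the sketch below (our own helper functions; logarithms are taken in base 2, so the entanglement of formation is expressed in bits):

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sy, sy)

def _sqrtm_psd(m):
    """Matrix square root of a positive semidefinite matrix."""
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def concurrence(rho):
    """Concurrence of a two-qubit state, Eqs. (4) and (5)."""
    sq = _sqrtm_psd(rho)
    X = sq @ YY @ rho.conj() @ YY @ sq
    lam = np.sort(np.sqrt(np.clip(np.linalg.eigvalsh(X), 0, None)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def entanglement_of_formation(rho):
    """Entanglement of formation from the concurrence, Eq. (3)."""
    C = concurrence(rho)
    if C <= 0.0:
        return 0.0
    p = (1 + np.sqrt(max(0.0, 1 - C**2))) / 2
    if p >= 1.0:
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

# The Bell state has C = 1 and E_F = 1 bit.
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_bell = np.outer(phi_plus, phi_plus)
print(concurrence(rho_bell), entanglement_of_formation(rho_bell))
```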
As opposed to the entanglement dilution process, the entanglement distillation process defines another entanglement measure, the distillable entanglement [14]. In this process, given sufficiently many copies of a given state, \(\rho_{AB}\), the distillable entanglement is the maximal state conversion rate of distilling maximally entangled states under LOCC. While the calculation of this measure for a general state remains an open question, a well-studied lower bound is the one-way distillable entanglement, where classical communication is restricted to a one-way procedure between the two users. This measure can be calculated by the negative conditional entropy,
\[E_{D}^{*}(\rho_{AB})=-H(A|B)_{\rho}, \tag{6}\]
where \(H(A|B)_{\rho}=H(\rho_{AB})-H(\rho_{B})\). When the underlying state is clear from the context, we shall omit the subscript for simplicity.
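A corresponding sketch for Eq. (6), again in bits and with a simple partial trace over Alice's qubit, reads:

```python
import numpy as np

def von_neumann_entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def one_way_distillable(rho_ab):
    """One-way distillable entanglement bound -H(A|B) = H(rho_B) - H(rho_AB), Eq. (6)."""
    t = rho_ab.reshape(2, 2, 2, 2)
    rho_b = np.trace(t, axis1=0, axis2=2)    # partial trace over Alice's qubit
    return von_neumann_entropy(rho_b) - von_neumann_entropy(rho_ab)

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(one_way_distillable(np.outer(phi_plus, phi_plus)))   # Bell state: ~ +1 bit
print(one_way_distillable(np.eye(4) / 4))                  # maximally mixed: -1 bit
```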
We remark that the above results of entanglement measures are defined via dilution and distillation processes with infinitely many independent and identical (i.i.d.) copies of the quantum state under study or, in the i.i.d. asymptotic limit. In this work, we shall focus on the entanglement quantification via Bell nonlocality in this limit. That is, we treat both the amount of entanglement and the Bell value as expected values.
## III Device-Independent Entanglement Quantification
### Entanglement quantification via optimization
In this section, we formulate the problem of entanglement quantification via Bell nonlocality. Using the nomenclature in quantum cryptography, we also term it device-independent entanglement quantification. After specifying a particular entanglement measure, \(E\), we ask the minimal amount of entanglement in the initial quantum system that supports the observed Bell expression value,
\[E_{\text{est}} =\min_{\rho_{AB},\hat{A}_{0},\hat{A}_{1},\hat{B}_{0},\hat{B}_{1}} E(\rho_{AB}), \tag{7}\] \[\text{s.t.}\quad S =\text{Tr}\left(\rho_{AB}\hat{S}_{\alpha}\right),\] \[\rho_{AB} \geq 0,\] \[\text{Tr}(\rho_{AB}) =1.\]
Here, we denote the estimated entanglement measure of \(E\) from Bell nonlocality as \(E_{\text{est}}\). As clarified above, \(\hat{S}_{\alpha}\) is the \(\alpha\)-CHSH operator, an operator function of the measurement observables.
The optimization problem is difficult to solve directly. First, it involves multiple variables, including the underlying state and the measurement observables. Second, we do not have empirical knowledge of the system dimension: in Eq. (7), the dimension of \(\rho_{AB}\) is unknown, so the optimization cannot be solved as stated. To get around these issues, we decompose its solution into several steps, as shown in Fig. 3.
Figure 3: Steps for estimating entanglement via CHSH-type Bell inequalities. Step 1: The original entanglement estimation problem is formulated as Eq. (7). The only constraint is the observed Bell value, \(S\). Step 2: Using a duality argument, we consider the optimization problem in Eq. (8), which can be interpreted as maximizing the Bell value for a given quantum state, \(\rho_{AB}\). The arguments in the optimal solution are regarded as the “optimal measurements” that lead to the maximal Bell value for the state. Step 3: By applying Jordan’s lemma, we can view the measurement process as resulting from a convex combination of pairs of qubits. Step 4: We can further restrict the qubit pairs to Bell-diagonal states in solving the optimization problem. We show that in the CHSH Bell test, any two-qubit state can be transformed to a Bell-diagonal state through LOCC without changing the \(\alpha\)-CHSH Bell value.
In the first step, we use duality arguments and transform Eq. (7). Note that the objective function in Eq. (7), \(E_{\mathrm{est}}:=f(S)\), is continuous and monotonically increasing in its argument \(S\), and hence has a well-defined inverse function. Consider the following problem,
\[S^{\star} =\max_{\rho_{AB},\hat{A}_{0},\hat{A}_{1},\hat{B}_{0},\hat{B}_{1}} \operatorname{Tr}\left(\rho_{AB}\hat{S}_{\alpha}\right), \tag{8}\] \[\mathrm{s.t.}\quad E(\rho_{AB}) =E_{\mathrm{est}},\] \[\rho_{AB} \geq 0,\] \[\operatorname{Tr}(\rho_{AB}) =1,\]
where the objective function in Eq. (8) comes from the inverse function of the original optimization problem, \(S^{\star}:=f^{-1}(E_{\mathrm{est}})\). In solving Eq. (8), as the objective function is bilinear in \(\rho_{AB}\) and Bell operator \(\hat{S}_{\alpha}\), the optimization equals the maximization over the two arguments individually, \(S^{\star}=\max_{\rho_{AB}}\max_{\hat{A}_{0},\hat{A}_{1},\hat{B}_{0},\hat{B}_{1 }}\operatorname{Tr}\left(\rho_{AB}\hat{S}_{\alpha}\right)\). For the inner optimization, denote \(S^{\star}(\rho_{AB})=\max_{\hat{A}_{0},\hat{A}_{1},\hat{B}_{0},\hat{B}_{1}} \operatorname{Tr}\left(\rho_{AB}\hat{S}_{\alpha}\right)\), which can be seen as the maximal \(\alpha\)-CHSH Bell value that can be obtained with \(\rho_{AB}\). Then, we may equivalently solve Eq. (7) with the following optimization,
\[E_{\mathrm{est}} =\min_{\rho_{AB}}E(\rho_{AB}), \tag{9}\] \[\mathrm{s.t.}\quad S^{\star}(\rho_{AB}) =S,\] \[\rho_{AB} \geq 0,\] \[\operatorname{Tr}(\rho_{AB}) =1.\]
For simplicity, we call the measurements that yield the maximal \(\alpha\)-CHSH Bell value for \(\rho_{AB}\) the "optimal measurements".
**Definition 1**.: _The optimal measurements of state \(\rho_{AB}\) are the observables that maximize the \(\alpha\)-CHSH expression in Eq. (2) for \(\rho_{AB}\), i.e., \(\operatorname{argmax}_{\hat{A}_{0},\hat{A}_{1},\hat{B}_{0},\hat{B}_{1}} \operatorname{Tr}\left(\rho_{AB}\hat{S}_{\alpha}\right)\)._
To bypass the dimension problem, we utilize Jordan's lemma. We leave the detailed analysis to Appendix A.1. Here, we briefly state the implication of Jordan's lemma for our work. In the CHSH-type Bell test, we can effectively view the measurement process as first performing local operations to transform the underlying quantum state into an ensemble of qubit pairs, \(\{p^{\mu},\rho_{AB}^{\mu}\}\), with \(p^{\mu}\) a probability distribution, and then measuring each pair of qubits with associated qubit observables. The measurement on each pair of qubits corresponds to a Bell value, \(S^{\mu}\), and the observed Bell value is the average of these values, \(S=\sum_{\mu}p^{\mu}S^{\mu}\). Guaranteed by the convexity property of an entanglement measure, we can lower-bound the amount of entanglement in the initial system by studying the average amount of entanglement in the ensemble of qubit pairs, \(\sum_{\mu}p^{\mu}E(\rho_{AB}^{\mu})\). In this way, we can essentially focus on quantifying entanglement in a pair of qubits.
By further utilizing the non-increasing property under LOCC of an entanglement measure and choosing proper local computational bases, we may further restrict the pair of qubits to a diagonal state on the Bell-state basis,
\[\rho_{\lambda}=\lambda_{1}\left|\Phi^{+}\right\rangle\!\left\langle\Phi^{+}\right|+\lambda_{2}\left|\Phi^{-}\right\rangle\!\left\langle\Phi^{-}\right|+\lambda_{3}\left|\Psi^{+}\right\rangle\!\left\langle\Psi^{+}\right|+\lambda_{4}\left|\Psi^{-}\right\rangle\!\left\langle\Psi^{-}\right|, \tag{10}\]
with \(\left|\Phi^{\pm}\right\rangle=(\left|00\right\rangle\pm\left|11\right\rangle)/ \sqrt{2},\left|\Psi^{\pm}\right\rangle=(\left|01\right\rangle\pm\left|10 \right\rangle)/\sqrt{2}\). We term such a state a Bell-diagonal state with respect to the computational basis. Without loss of generality, we assume \(\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}\geq\lambda_{4}\), since we can relabel the eigenvalues corresponding to the Bell-basis states with local unitary operations.
**Lemma 1**.: _In a CHSH Bell test, under a fixed computational basis, a two-qubit state, \(\rho_{AB}\), can be transformed into a Bell-diagonal state, \(\rho_{\lambda}\), through LOCC, with the \(\alpha\)-CHSH Bell value unchanged._
The lemma indicates that an observed Bell value can always be interpreted as arising from a Bell-diagonal state. Furthermore, the operations in the lemma are restricted to LOCC and state mixing. Since these operations do not increase entanglement, we can hence restrict our analysis of lower-bounding entanglement to the set of Bell-diagonal states. The LOCC transformation in this result was first constructed in Ref. [34] (see Lemma 3 therein). Here, we verify the unchanged \(\alpha\)-CHSH Bell value through the LOCC transformation. We present proof of the lemma in Appendix A.2.
With the above simplifications, we have the following lemma in solving the problem in Eq. (9).
**Lemma 2**.: _The maximal value of the \(\alpha\)-CHSH expression in Eq. (2) for a Bell-diagonal state shown in Eq. (10), \(\rho_{\lambda}\), is given by_
\[S=2\sqrt{\alpha^{2}(\lambda_{1}+\lambda_{2}-\lambda_{3}-\lambda_{4})^{2}+( \lambda_{1}-\lambda_{2}+\lambda_{3}-\lambda_{4})^{2}}, \tag{11}\]
_where \(\lambda_{i}\) is the \(i\)-th largest eigenvalue of \(\rho_{\lambda}\)._
We give the proof of Lemma 2 in Appendix A.2. For two projective measurements, their incompatibility is defined as the largest inner product of their eigenvectors. For qubit observables, this definition is equivalent to the commutator. In proving Lemma 2, a notable issue is that the optimal measurements may not be the most incompatible measurements. Up to a minus sign before the observables, the optimal measurements of the Bell-diagonal state in Eq. (10) are as follows,
\[\begin{split}\hat{A}_{0}&=\sigma_{z},\\ \hat{A}_{1}&=\sigma_{x},\\ \hat{B}_{0}&=\cos\theta\sigma_{z}+\sin\theta\sigma_{ x},\\ \hat{B}_{1}&=\cos\theta\sigma_{z}-\sin\theta\sigma_{ x},\end{split} \tag{12}\]
where \(\theta\) fully determines the amount of incompatibility of the local observables, with \(\tan\theta=(\lambda_{1}-\lambda_{2}+\lambda_{3}-\lambda_{4})/[\alpha(\lambda_ {1}+\lambda_{2}-\lambda_{3}-\lambda_{4})]\) determined by the Bell-diagonal state and parameter \(\alpha\). While the observables on Alice's side are maximally incompatible with each other, the commutator of the observables on Bob's side is given by \([\hat{B}_{0},\hat{B}_{1}]=\sin 2\theta[\sigma_{x},\sigma_{z}]\). For example, when the considered state is the maximally entangled state with \(\lambda_{1}=1,\lambda_{2}=\lambda_{3}=\lambda_{4}=0\) and \(\alpha=1\), which corresponds to the original CHSH expression, the optimal measurements coincide with the most incompatible measurements. For more cases where \(\sin 2\theta\) is strictly smaller than \(1\), \(\hat{B}_{0}\) and \(\hat{B}_{1}\) are not maximally incompatible.
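As a concrete illustration of Lemma 2 and the optimal measurements in Eq. (12), the following minimal Python sketch (not part of the original derivation) draws a random Bell-diagonal state, builds the observables of Eq. (12), and checks that the resulting Bell value matches the closed form of Eq. (11); it assumes the \(\alpha\)-CHSH operator \(\hat{S}_{\alpha}=\alpha\hat{A}_{0}\otimes(\hat{B}_{0}+\hat{B}_{1})+\hat{A}_{1}\otimes(\hat{B}_{0}-\hat{B}_{1})\), as used in Appendix A.2.

```python
import numpy as np

# Sketch: verify Eq. (11) for a random Bell-diagonal state measured with Eq. (12).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Bell basis |Phi+>, |Phi->, |Psi+>, |Psi->
bell = [np.array(v, dtype=complex) / np.sqrt(2) for v in
        ([1, 0, 0, 1], [1, 0, 0, -1], [0, 1, 1, 0], [0, 1, -1, 0])]

alpha = 1.5
lam = np.sort(np.random.dirichlet(np.ones(4)))[::-1]      # lambda_1 >= ... >= lambda_4
rho = sum(l * np.outer(v, v.conj()) for l, v in zip(lam, bell))

a = lam[0] + lam[1] - lam[2] - lam[3]                      # <sigma_z sigma_z>
b = lam[0] - lam[1] + lam[2] - lam[3]                      # <sigma_x sigma_x>
theta = np.arctan2(b, alpha * a)                           # tan(theta) = b / (alpha * a)

A0, A1 = sz, sx
B0 = np.cos(theta) * sz + np.sin(theta) * sx
B1 = np.cos(theta) * sz - np.sin(theta) * sx
S_op = alpha * np.kron(A0, B0 + B1) + np.kron(A1, B0 - B1)

S_measured = np.trace(rho @ S_op).real
S_formula = 2 * np.sqrt(alpha**2 * a**2 + b**2)            # Eq. (11)
print(S_measured, S_formula)                               # the two values agree
```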
**Observation 1**.: _The observables that yield the largest \(\alpha\)-CHSH Bell value for a quantum state are not the most incompatible ones in general._
Notwithstanding, a subtle issue is that we do not have access to the underlying probability distribution in the qubit-pair ensemble, \(p^{\mu}\), or the underlying Bell value for each pair of qubits. As we only know the average Bell value over the ensemble, we need to be careful of convexity issues. Suppose the solution to Eq. (7) with the restriction of a pair of qubits takes the form \(E_{\rm est}=E_{\rm est}(S)\). When extending the result to possibly an ensemble of qubit pairs, if \(E_{\rm est}(S)\) is convex in \(S\), then
\[E_{\rm est}\left(\sum_{\mu}p^{\mu}S^{\mu}\right)\leq\sum_{\mu}p^{\mu}E_{\rm est }\left(S^{\mu}\right)\leq\sum_{\mu}p^{\mu}E\left(\rho^{\mu}_{AB}\right)\leq E (\rho_{AB}), \tag{13}\]
which holds for any probability distribution \(p^{\mu}\). Hence, we can directly lower-bound the amount of entanglement in the underlying state by \(E_{\rm est}(S)\), where \(S=\sum_{\mu}p^{\mu}S^{\mu}\) represents the observed Bell value. Yet if the function \(E_{\rm est}(S)\) is concave, then the first inequality above no longer holds. Consequently, we need to take the convex closure of the function \(E_{\rm est}\) to estimate the amount of entanglement from a quantum state with an unknown dimension.
Following the above discussions, we study the entanglement measures of entanglement of formation and one-way distillable entanglement, which are essentially given by concurrence and conditional entropy of entanglement, respectively.
### Concurrence and entanglement of formation
In this subsection, we take concurrence \(C(\cdot)\) as the objective entanglement measure in Eq. (7). For this measure, we have an analytical estimation result.
**Theorem 1**.: _Suppose the underlying quantum state is a pair of qubits. For a given tilted CHSH expression in Eq. (2) parametrized by \(\alpha\), if the Bell expression value is \(S\), then the amount of concurrence in the underlying state can be lower-bounded,_
\[C(\rho_{AB})\geq\sqrt{\frac{S^{2}}{4}-\alpha^{2}}. \tag{14}\]
_The equality can be saturated when measuring a Bell-diagonal state in Eq. (10) with eigenvalues_
\[\begin{split}\lambda_{1}&=\frac{1}{2}+\frac{1}{2}\sqrt {\frac{S^{2}}{4}-\alpha^{2}},\\ \lambda_{2}&=\frac{1}{2}-\frac{1}{2}\sqrt{\frac{S^{2 }}{4}-\alpha^{2}},\\ \lambda_{3}&=\lambda_{4}=0,\end{split} \tag{15}\]
_using measurements in Eq. (12) with \(\theta=\arctan\biggl{(}\frac{1}{\alpha}\sqrt{\frac{S^{2}}{4}-\alpha^{2}}\biggr{)}\)._
We leave the detailed derivation in Appendix B.
**Observation 2**.: _Given an \(\alpha\)-CHSH Bell value, the measurements that require the minimum entanglement are not the most incompatible measurements in general._
As the entanglement of formation can be expressed by concurrence in a closed form for a pair of qubits, this entanglement measure is directly lower-bounded,
\[E_{\mathrm{F}}(\rho_{AB})\geq h\left(\frac{1}{2}+\frac{1}{2}\sqrt{1+\alpha^{2 }-\frac{S^{2}}{4}}\right). \tag{16}\]
In Fig. 4, we depict the entanglement estimation result when \(\alpha=1.5\) for a pair of qubits input. As can be seen from the curves, the functions that give the estimated amount of entanglement are concave in the Bell value. As we discussed above, we should take a convex closure for the measures when extending the results to general states with an unknown dimension. Therefore, the final estimation is given by
\[C(\rho_{AB}),E_{\mathrm{F}}(\rho_{AB})\geq\frac{S-2\alpha}{2\sqrt{1+\alpha^{2} }-2\alpha}. \tag{17}\]
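The following short Python sketch (an illustration rather than the original numerics) evaluates the two-qubit bounds of Eqs. (14) and (16) together with the chord of Eq. (17) for \(\alpha=1.5\), making the convex-closure step explicit: both qubit-level curves are concave and meet the chord at the endpoints of the interval \((2\alpha,2\sqrt{1+\alpha^{2}}]\), so the chord is the dimension-independent bound.

```python
import numpy as np

# Sketch of the quantities behind Fig. 4 for alpha = 1.5.
def h2(x):  # binary entropy
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

alpha = 1.5
S = np.linspace(2 * alpha + 1e-6, 2 * np.sqrt(1 + alpha**2), 200)

C_qubit = np.sqrt(S**2 / 4 - alpha**2)                                   # Eq. (14)
EF_qubit = h2(0.5 + 0.5 * np.sqrt(1 + alpha**2 - S**2 / 4))              # Eq. (16)
chord = (S - 2 * alpha) / (2 * np.sqrt(1 + alpha**2) - 2 * alpha)        # Eq. (17)

# Both qubit-level bounds dominate the chord, which is their common convex closure.
print(np.all(C_qubit >= chord - 1e-9), np.all(EF_qubit >= chord - 1e-9))
```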
Figure 4: Diagram of concurrence and entanglement of formation estimation results when the CHSH-type expression in Eq. (2) takes \(\alpha=1.5\) and input states are two-qubit states. We plot the estimated values of concurrence and entanglement of formation with the blue solid line and the red dashed line, respectively. The estimations are both concave in \(S\in(3,2\sqrt{3.25}]\) and range from \(0\) to \(1\).

### Negative conditional entropy and one-way distillable entanglement

In this subsection, we estimate the one-way distillable entanglement, \(E_{D}^{\rightarrow}(\rho_{AB})\), depicted by the negative conditional entropy, \(-H(A|B)\), via Bell nonlocality. For the set of Bell-diagonal states on the qubit-pair systems, since the reduced density matrix of a subsystem is a maximally mixed state, \(H(B)=1\), the conditional von Neumann entropy of the state is reduced to \(H(A|B)=H(AB)-H(B)=H(AB)-1\). Using the notation in Eq. (10), the term of joint von Neumann entropy can be expressed by
\[H(AB)=H(\vec{\lambda})=-\sum_{i=1}^{4}\lambda_{i}\log\lambda_{i}. \tag{18}\]
Thus, the lower bound of one-way distillable entanglement for a pair of qubits becomes the following optimization problem,
\[\begin{split} E^{\rightarrow}_{D,\text{est}}&=\min _{\lambda_{i},i=1,2,3,4}1+\sum_{i=1}^{4}\lambda_{i}\log\lambda_{i},\\ \text{s.t.}& S=2\sqrt{\alpha^{2}(\lambda_{1}+ \lambda_{2}-\lambda_{3}-\lambda_{4})^{2}+(\lambda_{1}-\lambda_{2}+\lambda_{3}- \lambda_{4})^{2}},\\ &\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}\geq\lambda_{4},\\ & 1=\sum_{i=1}^{4}\lambda_{i},\lambda_{i}\geq 0\;,i=1,2,3,4.\end{split} \tag{19}\]
As this is a convex optimization problem, we can solve it efficiently via off-the-shelf numerical toolboxes. We present numerical results for some values of \(\alpha\) in Fig. 5. Given any \(\alpha>1\), the estimation value \(E^{\rightarrow}_{D,\text{est}}(S)\) is a convex function on \(S\in(2\alpha,2\sqrt{1+\alpha^{2}}]\). Following Eq. (13), the solution can be directly lifted as the lower bound on one-way distillable entanglement for a general state.
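As an illustration of how Eq. (19) can be handled numerically, the sketch below uses SciPy's SLSQP solver (one possible off-the-shelf choice, not necessarily the toolbox behind Fig. 5) to evaluate the bound for a few Bell values at \(\alpha=1.5\).

```python
import numpy as np
from scipy.optimize import minimize

def distillable_bound(S, alpha):
    """Numerical sketch of the optimization in Eq. (19) for a Bell-diagonal qubit pair."""
    def objective(lam):
        lam = np.clip(lam, 1e-12, 1.0)
        return 1.0 + np.sum(lam * np.log2(lam))

    def bell_value(lam):
        a = lam[0] + lam[1] - lam[2] - lam[3]
        b = lam[0] - lam[1] + lam[2] - lam[3]
        return 2.0 * np.sqrt(alpha**2 * a**2 + b**2)       # Eq. (11)

    cons = [
        {"type": "eq", "fun": lambda lam: np.sum(lam) - 1.0},
        {"type": "eq", "fun": lambda lam: bell_value(lam) - S},
        {"type": "ineq", "fun": lambda lam: lam[0] - lam[1]},
        {"type": "ineq", "fun": lambda lam: lam[1] - lam[2]},
        {"type": "ineq", "fun": lambda lam: lam[2] - lam[3]},
    ]
    x0 = np.array([0.7, 0.2, 0.07, 0.03])                  # ordered starting point
    res = minimize(objective, x0, bounds=[(0.0, 1.0)] * 4,
                   constraints=cons, method="SLSQP")
    return res.fun

alpha = 1.5
for S in (3.1, 3.3, 3.5):
    print(f"S = {S:.2f}  ->  E_D_est = {distillable_bound(S, alpha):.4f}")
```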
Figure 5: Diagram of one-way distillable entanglement estimation results. The estimation is depicted by CHSH-type Bell expressions with several discretely increasing \(\alpha\). For each value of \(\alpha\), the estimation result, \(E^{\rightarrow}_{D,\text{est}}(S)\), depicted over the valid interval \(S\in(2\alpha,2\sqrt{1+\alpha^{2}}]\), is convex. When \(\alpha\) increases, \(E^{\rightarrow}_{D,\text{est}}(S)\) at \(S=2\alpha\) increases and converges to \(0\). Since \(E^{\rightarrow}_{D,\text{est}}(S)\) is convex in \(S\), the estimation results remain valid without assuming the system dimension.

## IV Interplay among entanglement, measurement incompatibility, and nonlocality

Besides entanglement, another key ingredient behind nonlocality is measurement incompatibility. Both entanglement and measurement incompatibility can be regarded as quantum resources to unveil non-classical physical phenomena. Hence, a natural intuition is that for a given Bell value, there is a trade-off relation between entanglement and measurement incompatibility, where more incompatible measurements may compensate for an underlying system with less entanglement and vice versa. However, as we have discussed for the notion of optimal measurements, the observables that yield the largest Bell value for a quantum state may not correspond to the maximally incompatible ones. Particularly, as shown in Theorem 1, for the case of the least amount of entanglement for a nonlocal behavior, the observables are generally not maximally incompatible. In this section, we make a detailed investigation into the relation between entanglement and measurement incompatibility under a given Bell nonlocal behavior.
To simplify the discussion, we restrict our analysis with the following assumptions: (1) the underlying system is a pair of qubits, and (2) the measurement operators are qubit observables. Note that in the fully device-independent scenario without any _a priori_ assumption, the measurement result is a mixture of basic scenarios in this form, which can be shown by Jordan's lemma. In a sense, the situation we look at represents a typical setting for the question. To characterize a nonlocal behavior, we use the value of a particular \(\alpha\)-CHSH Bell expression.
We quantify the least amount of entanglement that is necessary for a given Bell value,
\[E_{\text{est}} =\min_{\rho_{AB}}E(\rho_{AB}), \tag{20}\] \[\text{s.t.}\quad\text{Tr}\left(\rho_{AB}\hat{S}_{\alpha}\right) =S,\] \[\hat{A}_{0} =\sigma_{z},\] \[\hat{A}_{1} =\sigma_{x},\] \[\hat{B}_{0} =\cos\theta\sigma_{z}+\sin\theta\sigma_{x},\] \[\hat{B}_{1} =\cos\theta\sigma_{z}-\sin\theta\sigma_{x},\] \[\rho_{AB} \geq 0,\] \[\rho_{AB} \in\mathcal{D}(\mathcal{H}_{2}\otimes\mathcal{H}_{2}),\] \[\text{Tr}(\rho_{AB}) =1.\]
where \(E\) represents a chosen entanglement measure. We still denote the solution to the optimization as \(E_{\text{est}}\), while it now represents the least amount of entanglement that is necessary for the nonlocal behavior under the given measurement incompatibility. In this optimization, we assume that the measurement incompatibility is parameterized by one parameter, \(\theta\). On Alice's side, the two local observables are fixed to be maximally incompatible with each other. On Bob's side, when \(\theta=0\) or \(\pi/2\), the two local observables commute. When \(\theta=\pi/4\), the local observables enjoy the maximal incompatibility, which is the other extreme. In the following discussions, we restrict the parameter to be \(\theta\in[0,\pi/4]\), as other cases can be obtained via symmetry. As there are two nonlocal parties in the Bell nonlocality setting, one could also investigate measurement incompatibility in other scenarios. We choose this scenario because, for any given quantum state, one can reach its optimal measurements by varying \(\theta\). In fact, if the measurements in Eq. (20) become optimal for \(\rho_{AB}\), the solution to Eq. (20) coincides with the solution to Eq. (9) under the system dimension constraint. In this way, the solution to Eq. (20) should reach the tight lower bound of device-independent entanglement estimation in a proper measurement setting.
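A brute-force numerical sketch of Eq. (20) is given below for the case where \(E\) is the concurrence. It parametrizes \(\rho_{AB}\) through a Cholesky-like factor and uses SciPy's SLSQP with random restarts; since such a local search only approximates the global minimum, the printed value should be read as an estimate of \(E_{\text{est}}\) rather than a certified bound. The operator convention \(\hat{S}_{\alpha}=\alpha\hat{A}_{0}\otimes(\hat{B}_{0}+\hat{B}_{1})+\hat{A}_{1}\otimes(\hat{B}_{0}-\hat{B}_{1})\) is assumed.

```python
import numpy as np
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def density_from_params(x):
    # Parametrize a two-qubit state via a complex lower-triangular factor L, rho = L L^dag / Tr.
    L = np.zeros((4, 4), dtype=complex)
    L[np.tril_indices(4)] = x[:10] + 1j * x[10:]
    rho = L @ L.conj().T
    return rho / np.trace(rho).real

def concurrence(rho):
    yy = np.kron(sy, sy)
    ev = np.linalg.eigvals(rho @ yy @ rho.conj() @ yy)
    ev = np.sort(np.sqrt(np.abs(ev.real)))[::-1]
    return max(0.0, ev[0] - ev[1] - ev[2] - ev[3])

def bell_operator(alpha, theta):
    # Fixed measurements of Eq. (20), with incompatibility angle theta on Bob's side.
    B0 = np.cos(theta) * sz + np.sin(theta) * sx
    B1 = np.cos(theta) * sz - np.sin(theta) * sx
    return alpha * np.kron(sz, B0 + B1) + np.kron(sx, B0 - B1)

def min_concurrence(S_target, alpha, theta, trials=20, seed=0):
    rng = np.random.default_rng(seed)
    S_op = bell_operator(alpha, theta)
    best = 1.0
    for _ in range(trials):
        cons = [{"type": "eq",
                 "fun": lambda x: np.trace(density_from_params(x) @ S_op).real - S_target}]
        res = minimize(lambda x: concurrence(density_from_params(x)), rng.normal(size=20),
                       constraints=cons, method="SLSQP")
        if res.success:
            best = min(best, res.fun)
    return best

# Original CHSH (alpha = 1), Bell value S = 2.4, maximally incompatible measurements.
print(min_concurrence(2.4, alpha=1.0, theta=np.pi / 4))
```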
### Original CHSH (\(\alpha=1\))
To observe the interplay among entanglement, measurement incompatibility, and nonlocality, we numerically solve the optimization problem in Eq. (20) by taking concurrence and one-way distillable entanglement as the entanglement measures and varying \(\theta\in[0,\pi/4]\) and \(S\) over a discrete grid of \(\alpha\)-CHSH Bell values.
In Fig. 6, we choose the original CHSH Bell expression and present the numerical results when taking the concurrence as the entanglement measure. For a nonlocal behavior, where \(S\in(2,2\sqrt{2}]\), we denote \(\theta=\theta_{C}^{*}\) when the estimated concurrence reaches its minimum, \(C_{\text{est}}(S,\theta)=\sqrt{S^{2}/4-1}\). As we have derived in Theorem 1, \(\theta_{C}^{*}=\arctan\sqrt{S^{2}/4-1}\). When the amount of measurement incompatibility between the local observables is smaller than that of this point, which corresponds to \(\theta<\theta_{C}^{*}\), there is a trade-off relation between concurrence and measurement incompatibility, where less entanglement is required for the given Bell value as the amount of measurement incompatibility increases. However, when \(\theta>\theta_{C}^{*}\), as the underlying state enjoys more entanglement of concurrence, larger measurement incompatibility is also required for the observed Bell value. As \(S\) increases from \(2\) to \(2\sqrt{2}\), the range of feasible values of \((\theta,C_{\text{est}})\) shrinks as \(S\) grows. When \(S=2\sqrt{2}\), the underlying state is maximally entangled and the local measurement observables are the most incompatible ones, and the range of possible values of \((\theta,C_{\text{est}})\) degenerates to the point of \((\pi/4,1)\). This result coincides with the self-testing finding [35], where the only feasible experimental setting for the maximum CHSH Bell value enjoys the above properties.
In Fig. 7, we present the numerical results when taking the one-way distillable entanglement as the entanglement measure. Under a fixed Bell value, there is a strict trade-off relation between entanglement and measurement incompatibility. The more incompatible the measurement observables are, the less entanglement is necessary for the nonlocal behavior, and _vice versa_. In addition, the range for the trade-off shrinks with a larger Bell violation value. In the extreme of the largest Bell violation value, \(S=2\sqrt{2}\), the setting should involve both the maximally entangled state and measurement observables that are maximally incompatible, in accordance with the self-testing result. One thing to note is that the estimated negative conditional entropy reaches its minimum exactly when \(\theta=\pi/4\) for all \(S\in(2,2\sqrt{2}]\), which no longer holds for the \(\alpha\)-CHSH inequality when \(\alpha>1\).
Figure 6: Illustration of the interplay among Bell nonlocality, measurement incompatibility, and concurrence. In this figure, we consider the original CHSH Bell expression and parameterize the measurement observables as in Eq. (12), where incompatibility is quantified through \(\theta\). We focus on the interval of \(\theta\in[0,\pi/4]\), and the results elsewhere can be obtained using symmetry. For a given value of \(S\), when \(\theta<\theta_{C}^{\circ}=\arctan\sqrt{S^{2}/4-1}\), there is a trade-off relation between entanglement and measurement incompatibility, where less entanglement of concurrence is required for the nonlocal behavior when the measurements become more incompatible and _vice versa_. Afterward, more entanglement is required for the given Bell value as \(\theta\) increases. As \(S\) increases, the range of possible values of \((\theta,C_{\rm est})\) shrinks and \(\theta_{C}^{\circ}\) gets close to \(\pi/4\).
Figure 7: Illustration of the interplay among Bell nonlocality, measurement incompatibility, and one-way distillable entanglement. In this figure, we consider the original CHSH Bell expression and parameterize the measurement observables as in Eq. (12). As \(S\) increases, the range of possible values of \((\theta,E_{\rm D,est}^{\rightarrow})\) shrinks. For a given Bell value, less entanglement is required when \(\theta\) increases in the valid region.
### General CHSH-type (\(\alpha>1\))
Besides the original CHSH Bell expression, we also study the relation among entanglement, measurement incompatibility, and nonlocality for general \(\alpha\)-CHSH expressions. Fixing parameter \(\alpha>1\), for any Bell value \(S\in(2\alpha,2\sqrt{1+\alpha^{2}}]\), denote the range of plausible values of parameter \(\theta\) by \(\theta_{\rm min}\leq\theta\leq\theta_{\rm max}\). In Fig. 8, we investigate the issue under parameter \(\alpha=1.2\). For both the concurrence of entanglement and one-way distillable entanglement, when \(\theta\) increases from \(\theta_{\rm min}\) to \(\theta_{\rm max}\), the corresponding amount of estimated entanglement first monotonically decreases from \(1\), which corresponds to the maximally entangled state. In this region, there is a trade-off relation between entanglement and measurement incompatibility under the given Bell value. After reaching its minimum at \(\theta=\theta_{E}^{*}\), a point that is related to the particular entanglement measure under study, more entanglement is required as the local measurement observables become more incompatible. One thing worth noting is that under the same \(S\), the values of \(\theta_{\rm min}\) and \(\theta_{\rm max}\) are the same for both entanglement measures we now study. As \(S\) grows, the supported range of incompatibility and entanglement shrinks, which converges to the single point of \(\theta=\arctan(1/\alpha)\) and \(E_{\rm est}=1\) when \(S\) approaches its maximum \(2\sqrt{1+\alpha^{2}}\). Namely, the maximum value of the \(\alpha\)-CHSH expression requires a pair of non-maximally incompatible measurements on one side. This result also coincides with the self-testing findings [27]. Another indication is that to yield a large \(\alpha\)-CHSH Bell value with \(\alpha>1\), the measurement observables on one side cannot be too incompatible, where they lie outside the feasible region of the experimental settings.
For concurrence, we can derive the critical points analytically. Given \(\alpha\)-CHSH Bell value \(S\), when \(\theta=\theta_{C}^{*}=\arctan\biggl{(}\frac{1}{\alpha}\sqrt{\frac{S^{2}}{4}- \alpha^{2}}\biggr{)}\), the system requires the least amount of concurrence, \(C_{\rm est}=\sqrt{\frac{S^{2}}{4}-\alpha^{2}}\), which can be derived from Theorem 1. When \(\theta>\theta_{C}^{*}\), we find there is a region of \(\theta\) where the least amount of concurrence behaves differently from that of one-way distillable entanglement. That is, though more concurrence is required in the underlying system as \(\theta\) grows, the system may yield less distillable entanglement. In other words, the manifestation of entanglement properties through nonlocality highly depends on the particular entanglement measure under study.
The value of \(\theta_{\rm max}\) and the corresponding value of \(E_{\rm est}\) depend on the parameter \(\alpha\). A notable issue is that, for particular values of \(\alpha\) and the Bell value \(S\), \(E_{\rm est}\) at \(\theta=\theta_{\rm max}\) can reach \(1\). We find that when \(1<\alpha<\sqrt{2}+1\) and \(S<\sqrt{2}(\alpha+1)\), \(\theta_{\rm max}=\pi/4\) and the corresponding least amount of entanglement, \(E_{\rm est}\), is strictly smaller than \(1\). For a larger Bell value, \(S\geq\sqrt{2}(\alpha+1)\), \(\theta_{\rm max}\) may be smaller than \(\pi/4\), and \(E_{\rm est}\) at \(\theta=\theta_{\rm max}\) always reaches \(1\). For Bell expressions with \(\alpha\geq\sqrt{2}+1\), as long as the Bell inequality is violated, \(S>2\alpha\), we have \(E_{\rm est}=1\) at \(\theta=\theta_{\rm max}\). In Fig. 9, we illustrate this interplay when \(\alpha=\sqrt{2}+1\). From this example, we can see that there can be two experimental settings that give rise to the same Bell value, where the underlying systems enjoy the same amount of entanglement, yet the incompatibility between the local measurements can be significantly different.
Figure 8: Illustration of the interplay among Bell nonlocality, measurement incompatibility, and entanglement. In this figure, we consider the \(\alpha\)-CHSH Bell expression with \(\alpha=1.2\). The blue curves depict the results of one-way distillable entanglement, and the red curves depict the results of concurrence. For both entanglement measures, given a Bell value, the least required amount of entanglement first monotonically decreases as \(\theta\) increases. After \(\theta\) is larger than a threshold value that depends on the entanglement measure, \(\theta_{E}^{*}\), more entanglement is required as the measurements become more incompatible. The ranges of possible values of \(\theta\in[\theta_{\rm min},\theta_{\rm max}]\) are the same for the two entanglement measures. When \(S<2.2\sqrt{2}\), \(\theta_{\rm max}=\pi/4\). When \(S\geq 2.2\sqrt{2}\), \(\theta_{\rm max}\) is smaller than \(\pi/4\). The supported range shrinks as \(S\) increases. When \(S\) reaches its maximum, \(S=2\sqrt{1.2^{2}+1}\), the range degenerates to the point of \(\theta=\arctan 1/1.2\). In this case, the underlying state can only be a maximally entangled state, corresponding to \(E_{\rm est}=1\).
## V Optimizing entanglement estimation in realistic settings
While the full probability distribution of a nonlocal behavior gives the complete description in a Bell test, for practical purposes, one often applies a Bell expression to characterize nonlocality. As a given Bell expression only reflects a facet of the nonlocal behavior, one may expect a better entanglement estimation result via some well-chosen Bell expressions. In this section, we aim to specify when a non-trivial choice of \(\alpha\)-CHSH expression leads to better estimation. From an experimental point of view, the investigations may benefit experimental designs of device-independent information processing tasks. For this purpose, we simulate the nonlocal correlations that arise from two sets of states: non-maximally pure entangled states and Werner states. The deliberate use of non-maximally entangled states has been proved beneficial for observing nonlocal correlations under lossy detectors [36]. The Werner states characterize the typical effect of transmission noise upon entanglement distribution through fiber links [37; 38].
With respect to the computational bases that define the Pauli operators \(\sigma_{z}\) on each local system, the measurements are parametrized as
\[\hat{A}_{0} =\sigma_{z}, \tag{21}\] \[\hat{A}_{1} =\cos\theta_{1}\sigma_{z}+\sin\theta_{1}\sigma_{x},\] \[\hat{B}_{0} =\cos\theta_{2}\sigma_{z}+\sin\theta_{2}\sigma_{x},\] \[\hat{B}_{1} =\cos\theta_{3}\sigma_{z}+\sin\theta_{3}\sigma_{x},\]
for Alice and Bob, respectively.
### Non-maximally entangled states
In the first simulation model, the underlying state is a non-maximally entangled state. We express the state on its Schmidt basis,
\[\left|\phi_{AB}(\delta)\right\rangle=\cos\delta\left|00\right\rangle+\sin \delta\left|11\right\rangle. \tag{22}\]
Figure 9: Illustration of the interplay among Bell nonlocality, measurement incompatibility, and entanglement. In this figure, \(\alpha=\sqrt{2}+1\). The blue curves depict the results of one-way distillable entanglement, and the red curves depict the results of concurrence. The relation between entanglement and measurement incompatibility is similar to that in Fig. 8. Nevertheless, given any Bell value \(S\) that is larger than \(2\alpha\), which violates the \(\alpha\)-CHSH Bell inequality, the least amount of entanglement in the system at \(\theta=\theta_{\text{max}}\) is \(1\), corresponding to the maximally entangled state. The feasible range of \(\theta\in[\theta_{\text{min}},\theta_{\text{max}}]\) shrinks as \(S\) grows and degenerates to the point of \(\theta=\arctan\bigl{(}\sqrt{2}-1\bigr{)}\), where the Bell value reaches its maximum, \(S=2\sqrt{2\sqrt{2}+4}\).
where parameter \(\delta\in[0,\pi/2]\) fully determines the amount of entanglement in the system. We first present the estimation result through a concrete example. We specify the underlying system by \(\delta=\pi/6\) and the measurements by \(\theta_{1}=\pi/2,\theta_{2}=\pi/6\) and \(\theta_{3}=-\pi/6\). As shown in Fig. 10, we estimate the amount of negative conditional entropy and concurrence with respect to the simulated statistics. The estimation results vary with respect to the value of \(\alpha\). The curves show that the original CHSH expression, corresponding to \(\alpha=1\), does not yield the best entanglement estimation result for the given statistics. One obtains the best estimation results with the value of \(\alpha\) roughly in the range \([1.4,1.6]\) for both concurrence and negative conditional entropy.
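The statistics behind Fig. 10 can be reproduced with a few lines of Python; the sketch below (illustrative only) computes the \(\alpha\)-CHSH value of the stated setting for several \(\alpha\) and converts it into the dimension-independent concurrence estimate of Eq. (17), assuming the operator \(\hat{S}_{\alpha}=\alpha\hat{A}_{0}\otimes(\hat{B}_{0}+\hat{B}_{1})+\hat{A}_{1}\otimes(\hat{B}_{0}-\hat{B}_{1})\) used in Appendix A.2.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def obs(theta):
    return np.cos(theta) * sz + np.sin(theta) * sx

# Non-maximally entangled state of Eq. (22) with delta = pi/6.
delta = np.pi / 6
psi = np.cos(delta) * np.kron([1.0, 0.0], [1.0, 0.0]) + np.sin(delta) * np.kron([0.0, 1.0], [0.0, 1.0])
rho = np.outer(psi, psi)

t1, t2, t3 = np.pi / 2, np.pi / 6, -np.pi / 6              # measurement setting of Fig. 10
for alpha in np.linspace(1.0, 2.0, 6):
    S_op = alpha * np.kron(sz, obs(t2) + obs(t3)) + np.kron(obs(t1), obs(t2) - obs(t3))
    S = np.trace(rho @ S_op).real
    if S > 2 * alpha:                                      # Bell inequality violated
        C_est = (S - 2 * alpha) / (2 * np.sqrt(1 + alpha**2) - 2 * alpha)   # Eq. (17)
        print(f"alpha = {alpha:.1f}   S = {S:.4f}   C_est = {C_est:.4f}")
```

The printed estimates peak for \(\alpha\) around \(1.4\)–\(1.6\), consistent with the curves in Fig. 10.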
To see when better entanglement estimation is obtained with \(\alpha>1\) for the family of non-maximally entangled states, we analytically derive the condition of the underlying system for the measure of concurrence. Using Eq. (17), we have the following result.
**Theorem 2**.: _In a Bell test experiment, suppose the underlying state of the system takes the form of Eq. (22), and the observables take the form of Eq. (21). For concurrence estimation solely from the violation values of \(\alpha\)-CHSH Bell inequalities, if \(\theta_{1},\theta_{2},\theta_{3}\) and \(\delta\) satisfy_
\[\sin 2\delta\sin\theta_{1}(\sin\theta_{2}-\sin\theta_{3})+\cos\theta_{2}( \sqrt{2}+1+\cos\theta_{1})+\cos\theta_{3}(\sqrt{2}+1-\cos\theta_{1})>2(1+ \sqrt{2}), \tag{23}\]
_then there exists \(\alpha>1\), where a better estimation of \(C_{\rm est}(S)\) can be obtained by using the \(\alpha\)-CHSH inequality parameterized by this value than by using the original CHSH inequality (corresponding to \(\alpha=1\))._
Theorem 2 analytically confirms that the nonlocality depicted by the original CHSH Bell value does not always provide the concurrence estimate closest to the true value. For a fixed nonlocal behavior in a CHSH Bell test, when the non-maximally entangled state parameter \(\delta\) in Eq. (22) and the measurement parameters \(\theta_{1},\theta_{2},\theta_{3}\) in Eq. (21) satisfy Eq. (23), it is advantageous to take a CHSH-type Bell value with \(\alpha>1\) to estimate the concurrence of the state. We leave the proof of Theorem 2 in Appendix C.
**Example.** We take a special set of parameters in Eq. (23) for an example. Suppose \(\theta_{1}=\pi/2\) and \(\theta_{3}=-\theta_{2}\), which resemble the optimal measurements in Eq. (12) in form. Under this setting, we derive an explicit value \(\alpha_{C}^{*}>1\), such that the estimate \(C_{\rm est}(S)\) is optimal when taking \(\alpha=\alpha_{C}^{*}\) in the CHSH-type inequality. When \(0<\theta_{2}<\pi/4\), any non-maximally entangled state that satisfies
\[\sin 2\delta>(1+\sqrt{2})\frac{1-\cos\theta_{2}}{\sin\theta_{2}} \tag{24}\]
permits a better concurrence estimation with some \(\alpha>1\). When the state and measurements satisfy the condition in Eq. (24), one obtains the optimal concurrence estimate when the parameter \(\alpha\) equals
\[\alpha_{C}^{*}=\frac{1}{2}\left(T-\frac{1}{T}\right)>1, \tag{25}\]
Figure 10: Entanglement estimation results for nonlocal correlations arising from non-maximally entangled states. The experimental setting is given by \(\delta=\pi/6,\theta_{1}=\pi/2,\theta_{2}=\pi/6\) and \(\theta_{3}=-\pi/6\). We depict the entanglement estimation results when using different \(\alpha\)-CHSH Bell expressions. We plot the estimated values of one-way distillable entanglement and concurrence with the black solid line and the red dashed line, respectively.
where we denote \(T=\frac{\sin 2\delta\sin\theta_{2}}{1-\cos\theta_{2}}\). The optimally estimated concurrence is then given by
\[C_{\rm est}|_{\alpha_{C}^{*}}=\frac{1-\cos\theta_{2}}{2}(T^{2}+1). \tag{26}\]
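For the setting of Fig. 10 (\(\delta=\pi/6\), \(\theta_{2}=\pi/6\)), these closed-form expressions can be checked directly; the short sketch below (illustrative, not from the original text) evaluates the condition of Eq. (24), the optimal parameter of Eq. (25), and the estimate of Eq. (26), and compares the latter with the true concurrence \(\sin 2\delta\) of \(|\phi_{AB}(\delta)\rangle\).

```python
import numpy as np

delta, theta2 = np.pi / 6, np.pi / 6
condition = np.sin(2 * delta) > (1 + np.sqrt(2)) * (1 - np.cos(theta2)) / np.sin(theta2)  # Eq. (24)
T = np.sin(2 * delta) * np.sin(theta2) / (1 - np.cos(theta2))
alpha_star = 0.5 * (T - 1 / T)                              # Eq. (25), ~1.46
C_opt = 0.5 * (1 - np.cos(theta2)) * (T**2 + 1)             # Eq. (26), ~0.77
print(condition, alpha_star, C_opt, np.sin(2 * delta))      # true concurrence ~0.87
```

The resulting \(\alpha_{C}^{*}\approx 1.46\) falls inside the range \([1.4,1.6]\) reported for Fig. 10.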
It is worth mentioning that if we have the additional assumption that the underlying state is a pair of qubits, we can analytically derive a more accurate estimation result of concurrence. We leave the detailed conclusions and examples in Appendix C.
### Werner states
In the second simulation model, we consider the set of Werner states,
\[\rho_{\rm W}(p)=(1-p)\left|\Phi^{+}\right\rangle\!\!\left\langle\Phi^{+} \right|+p\frac{I}{4}, \tag{27}\]
where we write \(|\Phi^{+}\rangle=(|00\rangle+|11\rangle)/\sqrt{2}\). The Werner state is entangled when \(p<2/3\). Similarly, for the family of Werner states, there are examples where a non-trivial choice of \(\alpha\)-CHSH expression gives a better estimation result. In Fig. 11, we present such an example. In the simulation, the underlying system is parameterized by \(p=0.05\), and the measurements are parameterized by \(\theta_{1}=\pi/2,\theta_{2}=\pi/6\) and \(\theta_{3}=-\pi/6\). The optimal estimation of negative conditional entropy is obtained with the value of \(\alpha\) roughly in the range of \([1.2,1.4]\), while the optimal estimation of concurrence is obtained when \(\alpha\in[1,1.2]\). We derive an analytical result for the feasible region of state and measurement parameters that permits a better concurrence estimation for a non-trivial value of \(\alpha>1\).
**Theorem 3**.: _In a Bell test experiment, suppose the underlying state of the system takes the form of Eq. (27), and the observables take the form of Eq. (21). For concurrence estimation solely from the violation values of \(\alpha\)-CHSH Bell inequalities, if \(\theta_{1},\theta_{2},\theta_{3}\) and \(p\) satisfy_
\[(1-p)[\sin\theta_{1}(\sin\theta_{2}-\sin\theta_{3})+\cos\theta_{2}(\sqrt{2}+1+ \cos\theta_{1})+\cos\theta_{3}(\sqrt{2}+1-\cos\theta_{1})]>2(1+\sqrt{2}), \tag{28}\]
_then there exists \(\alpha>1\), where a better estimation of \(C_{\rm est}(S)\) can be obtained by using the \(\alpha\)-CHSH inequality parameterized by this value than by using the original CHSH inequality (corresponding to \(\alpha=1\))._
Theorem 3 indicates that the nonlocality depicted by the original CHSH Bell value does not always provide the most accurate concurrence estimation of Werner states. In a Bell test experiment, when the Werner state parameter \(p\) in Eq. (27) and the measurement parameters \(\theta_{1},\theta_{2},\theta_{3}\) in Eq. (21) satisfy Eq. (28), it is helpful to take a CHSH-type Bell value with \(\alpha>1\) to estimate the concurrence of the Werner state. We leave the proof and discussion of Theorem 3 in Appendix C.
Figure 11: Entanglement estimation results for nonlocal correlations arising from Werner states. The experimental setting is given by \(p=0.05,\theta_{1}=\pi/2,\theta_{2}=\pi/6\), and \(\theta_{3}=-\pi/6\). We depict the entanglement estimation results using different \(\alpha\)-CHSH Bell expressions. We plot the estimated values of one-way distillable entanglement and concurrence with the black solid line and the red dashed line, respectively.
**Example.** As a special example, we take the measurements setting in Eq. (23) with \(\theta_{1}=\pi/2,\theta_{3}=-\theta_{2}\), the same one as we use for the case study of non-maximally entangled states. For \(0<\theta_{2}<\pi/4\), any Werner state in Eq. (27) with \(p\) satisfying
\[p<1-\frac{1}{(\sqrt{2}-1)\sin\theta_{2}+\cos\theta_{2}} \tag{29}\]
promises a better estimation result of \(C_{\rm est}\) by using an \(\alpha\)-CHSH expression with \(\alpha>1\) in comparison with \(\alpha=1\). The right hand side of Eq. (29) is upper bounded by \(1-(\sqrt{2}+1)/(\sqrt{4+2\sqrt{2}})\doteq 0.0761\). That is, only a Werner state with \(p\lesssim 0.0761\) is possible to yield the condition in Theorem 3. Denote \(T=\frac{(1-p)\sin\theta_{2}}{1-(1-p)\cos\theta_{2}}\). Then for the underlying system of a Werner state and measurements satisfying Eq. (29), the concurrence estimation result reaches its optimal value with parameter \(\alpha\)
\[\alpha_{C}^{*}=\frac{1}{2}\left(T-\frac{1}{T}\right)>1, \tag{30}\]
and the estimation result is
\[C_{\rm est}|_{\alpha_{C}^{*}}=1-\frac{p(2-p)}{2[1-(1-p)\cos\theta_{2}]}. \tag{31}\]
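Analogously to the previous subsection, the Werner-state formulas can be checked numerically for the setting of Fig. 11 (\(p=0.05\), \(\theta_{2}=\pi/6\)); the sketch below (illustrative only) evaluates Eqs. (29)-(31) and compares the resulting estimate with the concurrence of the Werner state itself, \(\max(0,1-3p/2)\).

```python
import numpy as np

p, theta2 = 0.05, np.pi / 6
p_max = 1 - 1 / ((np.sqrt(2) - 1) * np.sin(theta2) + np.cos(theta2))     # Eq. (29), ~0.068
T = (1 - p) * np.sin(theta2) / (1 - (1 - p) * np.cos(theta2))
alpha_star = 0.5 * (T - 1 / T)                                           # Eq. (30), ~1.15
C_est_opt = 1 - p * (2 - p) / (2 * (1 - (1 - p) * np.cos(theta2)))       # Eq. (31), ~0.73
C_true = max(0.0, 1 - 1.5 * p)                                           # concurrence of Eq. (27)
print(p < p_max, alpha_star, C_est_opt, C_true)
```

The resulting \(\alpha_{C}^{*}\approx 1.15\) is consistent with the optimal range \([1,1.2]\) reported for the concurrence curve in Fig. 11.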
Similarly, with an additional assumption on system dimension, we can obtain a more accurate concurrence estimation result. We leave the details in Appendix C.
## VI Conclusions and discussion
In this work, we study entanglement quantification via nonlocality, where we consider several entanglement measures for a family of generalized CHSH-type expressions. This family of Bell expressions allows us to effectively reduce the dimension of an unknown system to a pair of qubits, leading to results for particular entanglement measures like concurrence, entanglement of formation, and one-way distillable entanglement. Under this framework, we also investigate the interplay among entanglement, measurement incompatibility, and nonlocality. While entanglement and measurement incompatibility are both necessary conditions for a nonlocal behavior, under a given nonlocal behavior, their interplay can be subtler than a simple trade-off relation. Given a Bell value, the measurements that require the minimum entanglement are not the most incompatible measurements in general. In addition, we also apply the entanglement quantification results in realistic scenarios. For non-maximally entangled states and Werner states, we analytically show that there exist state and measurements settings where a general CHSH Bell expression with \(\alpha>1\) leads to better concurrence estimation of the underlying state than the original CHSH expression.
When quantifying entanglement from nonlocality, the estimation results highly depend on the specific entanglement measures. Prior to our work, there were similar investigations focusing on different entanglement measures [20; 22]. A natural question is hence how nonlocality reflects various entanglement properties. In particular, novel results may arise from high-dimensional entanglement and Bell expressions with multiple inputs and outputs, such as the study of the Peres conjecture.
In studying the interplay of entanglement and measurement incompatibility under a given nonlocal behavior, we make some additional assumptions on the measurement observables to ease the quantification of measurement incompatibility. In the sense of a fully device-independent discussion, one may consider other incompatibility measures, such as the robustness of measurement incompatibility [39]. Despite the freedom in measuring entanglement and measurement incompatibility, we believe our results unveil the subtlety of the interplay between these nonclassical notions, where more incompatible measurements may not compensate for the absence of entanglement and vice versa. From a resource-theoretic perspective, our results may indicate restrictions on the resource transformation between entanglement and measurement incompatibility in the sense of Bell nonlocality.
When applying our results to experiments, one may consider the practical issues in more detail. For instance, the problem of entanglement estimation via nonlocality can be generalized to the one-shot regime, where one considers dilution and distillation processes with a finite number of possibly non-i.i.d. quantum states. Notably, the results in Ref. [22] provide an approach to estimating one-shot one-way distillable entanglement via nonlocality, and the techniques in Ref. [40] may be applicable to the estimation of one-shot entanglement cost. We leave research in this direction for future works.
## VII Acknowledgement
This work was supported by the National Natural Science Foundation of China Grant No. 12174216.
Y.Z. and X.Z. contributed equally to this work.
## Appendix A Reductions of the original optimization problem
In Sec. III, we reduce the original optimization problem, including the essential steps of using Jordan's lemma to bypass the dimension problem and reducing a general two-qubit state to the Bell-diagonal state. Here we explain the two steps in detail.
### Jordan's lemma
We apply Jordan's lemma to bypass the system dimension problem. The description of Jordan's lemma is given below; its proof can be found in [34].
**Lemma 3**.: _Suppose \(\hat{A}_{0}\) and \(\hat{A}_{1}\) are two Hermitian operators with eigenvalues \(\pm 1\) that act on a Hilbert space with a finite or countable dimension, \(\mathcal{H}\). Then there exists a direct-sum decomposition of the system, \(\mathcal{H}=\bigoplus\mathcal{H}^{\mu}\), such that \(\hat{A}_{0}=\bigoplus\hat{A}_{0}^{\mu}\), \(\hat{A}_{1}=\bigoplus\hat{A}_{1}^{\mu}\), \(\hat{A}_{0}^{\mu},\hat{A}_{1}^{\mu}\in\mathcal{L}(\mathcal{H}^{\mu})\), where the sub-systems satisfy \(\dim\mathcal{H}^{\mu}\leq 2,\forall\mu\)._
Without loss of generality, we can treat the two measurement observables on each side as projective ones with eigenvalues \(\pm 1\). Jordan's lemma guarantees that the two possible observables measured by Alice can be represented as
\[\hat{A}_{x}=\sum_{\mu_{A}}\hat{\Pi}^{\mu_{A}}\hat{A}_{x}\hat{\Pi}^{\mu_{A}}=\bigoplus_{\mu_{A}}\hat{A}_{x}^{\mu_{A}}, \tag{10}\]
where \(x\in\{0,1\}\), \(\hat{\Pi}^{\mu_{A}}\) are projectors onto orthogonal subspaces with dimension no larger than \(2\), and \(\hat{A}_{x}^{\mu_{A}}\) are qubit observables with eigenvalues \(\pm 1\). A similar representation applies to Bob's measurement observables. Due to the direct-sum representation, one can regard the measurement process as first applying a block-dephasing operation to the underlying quantum system. Consequently, one can equivalently regard the measurement process as measuring the following state,
\[\bar{\rho}_{AB}=\sum_{\mu}(\hat{\Pi}^{\mu_{A}}\otimes\hat{\Pi}^{\mu_{B}})\rho _{AB}(\hat{\Pi}^{\mu_{A}}\otimes\hat{\Pi}^{\mu_{B}})=\bigoplus_{\mu}p^{\mu} \rho_{AB}^{\mu}. \tag{11}\]
Here we relabel the indices with \(\mu\equiv\{\mu_{A},\mu_{B}\}\). As the block-dephasing operators act locally on each side, the measurement process does not increase entanglement in the system. Therefore, we can lower-bound the amount of entanglement in the initial system by studying the average amount of entanglement in the ensemble of qubit-pairs, \(\{p^{\mu},\rho_{AB}^{\mu}\}\).
Consequently, the expected CHSH Bell value in a test is the linear combination of the Bell values for the qubit pairs, \(S=\sum_{\mu}p^{\mu}S^{\mu}\). Note that an observer cannot access the probability distribution, \(p^{\mu}\), and the Bell values for each pair of qubits, \(S^{\mu}\), but only the expected Bell value, \(S\); hence the final device-independent entanglement quantification result should be a function of \(S\). On the other hand, we shall first derive entanglement quantification results for each pair of qubits in the form of \(E_{\text{est}}(S^{\mu})\). It is thus essential to consider the convexity of the function, \(E_{\text{est}}\). If the function is not convex in its argument, i.e., \(E_{\text{est}}(\sum_{\mu}p^{\mu}S^{\mu})\) may exceed \(\sum_{\mu}p^{\mu}E_{\text{est}}(S^{\mu})\), one needs to take the convex closure of \(E_{\text{est}}\) to obtain a valid lower bound that holds for all possible configurations giving rise to the expected Bell value, \(S\).
### Restriction to Bell-diagonal states
Following the route in Fig. 3, the feasible region of the state variables in an entanglement quantification problem can be effectively restricted to the set of Bell-diagonal states on the two qubit systems. We present the following lemma.
**Lemma 4**.: _Suppose the underlying system in a CHSH-type Bell test is in a two-qubit state, \(\rho_{AB}\). Then there exists an LOCC protocol that transforms \(\rho_{AB}\) into a Bell-diagonal state, \(\rho_{\lambda}\), without changing the expected Bell value._
Proof.: In a CHSH-type Bell test, we transform an arbitrary pair of qubits \(\rho_{AB}\) into a Bell-diagonal state \(\rho_{\lambda}\) via three steps of LOCC. In each step, we verify that the \(\alpha\)-CHSH Bell values are equal for the states before and after the transformation with the same measurements.
_Step 1:_ In a CHSH Bell test, Alice and Bob fix their local computational bases, or, the axes of the Bloch spheres on each side. As there are only two observables on each side, one can represent them on the \(x-z\) plane of the Bloch sphere without loss of generality. Then Alice and Bob flip their measurement results simultaneously via classical communication with probability \(1/2\). This operation can be interpreted as transforming \(\rho_{AB}\) into the following state,
\[\rho_{1}=\frac{1}{2}[\rho_{AB}+(\sigma_{2}\otimes\sigma_{2})\rho_{AB}(\sigma_{ 2}\otimes\sigma_{2})]. \tag{10}\]
To avoid confusion about the subscripts, we use the following convention to denote the Pauli operators in the Appendix,
\[\sigma_{x} \equiv\sigma_{1},\] \[\sigma_{y} \equiv\sigma_{2}, \tag{11}\] \[\sigma_{z} \equiv\sigma_{3}.\]
Under the Bell basis determined by the local computational bases, \(\{\ket{\Phi^{+}},\ket{\Psi^{-}},\ket{\Phi^{-}},\ket{\Psi^{+}}\}\), \(\rho_{1}\) can be denoted as
\[\rho_{1}=\begin{bmatrix}\lambda_{\Phi^{+}}&l_{1}e^{i\phi_{1}}&0&0\\ l_{1}e^{-i\phi_{1}}&\lambda_{\Psi^{-}}&0&0\\ 0&0&\lambda_{\Phi^{-}}&l_{2}e^{i\phi_{2}}\\ 0&0&l_{2}e^{-i\phi_{2}}&\lambda_{\Psi^{+}}\end{bmatrix}. \tag{12}\]
It can be verified that the statistics of measuring \(\sigma_{i}\otimes\sigma_{j}\) for \(i,j=1,3\) are invariant under the operation of \(\sigma_{2}\otimes\sigma_{2}\). Thus
\[\mathrm{Tr}\Big{[}(\sigma_{2}\otimes\sigma_{2})\rho_{AB}(\sigma_{2}\otimes \sigma_{2})(\hat{A}_{x}\otimes\hat{B}_{y})\Big{]}=\mathrm{Tr}\Big{[}\rho_{AB} (\hat{A}_{x}\otimes\hat{B}_{y})\Big{]} \tag{13}\]
for \(\hat{A}_{x}\) and \(\hat{B}_{y}\), \(x,y=0,1\), which indicates that \(\rho_{1}\) and \(\rho_{AB}\) share the common Bell value.
_Step 2:_ In this step, we apply LOCC to transform \(\rho_{1}\) into a state where the off-diagonal terms on the Bell basis become imaginary numbers. For this purpose, Alice and Bob can each apply a local rotation around the \(y\)-axes of the Bloch spheres on their own systems,
\[R_{y}(\theta)=\cos\frac{\theta}{2}I+i\sin\frac{\theta}{2}\sigma_{2}=\Big{(} \begin{smallmatrix}\cos\frac{\theta}{2}&\sin\frac{\theta}{2}\\ -\sin\frac{\theta}{2}&\cos\frac{\theta}{2}\end{smallmatrix}\Big{)}\,, \tag{14}\]
with its conjugation action on a general observable residing in the \(x-z\) plane given by

\[R_{y}(\theta)(\cos\gamma\sigma_{1}+\sin\gamma\sigma_{3})R_{y}(\theta)^{\dagger}=\cos(\gamma+\theta)\sigma_{1}+\sin(\gamma+\theta)\sigma_{3}. \tag{15}\]
After applying the operation, the resulting state becomes \(\rho_{2}=[R_{y}(\alpha)\otimes R_{y}(\beta)]\rho_{1}[R_{y}(-\alpha)\otimes R _{y}(-\beta)]\), where the off-diagonal terms undergo the following transformations,
\[l_{1}e^{i\phi_{1}}\rightarrow\frac{1}{2}(\lambda_{\Phi^{+}}-\lambda_{\Psi^{-}})\sin(\alpha-\beta)+l_{1}\cos\phi_{1}\cos(\alpha-\beta)+il_{1}\sin\phi_{1}, \tag{16}\] \[l_{2}e^{i\phi_{2}}\rightarrow\frac{1}{2}(\lambda_{\Phi^{-}}-\lambda_{\Psi^{+}})\sin(\alpha+\beta)+l_{2}\cos\phi_{2}\cos(\alpha+\beta)+il_{2}\sin\phi_{2}. \tag{17}\]
By choosing \(\alpha\) and \(\beta\) properly, the real parts in the off-diagonal terms of \(\rho_{2}\) can be eliminated. Similarly, as in the _Step 1_, the measurement of \(\sigma_{i}\otimes\sigma_{j}\) for \(i,j=1,3\) remains invariant under local rotations around the \(y\)-axes, which indicates \(\rho_{2}\) and \(\rho_{1}\) give the same Bell value under the same measurements.
_Step 3:_ Note that \(\rho_{2}\) and \(\rho_{2}^{*}\) give the same Bell value under the given measurements,
\[\mathrm{Tr}[\rho_{2}(\sigma_{i}\otimes\sigma_{j})]=\mathrm{Tr}[\rho_{2}^{*}( \sigma_{i}\otimes\sigma_{j})],i,j=1,3. \tag{18}\]
Hence without loss of generality, one can take the underlying state in the Bell test as \(\rho_{\lambda}=(\rho_{2}+\rho_{2}^{*})/2\), which is a Bell-diagonal state.
Based on the above simplification, we represent Eq. (9) under Bell-diagonal states, which leads to the following lemma.
**Lemma 5**.: _The maximal value of the \(\alpha\)-CHSH expression in Eq. (2) for a Bell-diagonal state shown in Eq. (10), \(\rho_{\lambda}\), is given by_
\[S=2\sqrt{\alpha^{2}(\lambda_{1}+\lambda_{2}-\lambda_{3}-\lambda_{4})^{2}+( \lambda_{1}-\lambda_{2}+\lambda_{3}-\lambda_{4})^{2}}, \tag{101}\]
_where \(\lambda_{i}\) is the \(i\)-th largest eigenvalue of \(\rho_{\lambda}\)._
Proof.: In an \(\alpha\)-CHSH Bell test, measurements corresponding to non-degenerate Pauli observables can be expressed as
\[\hat{A}_{x} =\vec{a}_{x}\cdot\vec{\sigma}, \tag{102}\] \[\hat{B}_{y} =\vec{b}_{y}\cdot\vec{\sigma}, \tag{103}\]
where \(\vec{\sigma}=(\sigma_{1},\sigma_{2},\sigma_{3})\) are three Pauli matrices, and \(\vec{a}_{x}=(a_{x}^{1},a_{x}^{2},a_{x}^{3})\) and \(\vec{b}_{y}=(b_{y}^{1},b_{y}^{2},b_{y}^{3})\) are unit vectors for \(x,y=0,1\). A Bell-diagonal state shown in Eq. (10) can be expressed on the Hilbert-Schmidt basis as
\[\rho_{\lambda}=\frac{1}{4}\left(I+\sum_{i,j=1}^{3}T_{\lambda,ij}\sigma_{i} \otimes\sigma_{j}\right), \tag{104}\]
where
\[T_{\lambda}=\begin{bmatrix}(\lambda_{1}+\lambda_{3})-(\lambda_{2}+\lambda_{4} )&0&0\\ 0&(\lambda_{3}+\lambda_{2})-(\lambda_{1}+\lambda_{4})&0\\ 0&0&(\lambda_{1}+\lambda_{2})-(\lambda_{3}+\lambda_{4})\end{bmatrix} \tag{105}\]
is a diagonal matrix. The \(\alpha\)-CHSH expression in Eq. (2) can be expressed in terms of \(T_{\lambda}\) as
\[\begin{split}&\text{Tr}\left\{\alpha\rho_{\lambda}(\vec{a}_{0} \cdot\vec{\sigma})\otimes[(\vec{b}_{0}+\vec{b}_{1})\cdot\vec{\sigma}]+\rho_{ \lambda}(\vec{a}_{1}\cdot\sigma)\otimes[(\vec{b}_{0}-\vec{b}_{1})\cdot\vec{ \sigma}]\right\}\\ =&\alpha[\vec{a}_{0}\cdot T_{\lambda}(\vec{b}_{0}+\vec{b}_{1})]+[ \vec{a}_{1}\cdot T_{\lambda}(\vec{b}_{0}-\vec{b}_{1})].\end{split} \tag{106}\]
Following the method in Ref. [41], we introduce a pair of normalized orthogonal vectors, \(\vec{c}_{0}\) and \(\vec{c}_{1}\),
\[\vec{b}_{0}+\vec{b}_{1} =2\cos\theta\vec{c}_{0}, \tag{107}\] \[\vec{b}_{0}-\vec{b}_{1} =2\sin\theta\vec{c}_{1} \tag{108}\]
where \(\theta\in[0,\pi/2]\). This gives the maximal \(\alpha\)-CHSH Bell value,
\[S=\max_{\vec{a}_{0},\vec{a}_{1},\vec{c}_{0},\vec{c}_{1},\theta}2\alpha\cos\theta(\vec{a}_{0}\cdot T_{\lambda}\vec{c}_{0})+2\sin\theta(\vec{a}_{1}\cdot T_{\lambda}\vec{c}_{1}). \tag{109}\]
The maximization of the Bell value is taken over parameters \(\vec{a}_{x},\vec{b}_{y}\) for \(x,y=0,1\), with the parameters \(\lambda_{i}\) fixed. We obtain
\[\begin{split} S&=\max_{\vec{c}_{0},\vec{c}_{1}, \theta}2\alpha\cos\theta|T_{\lambda}\vec{c}_{0}|+2\sin\theta|T_{\lambda}\vec{c }_{1}|\\ &=\max_{\vec{c}_{0},\vec{c}_{1}}2\sqrt{\alpha^{2}|T_{\lambda} \vec{c}_{0}|^{2}+|T_{\lambda}\vec{c}_{1}|^{2}},\end{split} \tag{110}\]
where the first equality in Eq. (110) is saturated when \(\vec{a}_{x}=T_{\lambda}\vec{c}_{x}/|T_{\lambda}\vec{c}_{x}|,x=0,1\), and the second inequality is saturated when \(\tan\theta=|T_{\lambda}\vec{c}_{1}|/(\alpha|T_{\lambda}\vec{c}_{0}|)\). Since \(\alpha>1\) and \(\vec{c}_{0}\) and \(\vec{c}_{1}\) are orthonormal vectors, the maximum of the second line in Eq. (110) is obtained when \(|T_{\lambda}\vec{c}_{0}|\) and \(|T_{\lambda}\vec{c}_{1}|\) equal to the absolute values of the largest and the second largest eigenvalues of \(T_{\lambda}\), respectively. Without loss of generality, we assume \(\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}\geq\lambda_{4}\) in \(\rho_{\lambda}\). This leads to the ordering of the absolute values of the elements of \(T_{\lambda}\),
\[\begin{split}|T_{\lambda,33}|&=|(\lambda_{1}- \lambda_{4})+(\lambda_{2}-\lambda_{3})|\geq|(\lambda_{1}-\lambda_{4})-(\lambda _{2}-\lambda_{3})|=|T_{\lambda,11}|,\\ |T_{\lambda,11}|&=|(\lambda_{3}-\lambda_{4})+(\lambda _{1}-\lambda_{2})|\geq|(\lambda_{3}-\lambda_{4})-(\lambda_{1}-\lambda_{2})|=| T_{\lambda,22}|.\end{split} \tag{111}\]
Thus, the second line in Eq. (110) reaches its maximum when \(\vec{c}_{0}=(0,0,1)^{T}\) and \(\vec{c}_{1}=(1,0,0)^{T}\). Therefore, for any given Bell-diagonal state \(\rho_{\lambda}\) in Eq. (10), the maximal \(\alpha\)-CHSH Bell value is
\[S=2\sqrt{\alpha^{2}(\lambda_{1}+\lambda_{2}-\lambda_{3}-\lambda_{4})^{2}+( \lambda_{1}-\lambda_{2}+\lambda_{3}-\lambda_{4})^{2}}, \tag{112}\]
where measurements for \(\rho_{\lambda}\) to achieve the maximal Bell value, i.e., optimal measurements, are given by
\[\begin{split}\hat{A}_{0}&=\pm\sigma_{z},\\ \hat{A}_{1}&=\pm\sigma_{x},\\ \hat{B}_{0}&=\pm\cos\theta\sigma_{z}\pm\sin\theta \sigma_{x},\\ \hat{B}_{1}&=\pm\cos\theta\sigma_{z}\mp\sin\theta \sigma_{x},\end{split} \tag{103}\]
or
\[\begin{split}\hat{A}_{0}&=\pm\sigma_{z},\\ \hat{A}_{1}&=\mp\sigma_{x},\\ \hat{B}_{0}&=\pm\cos\theta\sigma_{z}\mp\sin\theta \sigma_{x},\\ \hat{B}_{1}&=\pm\cos\theta\sigma_{z}\pm\sin\theta \sigma_{x},\end{split} \tag{104}\]
with \(\tan\theta=(\lambda_{1}-\lambda_{2}+\lambda_{3}-\lambda_{4})/[\alpha(\lambda_ {1}+\lambda_{2}-\lambda_{3}-\lambda_{4})]\).
From the proof, we see that any Bell-diagonal state \(\rho_{\lambda}\) in Eq. (10) reaches its maximal Bell value of Eq. (2) when measurements are taken in the form Eq. (103) or Eq. (104). In other words, measurements in Eq. (103) and Eq. (104) are the optimal measurements for \(\rho_{\lambda}\) that yield the largest \(\alpha\)-CHSH Bell value. To solve the simplified entanglement estimation problem in Eq. (9) for Bell-diagonal states, we need to solve the optimal measurements first. Given a general pair of qubits \(\rho_{AB}\), the maximal \(\alpha\)-CHSH Bell value \(S\) for \(\rho_{AB}\) is expressed as a function of \(T_{ij}\),
\[S=[2(\alpha^{2}+1)(T_{11}^{2}+T_{13}^{2}+T_{31}^{2}+T_{33}^{2})+2(\alpha^{2}-1)\sqrt{(T_{11}^{2}-T_{13}^{2}+T_{31}^{2}-T_{33}^{2})^{2}+4(T_{11}T_{13}+T_{31}T_{33})^{2}}]^{1/2}, \tag{105}\]
where \(T_{ij}=\text{Tr}[\rho_{AB}(\sigma_{i}\otimes\sigma_{j})]\) is the coefficient of \(\rho_{AB}\) under Hilbert-Schmidt basis. When \(\rho_{AB}\) is Bell-diagonal, Eq. (105) degenerates to Eq. (102).
## Appendix B Proof of the lower bound of concurrence
In this section, we prove the analytical concurrence estimation result via the \(\alpha\)-CHSH Bell value. Here we restrict the underlying state as a pair of qubits.
**Theorem 4**.: _Suppose the underlying quantum state is a pair of qubits. For a given \(\alpha\)-CHSH expression in Eq. (2) parametrized by \(\alpha\), if the Bell value is \(S\), then the amount of concurrence in the underlying state can be lower-bounded,_
\[C(\rho_{AB})\geq\sqrt{\frac{S^{2}}{4}-\alpha^{2}}. \tag{106}\]
_The equality can be saturated when measuring a Bell-diagonal state in Eq. (10) with eigenvalues_
\[\begin{split}\lambda_{1}&=\frac{1}{2}+\frac{1}{2} \sqrt{\frac{S^{2}}{4}-\alpha^{2}},\\ \lambda_{2}&=\frac{1}{2}-\frac{1}{2}\sqrt{\frac{S^{ 2}}{4}-\alpha^{2}},\\ \lambda_{3}&=\lambda_{4}=0,\end{split} \tag{107}\]
_using measurements in Eq. (12) with \(\theta=\arctan\!\left(\frac{1}{\alpha}\sqrt{\frac{S^{2}}{4}-\alpha^{2}}\right)\)._
Proof.: Given any Bell value \(S\in(2\alpha,2\sqrt{1+\alpha^{2}}]\), we aim to determine the least amount of concurrence that is required to support the Bell value, \(S\). We solve the simplified optimization problem in Eq. (9), restricting the underlying state as a Bell-diagonal state in Eq. (10) and taking the objective entanglement measure as \(C(\cdot)\),
\[\begin{split} C_{\text{est}}&=\min_{\lambda_{i},i=1,2,3,4}\max\{0,2\lambda_{1}-1\},\\ \text{s.t.}& S=2\sqrt{\alpha^{2}(\lambda_{1}+\lambda_{ 2}-\lambda_{3}-\lambda_{4})^{2}+(\lambda_{1}-\lambda_{2}+\lambda_{3}-\lambda_ {4})^{2}},\\ \lambda_{1}&\geq\lambda_{2}\geq\lambda_{3}\geq \lambda_{4},\\ 1&=\sum_{i=1}^{4}\lambda_{i},\lambda_{i}\geq 0\;,i=1,2,3,4.\end{split} \tag{108}\]
We first reduce the number of variables to simplify the optimization in Eq. (14). Since the variables in Eq. (14) are not independent of each other, we express variables \(\lambda_{1}\) and \(\lambda_{4}\) as functions of variables \(\lambda_{2}\) and \(\lambda_{3}\),
\[\lambda_{\text{max}} =\lambda_{1}=\frac{1}{2}-\frac{1}{\alpha^{2}+1}(\alpha^{2}\lambda_ {2}+\lambda_{3})+\frac{1}{\alpha^{2}+1}\sqrt{\frac{S^{2}(\alpha^{2}+1)}{16}- \alpha^{2}(\lambda_{2}-\lambda_{3})^{2}}, \tag{15}\] \[\lambda_{\text{min}} =\lambda_{4}=\frac{1}{2}-\frac{1}{\alpha^{2}+1}(\lambda_{2}+ \alpha^{2}\lambda_{3})-\frac{1}{\alpha^{2}+1}\sqrt{\frac{S^{2}(\alpha^{2}+1)}{ 16}-\alpha^{2}(\lambda_{2}-\lambda_{3})^{2}}. \tag{16}\]
The non-negativity of \(\lambda_{\text{min}}\) in Eq. (16) restricts \(\lambda_{2}\) and \(\lambda_{3}\) outside an ellipse,
\[\left(\lambda_{2}-\frac{1}{2}\right)^{2}+\alpha^{2}\left(\lambda_{3}-\frac{1}{ 2}\right)^{2}\geq\frac{S^{2}}{16}, \tag{17}\]
and the fact that \(\lambda_{4}\) in Eq. (16) is the smallest among all \(\lambda_{i}\) restricts \(\lambda_{2}\) and \(\lambda_{3}\) inside an ellipse,
\[\lambda_{2}^{2}+(4\alpha^{2}+1)\lambda_{3}^{2}+2\lambda_{2}\lambda_{3}-\lambda_{2}-(2\alpha^{2}+1)\lambda_{3}\leq\frac{S^{2}}{16}-\frac{\alpha^{2}+1}{4}. \tag{18}\]
With the above derivations, the optimization in Eq. (14) can be rewritten with independent variables \(\lambda_{2}\) and \(\lambda_{3}\) as
\[\begin{split} C_{\text{est}}&=\min_{\lambda_{2},\lambda_{3}}2\lambda_{1}-1,\\ \text{s.t.}\quad\lambda_{1}&=\frac{1}{2}-\frac{1}{\alpha^{2}+1}(\alpha^{2}\lambda_{2}+\lambda_{3})+\frac{1}{\alpha^{2}+1}\sqrt{\frac{S^{2}(\alpha^{2}+1)}{16}-\alpha^{2}(\lambda_{2}-\lambda_{3})^{2}},\\ 0&\leq\left(\lambda_{2}-\frac{1}{2}\right)^{2}+\alpha^{2}\left(\lambda_{3}-\frac{1}{2}\right)^{2}-\frac{S^{2}}{16},\\ 0&\geq\lambda_{2}^{2}+(4\alpha^{2}+1)\lambda_{3}^{2}+2\lambda_{2}\lambda_{3}-\lambda_{2}-(2\alpha^{2}+1)\lambda_{3}-\left(\frac{S^{2}}{16}-\frac{\alpha^{2}+1}{4}\right),\\ \lambda_{1}&\geq\lambda_{2}\geq\lambda_{3},\\ 0&\leq\lambda_{2},\lambda_{3},\quad\lambda_{2}+\lambda_{3}\leq 1.\end{split} \tag{19}\]
The optimization in Eq. (19) can be solved analytically, with the global optimal value taken at
\[\lambda_{1} =\frac{1}{2}+\frac{1}{2}\sqrt{\frac{S^{2}}{4}-\alpha^{2}}, \tag{20}\] \[\lambda_{2} =\frac{1}{2}-\frac{1}{2}\sqrt{\frac{S^{2}}{4}-\alpha^{2}},\] \[\lambda_{3} =\lambda_{4}=0.\]
Therefore the estimated concurrence is lower-bounded,
\[C(\rho_{AB})\geq C_{\text{est}}(S)=\sqrt{\frac{S^{2}}{4}-\alpha^{2}}. \tag{21}\]
The lower bound of Eq. (21) is saturated when the \(\alpha\)-CHSH Bell value \(S\) is obtained by measuring the Bell-diagonal state \(\rho_{\lambda}\) with the parameters in Eq. (20) under its optimal measurements. The optimal measurements are in Eq. (17) and Eq. (18) with \(\theta=\arctan(\frac{1}{\alpha}\sqrt{\frac{S^{2}}{4}-\alpha^{2}})\).
## Appendix C Realistic settings in experiment
In this section, we analyze the realistic settings and analytically derive the condition under which a better concurrence estimation can be obtained with a tilted CHSH Bell expression. That is, within the family of \(\alpha\)-CHSH expressions, a Bell expression with parameter \(\alpha>1\) gives a better estimation result than \(\alpha=1\). Considering the estimation function, \(C_{\rm est}(S)\), as a function parameterized by \(\alpha\), our target is to determine the condition for the following inequalities,
\[\frac{\partial C_{\rm est}(S)}{\partial\alpha}|_{\alpha=1} >0, \tag{10}\] \[C_{\rm est}(S)|_{\alpha=1} >0.\]
The conclusions of Theorem 2 and Theorem 3 can be obtained directly from Eq. (10) by substituting the corresponding estimation equation and the \(\alpha\)-CHSH Bell value.
For a better understanding of Theorem 2, we take \(\theta_{1}=\pi/2\) in Eq. (23). With straightforward derivations, we find that when the measurement parameters, \(\theta_{2}\) and \(\theta_{3}\), satisfy
\[(\sqrt{2}+1)(\cos\theta_{2}+\cos\theta_{3})+(\sin\theta_{2}-\sin\theta_{3})> 2(\sqrt{2}+1), \tag{11}\]
there exists a value of \(\delta\) such that \(\theta_{1}=\pi/2\), and \(\theta_{2},\theta_{3}\) and \(\delta\) satisfy the inequality in Eq. (23). In other words, when measurement parameters are set in Eq. (21) with \(\theta_{1}=\pi/2\) and \(\theta_{2},\theta_{3}\) following Eq. (11), there exists a proper state, \(|\phi_{AB}(\delta)\rangle\), such that the concurrence estimate \(C_{\rm est}(S)\) of \(|\phi_{AB}(\delta)\rangle\) for some \(\alpha>1\) is larger than that with \(\alpha=1\). The conclusion from Eq. (11) also applies to Werner states.
## Appendix D Semi-device-independent optimal entanglement estimation
In some scenarios, one may trust the functioning of the source, such as knowing the input states to be pairs of qubits, which can be seen as a semi-device-independent (semi-DI) scenario. With the additional information, one may obtain a better entanglement estimation result. In this section, we compare the performance of this semi-DI scenario with the fully DI scenario under a realistic setting. Suppose the underlying state is a non-maximally entangled state \(|\phi_{AB}(\delta)\rangle\) in Eq. (22) with \(\delta=0.6\), and the measurements are given by Eq. (21) with \(\theta_{1}=\pi/2\) and \(\theta_{2}=-\theta_{3}=\pi/2-1.2\). Under this setting, the fully DI and semi-DI concurrence estimation results are illustrated in Fig. 12. The semi-DI estimation, \(C_{\rm est,semi-DI}(S)\), is strictly larger than the fully DI estimation, \(C_{\rm est,DI}(S)\), for any value of \(\alpha>1\). Besides, when \(\alpha\) takes the value of \(\alpha_{C}^{*}\doteq 2.3973\), \(C_{\rm est,semi-DI}|_{\alpha_{C}^{*}}=\sin(1.2)\doteq 0.9320\) exactly equals the true concurrence of the underlying state with \(\delta=0.6\).
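A compact way to reproduce the comparison of Fig. 12 is sketched below (again as an illustration, assuming the operator \(\hat{S}_{\alpha}=\alpha\hat{A}_{0}\otimes(\hat{B}_{0}+\hat{B}_{1})+\hat{A}_{1}\otimes(\hat{B}_{0}-\hat{B}_{1})\) of Appendix A.2): for the stated setting the Bell value reduces to \(S(\alpha)=2\alpha\cos\theta_{2}+2\sin\theta_{2}\sin 2\delta\), and the DI and semi-DI concurrence estimates follow Eq. (17) and Eq. (14), respectively.

```python
import numpy as np

delta, theta2 = 0.6, np.pi / 2 - 1.2          # setting of Fig. 12
alphas = np.linspace(1.01, 4.0, 400)
S = 2 * alphas * np.cos(theta2) + 2 * np.sin(theta2) * np.sin(2 * delta)

di = (S - 2 * alphas) / (2 * np.sqrt(1 + alphas**2) - 2 * alphas)    # Eq. (17), fully DI
semi = np.sqrt(np.maximum(S**2 / 4 - alphas**2, 0.0))                # Eq. (14), semi-DI

best = np.argmax(semi)
print(alphas[best], semi[best], np.sin(2 * delta))   # alpha ~ 2.40, both values ~ 0.932
print(np.all(semi >= di - 1e-9))                     # semi-DI estimate dominates the DI one
```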
In addition, we also study the condition where a better concurrence estimation result using general \(\alpha\)-CHSH expressions is obtained under some \(\alpha>1\). We present the following theorems for the state families of Werner states and non-maximally entangled states.
Figure 12: Illustration of the comparison between DI and semi-DI concurrence estimation results. The experimental setting is given by \(\delta=0.6,\theta_{1}=\pi/2,\theta_{2}=-\theta_{3}=\pi/2-1.2\). For the estimation results varying in \(\alpha>1\), we plot the estimated values from the settings of semi-DI and DI with the blue solid line and the red dashed line, respectively. The semi-DI concurrence estimation with confirmed knowledge of input dimensions is strictly larger than the DI concurrence estimation.
**Theorem 5**.: _In a Bell test experiment where the input states are pairs of qubits, suppose the underlying state of the system takes the form of Eq. (22) and the observables take the form of Eq. (21). When \(\theta_{1},\theta_{2},\theta_{3}\) and \(\delta\) satisfy_
\[(\cos\theta_{2}+\cos\theta_{3})[\sin 2\delta\sin\theta_{1}(\sin\theta_{2}-\sin \theta_{3})+(1+\cos\theta_{1})\cos\theta_{2}+(1-\cos\theta_{1})\cos\theta_{3} ]>4, \tag{101}\]
_there exists \(\alpha>1\), where a better estimation of \(C_{\rm est}(S)\) can be obtained by using the \(\alpha\)-CHSH inequality parameterized by this value than by using the original CHSH inequality (corresponding to \(\alpha=1\))._
The proof of Theorem 5 is similar to the proof of Theorem 2. Here, we alternatively apply the concurrence estimation result for pairs of qubits input, \(C_{\rm est}(S)\) in Eq. (14), to the condition in Eq. (101). To better understand the theorem, we present an example with \(\theta_{1}=\pi/2\) in Eq. (101). When
\[\begin{split}\theta_{2}+\theta_{3}&=\arccos \biggl{(}\frac{4}{1+\sqrt{2}k}-1\biggr{)},\\ -\frac{\pi}{4}-\arccos k&<\theta_{3}-\theta_{2}<- \frac{\pi}{4}+\arccos k,\end{split} \tag{102}\]
where \(k\in[\sqrt{2}/2,1)\), there exists \(\delta\) such that \(\theta_{1}=\pi/2,\theta_{2},\theta_{3}\) and the \(\delta\) satisfy Eq. (101). In other words, when measurement parameters are set in Eq. (21) with \(\theta_{1}=\pi/2\) and \(\theta_{2},\theta_{3}\) following Eq. (102), there exist non-maximally entangled states where a better semi-DI concurrence estimation result is obtained for some \(\alpha>1\) in comparison with \(\alpha=1\).
In Fig. 12, we observe that under a well-chosen value of \(\alpha\), the semi-DI concurrence estimation coincides with the real value. In many semi-DI CHSH Bell tests, the existence of \(\alpha\) that yields an accurate estimation of state concurrence is ubiquitous. Theorem 1 indicates that under the assumption of qubit inputs, the lower bound of concurrence in Eq. (14) can be saturated at any non-maximally entangled state \(|\phi_{AB}(\delta)\rangle\), once the Bell value is obtained by the optimal measurements of the \(|\phi_{AB}(\delta)\rangle\). In fact, earlier research indicates that the observables,
\[\begin{split}\hat{A}_{0}&=\pm\sigma_{z},\\ \hat{A}_{1}&=\sigma_{x},\\ \hat{B}_{0}&=\pm\cos\theta\sigma_{z}+\sin\theta \sigma_{x},\\ \hat{B}_{1}&=\pm\cos\theta\sigma_{z}-\sin\theta \sigma_{x},\end{split} \tag{103}\]
with \(\tan\theta=\sin 2\delta/\alpha\), are the optimal measurements of \(|\phi_{AB}\rangle\) with any fixed \(\alpha\)[27]. The form of observables in Eq. (103) coincides with our initialization in Eq. (21) when \(\theta_{1}=\pi/2,\theta_{2}+\theta_{3}=0,0<\theta_{2}<\pi/4\). In this case, any non-maximally entangled state \(|\phi_{AB}(\delta)\rangle\) with \(\delta\) satisfying \(\sin 2\delta>\tan\theta_{2}\) reaches its optimal concurrence estimation when \(\alpha\) takes the value of
\[\alpha_{C}^{*}=\frac{\sin 2\delta}{\tan\theta_{2}}, \tag{104}\]
and the optimal semi-DI estimation value is
\[C_{\rm est}|_{\alpha_{C}^{*},\rm semi-DI}=C(|\phi_{AB}\rangle). \tag{105}\]
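As a quick numerical check of Eqs. (104) and (105) against the example of Fig. 12, the short script below (a minimal sketch added for illustration; the variable names are ours, and it assumes the concurrence of \(|\phi_{AB}(\delta)\rangle\) is \(\sin 2\delta\), as implied by the quoted value \(\sin(1.2)\)) reproduces the values \(\alpha_{C}^{*}\doteq 2.3973\) and \(C_{\rm est}\doteq 0.9320\):

```python
# Minimal numeric check of Eqs. (104)-(105) for the Fig. 12 setting:
# delta = 0.6, theta_2 = -theta_3 = pi/2 - 1.2 (so 0 < theta_2 < pi/4).
import math

delta = 0.6
theta2 = math.pi / 2 - 1.2

# Eq. (104): optimal tilting parameter alpha* = sin(2*delta) / tan(theta_2).
alpha_star = math.sin(2 * delta) / math.tan(theta2)

# Eq. (105): the optimal semi-DI estimate equals the true concurrence,
# assumed here to be sin(2*delta) for the non-maximally entangled state.
c_est_opt = math.sin(2 * delta)

print(f"alpha*_C  = {alpha_star:.4f}")   # ~2.3973, as quoted in the text
print(f"C_est|opt = {c_est_opt:.4f}")    # ~0.9320 = sin(1.2)
```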
**Theorem 6**.: _In a Bell test experiment where the input states are pairs of qubits, suppose the underlying state of the system takes the form of Eq. (27) and the observables take the form of Eq. (21). When \(\theta_{1},\theta_{2},\theta_{3}\) and \(p\) satisfy_
\[(1-p)^{2}(\cos\theta_{2}+\cos\theta_{3})\cdot[\sin\theta_{1}(\sin\theta_{2}- \sin\theta_{3})+(1+\cos\theta_{1})\cos\theta_{2}+(1-\cos\theta_{1})\cos\theta _{3}]>4, \tag{106}\]
_there exists \(\alpha>1\), where a better estimation of \(C_{\rm est}(S)\) can be obtained by using the \(\alpha\)-CHSH inequality parameterized by this value than by using the original CHSH inequality (corresponding to \(\alpha=1\))._
The proof of Theorem 6 is similar to the proof of Theorem 3. Theorem 6 is derived from the concurrence estimation result for pairs of qubits input, \(C_{\rm est}(S)\) in Eq. (14), and the condition in Eq. (101). Here we take \(\theta_{1}=\pi/2\) for convenience. When \(\theta_{2}\) and \(\theta_{3}\) satisfy Eq. (102) for \(k\in[\sqrt{2}/2,1)\), there exist Werner states \(\rho_{W}\) such that the semi-DI concurrence estimation under these settings performs better when taking an \(\alpha>1\) CHSH-type Bell expression compared with \(\alpha=1\).
We further interpret Theorem 6 via a special example. In a semi-DI CHSH Bell test, if measurements in Eq. (21) are set with \(\theta_{1}=\pi/2,\theta_{2}+\theta_{3}=0,0<\theta_{2}<\pi/4\), then any Werner state \(\rho_{W}\) with parameter \(p\),
\[p<1-\frac{1}{\sqrt{\sqrt{2}\cos\theta_{2}\cos(\theta_{2}-\pi/4)}}, \tag{107}\]
promises a better concurrence estimation when taking an \(\alpha>1\) CHSH-type Bell expression compared with \(\alpha=1\). The right-hand side (RHS) of Eq. (46) is no larger than \(0.0896\), which implies that only Werner states with \(p\lesssim 0.0896\) can satisfy the condition in Theorem 6. It is worth mentioning that for any \(0<\theta_{2}<\pi/4\), the RHS of Eq. (46) is strictly larger than the RHS of Eq. (47). This allows a wide choice of Werner states that promise a better estimation when taking an \(\alpha>1\) CHSH-type Bell expression compared with \(\alpha=1\) in the semi-DI experiment. In a semi-DI system with the Werner state and measurements satisfying Eq. (46), the semi-DI concurrence estimation reaches its optimum when the CHSH Bell value is taken at \(\alpha\) equal to
\[\alpha_{C}^{*}=\frac{(1-p)^{2}\cos\theta_{2}\sin\theta_{2}}{1-(1-p)^{2}\cos^{2 }\theta_{2}} \tag{47}\]
and the optimal semi-DI estimation value is
\[C_{\rm est}|_{\alpha_{C}^{*},{\rm semi-DI}}=\frac{(1-p)\sin\theta_{2}}{\sqrt {1-(1-p)^{2}\cos^{2}\theta_{2}}}. \tag{48}\]
|
2306.02092
|
Collaborative Group: Composed Image Retrieval via Consensus Learning
from Noisy Annotations
|
Composed image retrieval extends content-based image retrieval systems by
enabling users to search using reference images and captions that describe
their intention. Despite great progress in developing image-text compositors to
extract discriminative visual-linguistic features, we identify a hitherto
overlooked issue, triplet ambiguity, which impedes robust feature extraction.
Triplet ambiguity refers to a type of semantic ambiguity that arises between
the reference image, the relative caption, and the target image. It is mainly
due to the limited representation of the annotated text, resulting in many
noisy triplets where multiple visually dissimilar candidate images can be
matched to an identical reference pair (i.e., a reference image + a relative
caption). To address this challenge, we propose the Consensus Network
(Css-Net), inspired by the psychological concept that groups outperform
individuals. Css-Net comprises two core components: (1) a consensus module with
four diverse compositors, each generating distinct image-text embeddings,
fostering complementary feature extraction and mitigating dependence on any
single, potentially biased compositor; (2) a Kullback-Leibler divergence loss
that encourages learning of inter-compositor interactions to promote consensual
outputs. During evaluation, the decisions of the four compositors are combined
through a weighting scheme, enhancing overall agreement. On benchmark datasets,
particularly FashionIQ, Css-Net demonstrates marked improvements. Notably, it
achieves significant recall gains, with a 2.77% increase in R@10 and 6.67%
boost in R@50, underscoring its competitiveness in addressing the fundamental
limitations of existing methods.
|
Xu Zhang, Zhedong Zheng, Linchao Zhu, Yi Yang
|
2023-06-03T11:50:44Z
|
http://arxiv.org/abs/2306.02092v2
|
# Relieving Triplet Ambiguity: Consensus Network for Language-Guided Image Retrieval
###### Abstract.
Language-guided image retrieval enables users to search for images and interact with the retrieval system more naturally and expressively by using a reference image and a relative caption as a query. Most existing studies mainly focus on designing image-text composition architecture to extract discriminative visual-linguistic relations. Despite great success, we identify an inherent problem that obstructs the extraction of discriminative features and considerably compromises model training: **triplet ambiguity**. This problem stems from the annotation process wherein annotators view only one triplet at a time. As a result, they often describe simple attributes, such as color, while neglecting fine-grained details like location and style. This leads to multiple false-negative candidates matching the same modification text. We propose a novel Consensus Network (Css-Net) that self-adaptively learns from noisy triplets to minimize the negative effects of triplet ambiguity. Inspired by the psychological finding that groups perform better than individuals, Css-Net comprises 1) a consensus module featuring four distinct compositors that generate diverse fused image-text embeddings and 2) a Kullback-Leibler divergence loss, which fosters learning among the compositors, enabling them to reduce biases learned from noisy triplets and reach a consensus. The decisions from four compositors are weighted during evaluation to further achieve consensus. Comprehensive experiments on three datasets demonstrate that Css-Net can alleviate triplet ambiguity, achieving competitive performance on benchmarks, such as +2.77% R@10 and +6.67% R@50 on FashionIQ.
Representation Learning, Multi-modal Retrieval, Image Retrieval with Text Feedback, Triplet Ambiguity
## 1. Introduction
Image retrieval is a fundamental task in computer vision and proves to be valuable in many applications, such as product search [21; 22; 49], internet search [43] and fashion retrieval [36; 40]. Prevalent image retrieval approaches include image-to-image retrieval [12; 13; 14; 50; 55] and text-to-image retrieval [7; 19; 29; 57; 66; 68], which endeavor to locate the image of interest using a single image or descriptive texts as a query. Despite significant progress in image retrieval, users often lack a precise search target in advance but instead seek categories, such as shoes or clothing. Therefore, an interactive system is highly desirable to assist users in refining their intentions, as depicted in Fig. 1. Hence, language-guided image retrieval, which aims to retrieve the target image of interest given a composed query consisting of a reference image and a relative caption describing the desired modification, has recently attracted great attention [6; 30; 53; 58; 35].
Recent studies addressing the task of language-guided image retrieval primarily concentrate on extracting discriminative representations from image-text-image triplets. For example, TIRG [53], VAL [6], and CoSMo [35] propose different ways to modify the
Figure 1. Illustration of the language-guided image retrieval system. Using a reference image and an associated descriptive sentence, the system endeavors to accurately retrieve the intended target image from candidate images for user convenience.
visual features of the reference image conditioned on the relative caption. TIRG uses a simple gating and residual module, VAL devises a visual-linguistic attention learning framework, and CoSMo introduces the content and style modulators. Additionally, CLVC-Net [58] and CLIP4cir [1] devise more intricate multi-modal fusion modules to accentuate the modifications of the reference image. CLVC-Net uses local-wise and global-wise composition modules, while CLIP4cir finetunes the CLIP [47] text encoder and trains a combiner network to fuse the visual and textual features.
Despite the significant success, these works fail to address an inherent problem of the language-guided image retrieval task: the ambiguity of the training data triplets, _i.e._, **triplet ambiguity**. Triplet ambiguity originates from the annotation process in which annotators, focusing on a single data triplet, frequently describe simple properties such as color, while neglecting more fine-grained details, such as location and style. Consequently, many noisy triplets exist where candidate images meet the requirement of the composed query but are not annotated as the desired ground-truth target image, especially when the relative caption is brief. Fig. 2 shows examples in which, apart from the annotated true match, the other two candidate images could also serve as the target image of the composed query. The noisy triplets lead to multiple false-negative candidates capable of fulfilling the same modification text, compromising the representation learning of the model. This is because the metric learning objective in this task aims to push away these false-negative samples from the composed query. We empirically verify that triplet ambiguity does exist in the language-guided image retrieval task in Sec. 4.2. Specifically, we compare the batch-based classification (mostly used in previous works) with a global-wise classification. We find that such a global-wise classification significantly degrades the performance, validating our assumption on triplet ambiguity.
To address the triplet ambiguity problem, we propose a straightforward and effective Consensus Network (Css-Net) for language-guided image retrieval, as illustrated in Fig. 3(a). In short, the key idea underpinning our method to alleviate triplet ambiguity is that "two heads are better than one". To be more specific, an individual often errs due to inherent biases, but groups are less susceptible to making similar mistakes, thereby circumventing sub-optimal solutions. This is known as the psychological finding [27] that groups perform better than individuals on the memory task. Consequently, our goal is to (1) develop a consensus module composed of compositors possessing diverse knowledge to jointly make decisions during evaluation and (2) encourage learning among the compositors to minimize their biases learned on noisy triplets by employing an additional Kullback-Leibler divergence loss (KL loss) [33].
To ensure that the compositors possess distinct knowledge, we differentiate them in two ways: \(\bullet\) Motivated by the finding [37, 42] that high-resolution image features are semantically weak, while low-resolution image features are semantically strong, we first employ two image-text compositors at different depths of the same image encoder (_i.e._, block3 and block4 of the ResNet [25]). The former focuses more on detailed changes like "has a purple star pattern", while the latter emphasizes more overall changes such as "is modern and fashionable". \(\bullet\) Unlike the image-text compositor that uses the relative caption to describe **what should change** in the reference image, we devise the text-image compositor to capture the textual cues based on text-to-image retrieval, where the reference image implies **what should preserve**. Specifically, we denote the reference image feature as \(\mathbf{f_{r}}\), the text feature as \(\mathbf{f_{s}}\), and the composed feature as \(\mathbf{\hat{g}}\). The image-text compositors primarily devised by previous works [6, 35, 53, 58] are in the residual form of \(\mathbf{\hat{g}_{IT}}=\mathbf{f_{r}}+\mathbf{comp(f_{r},f_{s})}\), where _comp_ represents a function to fuse \(\mathbf{f_{r}}\) and \(\mathbf{f_{s}}\). The proposed text-image compositors are in the form of \(\mathbf{\hat{g}_{TI}}=\mathbf{f_{s}}+\mathbf{comp(f_{s},f_{r})}\) for capturing the textual cues of the query. We incorporate two symmetric text-image compositors at the same depths of the image encoder as the image-text compositors. These four compositors share the same image and text encoders but exhibit distinct feature representations based on their respective knowledge. They collaboratively make decisions during evaluation to mitigate the individual biases learned on noisy triplets, which enhances the language-guided image retrieval performance.
To further reduce the negative impact of triplet ambiguity, we impose an additional KL loss between the two image-text compositors. The KL loss enables the two compositors to learn from each other and reach a consensus. This **soft label**, combined with the respective **knowledge** from the two compositors, is more effective than the supervision from one-hot labels, as it helps each compositor to mitigate its own bias learned on noisy triplets and thus prevents overfitting to the annotated target image. To demonstrate that the KL loss provides additional information for the compositors, we also evaluate an intuitive label-smoothing approach as the soft label. However, we find that the uniform distribution of soft labels without knowledge does not address triplet ambiguity due to the high false-positive rate it introduces. In comparison, the KL loss bridges the two image-text compositors in a more flexible and feasible way. The experimental results show that Css-Net achieves competitive performance on three benchmarks, empirically validating the effectiveness of our method.
In summary, our contributions are as follows:
\(\bullet\) We identify an inherent problem in the language-guided image retrieval task and further verify the phenomenon through the preliminary experiments. We observe that the triplet ambiguity leads to sub-optimal model learning (_see Fig. 4_).
Figure 2. Illustration of the triplet ambiguity problem. Triplet ambiguity denotes multiple false-negative samples in the dataset. It arises because the annotator usually sees only one triplet with its true match at a time, while neglecting other plausible candidates in the whole dataset. Triplet ambiguity largely compromises traditional metric learning, which pushes away all other candidates as negatives.
\(\bullet\) To address triplet ambiguity, we introduce a Consensus Network (Css-Net) featuring a consensus module with four unique compositors for joint inference (_see Table 3_). Moreover, we employ KL loss to facilitate learning among compositors and reduce their biases learned on noisy triplets, making Css-Net more robust to triplet ambiguity. _See results in Table 2._
\(\bullet\) Extensive experiments show that the proposed method minimizes the negative impacts of noisy triplets. On three prevalent public benchmarks, we observe that Css-Net significantly surpasses the current state-of-the-art competitive methods, _e.g._, with +2.77% Recall@10 and +6.67% Recall@50 on FashionIQ (_see Table 4, 5, and 6_).
## 2. Related Work
### Cross-modal Image Retrieval
Cross-modal image retrieval is a fundamental task in computer vision that has attracted wide attention from researchers. The most popular patterns of image retrieval are image-to-image matching [9, 12, 38, 51, 55, 59] and text-to-image matching [34, 64, 68], which allow users to search for images of interest with a similar image or some descriptive texts as queries. Although these paradigms have made great progress, they do not provide enough convenience for users to express their search intention. Therefore, more forms of image retrieval with flexible queries such as sketch-based image retrieval [11, 20, 52, 54] have emerged. In this work, we focus on the language-guided image retrieval task which involves a composed query of a reference image and a corresponding caption. To tackle this task, recent works [5, 6, 17, 35, 53, 58, 61, 63, 65] aim to devise a composition architecture to capture the visual-linguistic relation. For example, TIRG [53] uses a simple gating and residual module, VAL [6] devises a visual-linguistic attention learning framework, and CoSMo [35] introduces the content and style modulators. Besides, CLVC-Net [58] devises local-wise and global-wise composition modules, resembling model ensemble. Unlike the methods described above, our Css-Net does not rely on complicated composition modules for learning. Instead, our Css-Net mainly focuses on alleviating the triplet ambiguity problem that leads to a sub-optimal solution for a single compositor. To address this problem, Css-Net trains a consensus module to infer during evaluation and leverage KL loss to reduce individual bias during training.
### Attention Mechanism
The attention mechanism is widely used in language and vision tasks in machine learning to capture the long-range dependencies and the relations between features. This mechanism is also inspired by a psychological finding [8] that humans observe and pay attention to specific parts as needed. In the language-guided image retrieval task, many works use the attention mechanism to design the image-text compositor. For example, VAL [6] employs self-attention by concatenating the text feature to each location of the image features. CoSMo [35] adopts the disentangled multi-modal non-local block to stabilize the training procedure of the content modulator. Besides, CLVC-Net [58] proposes a complex cross-attention between the feature of each word in the sentence and each spatial location of the image feature. In our work, we focus on utilizing several compositors with different knowledge. Without loss of generalizability, we deploy the widely-used CoSMo [35] as our image-text compositor, which takes the feature map of the reference image and the pooled text feature (sentence-level feature) as input. Moreover, we propose a unique text-image compositor to fully utilize the attention mechanism to capture the relation of the average pooled reference image feature and the word-level text feature, which is orthogonal with existing attention-based models and could further improve the performance.
### Co-training
Co-training is a semi-supervised learning technique that exploits two classifiers to acquire complementary information on two views of the data [3]. It has been extensively utilized in various research fields such as image recognition [46], semantic segmentation [44] and domain adaptation [41, 48, 67]. For instance, in domain adaptation, these co-training works explicitly maximize the discrepancies of the classifiers by utilizing extra losses such as adversarial loss [48] or weight discrepancy loss [41]. In contrast, our work adopts a co-training paradigm that leverages four compositors with different knowledge to jointly make decisions for the language-guided image retrieval task. We do not introduce extra loss to explicitly maximize the discrepancy of the four compositors, as they inherently possess various knowledge due to their different designs. For example, the two image-text compositors focus on the detailed and overall changes to the reference images based on the perspective of finding "what should change" in the reference image, and two text-image compositors are in view of the text-to-image retrieval with the reference image implying "what should preserve". Instead, we explicitly encourage the consensus between compositors and leverage the consensus to rectify the single prediction, which is aligned with this work [18] exploring the consistent and complementary correlations of multi-modal data. Refer to Sec. 3.2 for more details.
## 3. Method
This section describes the Consensus Network in detail. Sec. 3.1 introduces the overall framework of the network. Sec. 3.2 elaborates on the consensus module consisting of four distinct compositors with diverse knowledge, and the triplet ambiguity resolution. Sec. 3.3 discusses Css-Net and some recent and relevant works.
### Overview of Consensus Network
As illustrated in Fig. 3 (a), the Consensus Network consists of three components: the image encoder, the text encoder, and the consensus module. The image encoder, \(F_{img}\), extracts mid-level and high-level representations of the input images as:
\[f_{r}^{m},f_{r}^{h}=F_{img}(I_{r}), \tag{1}\]
where \(I_{r}\) is the reference image, and \(f_{r}^{m},f_{r}^{h}\in\mathbb{R}^{C_{in}\times(H\times W)}\) refer to the mid-level and high-level image feature, respectively (_i.e._, output from block3 and block4 of the ResNet [25]). Note that, since \(f_{r}^{m}\) and \(f_{r}^{h}\) are not used in the same compositor, the symbols with the superscript \(m\) in the subsequent formulas correspond to \(f_{r}^{m}\) rather than \(f_{r}^{h}\), and vice versa. \(C_{in}\times(H\times W)\) represents the shape of the feature maps. For brevity, we do not distinguish between different shapes of the image feature maps. The text encoder, denoted as \(F_{text}\)
extracts the features of the relative caption as follows:
\[\mathbf{f_{s}}=F_{text}(S), \tag{2}\]
where \(S\) denotes the relative caption, \(\mathbf{f_{s}}\in\mathbb{R}^{C^{\prime}_{in}\times L}\) refers to the word-level representation, and \(L\) is the number of words in the caption.
After extracting the image and text features, the consensus module transforms the reference image features with the corresponding text features into the composed features. It consists of four distinct compositors possessing different knowledge. These compositors at different depths of the image encoder can be grouped into two types. Specifically, given the reference image feature \(\mathbf{f_{r}}\) and the text feature \(\mathbf{f_{s}}\), the composed query \(\mathbf{\hat{g}}\) can be obtained by either an image-text compositor or a text-image compositor. The image-text compositor has the residual form of \(\mathbf{\hat{g_{IT}}}=\mathbf{f_{r}}+\mathbf{comp}(\mathbf{f_{r}},\mathbf{f_{s}})\), which focuses on "what should change" to the reference image, while the text-image compositor has the residual form of \(\mathbf{\hat{g_{TI}}}=\mathbf{f_{s}}+\mathbf{comp}(\mathbf{f_{s}},\mathbf{f_{r}})\) and emphasizes "what should preserve" based on the text-to-image retrieval. Here, \(\mathbf{comp}\) represents a function to fuse \(\mathbf{f_{r}}\) and \(\mathbf{f_{s}}\). Considering both the performance and computational efficiency, the two text-image compositors \(F^{m}_{TI}\), \(F^{h}_{TI}\), shown in Fig. 3 (b), take the word-level representation \(\mathbf{f_{s}}\) along with the average pooled reference image features \(\mathbf{pool}(\mathbf{f_{r}}^{m}),\mathbf{pool}(\mathbf{f_{r}^{h}})\) as input, respectively, which are given by:
\[\left\{\begin{array}{l}\mathbf{\hat{g}_{TI}^{m}}=F^{m}_{TI}(\mathbf{f_{s}},\mathbf{pool}(\mathbf{f_{r}^{m}}))\\ \mathbf{\hat{g}_{TI}^{h}}=F^{h}_{TI}(\mathbf{f_{s}},\mathbf{pool}(\mathbf{f_{r}^{h}})),\end{array}\right. \tag{3}\]
where \(\mathbf{\hat{g_{TI}}}^{\mathbf{m}}\), \(\mathbf{\hat{g_{TI}}}^{\mathbf{h}}\in\mathbb{R}^{C}\) are the composed features from text-image compositors. Similarly, the image-text compositors \(F^{m}_{IT}\), \(F^{h}_{IT}\), shown in Fig. 3 (c) take the intermediate image feature maps, \(\mathbf{f_{r}^{m}}\), \(\mathbf{f_{r}^{h}}\) along with the pooled sentence-level text representation \(\mathbf{pool}(\mathbf{f_{s}})\) as input,
Figure 3: The overview of the Consensus Network. Given a reference image and a relative caption, we first extract the mid-level image feature \(f^{m}_{r}\) and high-level image feature \(f^{h}_{r}\) with the image encoder \(F_{img}\), and the text representation \(\mathbf{f_{s}}\) with the text encoder \(F_{text}\). Then, we fuse the text representation with either the mid-level or high-level image feature using compositors. The solid blue lines represent mid-level image features, while dotted blue lines indicate high-level image features and solid green lines denote text features. The text-image compositor \(F_{TI}\) has the residual form of \(\mathbf{f_{s}}+F(\mathbf{f_{s}},\mathbf{f_{r}})\), which takes the word-level text representation \(\mathbf{f_{s}}\) and the average pooled reference image feature \(\mathbf{pool}(\mathbf{f_{r}})\) as input. The image-text compositor \(F_{IT}\) has the residual form of \(\mathbf{f_{r}}+F(\mathbf{f_{r}},\mathbf{f_{s}})\) taking the intermediate feature map of the reference image \(\mathbf{f_{r}}\) and the average pooled sentence-level text feature \(\mathbf{pool}(\mathbf{f_{s}})\) as input. Each compositor generates its own composed feature. We use a simple attention-based multi-modal non-local block for the text-image compositor and employ CoSMo [35] for the image-text compositor. Finally, we match the composed features with the target image feature to train the model. The projector block consists of an averaging pooling layer (Avgpool) and a multilayer perceptron (MLP).
which are given by:
\[\left\{\begin{array}{l}\hat{\mathbf{g}}_{\mathbf{IT}}^{\mathbf{m}}=F_{IT}^{m}(\mathbf{f}_{\mathbf{r}}^{\mathbf{m}},pool(\mathbf{f}_{\mathbf{s}}))\\ \hat{\mathbf{g}}_{\mathbf{IT}}^{\mathbf{h}}=F_{IT}^{h}(\mathbf{f}_{\mathbf{r}}^{\mathbf{h}},pool(\mathbf{f}_{\mathbf{s}})),\end{array}\right. \tag{4}\]
where \(\hat{\mathbf{g}}_{\mathbf{IT}}^{\mathbf{m}}\), \(\hat{\mathbf{g}}_{\mathbf{IT}}^{\mathbf{h}}\in\mathbb{R}^{C}\) are the composed features from the image-text compositors. The target image features \(\mathbf{f}_{\mathbf{t}}^{\mathbf{m}}\), \(\mathbf{f}_{\mathbf{t}}^{\mathbf{h}}\) are obtained from the same image encoder \(F_{img}\) as the reference image features. Then the projector blocks (composed of an average pooling layer and a multilayer perceptron (MLP)) are employed to acquire the target features: \(\mathbf{g}_{\mathbf{TI}}^{\mathbf{m}}\), \(\mathbf{g}_{\mathbf{TI}}^{\mathbf{h}}\), \(\mathbf{g}_{\mathbf{IT}}^{\mathbf{m}}\), and \(\mathbf{g}_{\mathbf{IT}}^{\mathbf{h}}\). The four compositors are trained by reducing the distance between the composed features and their corresponding projected target features within the embedding space.
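The PyTorch sketch below illustrates the residual structure of the four compositors and the feature dimensions involved. It is a simplified stand-in, not the authors' implementation: the encoders are replaced by random features, all inputs are pooled vectors (whereas the paper feeds feature maps to the image-text compositors and word-level features to the text-image compositors), and `ResidualCompositor` is a generic fusion block rather than the CoSMo or non-local modules used in the paper.

```python
# Sketch of the consensus-module forward pass (assumed, simplified implementation).
import torch
import torch.nn as nn


class ResidualCompositor(nn.Module):
    """g_hat = primary + comp(primary, context), after projecting both to dimension C."""
    def __init__(self, primary_dim, context_dim, dim=512):
        super().__init__()
        self.proj_p = nn.Linear(primary_dim, dim)
        self.proj_c = nn.Linear(context_dim, dim)
        self.fuse = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, primary, context):
        p, c = self.proj_p(primary), self.proj_c(context)
        return p + self.fuse(torch.cat([p, c], dim=-1))      # residual form


B, C_mid, C_high, C_txt, dim = 8, 1024, 2048, 768, 512

# Pooled stand-ins for f_r^m, f_r^h (block3 / block4 features) and the text feature f_s.
f_r_mid, f_r_high = torch.randn(B, C_mid), torch.randn(B, C_high)
f_s_pooled = torch.randn(B, C_txt)

# Two image-text compositors ("what should change") and two text-image compositors
# ("what should preserve"), mirroring Eqs. (3)-(4).
F_IT_m = ResidualCompositor(C_mid,  C_txt, dim)   # g_IT^m = f_r^m + comp(f_r^m, f_s)
F_IT_h = ResidualCompositor(C_high, C_txt, dim)   # g_IT^h = f_r^h + comp(f_r^h, f_s)
F_TI_m = ResidualCompositor(C_txt,  C_mid, dim)   # g_TI^m = f_s + comp(f_s, f_r^m)
F_TI_h = ResidualCompositor(C_txt,  C_high, dim)  # g_TI^h = f_s + comp(f_s, f_r^h)

g_IT_m = F_IT_m(f_r_mid,  f_s_pooled)
g_IT_h = F_IT_h(f_r_high, f_s_pooled)
g_TI_m = F_TI_m(f_s_pooled, f_r_mid)
g_TI_h = F_TI_h(f_s_pooled, f_r_high)
print(g_IT_m.shape, g_IT_h.shape, g_TI_m.shape, g_TI_h.shape)  # all (B, 512)
```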
In the next section, we will explain how these diverse compositors with different knowledge in the consensus module learn to relieve the triplet ambiguity problem.
### Consensus Module
To address the triplet ambiguity, we propose the consensus module that consists of four distinct compositors with different knowledge. These compositors are trained to generate the composed query \(\hat{\mathbf{g}}\) that is close to the corresponding target image feature \(\mathbf{g}\) in the feature space. At the evaluation stage, compositors independently compute the similarity between each composed query and all candidate target images in the gallery and collaboratively rank the whole gallery by aggregating the given similarities with different weights. We initially discuss the design of ensuring each compositor acquires distinct knowledge, followed by elucidating how compositors learn from each other to reduce their biases learned on noisy triplets.
#### 3.2.1 Pyramid Training for Image-Text Compositor
We develop a pyramid training paradigm for image-text compositors, which is inspired by the finding [37, 42] that the image features of high-resolution are semantically weak, while the image features of low-resolution are semantically strong. Through exploring the different spatial information of the reference image, the two image-text compositors \(F_{IT}^{m}\) and \(F_{IT}^{h}\) independently learn unique knowledge by leveraging the batch-based classification loss, as given by:
\[\mathcal{L}_{IT}^{m}=-\log\frac{\exp(\hat{\mathbf{g}}_{\mathbf{IT}}^{\mathbf{m}}\cdot\mathbf{ g}_{\mathbf{IT},+}^{\mathbf{m}})}{\sum_{j=1}^{B}\exp(\hat{\mathbf{g}}_{\mathbf{IT}}^{\mathbf{m}} \cdot\mathbf{g}_{\mathbf{IT},j}^{\mathbf{m}})} \tag{5}\]
and
\[\mathcal{L}_{IT}^{h}=-\log\frac{\exp(\hat{\mathbf{g}}_{\mathbf{IT}}^{\mathbf{h}}\cdot\mathbf{ g}_{\mathbf{IT},+}^{\mathbf{h}})}{\sum_{j=1}^{B}\exp(\hat{\mathbf{g}}_{\mathbf{IT}}^{\mathbf{h}} \cdot\mathbf{g}_{\mathbf{IT},j}^{\mathbf{h}})}, \tag{6}\]
where \(\hat{\mathbf{g}}_{\mathbf{IT}}^{\mathbf{m}}\) and \(\hat{\mathbf{g}}_{\mathbf{IT}}^{\mathbf{h}}\) are mid-level and high-level composed features from two image-text compositors (Eq. 4). \(\mathbf{g}_{\mathbf{IT},+}^{\mathbf{m}}\) and \(\mathbf{g}_{\mathbf{IT},+}^{\mathbf{h}}\) are corresponding target features obtained from different projectors. The independent batch-based classification loss makes each image-text compositor learn from the interactions between text and different spatial information of the image, which enables these compositors to hold unique knowledge.
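A minimal sketch of how the batch-based classification loss of Eqs. (5)-(6) can be computed is shown below. The helper name is ours; it assumes dot-product similarities between each composed query and the \(B\) in-batch target features, with the annotated match on the diagonal, and is not the authors' exact code.

```python
# Batch-based classification loss: each query is classified against the B in-batch targets.
import torch
import torch.nn.functional as F

def batch_based_classification_loss(composed, targets):
    """composed, targets: (B, C). The i-th target is the positive for the i-th query."""
    logits = composed @ targets.t()                       # (B, B) dot-product similarities
    labels = torch.arange(composed.size(0), device=composed.device)
    return F.cross_entropy(logits, labels)

# Example with the composed / projected target features of one compositor.
B, C = 8, 512
g_hat, g_target = torch.randn(B, C), torch.randn(B, C)
loss_IT_m = batch_based_classification_loss(g_hat, g_target)
print(loss_IT_m.item())
```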
#### 3.2.2 Auxiliary knowledge from Text-Image Compositor
The text-image compositor is a brand-new framework for generating the composed feature from the reference image and text, which is seldom referred to in previous works. It offers additional knowledge due to its distinct design from the image-text compositor. The text-image compositor mainly focuses on the text-to-image retrieval with the reference image implying "what should preserve", while the image-text compositor finds "what should change" in the reference image. We use two symmetric text-image compositors at the same depths of the image encoder to capture different knowledge. The compositors reuse the features from the image encoder \(F_{img}\) and the text encoder \(F_{text}\) with minimal cost. We also apply a batch-based classification loss for each compositor \(F_{TI}^{m}\) and \(F_{TI}^{h}\), as follows:
\[\mathcal{L}_{TI}^{m}=-\log\frac{\exp(\hat{\mathbf{g}}_{\mathbf{TI}}^{\mathbf{m}}\cdot\mathbf{ g}_{\mathbf{TI},+}^{\mathbf{m}})}{\sum_{j=1}^{B}\exp(\hat{\mathbf{g}}_{\mathbf{TI}}^{\mathbf{m}} \cdot\mathbf{g}_{\mathbf{TI},j}^{\mathbf{m}})} \tag{7}\]
and
\[\mathcal{L}_{TI}^{h}=-\log\frac{\exp(\hat{\mathbf{g}}_{\mathbf{TI}}^{\mathbf{h}}\cdot\mathbf{ g}_{\mathbf{TI},+}^{\mathbf{h}})}{\sum_{j=1}^{B}\exp(\hat{\mathbf{g}}_{\mathbf{TI}}^{\mathbf{h}} \cdot\mathbf{g}_{\mathbf{TI},j}^{\mathbf{h}})}, \tag{8}\]
where \(\hat{\mathbf{g}}_{\mathbf{TI}}^{\mathbf{m}}\) and \(\hat{\mathbf{g}}_{\mathbf{TI}}^{\mathbf{h}}\) are composed features from two text-image compositors (Eq. 3), respectively. \(\mathbf{g}_{\mathbf{TI},+}^{\mathbf{m}}\) and \(\mathbf{g}_{\mathbf{TI},+}^{\mathbf{h}}\) are corresponding target features obtained from different projector blocks.
#### 3.2.3 Collaborative Consensus Learning
The triplet ambiguity problem causes the compositors to learn from noisy triplets and introduces biases. To mitigate this problem, we use the Kullback Leibler divergence loss (KL loss) for two image-text compositors. The KL loss enables the compositors to learn collaboratively from each other, reducing biases and reaching a consensus. This approach balances the preservation of distinct knowledge and the achievement of consensus. By enhancing cooperation and knowledge sharing, our method is more robust to the triplet ambiguity problem. Specifically, we denote the resulting posterior probability of \(F_{IT}^{m}\) as \(\mathbf{p}^{\mathbf{m}}\) and that of \(F_{IT}^{h}\) as \(\mathbf{p}^{\mathbf{h}}\). We set a target probability \(\mathbf{p}^{\mathbf{w}}\) as the weighted sum of both \(\mathbf{p}^{\mathbf{m}}\) and \(\mathbf{p}^{\mathbf{h}}\), which is given by:
\[\mathbf{p}^{\mathbf{w}}=\lambda_{1}\cdot\mathbf{p}^{\mathbf{m}}+\lambda_{2}\cdot\mathbf{p}^{\mathbf{h}}, \tag{9}\]
where \(\lambda_{1}\) and \(\lambda_{2}\) are two weight coefficients, and thus the KL loss is formulated as:
\[\mathcal{L}_{KL}=D_{KL}(\mathbf{p}^{\mathbf{m}}||\mathbf{p}^{\mathbf{w}})+D_{KL}(\mathbf{p}^{\mathbf{h }}||\mathbf{p}^{\mathbf{w}}), \tag{10}\]
where \(D_{KL}\) is the KL divergence distance. The batch-based classification loss and KL loss play complementary roles in our approach. The application of KL loss minimizes individual biases of compositors with distinct knowledge. It is not essential to incorporate additional KL loss for the two text-image compositors, given their similarities in input. Specifically, both text-image compositors receive pooled reference image features with identical dimensions and share the same text representations. Consequently, the primary function of these text-image compositors is to act as auxiliary decision-makers during joint inference, addressing the triplet ambiguity issue. The final loss for training is the sum of the above loss functions:
\[\mathcal{L}=\mathcal{L}_{IT}^{m}+\mathcal{L}_{IT}^{h}+\mathcal{L}_{TI}^{m}+\mathcal{L}_{TI}^{h}+\mathcal{L}_{KL} \tag{11}\]
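The consensus objective of Eqs. (9)-(11) can be sketched as follows. This is an assumed implementation rather than the released code: `kl_consensus_loss` is our own name, and the weighted target \(\mathbf{p^{w}}\) is normalized here so that it remains a probability distribution (the paper's \(\lambda_{1}=10\), \(\lambda_{2}=1\) are used as weights).

```python
# Consensus KL loss between the two image-text compositors (Eqs. 9-10).
import torch
import torch.nn.functional as F

def kl_consensus_loss(logits_m, logits_h, lambda1=10.0, lambda2=1.0):
    """logits_*: (B, B) in-batch similarity logits of the two image-text compositors."""
    p_m = F.softmax(logits_m, dim=1)
    p_h = F.softmax(logits_h, dim=1)
    # Weighted consensus target (normalized here so it stays a distribution).
    p_w = (lambda1 * p_m + lambda2 * p_h) / (lambda1 + lambda2)
    # D_KL(p_m || p_w) + D_KL(p_h || p_w); kl_div takes log-probs as its first argument.
    kl_m = F.kl_div(p_w.log(), p_m, reduction="batchmean")
    kl_h = F.kl_div(p_w.log(), p_h, reduction="batchmean")
    return kl_m + kl_h

logits_IT_m, logits_IT_h = torch.randn(8, 8), torch.randn(8, 8)
print(kl_consensus_loss(logits_IT_m, logits_IT_h).item())
# Eq. (11): total loss = L_IT^m + L_IT^h + L_TI^m + L_TI^h + L_KL.
```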
#### 3.2.4 Joint Inference
We train four distinct compositors to independently learn different knowledge from the data triplets and enable the knowledge transfer to reduce biases learned on noisy triplets. At the evaluation step, we involve each compositor in decision-making to further minimize individual bias. Specifically, we use each compositor to independently generate composed features and
measure the similarity between any composed feature and target feature. The resulting similarity matrices are denoted as \(P_{IT}^{m}\), \(P_{IT}^{h}\), \(P_{TI}^{m}\), \(P_{TI}^{h}\in\mathbb{R}^{n_{1}\times n_{2}}\), where \(n_{1}\) and \(n_{2}\) are the number of queries and target images in the gallery. The final similarity matrix for ranking the gallery is the weighted sum of the four similarity matrices from the distinct compositors:
\[P=\alpha_{1}\cdot P_{IT}^{m}+\alpha_{2}\cdot P_{IT}^{h}+\alpha_{3}\cdot P_{ TI}^{m}+\alpha_{4}\cdot P_{TI}^{h}, \tag{12}\]
where \(\alpha_{1}\ldots\alpha_{4}\) are weight coefficients. Note that the common practice of concatenating multiple composed features into one query is a special case in which all \(\alpha\) are equal to 1.
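Joint inference in Eq. (12) amounts to a weighted sum of the four similarity matrices followed by ranking. The sketch below (our own helper names) uses the evaluation weights \(\alpha_{1}\ldots\alpha_{4}=1,0.5,0.5,0.5\) later reported in Sec. 4.1.2, and appends an illustrative Recall@K computation with hypothetical ground-truth indices.

```python
# Weighted fusion of the four compositors' similarity matrices (Eq. 12) and ranking.
import torch

def joint_similarity(P_IT_m, P_IT_h, P_TI_m, P_TI_h, alphas=(1.0, 0.5, 0.5, 0.5)):
    """Each P_* is an (n_queries, n_gallery) similarity matrix."""
    a1, a2, a3, a4 = alphas
    return a1 * P_IT_m + a2 * P_IT_h + a3 * P_TI_m + a4 * P_TI_h

n_q, n_g = 4, 100
P = joint_similarity(*[torch.randn(n_q, n_g) for _ in range(4)])
ranking = P.argsort(dim=1, descending=True)        # gallery indices, best match first

# Recall@10 for illustration, with hypothetical ground-truth gallery indices.
gt = torch.randint(n_g, (n_q,))
recall_at_10 = (ranking[:, :10] == gt[:, None]).any(dim=1).float().mean()
print(recall_at_10.item())
```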
### Discussions
VAL [6] and CLVC-Net [58] are most relevant to our Css-Net. Although VAL employs hierarchical matching strategies, our Css-Net diverges fundamentally in three respects: 1) Facilitating knowledge sharing among compositors at various depths for consensus, as opposed to independent compositors of VAL; 2) Omitting the low-level compositor to enhance performance and efficiency; 3) Implementing a weighted sum during evaluation, enabling adjustable influence of compositors. CLVC-Net incorporates global-wise and local-wise learning through two distinct models, akin to a model ensemble. Conversely, compositors in Css-Net utilize the same encoders but acquire unique knowledge from data triplets, employing a co-training strategy that renders it both effective and efficient. In conclusion, Css-Net represents a novel language-guided image retrieval approach that leverages compositors to learn diverse knowledge from noisy triplets, shares knowledge across compositors to minimize biases, and distinguishes itself from VAL and CLVC-Net.
## 4. Experiments
This section consists of four parts. Sec. 4.1 describes the experimental setup. Sec. 4.2 presents the empirical evidence of the triplet ambiguity problem. Sec. 4.3 conducts some diagnostic experiments for our Consensus Network. Sec. 4.4 evaluates the performance of our method and compares it with the state-of-the-art methods. More qualitative results are provided in Supplementary Material.
### Experimental Setup
#### 4.1.1. Datasets
We evaluate our method on three large-scale language-guided image retrieval datasets: Shoes [2], FashionIQ [60], and Fashion200k [23].
The Shoes dataset [2] is originally crawled from _like.com_ for attribute discovery. It is then annotated in the form of a triplet for dialog-based interactive retrieval. We follow VAL [6] to use \(10,000\) training samples and \(4,658\) evaluation samples.
The FashionIQ dataset [60] is a language-based interactive fashion retrieval dataset with \(77,684\) images across three categories: Dresses, Tops&Tees, and Shirts. It includes \(18,000\) triplets from \(46,609\) training images, each containing a reference image, a target image, and two descriptive natural language captions. The evaluation procedure follows VAL [6] and CoSMo [35] for a fair comparison.
The Fashion200k dataset [23] contains over \(200k\) fashion images from various websites and is designed for attribute-based product retrieval. With descriptive attributes for each product, \(172k\) images are used for training and \(33,480\) test queries for evaluation, following the VAL and CoSMo methods. The attributes are used to generate relative descriptions in an online-processing manner. As shown in Figure 2, we observe that Fashion200k also suffers from the triplet ambiguity problem.
#### 4.1.2. Implementation Details
We modify CoSMo [35] as our baseline by replacing LSTM [16] with RoBERTa [39] as the text encoder. ResNet-50 [25] serves as the image encoder for Shoes and FashionIQ datasets, while ResNet-18 [25] is used for Fashion200k. Image encoders are pretrained on ImageNet. Embedding space dimension \(C\) is \(512\), and the output sizes of image feature maps \(C_{in}\times(H\times W)\) for ResNet50 are \(1024\times(14\times 14)\) and \(2048\times(7\times 7)\). Text feature shape is \(C_{in}^{T}\times L\), with \(C_{in}^{T}\) being \(768\) and \(L\) is the sentence length. During training, we set \(\lambda_{1}=10\) and \(\lambda_{2}=1\), while evaluation uses \(\alpha_{1}\ldots\alpha_{4}=1,0.5,0.5,0.5\). We adopt the standard evaluation metric in retrieval, _i.e._, Recall@K, denoted as R@K for short.
We use Adam [32] as the optimizer with \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\). On Shoes and FashionIQ datasets, the batch size is \(30\) and the base learning rate of the text encoder and other modules are \(2e-6\) and \(2e-5\), respectively. On Fashion200k, the batch size is \(126\) and the base learning rate for the text encoder and other modules are \(2e-6\) and \(2e-4\), respectively. We adopt the warm-up scheme for the first \(5\) epochs. The learning rate decays at epoch \(35\) and epoch \(45\) by a factor of \(10\), and the total number of epochs is \(50\).
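The optimization schedule described above (two parameter groups, a 5-epoch warm-up, and x10 decays at epochs 35 and 45 over 50 epochs on Shoes/FashionIQ) can be reconstructed as follows. This is an assumed sketch, not the released training script; `build_optimizer` is our own helper, and the encoders in the usage example are placeholder modules.

```python
# Sketch of the optimizer / learning-rate schedule described in Sec. 4.1.2.
import torch

def build_optimizer(text_encoder, other_modules, lr_text=2e-6, lr_other=2e-5):
    params = [
        {"params": text_encoder.parameters(), "lr": lr_text},    # RoBERTa gets a lower lr
        {"params": other_modules.parameters(), "lr": lr_other},
    ]
    optimizer = torch.optim.Adam(params, betas=(0.9, 0.999))

    def lr_lambda(epoch):                    # multiplicative factor on each base lr
        if epoch < 5:                        # linear warm-up over the first 5 epochs
            return (epoch + 1) / 5.0
        factor = 1.0
        if epoch >= 35:
            factor *= 0.1                    # decay by 10 at epoch 35
        if epoch >= 45:
            factor *= 0.1                    # and again at epoch 45
        return factor

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler

# Usage with placeholder modules standing in for the text encoder and the rest of the model.
opt, sched = build_optimizer(torch.nn.Linear(768, 512), torch.nn.Linear(2048, 512))
for epoch in range(50):
    # ... training steps: opt.zero_grad(); loss.backward(); opt.step() ...
    sched.step()                             # step once per epoch
```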
### Triplet Ambiguity Verification
#### 4.2.1. Global-wise v.s. Batch-based Optimization
To verify the negative impacts from the noisy triplets as shown in Fig. 2, we quantitatively compare global-wise and batch-based optimization objectives. In particular, we adopt \(\bullet\) Batch-based Classification (BBC), in which only the limited negatives in the current batch are involved, and \(\bullet\) Global-wise Classification (GWC), which mines more negative samples from the training set, for comparison.
If the data triplet does **NOT** have ambiguity, the global-wise classification has the potential to be a comparable or even better choice since it uses more negative samples in the training set and potentially learns a better metric, which is consistent with some findings in metric learning [26, 50, 56] and self-supervised learning [4, 24]. Specifically, consider a composed query \(q\) and a set of features/prototypes of the candidate target images \(\{k_{0},k_{1},\ldots\}\), in which there is one true match denoted \(k_{+}\). The two losses are given by:
\[\mathcal{L}_{BBC}=-\log\frac{\exp(q\cdot k_{+})}{\sum_{i=1}^{B}\exp(q\cdot k_{i})} \tag{13}\]
and
\[\mathcal{L}_{GWC}=-\log\frac{\exp(q\cdot k_{+})}{\sum_{i=1}^{N}\exp(q\cdot k_{i})}, \tag{14}\]
where \(B\) is the batch size, and \(N\) is the number of IDs (classes) in the training set. The only difference between them is that \(\mathcal{L}_{GWC}\) involves more negative counterparts, which results in high false negative rates if the triplet ambiguity does exist. We conduct experiments on the Shoes dataset [2] using two losses, respectively, under the same settings of CoSMo [35]. We observe that batch-based methods outperform global-wise methods by a large margin, as shown in Fig. 4. The experimental results confirm our triplet ambiguity assumption: the training data contains many noisy triplets (_i.e._, false negative samples) as Sec. 1 discusses, which makes learning on noisy triplets challenging. Although batch-based classification suffers less
from triplet ambiguity, the single compositor still faces some noisy negative triplets in the batch and produces a sub-optimal solution.
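The two objectives of Eqs. (13)-(14) can be contrasted with the sketch below. The helper names are ours, and `gwc_loss` assumes a learnable prototype per training ID, which is one possible realization of "mining more negatives in the training set" rather than the authors' exact setup.

```python
# Contrast between batch-based (Eq. 13) and global-wise (Eq. 14) classification.
import torch
import torch.nn.functional as F

def bbc_loss(q, k):
    """Eq. (13): negatives are the other targets inside the current batch."""
    logits = q @ k.t()
    return F.cross_entropy(logits, torch.arange(q.size(0), device=q.device))

def gwc_loss(q, prototypes, class_ids):
    """Eq. (14): negatives are prototypes of all N training IDs (assumed realization)."""
    logits = q @ prototypes.t()
    return F.cross_entropy(logits, class_ids)

B, C, N = 8, 512, 10000
q = torch.randn(B, C)
print(bbc_loss(q, torch.randn(B, C)).item())
print(gwc_loss(q, torch.randn(N, C), torch.randint(N, (B,))).item())
```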
#### 4.2.2 Label Smoothing
We first briefly illustrate one intuitive way we considered to alleviate the triplet ambiguity problem: label smoothing. The motivation is that there are many false negative samples due to triplet ambiguity, and label smoothing could alleviate overfitting to the annotated true match. In label smoothing, the label \(\mathbf{y}=[y_{1},\dots,y_{n}]\) is not a hard one-hot label but rather a soft one, which is given by:
\[y_{i}=\left\{\begin{array}{ll}1&(\text{if }i=c)\\ 0&(\text{if }i\neq c)\end{array}\right.\Longrightarrow y_{i}=\left\{\begin{array}{ll}1-\epsilon&(\text{if }i=c)\\ \frac{\epsilon}{B-1}&(\text{if }i\neq c),\end{array}\right. \tag{15}\]
where \(y_{i}\) is the label for class \(i\), \(c\) is the corresponding class of the query, \(B\) is the batch size, and \(\epsilon\) is a hyperparameter for label smoothing, set to \(0.1\). We use label smoothing for both the batch-based classification and the global-wise classification, and perform the experiments on the Shoes dataset, which are presented in Fig. 4. The experimental results indicate that label smoothing deteriorates the performance of batch-based classification but enhances the performance of global-wise classification. This is because \(\bullet\) global-wise classification is severely affected by triplet ambiguity since there are always false negative samples during learning, while batch-based classification is affected only when noisy negative triplets are in the batch; and \(\bullet\) label smoothing could alleviate triplet ambiguity but also introduces another problem: many true negative target samples are assigned weights to learn, which impairs model training for batch-based classification. The experimental results also verify the effectiveness of the KL loss used in this work.
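Eq. (15) can be realized as a soft cross-entropy over the \(B\) in-batch candidates. The sketch below uses our own helper names and \(\epsilon=0.1\) as in the text; it is an assumed implementation of the label-smoothing variant, not the authors' code.

```python
# Label-smoothing targets of Eq. (15) and a soft cross-entropy over in-batch candidates.
import torch
import torch.nn.functional as F

def smoothed_targets(batch_size, eps=0.1, device=None):
    y = torch.full((batch_size, batch_size), eps / (batch_size - 1), device=device)
    y.fill_diagonal_(1.0 - eps)          # the annotated true match keeps weight 1 - eps
    return y

def soft_classification_loss(logits, soft_targets):
    return -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

B = 8
logits = torch.randn(B, B)               # composed-query vs. in-batch target similarities
print(soft_classification_loss(logits, smoothed_targets(B)).item())
```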
### Diagnostic Experiments
#### 4.3.1 Pyramid Training
In Sec. 3.2.1, we present the design of the pyramid training, which exploits the image features from the mid-level and high-level blocks of the image encoder. In this section, we verify its effectiveness by comparing it with different designs. Table 1 reports the experimental results. Our base model is \(F_{IT}^{m}+F_{IT}^{h}\) used in Css-Net, which applies pyramid training on mid-level and high-level features. We conduct experiments on two variants for pyramid training: 1) \(F_{IT}^{l}+F_{IT}^{h}\), which uses the image features from block2 and block4 of the ResNet, and 2) \(F_{IT}^{l}+F_{IT}^{m}+F_{IT}^{h}\), which uses three image-text compositors at three blocks to generate the composed query. Both variants perform worse than Css-Net, e.g., \(-2.55\%\) and \(-0.48\%\) on the R@10 evaluation metric. However, they both surpass \(F_{IT}^{h}\), which uses only one image-text compositor at block4. These results indicate that 1) the low-level image feature is too semantically weak for pyramid training, and 2) groups perform better than individuals.
#### 4.3.2 Efficacy of Model Designs
Table 2 shows the effectiveness of our core idea, which uses four different compositors with the KL loss to address the triplet ambiguity problem. We make three observations from the table. First, employing image-text compositors at other layers of the image encoder (_i.e._, \(\mathcal{L}_{IT}^{m}\) in Eq. 5) can reduce the triplet ambiguity problem and improve the performance significantly (\(77.35\%\rightarrow\)\(79.63\%\) at the R@50 metric). This indicates that two image-text compositors can benefit from the interactions between the relative caption and different spatial information of the reference image. Second, adding a new compositor framework, the text-image compositor, to this task (_i.e._, \(\mathcal{L}_{TI}^{m}+\mathcal{L}_{TI}^{h}\) in Eqs. 7&8) can further
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Shoes} \\ \cline{2-4} & R@1 & R@10 & R@50 \\ \hline \(F_{IT}^{h}\) & 17.27 & 52.26 & 77.35 \\ \(F_{IT}^{l}+F_{IT}^{h}\) & 18.24 & 52.14 & 78.12 \\ \(F_{IT}^{l}+F_{IT}^{m}+F_{IT}^{h}\) & 18.81 & 54.21 & 79.55 \\ \(F_{IT}^{m}+F_{IT}^{h}\) & 19.10 & 54.69 & 79.63 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Comparison of various pyramid training methods on the Shoes dataset, which are trained and evaluated independently.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(\mathcal{L}_{IT}^{m}\) & \(\mathcal{L}_{TI}^{m}+\mathcal{L}_{TI}^{h}\) & \(\mathcal{L}_{KL}\) & \multicolumn{3}{c}{Shoes} \\ (Eq. 5) & (Eqs. 7\&8) & (Eq. 10) & R@1 & R@10 & R@50 \\ \hline \multicolumn{3}{l}{Baseline: only \(\mathcal{L}_{IT}^{h}\) (Eq. 6)} & 17.27 & 52.26 & 77.35 \\ \hline ✓ & & & 19.10(+1.83) & 54.69(+2.43) & 79.63(+2.28) \\ ✓ & ✓ & & 19.47(+2.20) & 54.63(+2.37) & 80.46(+3.11) \\ ✓ & ✓ & ✓ & 20.13(+2.86) & 56.81(+4.55) & 81.32(+3.97) \\ \hline \hline \end{tabular}
\end{table}
Table 2. Efficacy of model designs. The experiments are conducted on the Shoes dataset under the same setting.
Figure 4. Comparison between the batch-based classification and the global-wise classification on the Shoes dataset. Batch-based classification discriminates different objects within the batch, while global-wise classification distinguishes all categories, introducing more ambiguous negatives. Obviously, the global-wise classification significantly degrades the performance since more false negative samples from triplet ambiguity are involved.
improve the performance (79.63% \(\rightarrow\) 80.46% at R@50 metric). This demonstrates the advantage of the novel text-image compositors. Third, applying an extra KL loss for the posterior probability from two image-text compositors (\(\mathcal{L}_{KL}\) in Eq. 10) can enhance the performance notably (80.46% \(\rightarrow\) 81.32% at R@50 metric). This suggests that the KL loss enables two image-text compositors to share and learn from their respective knowledge, thus minimizing the biases.
#### 4.3.3 Effect of Joint Inference
At the evaluation stage, Css-Net makes compositors jointly make the decision as introduced in Sec. 3.2.4. Table 3 shows the experimental results. It is observed that joint inference surpasses every single compositor and verifies our motivation that groups perform better than individuals and could be used to reduce their own biases mainly caused by triplet ambiguity.
### The Effectiveness of Our Method
We present the experimental results in Table 4, Table 5, and Table 6. We make two observations: **(1) We adopt a competitive baseline with few modifications.** As mentioned in Sec. 4.1, we adopt CoSMo as our baseline and replace the LSTM with a more robust text encoder, RoBERTa, and observe consistent improvement. For example, on the FashionIQ dataset, our baseline improves CoSMo by 4.68% R@10 on average, and surpasses CoSMo by 3.90% R@10 on the Shoes dataset. We infer that RoBERTa is more robust than the LSTM [28] at accurately capturing the textual information. However, our baseline is slightly lower than the reported results of CoSMo on Fashion200k, as the authors do not provide sufficient implementation details for reproduction. This also limits comparing our method with CQBIR [62], whose baseline uses Faster R-CNN [15] as a different image encoder. Nevertheless, our method is more effective than CQBIR on FashionIQ and Shoes, where the triplet ambiguity problem is more serious. **(2) The proposed Css-Net further improves on such a strong baseline and advances the state of the art, verifying its effectiveness.** For example, Table 4 shows Css-Net improves retrieval accuracy on all FashionIQ subsets. Compared to the baseline, it gains +2.70% R@10 on Dress, +4.48% R@10 on Shirt, and +5.68% R@10 on Toptee. Compared to previous works, our method brings overall improvements (e.g., +2.77% R@10 and +6.67% R@50 on average over CLIP4Cir). The improvements are significant and empirically validate the effectiveness of Css-Net for handling the triplet ambiguity problem. Besides, in Table 5, Css-Net surpasses the state of the art (CLVC-Net) on the Shoes dataset, achieving improvements of +2.49% R@1 and +2.42% R@10, which further demonstrates that Css-Net is robust across different
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Fashion200k} \\ \cline{2-3} & R@1 & R@10 & R@50 \\ \hline MRN [31] & 13.4 & 40.0 & 61.9 \\ FiLM [45] & 12.9 & 39.5 & 61.9 \\ TIRG [53] & 14.1 & 42.5 & 63.8 \\ VAL [6] & 21.2 & 49 & 68.8 \\ DCNet [30] & - & 46.9 & 67.6 \\ CoSMo [35] & 23.3 & 50.4 & 69.3 \\ CLVC-Net\(\dagger\)[58] & 22.6 & **53.0** & **72.2** \\ ARTEMIS [10] & 21.5 & 51.1 & 70.5 \\ \hline Baseline & 20.9 & 47.7 & 67.8 \\ Css-Net & 22.2 & 50.5 & 69.7 \\ Css-Net\(\dagger\) & **23.4** & 52.0 & 72.0 \\ \hline \hline \end{tabular}
\end{table}
Table 6. Quantitative results on Fashion200k dataset. The best results are in bold. The symbol \(\dagger\) denotes model ensemble method.
\begin{table}
\begin{tabular}{l c c c c c c c|c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Image Encoder} & \multicolumn{2}{c|}{Dress} & \multicolumn{2}{c|}{Shirt} & \multicolumn{2}{c|}{Toptee} & \multicolumn{2}{c}{Average} \\ \cline{3-10} & & R@10 & R@50 & R@10 & R@50 & R@10 & R@50 & R@10 & R@50 \\ \hline MRN [31] & ResNet-152 & 12.32 & 32.18 & 15.88 & 34.33 & 18.11 & 36.33 & 15.44 & 34.28 \\ FiLM [45] & ResNet-50 & 14.23 & 33.34 & 15.04 & 34.09 & 17.30 & 37.68 & 15.52 & 35.04 \\ TIRG [53] & ResNet-17 & 14.87 & 34.66 & 18.26 & 37.89 & 19.08 & 39.62 & 17.40 & 37.39 \\ VAL [6] & ResNet-50 & 21.12 & 42.19 & 21.03 & 43.44 & 25.64 & 49.49 & 22.60 & 45.04 \\ DCNet [30] & ResNet-50 & 28.95 & 56.07 & 23.95 & 47.30 & 30.44 & 58.29 & 27.78 & 53.89 \\ CoSMo [35] & ResNet-50 & 26.45 & 52.43 & 26.94 & 52.99 & 31.95 & 62.09 & 28.45 & 55.84 \\ CLVC-Net [58] & ResNet-50\(\times\)2 & 29.85 & 56.47 & 28.75 & 54.76 & 33.50 & 64.00 & 30.70 & 58.41 \\ ARTEMIS [10] & ResNet-50 & 27.16 & 52.40 & 21.78 & 54.83 & 29.20 & 43.64 & 26.05 & 50.29 \\ CLIP4Cir [1] & ResNet-50 & 31.73 & 56.02 & 35.77 & 57.02 & 36.46 & 62.77 & 34.65 & 58.60 \\ \hline Baseline & ResNet-50 & 30.95 & 56.98 & 31.48 & 59.98 & 36.97 & 67.31 & 33.13 & 61.42 \\ Css-Net & ResNet-50 & **33.65** & **63.16** & **35.96** & **61.96** & **42.65** & **70.70** & **37.42** & **65.27** \\ \hline \hline \end{tabular}
\end{table}
Table 4. Quantitative results of language-guided image retrieval on the FashionIQ dataset. The best results are in bold. The symbol \(\ast\) marks an updated version by the same authors. The symbol \(\dagger\) indicates that this method deploys model ensemble.
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Shoes} \\ \cline{2-4} & R@1 & R@10 & R@50 \\ \hline MRN [31] & 11.74 & 41.70 & 67.01 \\ FiLM [45] & 10.19 & 38.89 & 68.30 \\ TIRG [53] & 12.6 & 45.45 & 69.39 \\ VAL [6] & 16.49 & 49.12 & 73.53 \\ CoSMo [35] & 16.72 & 48.36 & 75.64 \\ DCNet [30] & - & 53.82 & 79.33 \\ CLVC-Net\(\dagger\)[58] & 17.64 & 54.39 & 79.47 \\ ARTEMIS [10] & 18.72 & 53.11 & 79.31 \\ \hline Baseline & 17.27 & 52.26 & 77.35 \\ Css-Net & **20.13** & **56.81** & **81.32** \\ \hline \hline \end{tabular}
\end{table}
Table 5. Quantitative results on the Shoes dataset. The best results are in bold. The symbol \(\dagger\) denotes model ensemble method.
datasets. Table 6 presents the Fashion200k results. Although our baseline is below the reported results of CoSMo because of insufficient implementation details for reproduction, Css-Net brings a considerable improvement (_e.g._, +2.8% R@10 over the baseline) and is still competitive with many state-of-the-art works, especially when applying the model ensemble (_e.g._, +4.3% R@10 over the baseline).
## 5. Conclusion
We present a Consensus Network (Css-Net) for language-guided image retrieval. Css-Net aims to relieve the inherent triplet ambiguity problem, which arises when the dataset contains multiple false-negative candidates that match the same query text. This problem stems from annotators overlooking fine-grained details of the images and describing only simple properties. The resulting noisy triplets significantly compromise the metric learning objective. To alleviate this problem, Css-Net employs a consensus module with four diverse compositors that possess different knowledge, learn mutually during training, and infer collaboratively during evaluation. Specifically, Css-Net adopts a pyramid training paradigm and an auxiliary text-image compositor design that endow each compositor with unique knowledge. Css-Net also utilizes a KL loss that facilitates learning among the compositors and reduces their biases learned on noisy triplets. Our experiments show that Css-Net is a competitive method on three benchmarks, demonstrating its effectiveness and robustness. Moreover, Css-Net is orthogonal and complementary to most existing methods, and can further enhance their performance. As future work, we plan to extend our method to real-world applications that involve learning from noisy triplets.
|
2302.10161
|
Direct Laser Cooling of Polyatomic Molecules
|
Over the past decade, tremendous progress has been made to extend the tools
of laser cooling and trapping to molecules. Those same tools have recently been
applied to polyatomic molecules (molecules containing three or more atoms). In
this review, we discuss the scientific drive to bring larger molecules to
ultralow temperatures, the features of molecular structure that provide the
most promising molecules for this pursuit, and some technical aspects of how
lasers can be used to control the motion and quantum states of polyatomic
molecules. We also present opportunities for and challenges to the use of
polyatomic molecules for science and technology.
|
Benjamin L. Augenbraun, Loic Anderegg, Christian Hallas, Zack D. Lasner, Nathaniel B. Vilas, John M. Doyle
|
2023-02-20T18:48:12Z
|
http://arxiv.org/abs/2302.10161v1
|
# Direct Laser Cooling of Polyatomic Molecules
###### Abstract
Over the past decade, tremendous progress has been made to extend the tools of laser cooling and trapping to molecules. Those same tools have recently been applied to polyatomic molecules (molecules containing three or more atoms). In this review, we discuss the scientific drive to bring larger molecules to ultralow temperatures, the features of molecular structure that provide the most promising molecules for this pursuit, and some technical aspects of how lasers can be used to control the motion and quantum states of polyatomic molecules. We also present opportunities for and challenges to the use of polyatomic molecules for science and technology.
keywords: ultracold molecules, laser cooling, polyatomic molecules +
# Direct Laser Cooling of Polyatomic Molecules
Benjamin L. Augenbraun, Loic Anderegg, Christian Hallas, Zack D. Lasner, Nathaniel B. Vilas, John M. Doyle
Department of Physics, Harvard University, Cambridge, MA 02138, USA
Harvard-MIT Center for Ultracold Atoms, Cambridge, MA 02138, USA
[email protected]
* 2.3.5 Dependence on spin and electronic angular momenta
* 2.3.6 Hyperfine effects and nuclear spin statistics
* 2.4 Transitions
* 2.4.1 Electronic transitions
* 2.4.2 Vibrational transitions
* 2.4.3 Measuring vibrational branching ratios
* 2.4.4 Repumping transition spectroscopy
* 2.4.5 Designing an optical cycle
* 2.4.6 Rotational transitions
* 2.5 Perturbations
* 3 Experimental techniques
* 3.1 Cryogenic buffer-gas beams
* 3.2 Optical cycling
* 3.3 Optical forces
* 3.4 Transverse cooling
* 3.5 Molecular deceleration
* 3.5.1 Radiative slowing
* 3.5.2 Zeeman-Sisyphus deceleration
* 3.6 Magneto-optical trapping
* 3.7 Sub-Doppler cooling
* 3.7.1 Grey molasses
* 3.7.2 \(\Lambda\)-enhanced grey molasses
* 3.7.3 Single-frequency cooling
* 3.8 Optical trapping
* 3.9 Preparation and coherent control of single quantum states
* 4 Outlook and challenges
* 4.1 Toward larger molecules
* 4.2 Other molecular motifs
* 4.3 Challenges and possibilities for other polyatomic molecules
* 5 Conclusion
* 6 Acknowledgments
## 1 Introduction
Detailed understanding of polyatomic molecules (those containing three or more atoms) is central to such diverse fields as chemistry, biology, and interstellar science. Beyond the inherent interest in their structures and interactions, physicists also hope to fully control polyatomic molecules at the single quantum state level for next-generation explorations. The diversity of electronic structures, geometries, and atomic constituents present in polyatomic molecules may provide powerful building blocks for quantum information processing and precision tests of fundamental physics, e.g. searching for dark matter or for new particles that help explain the matter-antimatter asymmetry of the universe. However, using complex molecules for these applications requires us to confront one of their most defining--and daunting--characteristics: their immensely rich and varied internal structures. Attempts to tame polyatomic molecules, for example by controlling their internal (e.g. vibrational/rotational) and external (motional) states have a long history both in atomic, molecular, and optical (AMO) physics and in physical chemistry. It is the purpose of this review to describe recent advances that have introduced direct laser cooling as a new element in the toolkit of polyatomic molecular control.
The already achieved exquisite control over certain quantum systems has been realized in no small part by using optical photons--this is a hallmark of modern quantum science and physical chemistry. The most recent wave of advances with atoms and molecules relies on the ability to cool, control, and detect molecules efficiently (and, ideally, nondestructively). Optical cycling, a process in which molecules are made to rapidly and repeatedly scatter many hundreds or thousands of photons, can be an effective way to carry out these tasks. Using photon cycling, scientists have exploited the mechanical effects of light to prepare and interrogate individual atoms and diatomic molecules in pristine and dynamically controllable traps (Barredo et al. (2016); Endres et al. (2016)). Optical photons also allow scientists to probe the fragile quantum effects that form the heart of modern quantum technologies (Haroche (2013); Wineland (2013)). The creation of quantum gases of atoms and the production of useful architectures for quantum computing also rely on these experimental feats. These wide ranging impacts span many frontiers of quantum science, as well as cold chemistry, and precision searches for new fundamental (particle) physics.
### Polyatomic molecules for quantum science
Ultracold molecules are a promising platform for quantum simulation and quantum information processing due to their large electric dipole moments and the intrinsically long coherence times achievable in low-lying rotational states. Heteronuclear molecules have molecule-frame electric dipole moments, typically on the order of several debye (D), that can be accessed in the electronic ground state, eliminating the need for the short-lived excited electronic levels often employed in atomic systems. The molecules may interact via the electric dipole-dipole interaction, whose long-range and anisotropic behavior enables access to a diverse variety of Hamiltonians for quantum simulation, as well as enabling entangling gates for quantum information applications. While not always required to achieve interactions, it can be advantageous to realize a molecular dipole moment in the laboratory frame by aligning the atoms or molecules with an external electric field (either DC or microwave).
While polyatomic molecules, compared to diatomic molecules, possess greater complexity and additional degrees of freedom that need to be controlled, many of the structures present in polyatomic molecules have no analogue in atoms or diatomic molecules. One especially appealing feature of polyatomic molecules is the existence of parity-doubled states in the ground electronic manifold. These "parity doublets" comprise nearly-degenerate pairs of quantum states with opposite parity that can be mixed by a small DC electric field (often \(\lesssim 100\) V/cm depending on the molecule), enabling the molecule to be easily polarized in the laboratory frame. Moreover, because this method of polarizing the molecule does not mix end-over-end rotational levels (which requires much larger electric fields, typically \(>\)1 kV/cm), the polarized molecules contain states with positive, negative, and near-zero lab-frame dipole moment (see Fig. 1).
The existence of parity-doubled states is a general feature of molecules that have a nonzero projection of the total angular momentum along the molecule-frame dipole moment. In particular, for a molecule with total angular momentum \(J\), whose projection onto the laboratory \(Z\) axis is \(M\) and whose projection onto the molecular symmetry axis is \(K\), the states \(2^{1/2}|\pm\rangle=|J,K,M\rangle\pm(-1)^{J-K}|J,-K,M\rangle\) have opposite parity. States with \(K\neq 0\) are universally present in polyatomic molecules: linear polyatomic molecules have angular momentum about the internuclear axis in vibrational bending modes, while nonlinear polyatomic molecules have nonzero moments of inertia about the symmetry axis even in the electronic and vibrational
ground state. The degeneracy of these parity doublets is lifted by a range of mechanisms, including Coriolis interactions, nuclear spin-rotation interactions, hyperfine interactions, or molecular asymmetry (Klemperer et al. (1993)). The resulting energy splitting can be as large as tens of MHz (notably in linear polyatomic molecules) or as small as tens of Hz or kHz (e.g., in singlet symmetric top molecules).
For the parity doublet states \(|\pm\rangle\) described above, the Hamiltonian under the influence of an external DC electric field \(\mathcal{E}\) is
\[H=\begin{pmatrix}-\hbar\Delta/2&-d\mathcal{E}\\ -d\mathcal{E}&\hbar\Delta/2\end{pmatrix} \tag{1}\]
where \(\Delta\) is the parity doublet splitting and
\[d\mathcal{E}\equiv\langle+|\vec{d}\cdot\vec{\mathcal{E}}|-\rangle=d_{0} \mathcal{E}\frac{KM}{J(J+1)} \tag{2}\]
where \(d_{0}\) is the permanent dipole moment of the molecule.
Figure 1: Molecular polarization as a function of applied laboratory electric field. Parameters are typical of YbOH molecules. Blue lines show alignment in a state without parity doubling (\(N\leq 2\), \(|M_{N}|\leq 1\) plotted). Pink lines show alignment of \(N=1\) sublevels in a vibrational level that has parity doubling, e.g. the fundamental bending vibrational state. We indicate two electric-field regimes: one where mixing occurs within a single rotational state and another where mixing occurs among many rotational states.
The energy eigenvalues are therefore
\[E_{\pm}=\pm\frac{1}{2}\hbar\Delta\left[1+4\left(\frac{d\mathcal{E}}{\hbar\Delta} \right)^{2}\right]^{1/2} \tag{3}\]
At low electric fields \(d\mathcal{E}/\hbar\Delta\ll 1\) the states split apart quadratically, while at high electric fields \(d\mathcal{E}/\hbar\Delta\gg 1\) the molecule becomes aligned in the laboratory frame and follows linear Stark shifts:
\[E\approx-d_{0}\mathcal{E}\frac{KM}{J(J+1)}, \tag{4}\]
where \(K\) and \(M\) are signed quantities. The structure of the molecule in this "polarized" regime is amenable to a number of interesting quantum simulation and quantum information applications.
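As a concrete illustration of Eqs. (1)–(3), the short Python sketch below diagonalizes the two-level parity-doublet Hamiltonian for assumed parameters (a 10 MHz doublet splitting, a 1 D dipole moment, and \(J=K=M=1\); these values are chosen purely for illustration and are not those of any particular molecule). It shows the crossover from quadratic Stark shifts at low field to linear shifts once \(d\mathcal{E}\gg\hbar\Delta\).

```python
import numpy as np

# Illustrative parameters (assumed for this sketch, not taken from a specific molecule)
hbar = 1.054571817e-34          # J s
Delta = 2 * np.pi * 10e6        # parity-doublet splitting, 10 MHz (angular frequency)
d0 = 3.33564e-30                # 1 debye in C m
J, K, M = 1, 1, 1

def stark_energies(E_field):
    """Eigenvalues of the two-level parity-doublet Hamiltonian of Eqs. (1)-(3)."""
    dE = d0 * E_field * K * M / (J * (J + 1))
    H = np.array([[-hbar * Delta / 2.0, -dE],
                  [-dE, hbar * Delta / 2.0]])
    return np.linalg.eigvalsh(H)

for E in [0.0, 1e2, 1e4, 1e5]:   # applied field in V/m (1 kV/cm = 1e5 V/m)
    lo, hi = stark_energies(E)
    to_MHz = 1.0 / (2 * np.pi * hbar * 1e6)
    print(f"E = {E:8.0f} V/m -> E_-/h = {lo*to_MHz:+8.2f} MHz, E_+/h = {hi*to_MHz:+8.2f} MHz")
```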
#### 1.1.1 Quantum information processing
Since the seminal proposal by DeMille (2002), ultracold polar molecules have generated significant interest as a platform for quantum information applications, see e.g., Yelin et al. (2006); Ni et al. (2018); Sawant et al. (2019). The key element of these proposals is the fact that polar molecules have many long-lived rotational and vibrational degrees of freedom which enable storage of quantum information, while dipole-dipole interactions enable transfer of information and entanglement between individual molecules. Recent progress toward this goal has been realized with diatomic molecules by Holland et al. (2022) and Bao et al. (2022), who demonstrated dipolar coupling and entanglement between CaF molecules trapped in a tweezer array.
The linear Stark shifts discussed above for polyatomic molecules make them especially appealing for certain quantum computing and entanglement schemes, including those proposed by Andre et al. (2006); Wei et al. (2011); Yu et al. (2019); Zhang and Liu (2017). For example, Wei et al. (2011) theoretically studied entanglement generation in polar symmetric top molecules and identified two sets of polarized states that could be used as qubits in a quantum computer. They additionally proposed a method for realizing a CNOT entangling gate between two such molecules. Interestingly, it was also pointed out that polarized symmetric top molecules share certain similarities with the NMR quantum computing platform (see, e.g., Cory et al. (2000)). However, molecules in optical tweezer arrays offer the possibility of isolating
individual qubits and controlling entanglement on demand more cleanly than has been done with liquid-phase NMR and with intrinsic scalability.
Yu et al. (2019) proposed a quantum computing scheme harnessing the parity-doublet structure of symmetric top molecules, wherein the qubit states are contained within the \(M=0\) manifold, which eliminates first-order electric field sensitivity, thereby improving robustness to external perturbations. Interactions are switched on using a third state in either the \(M=+1\) or \(M=-1\) manifold, and entanglement is generated using an interaction blockade mechanism. Using the linear Stark shift structure to switch dipole-dipole interactions on and off in this manner is a key advantage of polyatomic molecules.
The large number of rotational and vibrational degrees of freedom in polyatomic molecules also has potential advantages for quantum information applications. For instance, error-correcting logical qubits could be constructed from coherent superpositions of rotational states, as proposed by Albert et al. (2020). The large number of internal states in polyatomic molecules could also make them uniquely amenable to "qudit" schemes where multiple bits of information are stored in the same molecule, as described by Tesch and de Vivie-Riedle (2002); Sawant et al. (2019). Such schemes could significantly increase the speed and fidelity of molecular quantum computers by performing the majority of gates using single-molecule operations, and only coupling molecules via dipole-dipole interactions when necessary. Full cooling and control of larger polyatomic molecules with many nuclear spins could also enable platforms where NMR-based quantum computing is performed within a single molecule, while scalability is achieved by coupling individual molecules with dipole-dipole interactions. More theoretical work is required to explore the feasibility of such schemes.
#### 1.1.2 Quantum simulation
The long range and anisotropic dipolar interactions between polar molecules lend themselves to quantum simulation applications, in particular to simulation of quantum magnetism models where molecules pinned in a lattice act as effective spins (Carr et al. (2009); Bohn et al. (2017)). Wall et al. (2015a) provide a review of this topic. To date, much of this work has been focused on diatomic molecules, which are naturally suited to simulation of Heisenberg XY models, as first demonstrated experimentally by Yan et al. (2013) using KRb molecules. In theoretical work by Micheli et al. (2006), it was shown that microwave-dressed diatomic molecules with unpaired electron
spins can be used to simulate even more general spin models, though the technical requirements of this proposal have not yet been realized in present-day experiments.
Compared to diatomic molecules, polyatomic molecules are naturally suited to simulation of a greater diversity of quantum magnetism models, such as those described by Wall et al. (2013, 2015b, 2015a). For instance, the unique rotational structure of molecules with parity doublet structure enables simulation of XYZ spin models via microwave dressing, as proposed by Wall et al. (2015b). Critically, the technical requirements of this proposal are significantly reduced compared to the method of Micheli et al. (2006) for simulating an XYZ Hamiltonian with diatomic molecules, which requires more microwave frequencies and significantly smaller lattice spacings to compensate for the molecules' sparser internal structure. Another benefit of large polyatomic molecules may be their small fine and hyperfine splittings, which can be made comparable to the dipole-dipole interaction energy at larger molecular spacings. For instance, operating in the regime where the dipolar interaction energy is comparable to the spin-rotation interaction energy in molecules with an unpaired spin would enable simulation of a diverse array of lattice spin models, as described in Micheli et al. (2006). These small energy-level splittings could, however, present a challenge for achieving full control of individual quantum states in the molecule, e.g., due to off-resonant excitations during control pulses.
Wall et al. (2013) have also proposed to use the (nearly) linear Stark shifts present in many polyatomic molecules as a way to simulate lattice spin models involving polarized dipoles. Because this proposal involves using electric dipole moments to simulate magnetic dipoles, the experimental interaction strengths can be several orders of magnitude larger. Importantly, this analogy can be extended to molecular states with arbitrarily large \(J\), enabling quantum simulation of magnetic dipoles with integer spin \(S=J\). These features could have interesting applications in simulation of lattice spin models as well as in the study of bulk dipolar gases (see Lahaye et al. (2009) for a review on the subject), which have previously been studied using magnetic atoms with much smaller interaction energies.
### Precision measurements
Polyatomic molecules hold further promise for a variety of precision measurements probing the Standard Model (SM) and beyond-Standard-Model (BSM) physics (see Hutzler (2020) for a focused review). At least three
distinctive features of polyatomic molecules can be leveraged for improved precision measurements. The first such feature is the closely-spaced pairs of opposite-parity states described above, which can be directly mixed by electric fields to orient molecules in the lab frame, or brought to degeneracy via magnetic fields.
Mixing opposite-parity states to orient a molecule in the lab frame is useful for electron electric dipole moment (eEDM) searches, as well as Schiff moment and magnetic quadrupole moment searches operating in a similar manner. The proposals by Kozyryev and Hutzler (2017); Maison et al. (2019); Oleynichenko et al. (2022); Yu and Hutzler (2021); Zakharova and Petrov (2021) describe experiments using polyatomic molecules in which this capability is useful. The basic principle of an eEDM measurement using oriented molecules is as follows. If the electron possesses an electric dipole moment, \(\vec{d}_{e}\), then it must be oriented along or against the electron spin, so that \(|\vec{d}_{e}\cdot\vec{S}|\neq 0\). In a given molecular state, the energy shift associated with the eEDM is \(\langle H_{d_{e}}\rangle=d_{e}\langle\vec{S}\cdot\vec{\mathcal{E}}_{\rm eff}\rangle\), where the "effective electric field" vector \(\vec{\mathcal{E}}_{\rm eff}\) is a state-dependent constant oriented in the molecular frame, for example along an internuclear axis. In a parity eigenstate, the electron spin has no average orientation in the molecular frame, and \(\langle\vec{S}\cdot\vec{\mathcal{E}}_{\rm eff}\rangle\) vanishes. The simplest way to obtain a non-vanishing eEDM interaction is to orient the molecule in the lab frame so that \(\vec{\mathcal{E}}_{\rm eff}||\hat{Z}\), where \(\hat{Z}\) is the lab \(z\)-axis, and to simultaneously orient the electron spin along or against the same axis (e.g., via angular momentum selection rules on electronic transitions) so that \(M_{S}\neq 0\). In this ideal case, \(\langle\vec{S}\cdot\vec{\mathcal{E}}_{\rm eff}\rangle\) will have maximal magnitude and \(\langle H_{d_{e}}\rangle\) can be probed. More generally, as long as the parity of a molecular state is at least partially mixed, then \(\langle\vec{S}\cdot\vec{\mathcal{E}}_{\rm eff}\rangle\neq 0\) and the eEDM can be measured via \(\langle H_{d_{e}}\rangle\). Heteronuclear diatomic molecules in \({}^{2}\Sigma\) electronic states have rotational states of both positive and negative parity, but they are generally spaced by tens of GHz and require electric fields on the order of tens of kV/cm to saturate the energy shifts associated with the eEDM. By contrast, the parity doublets generically found in polyatomic molecules can be fully mixed at fields at or below 1 kV/cm. Furthermore, in contrast to rotational states, the structure of parity doublets found in polyatomic molecules, for example \(K\)-doublets in symmetric top molecules or \(\ell\)-doublets in vibrational bending modes of polyatomic molecules, enables the orientation of molecules to be spectroscopically reversed in a fixed external electric field. This feature can also be found in \(\Lambda\)- or \(\Omega\)-doublets of diatomic molecules, and it has already been a valuable tool
for systematic error rejection in ThO and HfF\({}^{+}\), such as the experiments described in ACME Collaboration (2018); Cairncross et al. (2017). The special feature of polyatomic molecules is that such parity doublets can be obtained irrespective of the electronic structure of the selected polyatomic species.
Another case where near-degenerate opposite-parity states are useful is in probing intrinsic parity-violating Standard Model (SM) effects such as the vector electron-axial nucleon electroweak current coupling and the nuclear anapole moment. The total parity-violating interaction in a given electronic state is characterized by the constant \(W_{p}\). By determining the value of \(W_{p}\) in multiple nuclei of the same molecular species, the contribution of each SM effect could be independently determined. A sensitive method to probe \(W_{p}\) is Stark interference (see DeMille et al. (2008)), where an electric dipole transition drives a molecule between opposite-parity states separated by energy \(\Delta\). The population transfer contains an interference term between the driving electric field \(E_{0}\) and the parity-violating interaction \(W\) (which is proportional to \(W_{p}\) but contains additional factors from the molecular state). By comparing measurements with opposite phases of the driving electric field, a measured quantity proportional to \(W/\Delta\) can be obtained. Thus the experimental signal is enhanced when the states under consideration are brought to near-degeneracy. Whereas rotational states of diatomic molecules can be brought to near-degeneracy with Tesla-scale magnetic fields, Norrgard et al. (2019b) showed that in a large class of linear polyatomic molecules, degeneracy can be achieved with fields of 10 mT or less, dramatically reducing the experimental complexity of operating an experiment within the bore of a superconducting magnet. By measuring nuclear-spin-independent parity-violating effects in light molecules, where calculations of Standard Model effects are not prohibitively challenging, parity-violating interactions in the SM may be probed and, perhaps, distinguished from BSM effects.
A second feature of polyatomic molecules that can be exploited in precision measurements of BSM physics is the multiplicity of rotational and vibrational modes. Whereas diatomic molecules have one rotational mode and one vibrational mode, every polyatomic molecule contains at least three vibrational modes and up to three rotational modes. Thus accidental near-degeneracies between rovibronic states can be commonly found at low energies (e.g., below 1000 cm\({}^{-1}\)), and are nearly guaranteed at higher rovibronic energies where the density of states increases. As described by Jansen et al. (2014), these near-degeneracies are useful for probing potential variation of the proton-to-electron mass ratio, \(\mu\equiv m_{p}/m_{e}\), over time: as \(\mu\) changes, each
rovibrational level shifts since rotational and vibrational energies depend directly on the masses of atomic nuclei according to \(\delta E=\partial E/\partial\mu\times\delta\mu\). (Here, changes in physical quantities associated with changes in \(\mu\) are indicated by the prepended symbol \(\delta\).) Pure rotational energies and anharmonic vibrational energies obey \(\delta E=-E\times\delta\mu/\mu\), while pure harmonic vibrational energies obey \(\delta E=-\frac{1}{2}E\times\delta\mu/\mu\). In each case, the _absolute_ energy sensitivity \(\delta E\) to \(\mu\) variation scales with the overall energy \(E\). In cases of accidental near-degeneracies, it is possible to obtain \(\omega\approx 0\) even when the absolute frequency sensitivity of a transition \(\delta\omega\equiv\delta E_{2}-\delta E_{1}\) does not vanish because \(E_{1}\) and \(E_{2}\) depend differently on \(\mu\). For example, rovibronic transitions between near-degenerate states can exploit the relatively large absolute frequency shifts of vibrational energy levels (\(E\sim 10\) THz) while being amenable to the technical convenience of lower-frequency microwave sources (\(\omega\sim 10\) GHz) where stable frequency references are readily available and certain systematic errors like Doppler shifts are suppressed. Transitions where \(\delta\omega/\omega\gg 1\) have been used to set limits on \(\mu\) variation in molecules including methanol, ammonia, and KRb (where a degeneracy between an excited vibrational state and a metastable electronic state was used by Kobayashi et al. (2019)). Laser-cooled polyatomic molecules possess convenient rovibronic near-degeneracies to sensitively probe \(\mu\) variation in a platform offering ultracold temperatures, long trap lifetimes, and full quantum control. Kozyryev et al. (2021) identified a promising near-degeneracy in the energy levels of SrOH that could be used for such an experiment.
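The enhancement described above can be illustrated with a back-of-the-envelope estimate. The numbers below are only the order-of-magnitude scales quoted in this section, not the parameters of any specific transition, and the assumed value of \(\delta\mu/\mu\) is arbitrary.

```python
# Order-of-magnitude illustration of the sensitivity enhancement for a
# near-degenerate rovibronic transition; all numbers are assumptions.
E_vib = 10e12          # Hz, absolute scale of a vibrational level (~10 THz)
omega = 10e9           # Hz, frequency of the near-degenerate transition (~10 GHz)
dmu_over_mu = 1e-15    # assumed fractional change in mu, for illustration

# A harmonic vibrational level shifts by dE = -(1/2) E * (dmu/mu); if the two
# near-degenerate states respond differently to mu, the transition frequency
# can shift by roughly this amount even though omega itself is small.
d_omega = 0.5 * E_vib * dmu_over_mu
enhancement = (d_omega / omega) / dmu_over_mu   # fractional sensitivity per unit dmu/mu

print(f"d_omega ~ {d_omega:.1e} Hz; d(omega)/omega per unit dmu/mu ~ {enhancement:.0f}")
```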
The third, and final, feature of polyatomic molecules that we note for precision measurements is chirality. Because chirality requires three distinct molecular axes, it can only be found in molecules with at least four atoms. Parity-violating effects in the Standard Model are predicted to result in energy splittings between chiral molecules at the level of \(\sim\)mHz to Hz in various species of experimental interest (Cournol et al. (2019)). In principle, even in the absence of parity violation, any chiral molecule could convert to its enantiomer via the tunneling of some set of nuclei through a vibrational energy barrier, resulting in a double-well-type energy structure and associated splitting \(\Delta E_{\pm}\) between molecular eigenstates. In many molecules such as hydrogen peroxide (HOOH) or the chiral isotopologue of ammonia (NHDT), the tunneling splitting dominates parity-violating effects, \(\Delta E_{\pm}\gg\Delta E_{\rm pv}\), and the symmetry breaking between left- and right-handed molecules occurs only "_de facto_," i.e. due to initial conditions described by Quack et al. (2022). Nevertheless, in this case it is possible to measure the
effect of parity-violating interactions, for example by observing an initial parity eigenstate acquire a non-zero amplitude of an opposite-parity eigenstate upon free evolution (Quack (1986)). The other limiting case is where the parity-violating energy shifts are large compared to the tunneling splitting, \(\Delta E_{\rm pv}\gg\Delta E_{\pm}\), so that the symmetry breaking between enantiomers is "_de lege_," i.e. due to intrinsic dynamics of the molecular energies. In this case, the energies of two enantiomers can be directly spectroscopically measured (e.g., via high-sensitivity infrared absorption experiments) and compared. Most molecules with heavier constituents, including species of interest such as CHFClBr and S\({}_{2}\)Cl\({}_{2}\), are expected to exhibit "_de lege_" parity violation due to the large reduced tunneling mass (Quack et al. (2008)). Chiral molecules are also sensitive to parity-violating cosmic fields associated with certain dark matter candidates, as pointed out by Gaul et al. (2020). Thus precise measurements of enantiomeric energy splittings can be a definitive probe of parity-violating weak interactions and beyond-Standard Model interactions in suitably chosen molecules. Laser-cooled polyatomic chiral molecules (for example CaOCHDT, a chiral analogue of CaOCH\({}_{3}\)) could potentially enable measurements of these parity-violating effects for the first time due to the possibility of longer interaction times and full quantum control (Augenbraun et al. (2020)).
### Collisions and chemistry
Collisions of polyatomic molecules--both with atoms and other molecules--are of great scientific interest to a number of disciplines across physics and chemistry. A distinction is typically made between two different temperature regimes, namely "cold" collisions at temperatures of \(\lesssim\)1 K, and the "ultracold" collision regime, defined by temperatures sufficiently low that only a single partial wave participates in the collision. The ultracold collision regime is sometimes defined informally as \(\lesssim\)1 mK, though the actual temperature for collisions to only include a single partial wave can be much lower for heavier molecules. In addition, there are subtleties involved in defining the single-partial-wave regime for dipolar collisions where all partial waves may contribute even at low collisional energies, see Chomaz et al. (2023). Numerous techniques have been developed for studying collisions at cold temperatures, and cold collision dynamics for a number of molecules, including polyatomic species, have already been investigated. These experiments have typically relied on beam-based approaches, though a variety of different experimental techniques have been employed, including slowing techniques to
reduce collisional temperatures, e.g., Stark (van de Meerakker and Meijer (2009)), Zeeman (Plomp et al. (2020)), and "cryofuge" (Wu et al. (2017)) deceleration. Trapping of the molecular species--from slowed or buffer-gas-cooled (Hummon et al. (2011)) molecular beams--has allowed for increased molecular interaction times, and has led to novel studies of cold dipolar collisions. Stark deceleration followed by magnetic trapping has, for example, been used to study dipolar collisions between OH and ND\({}_{3}\) molecules in a magnetic trap (Sawyer et al. (2011)), and cryofuge deceleration has recently been combined with electric trapping to study dipolar collisions between CH\({}_{3}\)F molecules (Koller et al. (2022)). Collision studies at cold temperatures play a particularly important role in astrophysics, specifically for studying the astrochemical reactions that result in the molecular compositions observed in interstellar environments, see Herbst and van Dishoeck (2009). For more extensive reviews of cold collision studies, we refer interested readers to the articles by Toscano et al. (2020); Heazlewood and Softley (2021). The ultracold regime remains largely unexplored due to the difficulty in producing ultracold molecules, especially for polyatomic molecules. The extension of direct laser cooling techniques to polyatomic molecules could potentially provide a path towards realizing collision studies at ultracold temperatures. Below we provide a few key highlights of some of the novel collision dynamics and chemistry that are expected to arise at ultracold temperatures and which could potentially be explored with ultracold polyatomic molecules.
In the ultracold regime, the de Broglie wavelength of the colliding molecules exceeds their classical size, requiring a quantum-mechanical description of the collision process. Molecular interactions in this regime are therefore characterized by quantum statistics and quantum threshold behavior, and a host of different quantum collision and chemistry phenomena are expected to arise as a result (Carr et al. (2009)). For example, due to the delocalized wavelike nature of molecules in this regime, long-range dipolar interactions are expected to be essential for determining collision rates between polar molecules at ultracold temperatures. This bears important implications for chemical reaction rates. While reaction rates are "classically" expected to increase with temperature, many chemical reactions instead proceed at much accelerated rates in the limit of absolute zero (Richter et al. (2015)). Some of these quantum effects are already starting to be explored with ultracold diatomic molecules. Seminal experiments with ultracold KRb molecules created by photoassociation of laser-cooled atoms have, for example, probed long-range interactions in ultracold KRb\(-\)KRb (Ospelkaus et al. (2010); Ni
et al. (2010); Hu et al. (2021)) and KRb\(-\)Rb (Nichols et al. (2022)) collisions. Ultracold collision experiments have since also been performed with a range of other bialkali species (Ye et al. (2018); Gregory et al. (2021); Bause et al. (2021)). Similar experiments could be imagined with polyatomic molecules, which would grant still broader access to new quantum collision and chemistry phenomena. As an example, it has been proposed that collective many-body effects in a Bose degenerate gas of triatomic molecules may lead to a "Bose-stimulated" photodissociative process in which branching to either of two decay channels, \(ABC\to AB+C\) or \(ABC\to A+BC\), can be significantly amplified (Moore and Vardi (2002)). Ultracold collision experiments with polyatomic molecules would also, along with experiments with diatomic molecules, provide important experimental benchmarks for new scattering theories and quantum chemistry at ultracold temperatures.
An essential implication of the quantum nature of the ultracold regime is that both elastic and inelastic collision rates are expected to have a strong dependence on the exact molecular quantum states (Carr et al. (2009)). External fields, too, can have strong and potentially very different effects on elastic and inelastic rates, for example by changing the relative orientation of two colliding molecules (Tscherbul and Krems (2006); Tscherbul et al. (2009); Brouard et al. (2014); Tscherbul and Krems (2015)). This provides the capability for separately "tuning" elastic as well as inelastic collision rates, which, as we discuss below, may be important for the realization of degenerate gases of polyatomic molecules. In the context of chemical reactions, this provides a path to controlled chemistry at ultracold temperatures in which chemical reactions can be very precisely studied and, possibly, engineered (Krems (2008); Balakrishnan (2016); Bohn et al. (2017)). These points are especially pertinent to polyatomic molecules, whose many internal degrees of freedom may provide additional tools for external field control, and, in particular, allow for easy orientation of the molecules. Recent experiments with CaF molecules have demonstrated that direct laser cooling in combination with optical trapping is a feasible approach for realizing the level of control required to characterize the quantum state and field dependencies of collision rates (Cheuk et al. (2020); Anderegg et al. (2021)). Optical trapping of ultracold polyatomic molecules additionally opens the door to probing collisional dynamics in confined geometries in which collision dynamics take on qualitatively distinct behavior compared to that of an unconfined gas (Carr et al. (2009)).
Finally, understanding the collisional processes of polyatomic molecules at
ultracold temperatures is likely to have a fundamental impact on the potential realization of Bose or Fermi degenerate polyatomic gases. In particular, the experiments with bialkali molecules mentioned earlier have demonstrated that, in the ultracold regime, even molecules that are not chemically reactive can exhibit large inelastic collision losses due to forming long-lived complexes in so-called "sticky collisions" (Bause et al. (2022)). This prevents efficient evaporative cooling, which is the typical approach used for creating degenerate atomic gases (Pethick and Smith (2002)). The tunability of the reaction rates mentioned above is in this regard important and can enhance the elastic-to-inelastic collision ratio by several orders of magnitude using external fields. Several "shielding" techniques have already been demonstrated for diatomic molecules (Matsuda et al. (2020); Valtolina et al. (2020); Anderegg et al. (2021); Li et al. (2021)). Recently, Schindewolf et al. (2022) showed that quantum degeneracy could be reached for diatomic species using this shielding technique. Similar methods are likely required for the realization of degenerate gases of polyatomic molecules. A scheme for field-induced shielding in collisions between CaOH molecules has been proposed by Augustovicova and Bohn (2019). Alternative prospects for collisional cooling using sympathetic molecule-atom collisions have also recently been explored by W\(\acute{o}\)jcik et al. (2019) for complex polyatomic molecules such as benzene and azulene.
### Experimental approaches besides direct laser cooling
Achieving any of the diverse goals summarized above requires exquisite control over the molecules to be probed. In order to reap the benefits of polyatomic molecules, we must cool them to cold (\(\lesssim\)4 K) or ultracold (\(\lesssim\)1 mK) temperatures. This cooling is required to "compress" the molecular population into a small number of quantum states and to slow their thermal velocities to tens of meters per second; the former increases an experiment's signal-to-noise ratio and the latter enables long interrogation times to improve achievable precision and control. Because polyatomic molecules contain many internal degrees of freedom, cooling them can be a very difficult task. Many research groups have tackled this problem, and we review some relevant methods here. A variety of approaches have been explored to realize this control.
A very successful method of producing ultracold diatomic molecules involves laser cooling atoms and then binding these pre-cooled atoms together (using photoassociation, magnetoassociation, etc.); see Ni et al. (2008); Molony
et al. (2014); Park et al. (2015); Guo et al. (2016); Rvachov et al. (2017); Cairncross et al. (2021). While this has produced a number of ultracold diatomic molecules in single quantum states, including degenerate gases, it is not clear it can be generalized to produce _polyatomic_ molecules. However, very recent work by Yang et al. (2022) has shown some evidence for production of triatomic molecules in a mixture of \({}^{23}\)Na\({}^{40}\)K and \({}^{40}\)K. It is also unclear whether this exciting result can be extended to other species (or larger ones), especially those containing difficult-to-cool atomic species such as O, C, and/or H.
Optoelectrical Sisyphus cooling, described by Zeppenfeld et al. (2012); Prehn et al. (2016), uses state-dependent energy shifts and repeated microwave transitions/optical pumping steps to cool molecules as they move through an electric trap. Energy is removed by ensuring molecules move away from the trap's center along a "steep" potential and return to the center along a "shallow" potential. This technique relies on the linear Stark shifts that can be achieved in symmetric or asymmetric top molecules. It has been used to produce trapped samples of CH\({}_{3}\)F and H\({}_{2}\)CO at \(\sim\)mK temperatures and could potentially be used to observe molecule-molecule collisions in the trap. To date, the molecules that have been cooled using this method do not offer convenient optical transitions, so vibrational transitions with relatively long lifetimes have been used instead. The method could likely be adapted to make use of the optical transitions offered by some of the molecules discussed in this review, speeding up the cooling rates considerably.
### Direct laser cooling
Direct laser cooling is a promising approach because it may be widely applicable to diverse structures of molecules and brings with it the possibility of high-efficiency quantum state preparation and readout using optical photons. The direct laser cooling approach is the focus of this review paper. Figure 2 presents a schematic overview of an idealized molecular laser cooling experiment. Because many of these techniques were honed in the context of diatomic molecules, the interested reader should refer to the reviews by Hutzler et al. (2012); Tarbutt (2018); McCarron (2018); Fitch and Tarbutt (2021).
#### 1.5.1 Forming closed cycling transitions
The photon scattering process is one in which molecules go through a series of photon absorption and spontaneous emission cycles that can be
described as a Bernoulli sequence. Suppose we have applied laser repumpers such that a molecule has a probability \(p\) to decay to a state that is _not_ addressed by laser light. The probability \(P_{n}\) for a molecule to experience \(n\) absorption-emission cycles is given by \(P_{n}=(1-p)^{n}\). The average number of photons scattered before a molecule is lost to an unaddressed state is \(\bar{n}=(1-p)/p\approx 1/p\) for small \(p\). We often refer to this as the "photon budget," and it sets the (exponential) scale for how many photons can be scattered before significant fractions of the population are lost. For example, if laser slowing requires 10,000 photon scattering events and we would like 90% of the initial population to remain after slowing, we require \(p\approx 10^{-5}\). Clearly, understanding branching ratios as small as 1 part in \(10^{5}\) can be crucial to achieve efficient laser cooling.
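This bookkeeping can be made concrete with a short Python sketch using the numbers from the example above; the function names are ours and the snippet is purely illustrative.

```python
def survival_fraction(p_loss, n_scatters):
    """Fraction of molecules remaining after n absorption-emission cycles,
    given a per-scatter probability p_loss of decaying to an unaddressed state."""
    return (1.0 - p_loss) ** n_scatters

def required_loss_probability(n_scatters, target_survival):
    """Per-scatter loss probability that keeps a target fraction of molecules
    after n scattering events."""
    return 1.0 - target_survival ** (1.0 / n_scatters)

# Numbers from the example in the text: 10,000 scatters with 90% survival
p = required_loss_probability(10_000, 0.90)
print(f"required leak probability p ~ {p:.1e}")                         # ~1e-5
print(f"photon budget ~ 1/p ~ {1.0 / p:.0f} photons")
print(f"check: survival after 10,000 scatters = {survival_fraction(p, 10_000):.2f}")
```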
A typical laser cooling experiment requires scattering many thousands of optical photons. To repeatedly scatter this number of photons without the molecules accumulating in states that do not couple to applied laser light ("dark states"), it is necessary to form a "closed cycling transition." In a closed cycle, a molecule that is driven to an electronically excited state is guaranteed to decay back to the same state that it started in. In reality, no optical transition is fully closed and the molecule has a finite probability to decay to a state that is different from the initial state. Such a molecule needs to be "repumped" into the cycling transition. The higher the probability of decaying to other states, the more repumping transitions must be addressed.
Figure 2: Overview of an idealized molecular laser cooling sequence. Molecules are produced in a cryogenic buffer-gas beam (CBGB) source, decelerated, trapped in a magneto-optical trap, and then transferred into a conservative trap for a particular science goal. Reproduced from Augenbraun (2020).
This quickly becomes an experimental limitation, as each decay typically requires a separate laser to be added to the experiment. This points to the importance of selecting atoms and molecules whose branching ratios are favorable. The general guidelines for selecting molecules and transitions for laser cooling were first pointed out by Di Rosa (2004), who highlighted the need for (1) strong transitions, (2) highly diagonal vibrational branching, and (3) no intermediate (i.e., metastable electronic) states between the two states used for optical cycling.
The electronic transitions driven in laser cooling are typically electric dipole (E1) transitions. Since the dipole operator is odd under parity transformations, the parity of the excited and ground states must be opposite. As a result, selection rules for changes to the total angular momentum \(J\) are \(\Delta J=0,\pm 1\) (with \(J^{\prime}=0\) to \(J=0\) forbidden). We can use these selection rules to our advantage by cycling from \(J=1\) to \(J^{\prime}=0\), as originally recognized by Stuhl et al. (2008); the \(J^{\prime}=0\) excited state must decay back to \(J=1\), leading to rotational closure.
One consequence of driving from \(J=1\) to \(J^{\prime}=0\) to attain rotational closure is the presence of dark states. This is a generic problem in the laser cooling of molecules, where the excited state often has the same number of sublevels as (or fewer than) the ground state. This means that for any fixed laser polarization, dark states exist. Molecules that collect in these dark states can be returned to bright states in a number of different ways, most commonly using DC magnetic fields, polarization modulation, or microwave pulses (Berkeland and Boshier (2002)).
#### 1.5.2 Effects of multilevel systems on laser cooling
For a system that can decay to multiple states, the prototypical two-level scattering rate equation is modified. For resonant light where all the transitions are driven with equal intensity, Williams et al. (2017) show that the maximum achievable scattering rate is modified to
\[R_{\rm scat}^{\rm max}=\frac{n_{e}}{n_{g}+n_{e}}\Gamma \tag{5}\]
where \(n_{e}\) is the number of excited states and \(n_{g}\) is the number of ground states. As an example, in CaF there are 12 Zeeman sublevels in the ground state, 4 in the excited state, and 12 in the \(v=1\) ground state. Hence \(n_{e}=4\) and \(n_{g}=24\). If lasers are tuned to drive both \(\tilde{X}(v=0)\) and \(\tilde{X}(v=1)\) to the \(\tilde{A}\) state, the achievable scattering rate will reduce by a factor of about 4.
This decrease in the scattering rate would significantly hinder the laser cooling of a molecule. A commonly employed technique to circumvent this limitation is to repump the molecules through a different excited state than the one used for the "main" transition. This approach is also crucial for polyatomic molecules as more states must be repumped in a laser cooling scheme.
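For concreteness, the following Python sketch evaluates Eq. (5) for the CaF sublevel counting given above; the excited-state linewidth used here is an assumed, illustrative value rather than a recommended reference number.

```python
import math

def max_scattering_rate(n_excited, n_ground, gamma):
    """Maximum photon scattering rate of a multilevel system driven at
    saturation, R_max = n_e / (n_g + n_e) * Gamma (Eq. 5)."""
    return n_excited / (n_ground + n_excited) * gamma

gamma = 2 * math.pi * 8e6   # assumed excited-state linewidth, ~2*pi x 8 MHz

two_level = max_scattering_rate(1, 1, gamma)        # Gamma/2
# CaF example from the text: 4 excited sublevels, 12 + 12 = 24 ground sublevels
multilevel = max_scattering_rate(4, 24, gamma)      # Gamma/7
print(f"two-level R_max  = {two_level:.2e} photons/s")
print(f"multilevel R_max = {multilevel:.2e} photons/s")
print(f"reduction factor ~ {two_level / multilevel:.1f}")   # about 4
```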
With closed cycling transitions established, diatomic molecules have been laser slowed and cooled (Shuman et al. (2009, 2010); Barry et al. (2012); Zhelyazkova et al. (2014); Hemmerling et al. (2016); Truppe et al. (2017c); Lim et al. (2018)) and trapped in red-detuned magneto-optical traps (Hummon et al. (2013); Barry et al. (2014); McCarron et al. (2015); Norrgard et al. (2016); Steinecker et al. (2016); Chae et al. (2017); Truppe et al. (2017a); Anderegg et al. (2017); Williams et al. (2017); Collopy et al. (2018)) and blue-detuned magneto-optical traps (Burau et al. (2022)), cooled to sub-Doppler temperatures (Truppe et al. (2017a); Cheuk et al. (2018); Caldwell et al. (2019); Ding et al. (2020)), and loaded into magnetic (Williams et al. (2018); McCarron et al. (2018)) and optical traps (Anderegg et al. (2018); Langin et al. (2021); Wu et al. (2021); Lu et al. (2022)) and tweezers (Anderegg et al. (2019); Lu et al. (2022); Holland et al. (2022); Bao et al. (2022)).
## 2 Molecular structure for laser cooling experiments
Compared to diatomic molecules, polyatomic molecules can have significantly more complex internal structures. Here, we review the structure of polyatomic molecules as is relevant to laser cooling and trapping experiments. We generally work within the Born-Oppenheimer approximation, separating the electronic, vibrational and rotational motions, but molecular physics beyond the Born-Oppenheimer approximation is also introduced as necessary to understand its impact on laser cooling and trapping experiments.
### Electronic structure
For the molecules of interest to laser cooling experiments, the largest relevant energy scale corresponds to excitation of a valence electron that is used for optical cycling (changing the principal quantum number). Typical electronic excitation energies for several classes of molecules are in the visible region of the electromagnetic spectrum (400-800 nm). Understanding the origin and nature of these electronic states is needed to predict which molecules are favorable for laser cooling experiments.
We will focus, for the moment, on molecules in which an alkaline-earth atom, \(M\), bonds to some electronegative ligand, \(L\) (e.g., F, OH, OCH\({}_{3}\), SH, etc.), because these molecules are expected to have the structure desired for laser cooling (Kozyryev et al. (2016a); Augenbraun et al. (2020a); Isaev et al. (2017); Ivanov et al. (2019); Isaev and Berger (2016)). As explained by Ellis (2001), the alkaline-earth atoms tend to form ionically-bonded molecules due to their low ionization energies. Whether \(M\) transfers one or both of its valence electrons to the bonding partner depends on whether \(L\) can form singly or doubly charged anions. For the examples of \(L\) listed above, singly-charged anions form, so \(M\) retains one electron even after forming a bond.
The relevant structural details can largely be extracted from a simple picture involving just three ingredients: a positively charged metal ion, \(M^{2+}\), an optically active "valence" electron near the metal, and a negatively charged ligand, \(L^{-}\). The presence of an unpaired electron leads to low-lying, metal-centered electronic excitations that can be used for optical cycling and laser cooling. One would naively expect the unpaired electron to have dominant \(s\) orbital character in the electronic ground state. More detailed ligand-field
Figure 3: Illustration of orbital mixing reducing the interaction between valence electron and negatively charged ligand. (a) Mixing of \(s\sigma\) and \(p\sigma\) orbitals to generate the \(\tilde{X}\,^{2}\Sigma^{+}\) state. (b) Mixing of \(p\pi\) and \(d\pi\) orbitals to generate the \(\tilde{A}\,^{2}\Pi\) state. In both cases, the rightmost image shows quantum chemical calculations of the electronic distribution confirming this simple orbital mixing picture, using CaOH as an example. Reproduced from Augenbraun (2020).
theory calculations (e.g., those by Rice et al. (1985); Allouche et al. (1993)) show that this is largely true, but also that the interaction with the negatively charged ligand deforms the valence electron to minimize electron-electron repulsion (Ellis (2001)). For example, in the ground state this deformation is realized via mixing of \(s\sigma\) and \(p\sigma\) orbitals on \(M\), a process shown schematically in Fig. 3. The orbital notation will be described more in the next paragraph. It has been found that in CaF, the ground electronic state (\(\tilde{X}\,^{2}\Sigma^{+}\)) arises from a mixture of approximately 80% of the \(4s\sigma\) orbital and about 20% of the \(4p\sigma\) orbital, while the lowest excited electronic state \(\tilde{A}\,^{2}\Pi\) is made up of about 70% \(4p\pi\) and 25% \(3d\pi\) character (Rice et al. (1985)). Similar values are found for Ba-containing monohalides (Allouche et al. (1993)) and larger monomethoxide species (Augenbraun et al. (2021b)).
An energy level diagram of the low-lying electronic states can be constructed from these ideas (Dick (2007); Ellis (2001)). The basic idea is to consider the Hamiltonian \({\bf H}={\bf H}_{M^{+}}+{\bf H}_{L^{-}}+{\bf H}^{\prime}\), where \({\bf H}_{M^{+}}\) and \({\bf H}_{L^{-}}\) describe the energy levels of the free ions and \({\bf H}^{\prime}\) describes the interaction between the optically active valence electron and the ligand (Rice et al. (1985); Allouche et al. (1993)). When the ligand is treated as a point charge perturbation, it has three qualitative effects on the spectrum (Dick (2007)):
1. It can shift the atomic ion's energy levels.
2. It can split the \(m_{l}\) components of each atomic \(nl\) state. We can think of this as arising from strong electrostatic forces along the bond, producing a Stark effect that resolves \(m_{l}\) components along the bond axis. As is typical for the Stark effect, we do not resolve \(+m_{l}\) and \(-m_{l}\) components, and therefore label states as \(\lambda=|m_{l}|=0,1,2,\ldots\). States with \(\lambda=0,1,2,\ldots\) are denoted by \(\sigma,\pi,\delta,\ldots\), respectively.1 Footnote 1: We are using lower-case letters here because we are describing the single valence electron. Below, we will use capital letters to describe the total electronic state.
3. It can mix orbitals obeying the selection rule \(\Delta m_{l}=0\). This may seem innocuous at first, but is actually quite important because this effect means each molecular orbital is a linear combination of atomic ion orbitals. Not only does this greatly affect the ordering of molecular states, it also means the molecular parameters of a given state will appear as an "average" of the atomic states that it comprises.
Figure 4(i-iii) shows the development of the energy levels as these effects are sequentially added. This diagram also shows how we name electronic
Figure 4: Correlation of low-lying electronic states from atomic ion (\(M^{+}\)) to linear (\(C_{\infty v}\)) and nonlinear (\(C_{s}\)) molecules. The labeled regions correspond to the following qualitative processes: (i) Shifting of atomic ion levels, (ii) Splitting of atomic \(m_{l}\) components, (iii) Mixing of energy levels with the same \(m_{l}\), (iv) Cylindrical symmetry breaking due to a nonaxial ligand. Modeled after diagrams in Ellis (2001); Dick (2007).
states. For a linear molecule (\(C_{\infty v}\) point group symmetry), we label electronic states by letters: \(\tilde{X}\) for the ground state, \(\tilde{A},\tilde{B},\ldots\) for the first, second, ..., electronically excited states, respectively. Note that (for historical reasons) the alphabetical ordering usually, but not always, matches the energetic ordering of energy levels.2 In addition, because in most cases the electronic states can be identified by their electronic orbital angular momentum (\(\Lambda\)) and spin multiplicity (\(2S+1\)), we use the labeling scheme \({}^{2S+1}\Lambda\). For states with definite projection of spin onto the molecular axis, we can also add a subscript (\(\Omega\)) to specify the _total_ angular momentum projection on the internuclear axis (see also discussion of angular momentum coupling below).
Footnote 2: Electronic states with multiplicity different from that of the ground state are labeled by lower case letters \(\tilde{a},\tilde{b},\ldots\), usually in order of increasing energy.
This description has so far assumed the ligand can be treated as a point charge, meaning the \(M^{+}L^{-}\) system has cylindrical symmetry about the \(M-L\) axis. Such symmetry is exact only for linear molecules (diatomic or polyatomic). If \(ML\) is a nonlinear molecule but still retains some axial symmetry (e.g., \(\mathrm{MOCH}_{3}\)) then the picture is largely unchanged except that there will be no formal distinction between the symmetries of degenerate electronic states (e.g., for \(C_{3v}\) symmetry, \(\Pi,\Delta,\ldots\) states are all classified as having \(E\) symmetry). If the ligand breaks the axial symmetry (e.g., MSH), then the orbital degeneracy described in steps (2) and (3) just above is no longer guaranteed, and degenerate electronic states can split. For example, a \(\Pi\) electronic state of CaOH will correlate to states of \(A^{\prime}\) and \(A^{\prime\prime}\) symmetry for CaSH, as shown in Fig. 4(iv). Figure 5 shows the clear role that asymmetry of the ligand has on slightly deforming the valence electron that remains localized on the metal optical cycling center. Despite the asymmetry, in all cases plotted the optical cycling properties are preserved.
### Vibrational structure
During laser cooling of a polyatomic molecule, a number of vibrational states may be excited due to the lack of a perfectly closed cycling transition. These states are generally close to the bottom of the potential energy surface,
Figure 5: Molecular orbitals of the lowest several electronic states for Ca-containing molecules as symmetry is systematically lowered from \(C_{\infty v}\) (CaOH) to \(C_{3v}\) (CaCH\({}_{3}\)) to \(C_{2v}\) (CaNH\({}_{2}\)) to \(C_{s}\) (CaSH). The distortion of the valence electron as a function of ligand asymmetry is clearly visible. Panels (a-d) show, respectively, the HOMO, LUMO, LUMO+1, and LUMO+2 for CaOH; (e-h) show these for CaCH\({}_{3}\); (i-l) for CaNH\({}_{2}\); and (m-p) for CaSH. Figure reproduced from Augenbraun et al. (2020a).
which can be Taylor expanded as
\[V=V_{0} +\frac{1}{2}\sum_{i=1}^{3N}\sum_{j=1}^{3N}\left(\frac{\partial^{2}V}{ \partial q_{i}\partial q_{j}}\right)_{q_{i}=q_{j}=0}q_{i}q_{j}\] \[+\frac{1}{3!}\sum_{i}\sum_{j}\sum_{k}\left(\frac{\partial^{3}V}{ \partial q_{i}\partial q_{j}\partial q_{k}}\right)_{0}q_{i}q_{j}q_{k}+\ldots \tag{6}\]
Here we express the potential energy as a function of the \(3N\) nuclear coordinates \(q_{i}\), which describe the 3-dimensional motion of all \(N\) nuclei. To a good approximation, the low-lying vibrational states in this potential may be described within the harmonic approximation, in which case the vibrational Hamiltonian is (Demtroder (2003); Bernath (2017))
\[H_{v}=\sum_{i=1}^{3N-6(5)}\left(-\frac{\hbar^{2}}{2}\frac{\partial^{2}}{ \partial Q_{i}^{2}}+\frac{1}{2}\lambda_{i}Q_{i}^{2}\right) \tag{7}\]
where \(Q_{i}\) is a mass-weighted normal coordinate describing the \(i\)th normal mode of vibration and \(\sqrt{\lambda_{i}}\) is the vibrational frequency of the mode. Notice that for a polyatomic molecule there are \(3N-6\) (or \(3N-5\) for a linear molecule) normal vibrational coordinates; the other \(6(5)\) normal coordinates are taken up by the 3 translational and \(3(2)\) rotational degrees of freedom of the molecule.
Vibrations along each of the normal coordinates are fully separable, so the vibrational wavefunction can be expressed as
\[\psi(Q_{1},Q_{2},\ldots,Q_{3N-6})=\prod_{i=1}^{3N-6}\psi_{v_{i}}(Q_{i}) \tag{8}\]
where \(\psi_{v_{i}}(Q_{i})\) is the simple harmonic oscillator wavefunction (Bernath (2017)) with vibrational quantum number \(v_{i}\). The energy of the state in the harmonic approximation is
\[E_{v}=\sum_{i=1}^{3N-6}\hbar\omega_{i}\left(v_{i}+\frac{1}{2}\right) \tag{9}\]
Reintroducing the anharmonic terms in the potential energy surface has two effects. The first is to couple the harmonic oscillator eigenstates from Eq. 8, as discussed below. The second effect is to alter the state energies,
which can be expressed as (Demtroder (2003); Bernath (2017); Herzberg (1966))
\[G(v_{1},v_{2},\ldots,v_{p})=\sum_{i}\omega_{i}\left(v_{i}+\frac{d_{i}}{2}\right)+\sum_{j\leq i}x_{ij}\left(v_{i}+\frac{d_{i}}{2}\right)\left(v_{j}+\frac{d_{j}}{2}\right)+\sum_{j\leq i}g_{ij}\ell_{i}\ell_{j}+\ldots \tag{10}\]
where \(\omega_{i}\) is the frequency of the \(i\)th mode, \(d_{i}\) is its degeneracy, \(x_{ij}\) and \(g_{ij}\) are anharmonicity constants, and \(\ell\) is a quantum number describing the vibrational angular momentum along the molecular axis.
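To make the bookkeeping in Eq. 10 concrete, the short Python sketch below evaluates term values for a few low-lying levels of a linear MOH-type molecule. The harmonic frequencies, anharmonicity constants \(x_{ij}\), and \(g_{22}\) used here are placeholder values chosen only to illustrate the structure of the formula; they are not fitted spectroscopic constants for any particular species.

```python
# Anharmonic vibrational term values G(v1, v2^l, v3) following Eq. 10,
# for a linear MOH-type triatomic. All constants are illustrative placeholders.
omega = {1: 600.0, 2: 350.0, 3: 3800.0}   # harmonic frequencies (cm^-1)
deg   = {1: 1,     2: 2,     3: 1}        # mode degeneracies d_i

# Anharmonicity constants x_ij (cm^-1), stored with j <= i, and g_22 for the
# doubly degenerate bending mode.
x = {(1, 1): -3.0, (2, 2): -1.0, (3, 3): -45.0,
     (2, 1): -5.0, (3, 1): -20.0, (3, 2): -10.0}
g22 = 0.7

def term_value(v, l2=0):
    """Vibrational term value G(v) in cm^-1; v maps mode index -> quantum number."""
    G = sum(omega[i] * (v[i] + deg[i] / 2) for i in omega)
    G += sum(x[(i, j)] * (v[i] + deg[i] / 2) * (v[j] + deg[j] / 2) for (i, j) in x)
    G += g22 * l2 * l2   # only the bending mode carries vibrational angular momentum here
    return G

G0 = term_value({1: 0, 2: 0, 3: 0})
for label, v, l2 in [("(100)",    {1: 1, 2: 0, 3: 0}, 0),
                     ("(01^1 0)", {1: 0, 2: 1, 3: 0}, 1),
                     ("(02^0 0)", {1: 0, 2: 2, 3: 0}, 0),
                     ("(02^2 0)", {1: 0, 2: 2, 3: 0}, 2)]:
    print(f"{label:9s} {term_value(v, l2) - G0:8.1f} cm^-1 above (000)")
```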
#### 2.2.1 Vibrational state notation
**Linear triatomic molecules.** Linear triatomic molecules, including the alkaline earth monohydroxides CaOH, SrOH, and YbOH, have 4 normal vibrational modes, one of which (the bending mode) is doubly degenerate. The vibrational state is labeled by \((v_{1}v_{2}^{\ell}v_{3})\), where \(v_{1}\) is the quantum number for the symmetric stretching mode, \(v_{2}\) describes the bending mode, \(v_{3}\) describes the antisymmetric stretching mode, and \(\ell\) gives the vibrational angular momentum in the bending mode. Note that this angular momentum arises because linear combinations of the degenerate bending vibrations along two perpendicular (linear) axes can be formed which have elliptical trajectories. This can be thought of as producing a nuclear orbital angular momentum about the molecular axis. Doubly degenerate bending modes such as this one are formally described by the 2D harmonic oscillator, and \(\ell\) can take the values \(v_{2},v_{2}-2,\ldots,-v_{2}+2,-v_{2}\) (see Bernath (2017)).
**Larger polyatomic molecules.** For larger molecules, labeling vibrational states as \((v_{1},v_{2},\ldots,v_{N})\) becomes unwieldy. Instead, we use the notation \(i_{v_{i}}\), where \(i\) labels the vibrational mode and \(v_{i}\) is the vibrational quantum number for that mode. Only modes with \(v_{i}\neq 0\) are labeled, and \(i\) is enumerated from 1 to \(n\), where \(n\) is the total number of modes. Typically, the modes are ordered first by the symmetry of the vibrational mode and then in order of decreasing energy, as described in Herzberg (1966). For example, in the symmetric top molecule \(\text{CaOCH}_{3}\), the vibrational state \(4_{1}\) corresponds to one excitation of the 4th vibrational mode (Ca-O stretch), while \(3_{1}4_{2}\) corresponds to 2 excitations of the Ca-O stretch mode and one excitation of the O-C stretch mode (3rd vibrational mode). In this notation, unspecified vibrational modes are assumed to have \(v_{i}=0\).
#### 2.2.2 Anharmonic coupling
As mentioned above, one effect of the anharmonic terms in the potential energy surface (Eq. 6) is to mix the harmonic oscillator wavefunctions, such that the eigenstates of the vibrational Hamiltonian are admixtures of different harmonic oscillator basis states. This effect is most prominent for vibrational states of the same symmetry that are nearby in energy, and is often referred to as _Fermi resonance_. The Fermi resonance interaction is described in detail in Hougen (1962).
One example of Fermi resonance occurs in CaOH, whose bending mode frequency is approximately half the stretching mode frequency. This means that, for example, the (100) and (02\({}^{0}\)0) states are nearly degenerate and strongly mixed by a cubic term in the potential energy surface, \(V_{122}=k_{122}q_{1}q_{2}^{2}\), which mixes states with \(|\Delta v_{1}|=1\) and \(|\Delta v_{2}|=2\). Here \(k_{122}\) is an anharmonic force constant. One practical implication of this interaction is that excited electronic states that would decay to \(\widetilde{X}(100)\) can also decay to \(\widetilde{X}(02^{0}0)\) at an enhanced rate; this is important for understanding vibrational branching ratios in polyatomic molecules (see Sec. 2.4.2). In this example, (100) mixes with (02\({}^{0}\)0) but not (02\({}^{2}\)0) because states with different \(\ell\) have different symmetry. For example, in a \(\Sigma\) electronic state, \(\ell=0\) vibrational levels have \(\Sigma\) symmetry, \(\ell=1\) vibrational levels have \(\Pi\) symmetry, and so on. For a more detailed discussion of symmetry in polyatomic molecules, see Bunker and Jensen (2006).
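The essential physics of such a Fermi resonance is captured by a two-level sketch: two nearly degenerate basis states coupled by the matrix element arising from the \(k_{122}q_{1}q_{2}^{2}\) term. In the snippet below, the unperturbed energies and the coupling strength are illustrative placeholders, not spectroscopic values for CaOH.

```python
import numpy as np

# Two-level model of a Fermi resonance between the (100) and (02^0 0) basis
# states. Unperturbed energies and coupling W are placeholders (cm^-1).
E_100, E_0200 = 609.0, 688.0
W = 20.0   # effective coupling from the cubic k_122 q1 q2^2 term

H = np.array([[E_100, W],
              [W, E_0200]])
evals, evecs = np.linalg.eigh(H)

for E, vec in zip(evals, evecs.T):
    frac_100 = vec[0] ** 2   # fraction of (100) character in this eigenstate
    print(f"E = {E:6.1f} cm^-1:  {frac_100:5.1%} (100), {1 - frac_100:5.1%} (02^0 0)")
```

Because each eigenstate carries some (02\(^{0}\)0) character, an excited state that nominally decays only to (100) acquires decay strength to (02\(^{0}\)0) as well, which is the origin of the enhanced decay rate mentioned above.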
### Rotational structure
The rotational structure of molecules can be described, to leading order, by a rigid rotor model. The Hamiltonian is \(H_{\rm rot}=\frac{R_{a}^{2}}{2I_{a}}+\frac{R_{b}^{2}}{2I_{b}}+\frac{R_{c}^{2}}{ 2I_{c}}\), where \(a,b,c\) denote the three principal axes in the molecular body-fixed frame, \(I_{a}\leq I_{b}\leq I_{c}\) are their corresponding moments of inertia, and \(R_{a},R_{b},R_{c}\) are the corresponding projections of the molecule's rotational angular momentum. It is useful to introduce the molecular constants \(A=(2I_{a})^{-1}\), \(B=(2I_{b})^{-1}\), and \(C=(2I_{c})^{-1}\). The energy level structure resulting from this Hamiltonian depends on the relative values of \(I_{a}\), \(I_{b}\), and \(I_{c}\). We describe each case in turn, initially neglecting the effects of electronic, nuclear, and vibrational angular momentum. In this scenario, the quantum number \(N=J-S\) is equivalent to the rigid-body rotational angular momentum \(R\). We will denote the rotational angular momentum by \(N\) to make the notation compatible with the more general case treated later.
#### 2.3.1 Linear molecules
A linear molecule has \(I_{a}=0\) and \(I_{b}=I_{c}\) so that \(B=C\). All diatomic molecules, and a small but important class of polyatomic molecules including alkaline-earth hydroxides like CaOH and SrOH, are linear. In the limit that \(I_{a}\to 0\), any angular momentum about the \(a\)-axis requires infinite energy, so we take \(N_{a}=0\). Then \(H_{\rm rot}=B(N_{b}^{2}+N_{c}^{2})=BN^{2}\). The energy levels of this Hamiltonian form a quadratic ladder with eigenvalues \(E_{N}=BN(N+1)\). Each \(N\) manifold contains \(2N+1\) degenerate states, distinguished by the quantum number \(M\equiv N_{Z}\), the projection of \(N\) on the lab-fixed \(Z\)-axis.
#### 2.3.2 Spherical top molecules
In the special case that \(I_{a}=I_{b}=I_{c}\), so that \(A=B=C\), the Hamiltonian also reduces to the form \(H_{\rm rot}=BN^{2}\), again with eigenvalues \(E_{N}=BN(N+1)\). In this case, however, the additional structure leads to a set of \((2N+1)^{2}\) degenerate states, distinguished by independent quantum numbers \(M\) and \(K\equiv N_{a}\). Spherical top molecules include species such as CH\({}_{4}\) (methane), SF\({}_{6}\) (sulfur hexafluoride), and C\({}_{60}\) (buckminsterfullerene), but to our knowledge no laser-coolable spherical top molecules have been proposed to date and we do not consider them further here.
#### 2.3.3 Symmetric top molecules
A molecule with exactly two equal moments of inertia is classified as a symmetric top. This case is further subdivided into the prolate ("cigar-shaped") symmetric top, where \(I_{a}<I_{b}=I_{c}\), and the oblate ("pancake-shaped") symmetric top, where \(I_{a}=I_{b}<I_{c}\). We explicitly consider the prolate case first; in the oblate case, one must only substitute \(a\to c\) and \(A\to C\) in all formulas. Then the Hamiltonian is \(H_{\rm rot}=BN^{2}+(A-B)K^{2}\), where as before \(K\equiv N_{a}\). The energy levels are \(E_{N,K}=BN(N+1)+(A-B)K^{2}\), with the restriction that \(K\leq N\). There are \(2N+1\) degenerate states with energy \(E_{N,0}\), distinguished by \(M\). When \(K\neq 0\), there are \(2(2N+1)\) degenerate states distinguished by both \(M\) and the sign of \(K=\pm|K|\). Examples of prolate symmetric top molecules include CH\({}_{3}\)F (methyl fluoride) and the laser-coolable species CaOCH\({}_{3}\) (calcium monomethoxide). Examples of oblate molecules include C\({}_{6}\)H\({}_{6}\) (benzene) and NH\({}_{3}\) (ammonia); to date, no oblate symmetric top molecules have been proposed for laser cooling. For molecules where \(A\gg B\), typical of known laser-coolable molecules, there is a quadratic ladder of widely separated "\(K\)-stacks," each of which contains a quadratic ladder of more finely separated \(N\) states.
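The prolate symmetric-top level pattern is easy to evaluate directly from \(E_{N,K}=BN(N+1)+(A-B)K^{2}\); the brief sketch below uses placeholder rotational constants with \(A\gg B\), qualitatively similar to the laser-coolable symmetric tops discussed here.

```python
# Rigid prolate symmetric top: E(N, K) = B N(N+1) + (A - B) K^2.
# Rotational constants below (cm^-1) are placeholders with A >> B.
A, B = 5.0, 0.25

for K in range(0, 3):                    # a few K-stacks
    levels = [(N, B * N * (N + 1) + (A - B) * K**2) for N in range(K, K + 4)]
    stack = ", ".join(f"N={N}: {E:7.2f}" for N, E in levels)
    print(f"K={K}:  {stack}  (cm^-1)")
```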
#### 2.3.4 Asymmetric top molecules
In an asymmetric top molecule, all moments of inertia are unequal. The resulting energy level structure is quite complex, and no projection of \(N\) on the molecule-frame axes is a good quantum number. For each value of \(N\), there are \(2N+1\) non-degenerate states. The degree of asymmetry can be characterized by the Ray's asymmetry parameter, \(\kappa=\frac{2B-A-C}{A-C}\). In the prolate (oblate) symmetric top limit, \(B=C(A)\), we obtain \(\kappa=1(-1)\). In the maximally asymmetric case, \(B=(A+C)/2\), we obtain \(\kappa=0\). Eigenstates can be designated by labels \(N_{K_{a}K_{c}}\), where \(K_{a}\) and \(K_{c}\) are the projections of \(N\) on the \(a\)-axis and \(c\)-axis in the limit that a molecule is deformed to the prolate and oblate symmetric top limits, respectively. In the general case, \(-1<\kappa<1\), neither \(K_{a}\) nor \(K_{c}\) are rigorously good quantum numbers.
Laser-coolable asymmetric top molecules proposed in the literature to date (for example, CaSH, CaOCHDT, and SrOC\({}_{6}\)H\({}_{5}\)) are prolate (\(\kappa<0\)), though there is no apparent reason that oblate molecules (\(\kappa>0\)) are necessarily inconsistent with laser cooling. An example of an oblate asymmetric top molecule is C\({}_{4}\)H\({}_{4}\)N\({}_{2}\) (pyrimidine). In the case of an asymmetric top molecule near the prolate symmetric top limit, \(\kappa\approx-1\), the energy level structure closely resembles that of a prolate symmetric top. A similar situation holds for asymmetric top molecules near the oblate limit.
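For a quantitative picture of the asymmetric-top case, one can diagonalize the rigid-rotor Hamiltonian in the prolate symmetric-top basis \(|N,K\rangle\), using the standard matrix elements \(\langle N,K|H|N,K\rangle=\tfrac{1}{2}(B+C)[N(N+1)-K^{2}]+AK^{2}\) and \(\langle N,K\pm 2|H|N,K\rangle=\tfrac{1}{4}(B-C)\sqrt{[N(N+1)-K(K\pm 1)][N(N+1)-(K\pm 1)(K\pm 2)]}\). The sketch below does this for placeholder constants describing a near-prolate molecule (\(\kappa\approx-1\)); in that limit the eigenvalues approach the symmetric-top formula quoted above.

```python
import numpy as np

def asymmetric_rotor_levels(N, A, B, C):
    """Eigenvalues of H = A*Na^2 + B*Nb^2 + C*Nc^2 for a single N,
    expressed in the prolate symmetric-top basis |N, K>, K = -N..N."""
    Ks = np.arange(-N, N + 1)
    dim = len(Ks)
    H = np.zeros((dim, dim))
    for i, K in enumerate(Ks):
        H[i, i] = 0.5 * (B + C) * (N * (N + 1) - K**2) + A * K**2
        if i + 2 < dim:                          # <N, K+2 | H | N, K>
            Kp = K + 1
            off = 0.25 * (B - C) * np.sqrt(
                (N * (N + 1) - K * Kp) * (N * (N + 1) - Kp * (Kp + 1)))
            H[i + 2, i] = H[i, i + 2] = off
    return np.linalg.eigvalsh(H)

# Placeholder rotational constants (cm^-1) for a near-prolate molecule.
A, B, C = 5.0, 0.30, 0.25
kappa = (2 * B - A - C) / (A - C)
print(f"Ray's asymmetry parameter kappa = {kappa:+.3f}")
for N in range(0, 3):
    levels = ", ".join(f"{E:7.3f}" for E in asymmetric_rotor_levels(N, A, B, C))
    print(f"N={N}: {levels}  (cm^-1)")
```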
#### 2.3.5 Dependence on spin and electronic angular momenta
The ground states of proposed laser-coolable polyatomic molecules generally have vanishing electronic angular momentum, and can be described in a Hund's case (b) basis where \(N\) is a good quantum number. Corrections to the rotational structure described above occur due to electron spin, nuclear spin, and vibrational angular momentum (for example in the bending vibrational mode of a linear triatomic molecule). The last of these shifts the overall energy of the ladder of eigenstates but does not otherwise change the rotational energy progression.
Coupling of the electron spin and molecular rotation is generally more complicated. The spin-rotation Hamiltonian takes the general form
\[H_{\rm SR}=\frac{1}{2}\sum_{\alpha,\beta}\epsilon_{\alpha\beta}(N_{\alpha}S_{ \beta}+S_{\beta}N_{\alpha}), \tag{11}\]
where \(\alpha,\beta\) takes values \(a,b,c\) and \(\epsilon\) is a symmetric tensor described in, e.g., Hirota (1985). In a linear molecule, only the components \(\epsilon_{bb}=\epsilon_{cc}\equiv\gamma\) take
nonzero values and the simpler form
\[H_{\rm SR}=\gamma N\cdot S \tag{12}\]
is obtained. For linear molecules with \(S=1/2\), this term mixes adjacent \(N\) levels, and both \(N\) and \(S\) cease to be good quantum numbers, though \(\vec{J}=\vec{N}+\vec{S}\) is preserved. When \(\gamma\ll B\), as is typical of the alkaline-earth pseudohalides, the effect is to split \(N\) into a pair of states with \(J=N\pm 1/2\).
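For reference, Eq. 12 gives \(E(J)=\tfrac{\gamma}{2}\left[J(J+1)-N(N+1)-S(S+1)\right]\), so the \(J=N\pm 1/2\) pair is separated by \(\gamma(N+1/2)\). The short sketch below evaluates this splitting for a placeholder value of \(\gamma\).

```python
# Spin-rotation splitting for a 2-Sigma (S = 1/2) state with H = gamma * N.S.
gamma = 35.0   # MHz, placeholder value of order the typical 10-100 MHz scale
S = 0.5

def E_sr(N, J):
    return 0.5 * gamma * (J * (J + 1) - N * (N + 1) - S * (S + 1))

for N in range(1, 4):
    split = E_sr(N, N + 0.5) - E_sr(N, N - 0.5)
    print(f"N={N}: J = N+1/2 and J = N-1/2 split by {split:5.1f} MHz")
```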
The situation is more complicated for nonlinear molecules. In a symmetric top molecule, the spin-rotation Hamiltonian (neglecting contributions that can mix different values of \(\Lambda\) in a \({}^{2}E\) state) reduces to
\[H_{\rm SR}=\epsilon_{aa}N_{a}S_{a}+\frac{1}{4}(\epsilon_{aa}+\epsilon_{bb})( N_{+}S_{-}+N_{-}S_{+}), \tag{13}\]
assuming the prolate case (see Hougen (1980)). The asymmetric top case is generically much more complicated, and spin-rotation interactions can mix states that differ in \(N\) by up to 1, and differ in \(K\) by up to 2 (Hirota (1985)). In alkaline-earth pseudohalides, the ground-state spin-rotation splittings are typically no larger than \(\sim\)100 MHz at low \(N\), and increase with larger \(N\). In most laser-coolable molecules used so far, the size of the spin-rotation constant is dominated by a second-order perturbation via spin-orbit coupling rather than the direct coupling of electron spin and molecular rotation. Excited states with spin-rotation structure, for example the \(\tilde{B}\,^{2}\Sigma^{+}\) state in MOH molecules, can have spin-rotation constants comparable to the rotational constant due to the closer proximity to electronic states with \(\Delta\Lambda=\pm 1\). Because excited-state spin-rotation structure is large compared to the natural linewidth, and because the ground-state spin-rotation splittings are \(\sim\)100 MHz or less, the spin-rotation structure is easily addressed with frequency sidebands added to the laser beam, e.g. using acousto-optic or electro-optic modulation.
In states with orbital angular momentum, for example a \({}^{2}\Pi_{1/2}\) electronic state of a linear molecule, electrostatic interactions couple the orbital angular momentum \(L\) to the internuclear axis and spin-orbit interactions couple \(L\) with \(S\). In this case, \(N\) is not a good quantum number. The rotational energies follow a quadratic ladder in \(J\) as \(H_{\rm rot}=BJ(J+1)\), with each \(J\) value split into a pair of opposite-parity states by a \(\Lambda\)-doubling Hamiltonian, which arises from spin-orbit mixing with states of different \(\Lambda\). In symmetric top and asymmetric top molecules, similar interactions split pairs of opposite-parity states in electronic manifolds with non-zero orbital angular momentum.
#### 2.3.6 Hyperfine effects and nuclear spin statistics
In many of the polyatomic laser cooling experiments pursued to date, the valence electron responsible for optical cycling is localized on the spin-0 nucleus of an alkaline-earth (or alkaline-earth-like) metal atom, and a spin-0 oxygen nucleus serves as a "linker" to a ligand such as H and CH\({}_{3}\). As a result, the nuclei with non-zero spin are far from the optically active valence electron, and hyperfine interactions are on the order of only \(\lesssim 1\) MHz in both the ground and excited electronic states. Hyperfine interactions therefore play a negligible role in laser cooling, though they might be important in single-quantum-state preparation and readout as needed for many applications (see Sec. 3.9).
Some applications, for example studying nuclear-spin-dependent parity violation (Norrgard et al. (2019a)) or nuclear magnetic quadrupole moments (Hutzler (2020)), require the optically active electron to be localized near a nucleus with spin \(I>0\). This complicates the structure by splitting each \(J\) level into \(2I+1\) states of distinct \(F\) levels. Generally, this should have no substantial effect on laser cooling as long as each \(F\) level is optically addressed by either spectral broadening or the addition of frequency sidebands to optical cycling lasers.
Another consequence of nuclear spins in polyatomic molecules stems from the connection between nuclear spin symmetry and molecular rotation that arises due to symmetry requirements on the total wavefunction, as described in Bunker and Jensen (2006). In the simplest case of a molecule composed of two identical atoms with vanishing nuclear spin, some rotational levels do not exist because the wave function must be even under exchange of the identical nuclei-for example, the ground \({}^{3}\Sigma_{g}^{-}\) manifold of \({}^{16}\)O\({}_{2}\) only has odd \(J\) levels. Generically, rovibronic levels occur with a "statistical weight" that corresponds to the number of nuclear spin states that give the required totally symmetric state. For example, in CaOCH\({}_{3}\) the "para" nuclear spin configuration (in which two hydrogen nuclear spins are aligned) occurs in \(K=1,2,4,5,\ldots\) rotational states, but the "ortho" spin configuration (in which all nuclear spins are aligned) occurs in \(K=0,3,\ldots\) rotational states. Because of the weak hyperfine coupling in CaOCH\({}_{3}\), conversion between para and ortho configurations is highly suppressed; because of the lack of inter-conversion, the two nuclear spin configurations behave as though they were essentially independent species. Experimentally, laser cooling schemes for both nuclear spin isomers were demonstrated by Mitra et al. (2020) where
the isomer cooled was selected spectroscopically, by driving transitions out of either the \(K=0\) or \(K=1\) rotational state.
### Transitions
In the following sections, we describe the properties and intensities of transitions between the various energy levels present in polyatomic molecules.
#### 2.4.1 Electronic transitions
Transition intensities can be calculated from the square of the transition moment integral (Bernath (2017))
\[\mathbf{M}=\int\psi^{\prime}(\mathbf{r},\mathbf{R})^{*}\,\boldsymbol{\mu}\, \psi(\mathbf{r},\mathbf{R})\,d\tau, \tag{14}\]
where integration with respect to \(\tau\) implies integrating over all electronic and nuclear coordinates, and single primes denote excited states. Invoking the BO approximation, we can write \(\psi(\mathbf{r},\mathbf{R})=\psi_{n}(\mathbf{R})\psi_{e}(\mathbf{r};\mathbf{R})\). Then the transition moment integral becomes
\[\mathbf{M}=\int\psi_{n^{\prime}}(\mathbf{R})^{*}\left(\int\psi_{e^{\prime}}^{*} (\mathbf{r};\mathbf{R})\,\boldsymbol{\mu}\,\psi_{e}(\mathbf{r};\mathbf{R})\,d \tau_{e}\right)\psi_{n}(\mathbf{R})\,d\tau_{n}. \tag{15}\]
We have separated integration over all nuclear coordinates (\(d\tau_{n}\)) and electronic coordinates (\(d\tau_{e}\)). Let us now define an electronic transition dipole moment as \(\boldsymbol{\mathcal{R}}_{e^{\prime\prime}}^{e^{\prime}}(\mathbf{R})=\langle \psi_{e^{\prime}}|\boldsymbol{\mu}|\psi_{e}\rangle\), and we explicitly denote its dependence on the nuclear coordinates. We can imagine expanding \(\boldsymbol{\mathcal{R}}_{e^{\prime\prime}}^{e^{\prime}}(\mathbf{R})\) in a Taylor series about some value of \(\mathbf{R}\) and retaining only the first (constant) term, which we will denote by \(\boldsymbol{\mathcal{R}}_{e^{\prime\prime}}^{e^{\prime}}\).3 This simplifies Eq. 15 and allows us to write
Footnote 3: More details about the choice of point on which to center the expansion can be found in the textbook by Bernath (2017). Often, the value of \(\boldsymbol{\mathcal{R}}_{e^{\prime\prime}}^{e^{\prime}}\) is inferred from measurements or \(\boldsymbol{\mathcal{R}}_{e^{\prime\prime}}^{e^{\prime}}(\mathbf{R})\) can be found via _ab initio_ calculations and used in conjunction with numerical vibrational wavefunctions to compute the necessary integral.
\[\mathbf{M} = \boldsymbol{\mathcal{R}}_{e^{\prime\prime}}^{e^{\prime}}\int\psi _{n^{\prime}}(\mathbf{R})^{*}\psi_{n}(\mathbf{R})\,d\tau_{n}. \tag{16}\]
The transition moment has factored into an electronic and nuclear portion, as would be expected from the BO approximation treatment. If we explicitly
include both vibrational and rotational nuclear motions, we can write the intensities of rovibronic transitions as (Bernath (2017))
\[I_{(e^{\prime}v^{\prime}J^{\prime})\rightarrow(evJ)}=\left|\mathbf{\mathcal{R}}_{e^{\prime\prime}}^{e^{\prime}}\right|^{2}q_{v^{\prime}\to v}S_{J}^{J^{\prime}}, \tag{17}\]
where \(\mathbf{\mathcal{R}}_{e^{\prime\prime}}^{e^{\prime}}\) is the electronic transition dipole moment between electronic states \(e^{\prime}\) and \(e\), \(q_{v^{\prime}\to v}\) is a Franck-Condon factor (FCF) between vibrational states \(v^{\prime}\) and \(v\), and \(S_{J}^{J^{\prime}}\) is a Hönl-London factor between states \(J^{\prime}\) and \(J\). We can interpret this factorization as stating that the intrinsic strength of a transition is set by \(\left|\mathbf{\mathcal{R}}_{e^{\prime\prime}}^{e^{\prime}}\right|^{2}\), while the distribution of that intensity among vibrational and rotational lines is set by \(q_{v^{\prime}\to v}\) and \(S_{J}^{J^{\prime}}\).
#### 2.4.2 Vibrational transitions
Unlike rotational transitions, where angular momentum conservation rigorously constrains the allowed decay channels, in vibrational transitions no selection rules are absolute. Instead, vibrational branching is governed in the Born-Oppenheimer approximation by the overlap of the nuclear wave
Figure 6: The Franck-Condon principle, as illustrated for diatomic molecules using CaF. (a) The \(X\,^{2}\Sigma^{+}\) and \(B\,^{2}\Sigma^{+}\) electronic orbitals of the prototypical laser cooling molecule CaF. Localization of the electron density near the Ca metal in both the ground and excited states leads to favorable Franck-Condon overlap. (b) Vibrational wavefunctions of CaF. (c) The corresponding vibrational decay strengths for the \(X\,^{2}\Sigma^{+}-B\,^{2}\Sigma^{+}\) and \(X\,^{2}\Sigma^{+}-C\,^{2}\Pi\) transition of CaF.
functions of the initial and final vibronic states, as characterized by the Franck-Condon factor,
\[q_{v^{\prime}\to v}=\left|\int\psi_{e,v^{\prime}}^{*}\psi_{g,v}\,d\tau_{n}\right|^{2} \tag{18}\]
where \(v^{\prime}\) denotes a vibrational state in the excited electronic manifold \(e\), and \(v\) denotes a vibrational state in the ground electronic manifold \(g\). If the Franck-Condon factors of an electronic transition are "diagonal" then \(q_{v^{\prime}\to v}\approx\delta_{v^{\prime},v}\); in other words, a vibrational level \(e(v^{\prime})\) decays (almost) only to \(g(v=v^{\prime})\). Geometrically, this will occur when all bond lengths, bond angles, and vibrational constants are approximately identical between the ground and excited electronic states. In practice, off-diagonal FCFs are more sensitive to small fractional changes in bond lengths compared to similarly small fractional changes in harmonic constants. This can be understood from a simple model of wave function overlap between displaced 1D harmonic oscillators with distinct harmonic constants. An example of this, for the diatomic molecule CaF, is shown in Fig. 6.
The vibrational branching ratios from a given vibronic excited state are proportional to the FCFs, but contain an additional factor of \(\omega_{v^{\prime},v}^{3}\):
\[b_{v^{\prime}\to v}=\frac{q_{v^{\prime}\to v}\omega_{v^{\prime},v}^{3}}{\sum_{ v}q_{v^{\prime}\to v}\omega_{v^{\prime},v}^{3}}. \tag{19}\]
Thus, transitions to lower-lying vibrational states are slightly favored, relative to what one might expect from considering only the FCF. In practice, the VBRs and FCFs are quantitatively similar, but only the VBR is important for determining and achieving a nearly-closed optical cycle.
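The displaced 1D harmonic-oscillator picture invoked above is simple enough to evaluate numerically. The sketch below computes FCFs from the overlap of harmonic oscillator wavefunctions with different frequencies and equilibrium positions, and then converts them to VBRs using the \(\omega^{3}\) weighting of Eq. 19. All parameters (frequencies, displacement, electronic term energy) are dimensionless placeholders rather than constants for CaF or any other molecule.

```python
import numpy as np
from math import factorial
from scipy.special import eval_hermite

# 1D displaced-harmonic-oscillator model of FCFs and VBRs (hbar = m = 1).
# Ground/excited frequencies, displacement of the excited-state minimum, and
# the electronic term energy are illustrative placeholders.
w_g, w_e, dx, Te = 1.00, 0.95, 0.20, 30.0

def ho_wf(v, w, x0, x):
    """Harmonic oscillator wavefunction psi_v(x) with frequency w, centered at x0."""
    xi = np.sqrt(w) * (x - x0)
    norm = (w / np.pi) ** 0.25 / np.sqrt(2.0 ** v * factorial(v))
    return norm * eval_hermite(v, xi) * np.exp(-xi ** 2 / 2)

x = np.linspace(-10, 10, 4001)
dx_grid = x[1] - x[0]
fcf, w3 = [], []
for v in range(6):                                      # ground-state levels v = 0..5
    overlap = np.sum(ho_wf(0, w_e, dx, x) * ho_wf(v, w_g, 0.0, x)) * dx_grid
    fcf.append(overlap ** 2)                            # q_{0 -> v}, Eq. 18
    w3.append((Te + 0.5 * w_e - w_g * (v + 0.5)) ** 3)  # omega^3 weighting, Eq. 19

fcf, w3 = np.array(fcf), np.array(w3)
vbr = fcf * w3 / np.sum(fcf * w3)                       # normalized over computed levels
for v, (q, b) in enumerate(zip(fcf, vbr)):
    print(f"v'=0 -> v={v}:  FCF = {q:.4f}   VBR = {b:.4f}")
```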
In the Born-Oppenheimer (BO) approximation, the symmetry of a vibrational state is conserved in a vibronic transition. For example, in linear molecules like CaOH and SrOH, \(|\ell|\) would not be expected to change. In practice, non-BO perturbations induce weak \(|\ell|\)-changing transitions at the level of \(\sim\)\(10^{-3}\) vibrational branching probability; see Sec. 2.5 for details.
#### 2.4.3 Measuring vibrational branching ratios
Vibrational branching ratios can be measured in any of several ways. The simplest method conceptually is to add repumping lasers (lasers that address vibrational loss channels and return them to the set of bright states) one at a time and observe the total fluorescence collected. To determine the diagonal vibrational branching ratio, one can measure the fluorescence induced when
Figure 7: Demonstration of optical cycling with the polyatomic molecule SrOH. Molecular beam fluorescence with (blue) and without (red) addressing both spin-rotation components of the optical cycling rovibronic transition. When both spin-rotation components are addressed, population cycles between the ground \(N=1\) and excited \(J^{\prime}=1/2\) manifolds, and repeated photon scattering increases the detected fluorescence by over an order of magnitude. Reproduced from Kozyryev et al. (2016b).
addressing all spin-rotation and hyperfine components of a rotationally-closed vibronic transition, and then compare this to the fluorescence induced when only a single quantum state is addressed (for which typically \(\sim\)1 photon is scattered before populating an unaddressed quantum state, depending only on theoretically well-known rotational branching ratios). See Fig. 7 for an example in SrOH. In a similar manner, observing the fluorescence collected when repumping \(v\) in an optical cycle, as opposed to exhausting a lossy optical cycle without repumping \(v\), reveals the probability that \(v\) is populated among all possible loss channels. By sequentially adding repumpers, in principle all excited vibrational branching ratios can be determined in this way. In practice, this method may be suitable to measure vibrational branching ratios in a long-lived trap (e.g., a MOT), but it is impractical in a molecular beam for VBRs smaller than \(\sim\)1% because molecules exit the fluorescence region before exhausting a highly closed optical cycle.
An improved method to measure vibrational branching ratios below \(\sim\)1% is to optically cycle \(\sim\)100 photons in an "interaction region," e.g. by addressing the diagonal vibronic transition and dominant one or two repumping pathways, and to measure the fraction of molecular population that is recovered to low-lying states when repumping a state \(v\) in a "cleanup region" (between the interaction region and fluorescence detection region). If recovery of \(\sim\)1% of the molecular population can be resolved after \(\sim\)10\({}^{2}\) photons are scattered in the interaction region, then vibrational branching ratios of \(\sim\)10\({}^{-4}\) can be measured. This method was used by Baum et al. (2021) to determine an optical cycling scheme for CaOH with approximately \(5\times 10^{3}\) photons scattered per molecule.
A major limitation of the methods described above is that a vibrational branching ratio can only be measured for a state with known high-resolution repumping transitions. Most polyatomic molecules that might be of interest for future laser-cooling experiments have little preexisting spectroscopic data, especially in excited vibrational states. Therefore, an alternative method should be used to screen a potential laser-coolable molecule for favorable VBRs without requiring high-resolution spectroscopy of up to a dozen vibrational states.
The standard approach to measure VBRs without directly repumping vibrationally excited states is dispersed laser-induced fluorescence (DLIF). See Fig. 8 for a typical experimental configuration. Molecules in a molecular beam are driven to an excited state of interest, \(e\), and subsequently fluoresce to the vibronic ground states \(\{v_{i}\}\) with probabilities \(\{P_{i}\}\). The fluorescence
Figure 8: Dispersed laser-induced fluorescence apparatus, reproduced from Zhang et al. (2021). CaOH and YbOH molecules are produced in a CBGB via ablation of a metal precursor in the presence of water vapor. 40 cm downstream, molecules are excited via an optical cycle with 50\(-\)100 photons scattered per molecule on average, increasing the fluorescence yield. Fluorescence is collimated by an in-vacuum lens and directed toward a Czerny-Turner monochromator, which disperses light emitted in different vibronic transitions (generally separated by many nm in wavelength) onto different regions of an EMCCD camera.
Figure 9: Dispersed fluorescence measurements of YbOH \(\tilde{A}(000)\), showing 89% probability of decay on the diagonal vibronic transition, with progressively weaker decays to higher-lying vibrational states. Decays with strength as small as \(2(1)\times 10^{-5}\) relative probability can be measured due to the increased fluorescence yield arising from optical cycling excitation. Reproduced from Zhang et al. (2021).
is collected and focused into a spectrometer, in which a diffraction grating is used to spatially disperse the fluorescence wavelengths. The light is then imaged onto an EMCCD, producing an image that shows the relative intensity of each decay feature. Example data are shown for emission from the \(\tilde{A}(000)\) state of YbOH in Fig. 9.
With \(\sim\)\(10^{4}\) ablation pulses, vibronic decays with relative probabilities of \(10^{-2}-10^{-3}\) may be detected. By optically cycling molecules in the detection region, up to 100 photons may be scattered per molecule, directly increasing the number of fluorescence photons collected. In cases where the detection sensitivity is limited by camera read-noise or clock-induced-charge, the sensitivity of the measurement is increased proportionally to the number of photons scattered per molecule. In this way, branching ratios on the order of \(10^{-5}\) have been measured for CaOH, YbOH, and SrOH by Zhang et al. (2021); Lasner et al. (2022). The unambiguous assignment of weak decays is greatly aided by high-quality theoretical predictions, which must account for perturbations like those described in Sec. 2.5.
DLIF measurements of VBRs are a powerful method to directly observe all decay channels (inside an observed wavelength range) without requiring high-resolution spectroscopy for optical pumping and repumping of many vibrational states. It is especially important for measuring vibrational branching ratios of polyatomic molecules, where numerous vibrational states may have VBRs around \(10^{-5}-10^{-4}\), a level that is weak in an absolute sense but sufficiently strong to require repumping during radiative slowing and magneto-optical trapping. For a polyatomic molecule with many vibrational modes, it is not necessarily clear in the absence of measurements what the dominant loss channel will be for an optical cycle scattering many more than \(\sim\)100 photons per molecule. This makes a brute-force search for vibrational repumpers inefficient in all but the exceptional cases where the molecule is extensively well-understood from the outset.
#### 2.4.4 Repumping transition spectroscopy
Even in the best-studied laser-coolable molecules such as CaOH and SrOH, the vibrational states that become populated after a molecule has scattered around \(10^{4}\) photons have not all been fully analyzed in the literature. It is thus necessary to conduct spectroscopic searches for repumping pathways. These measurements are often conducted in one of two ways: fluorescence in a molecular beam, or absorption in a cryogenic buffer-gas cell.
When repumper spectroscopy is conducted using laser-induced fluorescence from a molecular beam (see Fig. 10), a vibrationally excited state can be prepared either by optically cycling until vibrational dark states are populated, or by direct off-diagonal excitation that decays preferentially to the vibrational state of interest in the ground electronic manifold. Due to the rotationally-closed nature of the transition used for optical cycling, often only the \(N=1\) level of the target vibrational state is populated. (Additional rotational levels may be populated in nonlinear polyatomic molecules where the rotational selection rules are slightly relaxed; see Tab. 1.) The frequency of a laser in a downstream "clean-up" region can then be scanned to repump population into (often lower-lying) detectable states, which are excited in a region even farther downstream. Fluorescence in the detection region is observed on a PMT or EMCCD camera. When the frequency of the repumping laser passes through a resonance, an increase in detected population will be observed. Since CBGBs sometimes do not efficiently thermalize the molecular vibrational distribution, in some cases a molecular beam may have sufficient natural population of a vibrationally excited state to detect, provided the correct repumping frequency is addressed in the clean-up region. The same geometry shown in Fig. 10 can also be used to search for excited vibrational states in the electronically excited manifold by scanning the frequency of a laser in the interaction region and observing a dip in the detected population of low-lying vibrational states.
This pump-repump method is best-suited to discovering the frequency of _specific_ rovibronic transitions, since in practice at most a few states can be
Figure 10: Beamline configuration for pump-repump spectroscopy of vibrationally excited states. An upstream interaction populates vibrationally excited states either through optical cycling or direct off-diagonal excitation. A clean-up region recovers population into a detected state (or set of states), which is excited by a known laser transition to produce fluorescence that is imaged onto an EMCCD camera. When a laser in the interaction region is scanned over an excited state with the clean-up lasers off or out of resonance, a dip in detected fluorescence is observed. This enables the discovery of excited-state repumping pathways. When a laser in the clean-up region is scanned over a repumping transition, detected fluorescence is partially recovered.
simultaneously detected in the downstream region. This means that, for a given experimental configuration, only a few transitions (including the "laser cooling transition") in the clean-up region can produce a spectroscopic signal, dramatically simplifying the data analysis. A corresponding disadvantage, however, is that the signal is spectroscopically sparse, and a good initial estimate of the repumping frequency is required. This estimate can be made either from the observed decay wavelengths obtained via DLIF, or by high-quality theoretical predictions (e.g., made by extrapolating from the known positions of lower-lying vibrational states). Repumper frequency uncertainties obtained by high-resolution DLIF measurements are typically on the order of 5 cm\({}^{-1}\), but can be improved by using a diffraction grating with higher line density or by narrowing the width of the entrance slit to the spectrometer.
A related technique can be used to locate repumping transitions after a MOT has been achieved. Namely, one can apply laser light near a suspected repumping transition in the MOT region, with all other trapping laser beams on. While scanning the frequency of the (unknown) repumping laser, one monitors the MOT lifetime. When a resonance is reached, the lifetime of the MOT should increase. This is analogous to the pump-repump method but with an interaction time of tens or even hundreds of ms, which would be impractical in a molecular beam.
An alternative approach to identifying vibrational repumpers is high-sensitivity absorption measurements inside a buffer gas cell (Pilgram (2023)). Frequency-modulated (FM) absorption spectroscopy has been used to observe the weak \(\Delta v=-2\) repumping transition for states as high as (300) in YbOH. Unlike the pump-repump spectroscopy described above, all thermally populated rotational states will be probed in this way. This may be desirable in order to make a full spectroscopic assignment of the molecular transitions and constants in a vibronic transition, or to more quickly locate a spectroscopically active frequency region (since a denser "forest" of lines appears, compared with the signal in pump-repump measurements). On the other hand, the spectrum must be fully assigned in order to identify which transitions are connected to the laser cooling state. Aside from these broader considerations, the experimental signal-to-noise ratio may favor either the pump-repump or high-sensitivity absorption measurements depending on factors like the vibrational quenching efficiency in the buffer gas cell, the degree of scattered light suppression in the downstream fluorescence region, the strengths of the probed transitions, and other technical
factors.
#### 2.4.5 Designing an optical cycle
With known vibrational branching ratios (e.g., from high-resolution DLIF measurements), one must carefully design the optical cycle. Several factors are important:
* The number of lasers should be minimized, to reduce cost and experimental complexity
* The excited state that is coupled to the ground vibrational state (e.g., (000) in an MOH molecule) should not be coupled to any other vibrational states if possible, so as to maximize the photon scattering rate and optical forces (see Sec. 1.5.2)
* Lasers should be at convenient optical wavelengths and available at high power
* Optical transitions should be strong (e.g., \(\Delta v=-1\) repumpers) to increase the optical pumping or repumping rates for fixed laser powers
* Decays to states without existing spectroscopy should be eliminated where possible
Not all of these factors are mutually compatible, and some balancing is required (e.g., the number of lasers is generally minimized when only a single excited state is used). Typically, only one or two "diagonal enough" excited states are available to choose from for the dominant excitation, and one of these may result in significantly fewer states populated after the \(\sim 10^{4}\) photon scattering events required for magneto-optical trapping. For example, in SrOH, both \(\tilde{A}(000)\) and \(\tilde{B}(000)\) appear to have reasonably diagonal VBRs, but high-resolution DLIF measurements show that \(\tilde{B}(000)\) populates \((03^{1}0)\), \((12^{0}0)\), \((13^{1}0)\), and \((05^{1}0)\) at the \(\sim 10^{-4}\) level; none of these are significantly populated by \(\tilde{A}(000)\), as demonstrated in Lasner et al. (2022). Therefore, many additional repumpers would need to be added to scatter \(10^{4}\) photons primarily through \(\tilde{B}(000)\), as compared to scattering the same number of photons through \(\tilde{A}(000)\). For this reason, an optical cycle coupling \(\tilde{A}(000)\leftarrow\tilde{X}(000)\) is favored. Because vibrationally excited states are populated only after approximately 20 photons are scattered through \(\tilde{A}(000)\), the \(\tilde{B}(000)\) state may be used as a repumping pathway without limiting the
optical cycle to less than \(1.5\times 10^{4}\) photons scattered. Wherever possible, the optical cycle proposed in Lasner et al. (2022) favors repumping through the \(\widetilde{B}\) manifold due to the less expensive and more convenient high-power sum-frequency generation (SFG) sources around \(630-650\) nm compared with \(690-710\) nm. The only exception is that the (\(02^{2}0\)) state cannot be repumped through \(\widetilde{B}\) due to a near-vanishing transition strength, and is instead coupled to \(\tilde{A}(100)\). Wherever possible, the strongest transition that decreases vibrational quantum numbers from the ground to excited state is chosen, under the constraint that \(\tilde{A}(000)\) may not couple strongly to any state except \(\tilde{X}(000)\). The resulting optical cycle is shown in Fig. 11, along with the optical cycle used to magneto-optically trap CaOH.
To model the number of scattered photons before loss to unaddressed vibrational states occurs, it is useful to construct an absorbing Markov chain model similar to that described in Baum et al. (2021): the states of the Markov chain represent vibrational levels in the electronic ground manifold. For a given optical cycling scheme, the transition probabilities from a state \(v\) are given by the VBRs of the excited state to which \(v\) is coupled. Each step of the Markov chain represents a single photon scatter. Unaddressed vibrational states "transition" only to themselves, and are absorbing states in the Markov chain. In this model, it is straightforward to calculate many properties of interest to laser cooling, including the average number of scattered photons before an absorbing (i.e., dark vibrational) state is reached, how many times each transient state is visited on average, and the distribution of population among dark states.
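A minimal version of this bookkeeping is sketched below for an invented optical cycle with three addressed vibrational levels and placeholder branching ratios (the numbers are not measured VBRs for any molecule). The expected number of photons scattered before loss follows from the fundamental matrix \((I-Q)^{-1}\), where \(Q\) is the transient-to-transient block of the transition matrix.

```python
import numpy as np

# Absorbing Markov chain model of an optical cycle. Each row gives the
# probability that one scattered photon leaves the molecule in each ground-state
# vibrational level; "dark" lumps all unaddressed levels. Numbers are placeholders.
states = ["(000)", "(100)", "(010)", "dark"]
P = np.array([
    [0.950, 0.030, 0.018, 0.002],   # scattering out of (000)
    [0.960, 0.020, 0.018, 0.002],   # scattering out of (100) via its repumper
    [0.960, 0.020, 0.018, 0.002],   # scattering out of (010) via its repumper
    [0.000, 0.000, 0.000, 1.000],   # dark states are absorbing
])

transient = [0, 1, 2]                               # indices of addressed levels
Q = P[np.ix_(transient, transient)]                 # transient-to-transient block
Nfund = np.linalg.inv(np.eye(len(transient)) - Q)   # fundamental matrix (I - Q)^-1

photons = Nfund[0].sum()   # expected scatters before absorption, starting in (000)
visits = Nfund[0]          # expected visits to each addressed level
print(f"Expected photons before loss: {photons:.0f}")
print("Expected visits per level:", dict(zip(states[:3], np.round(visits, 1))))
```

With these placeholder numbers the loss probability per scatter is \(2\times 10^{-3}\), so the chain predicts roughly 500 photons; adding a repumper for a given vibrational level amounts to moving its row from the absorbing to the transient block.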
#### 2.4.6 Rotational transitions
Rotational transitions follow a generally well-behaved set of selection rules governed by angular momentum algebra. Unless otherwise stated, in this section we consider only electric dipole (E1) transitions. Higher-order transitions (e.g. M1, E2, etc.) are significantly weaker and are typically irrelevant for molecular laser cooling.
The general formula for generating a closed cycling transition in molecules was first proposed by Stuhl et al. (2008), and relies on rotational and parity selection rules (\(\Delta J=0,\pm 1\); \(\Delta p=\pm 1\); \(J=0\to J^{\prime}=0\) forbidden). The key observation was that, while a traditional "type-I" \(J\to J+1\) transition cannot be rotationally closed (since a rotational level with \(J+2\) necessarily exists in the ground state), driving the "type-II" \(J=1\to J^{\prime}=0\) transition (or its closest analogue within the molecule of choice) forms a closed
transition. One consequence of this choice of transition is that there are more ground-state sublevels than excited-state sublevels (\(m_{J}\) states), meaning that there will be dark states regardless of the laser polarization chosen. These can be remixed with magnetic fields or polarization modulation (Berkeland and Boshier (2002)).
All polyatomic molecules that have been directly laser cooled (and most polyatomics proposed for laser cooling) have a single valence electron and therefore have spin-doublet ground states (\(S=1/2\)). In this case, the rotational angular momentum \(N\) couples to the electron spin to form a total angular momentum quantum number \(J\), which takes half-integer values.4 In such a system, the electric dipole selection rules \(\Delta J=0,\pm 1\) and \(J=0\nrightarrow J^{\prime}=0\) always apply, as well as the requirement that the parity of the state changes, \(\Delta p=\pm 1\). In this case, the excited state used to achieve rotational closure is no longer \(J^{\prime}=0\) but \(J^{\prime}=1/2\).
Footnote 4: Here we ignore hyperfine degrees of freedom, which are often unresolved in polyatomic molecules, though the selection rules discussed below can be easily generalized to include hyperfine structure.
In addition to the universal selection rules on \(J\) and parity, other selection rules exist for specific molecular geometries and angular momentum coupling cases. These are summarized in Table 1. The selection rules are sorted by
Figure 11: (a) Optical cycling scheme proposed for laser cooling and trapping SrOH, reproduced from Lasner et al. (2022). (b) Optical cycling scheme used to produce a MOT of CaOH.
molecule geometry, as well as the axis along which the transition dipole moment is induced. For molecules with cylindrical symmetry, transitions can be either parallel (\(\parallel\)) or perpendicular (\(\perp\)), meaning the transition dipole moment is induced either along the principal molecular axis or perpendicular to it, respectively. For asymmetric top molecules, a transition can be induced along any of the three molecular axes, as described further below. Finally, different selection rules apply depending on the coupling of angular momenta within each electronic state involved. Typically, in nondegenerate states with zero electronic angular momentum Hund's case (b) applies, where both \(N\) and \(J\) are good quantum numbers. However, degenerate states with electronic angular momentum \(\Lambda\neq 0\) are typically described by Hund's case (a), where the electron spin is strongly coupled to the molecular axis and \(N\) is no longer a good quantum number.
Rotational transition strengths are described by the Hönl-London factors \(S_{J}^{J^{\prime}}\) in Eq. 17. They can be calculated using angular momentum algebra by taking matrix elements of the dipole operator (see, e.g., Hirota (1985)), or looked up in tables.
**Linear molecules.** In nondegenerate vibrational states (\(\ell=0\)) of \({}^{2}\Sigma^{+}\) linear polyatomic molecules like CaOH, SrOH, and YbOH, a closed optical cycling transition is formed by driving the \({}^{P}Q_{21}(J=1/2)\) and \(P_{1}(J=3/2)\)
Table 1: Summary of the main selection rules discussed in this review. All transitions follow the selection rules \(\Delta J=0,\pm 1\) and \(\Delta p=\pm 1\) in addition to those listed. Also note that at least one angular momentum quantum number must change, so transitions with \(\Delta K=\Delta N=\Delta J=0\) (or analogous) are forbidden.

| Geometry | Transition axis | Basis\(^{*}\) | Selection rules\(^{*}\) |
| --- | --- | --- | --- |
| Linear | \(\parallel\) | b \(\rightarrow\) b | \(\Delta\Lambda=0\); \(\Delta\ell=0\); \(\Delta N=0,\pm 1\) |
| Linear | \(\perp\) | b \(\rightarrow\) a | \(\Delta\Lambda=\pm 1\); \(\Delta\ell=0\) |
| STM | \(\parallel\) | b \(\rightarrow\) b | \(\Delta K=0\); \(\Delta N=0,\pm 1\) |
| STM | \(\perp\) | b \(\rightarrow\) a | \(\Delta K_{R}=0\); \(\Delta K=\pm 1\) |
| ATM | \(a\) type | b \(\rightarrow\) b | \(\Delta K_{a}=0\); \(\Delta K_{c}=\pm 1\); \(\Delta N=0,\pm 1^{**}\) |
| ATM | \(b\) type | b \(\rightarrow\) b | \(\Delta K_{a}=\pm 1\); \(\Delta K_{c}=\pm 1\); \(\Delta N=0,\pm 1^{**}\) |
| ATM | \(c\) type | b \(\rightarrow\) b | \(\Delta K_{a}=\pm 1\); \(\Delta K_{c}=0\); \(\Delta N=0,\pm 1^{**}\) |

\({}^{*}\)Note that the selection rules described here apply only when the states are exactly described by the basis listed; in many real scenarios the selection rules are only approximate. \({}^{**}\)\(\Delta N=2\) transitions may be allowed in ATMs where the electronic angular momentum is not fully quenched.
Figure 12: Closed cycling transitions in alkaline-earth monohydroxides (e.g. CaOH) for laser cooling on the \(\widetilde{X}^{2}\Sigma^{+}\rightarrow\widetilde{A}^{2}\Pi_{1/2}\) and \(\widetilde{X}^{2}\Sigma^{+}\rightarrow\widetilde{B}^{2}\Sigma^{+}\) electronic transitions. (a) For nondegenerate (\(\ell=0\)) vibrational levels of the ground state, two laser frequencies are required to address the \(J=1/2^{-}\) and \(J=3/2^{-}\) levels of the \(N=1\) rotational state. (b) For repumping the \(\widetilde{X}(010)\) bending mode, which has vibrational angular momentum \(|\ell|=1\), an additional color addressing the \(N=2,J=3/2^{-}\) state is required, regardless of which excited state is chosen. The dashed line refers to a transition which is allowed only when the excited state does not adhere to the case (b) limit, i.e. when \(N\) is not a good quantum number.
transitions, which address both spin-rotation components of the \(\widetilde{X}\,^{2}\Sigma^{+}(N=1^{-})\) ground state and excite them to either the \(\widetilde{A}\,^{2}\Pi_{1/2}(J^{\prime}=1/2^{+})\) or \(\widetilde{B}\,^{2}\Sigma^{+}(N^{\prime}=0,J^{\prime}=1/2^{+})\) state (Fig. 12a). The key ingredient for achieving rotational closure in this scheme is not the specific electronic state but the fact that it has a well-resolved \(J^{\prime}=1/2^{+}\) level. In this case the transition is guaranteed to be closed by the E1 selection rules \(\Delta J=0,\pm 1\) and \(\Delta p=\pm 1\).
The inclusion of hyperfine structure threatens to void these selection rules because the \((J^{\prime}=1/2,F=1^{+})\) level of the excited state may, in principle, decay to the \((N=3,J=5/2,F=2^{-})\) ground state level. However, this requires that the hyperfine interaction mix the \(N=1\) and \(N=3\) states. The mixing fraction is on the order of \(\sim c^{2}/(10B)^{2}\sim 10^{-10}\) for CaOH, where \(c\) is the dipolar hyperfine constant and \(B\) is the rotational constant. While this decay mechanism is negligible for CaOH and alkaline-earth monohydroxides with similar structure, it may be of some significance for species with large hyperfine structure, e.g., with a nuclear spin on the optical cycling center (see, e.g., Pilgram et al. (2021)).
In degenerate vibrational states (\(\ell\neq 0\)) rotational closure is complicated by the appearance of parity doublets in the ground state (Fig. 12b). In this case, the excited \(J^{\prime}=1/2^{+}\) state can not only decay to \(N=1^{-}\) as before, but also to the \(N=2,J=3/2^{-}\) sublevel. For the linear polyatomic molecules which have been laser cooled to date, the most important cases are \(\ell=1\) and \(\ell=2\). For example, in CaOH, YbOH, and SrOH, the \(\widetilde{X}(01^{1}0)\) and \(\widetilde{X}(02^{2}0)\) states typically require repumping at or above the \(10^{-4}\) level (Zhang et al. (2021); Lasner et al. (2022)). As shown in Fig. 12, \(\ell=1\) states require both an \(N=1^{-}\) and an \(N=2^{-}\) repumping laser. Unlike the spin-rotation splitting between the \(J=1/2\) and \(J=3/2\) states in \(N=1\), which are typically spaced by \(\sim 10-100\) MHz and can be addressed with rf modulation (e.g. AOMs or EOMs), the \(N=1\) and \(N=2\) repumpers in \(\ell=1\) bending modes are split by 10s of GHz in the alkaline-earth monohydroxides, meaning that either two separate lasers or high frequency EOMs need to be used to bridge the gap. The \(\ell=2\) states, meanwhile, require just a single repumping laser, which addresses the \(N=2,J=3/2^{-}\) state.
For \(\ell\neq 0\) bending modes, the excited state used for repumping also merits careful consideration. The primary concern is that many candidate states are best described by a Hund's case (b) basis, meaning that both \(N^{\prime}\) and \(J^{\prime}\) are good quantum numbers. Accordingly, while the \(N=1^{-}\) ground state levels can always be repumped through the excited
state, in some cases repumping of \(N=2^{-}\) is forbidden by an approximate \(\Delta N=0,\pm 1\) selection rule. For example, in the laser cooling scheme used for CaOH by Vilas et al. (2022), \(\widetilde{X}(01^{1}0)(N=1^{-})\) is repumped through the \(\widetilde{B}\,^{2}\Sigma^{+}(000)(N^{\prime}=0,J^{\prime}=1/2^{+})\) state, but \(N=2^{-}\) is not because of \(\Delta N\) selection rules. Instead, the \(N=2,J=3/2^{+}\) state is repumped through the \(\widetilde{A}(010)\) electronic manifold, which has two components \(\mu^{2}\Sigma^{(+)}\) and \(\kappa^{2}\Sigma^{(-)}\) (Fig. 12b; this state is described in detail in Li and Coxon (1995)). These states are intermediate between case (a) and case (b), so \(\Delta N\) selection rules are somewhat weakly enforced; however, the \(\kappa^{2}\Sigma^{(-)}\) is the best candidate for \(N=2^{-}\) repumping because its \(J^{\prime}=1/2^{+}\) state has dominantly \(N^{\prime}=1\) character, while the \(\mu^{2}\Sigma^{(+)}(J^{\prime}=1/2^{+})\) state has predominantly \(N^{\prime}=0\) character. In CaOH, the \(\mu^{2}\Sigma^{(+)}(J^{\prime}=1/2^{+})\) state has only a \(\sim 7\%\) transition strength to \(N=2^{-}\), while the \(\kappa^{2}\Sigma^{(-)}(J^{\prime}=1/2^{+})\) state connects in approximately equal proportion to \(N=1^{-}\) and \(N=2^{-}\) in the \(\ell=1\) bending modes.
In practice, it is often necessary to repump degenerate bending modes using transitions that do not satisfy the \(\Delta\ell=0\) selection rule (see section 2.4.5). In this case, rotational transition strengths are challenging to calculate because they may be altered by vibronic perturbations. These must be considered on a molecule-by-molecule basis. For repumping the \(\ell=1\) bending modes in CaOH, it is empirically known that repumping on the \(\widetilde{X}^{2}\Sigma^{+}(010)(N=1^{-})\rightarrow\widetilde{B}^{2}\Sigma ^{+}(000)(N^{\prime}=0,J^{\prime}=1/2^{+})\) transition is sufficiently strong (Baum et al. (2020, 2021); Vilas et al. (2022)). However, in CaOH the \(N=2^{-}\) states are repumped using \(\Delta\ell=0\) transitions (Vilas et al. (2022)).
**Symmetric top molecules.** The rotational cycling transitions for \({}^{2}\!A_{1}\) symmetric top molecules (STMs) (e.g. CaOCH\({}_{3}\) or YbOCH\({}_{3}\)) are shown in Fig. 13. In addition to the \(J\) and parity selection rules discussed above, there are also \(K\) selection rules for STMs.
For parallel transitions between two nondegenerate electronic states (e.g. \(\widetilde{X}\,^{2}\!A_{1}\rightarrow\widetilde{B}\,^{2}\!A_{1}\)), the selection rule \(\Delta K=0\) holds. It is useful to divide the molecule into "\(K\) stacks", each of which is well isolated during optical cycling. A canonical \(N=1^{-}\to N^{\prime}=0^{+}\) cycling transition can therefore be formed in the \(K=0\) stack, while cycling in the \(K=1\) stack requires an additional \(N=2\) repumping laser, as shown in Fig. 13b.
For perpendicular transitions between a nondegenerate and a degenerate electronic state (e.g. \(\widetilde{X}\,^{2}\!A_{1}\rightarrow\widetilde{A}\,^{2}\!E\)), the photon adds one unit of electronic
angular momentum (\(\Lambda\)) about the molecular axis, while the rigid body rotation of the molecule (i.e. \(K_{R}\), the projection of \(R\) onto the molecular axis) is unchanged. Therefore \(\Delta K=\Delta K_{R}+\Delta\Lambda=\pm 1\), but \(\Delta K_{R}=0\).5 In this case we can instead divide the structure into isolated "\(K_{R}\) stacks", noting that \(K=K_{R}\) in the nondegenerate ground state. A closed \(N=1^{-}\to J^{\prime}=1/2^{+}\) cycling transition can be found in the \(K_{R}=0\) stack, while cycling in the \(K_{R}=1\) stack requires an \(N=2\) repumping laser because each rotational level contains parity doublets (Fig. 13a). Note that the degenerate excited state is typically well described by a Hund's case (a) basis, so \(N^{\prime}\) is not a good quantum number. See Herzberg (1966); Brown (1971); Cerny et al. (1993); Brazier and Bernath (1989) for additional details on rotational selection rules
Figure 13: Rotationally closed cycling transitions for symmetric top molecules with structure similar to CaOCH\({}_{3}\) on the (a) \(\widetilde{X}^{2}A_{1}\to\widetilde{A}^{2}E\) and (b) \(\widetilde{X}^{2}A_{1}\to\widetilde{B}^{2}A_{1}\) electronic transitions. Cycling transitions exist within both the \(K_{R}=0\) stack and the \(K_{R}=1\) stack, though in the latter case an \(N=2\) rotational repumping laser is required.
and \(K_{R}\) stacks in STMs.
The selection rules described above have been tested in experiments with CaOCH\({}_{3}\)(Mitra et al. (2020)), where \(\sim\)120 photons were scattered in the \(K_{R}=0\) stack and \(\sim\)30 photons were scattered in the \(K_{R}=1\) stack. It is empirically unknown whether the \(K_{R}\) selection rules hold beyond this number of photons.
**Asymmetric top molecules.** In asymmetric top molecules (ATMs), there are three types of electronic transitions to consider, corresponding to induced dipole moments along either the \(a\), \(b\), or \(c\) principal axis of the molecule. These are analogous to parallel and perpendicular transitions in linear molecules or STMs, where parallel transitions become \(a\) (\(c\))-axis transitions for prolate (oblate) molecules, and the other two axes play the role of perpendicular transitions. Each transition axis has its own selection rules, shown in Tab. 1, and closed cycling transitions for each transition type are shown in Fig. 14. The transition dipole moment \(\mu\) will in general have a projection onto each of the principal axes and inherit some of each of the selection rules (Augenbraun et al. (2020)). It is therefore advisable to choose molecules whose transition dipole moments are well aligned with the
Figure 14: Rotationally closed cycling transitions for \(a\), \(b\), and \(c\)-type bands in asymmetric top molecules, as described in the text. Dashed lines correspond to transitions that are allowed in ATMs with unquenched electronic angular momentum, which is expected to be true for molecular geometries near the symmetric-top limit. Reproduced from Augenbraun et al. (2020).
molecular axis to limit the number of rotational states that require repumping.
### Perturbations
We have thus far described the energy eigenstates of polyatomic molecules using a basis of well-defined quantum numbers. These, in turn, led to strict selection rules governing vibrational and rotational branching. In real molecules, however, there are mechanisms that perturbatively couple these basis states, e.g. via mixing of electronic and vibrational angular momentum. The result is an effect known as "intensity borrowing": nominally forbidden transitions become allowed because the energy eigenstates of the molecule contain a small admixture of basis states with different quantum numbers and/or symmetry. The effect is illustrated schematically in Fig. 15. While these effects are typically small (\(\sim 10^{-3}\) level or below), they can become important when forming cycling transitions capable of scattering many thousands of photons, or when certain vibronic levels of electronically excited states have energy gaps that are "accidentally" small.
While perturbations can take many forms and must in general be considered on a molecule-to-molecule basis, below we will discuss a few known effects for molecules presently being laser cooled or proposed for laser cooling.
Figure 15: Schematic illustration of how perturbations among electronically excited states can lead to an “intensity borrowing” effect that induces nominally forbidden transitions (or increases the intensities of transitions that were expected to be very weak).
The Renner-Teller (RT) effect describes vibronic mixing between the electronic angular momentum \(\Lambda\) and the vibrational angular momentum \(\ell\) in linear polyatomic molecules with degenerate vibrational modes. In particular, it allows \(\Lambda\) and \(\ell\) to change while conserving the total spinless angular momentum projection \(K=\Lambda+\ell\). The physical origin of this effect is that bending vibrations reduce the cylindrical symmetry of the molecule and can therefore break the degeneracy of the in-plane and out-of-plane electronic orbitals in states with \(\Lambda>0\). See Hirota (1985) for a detailed description of this interaction.
The Renner-Teller effect is responsible for vibrational branching that violates the \(\Delta\ell=0\) selection rule in linear polyatomic molecules, as previously studied in detail for CaOH, SrOH, BaOH, and YbOH (Baum et al. (2021); Zhang et al. (2021); Lasner et al. (2022); Kinsey-Nielsen et al. (1986)). In these molecules, there are two important types of RT-induced branching. The first is \(\Delta\ell=\pm 1\) branching enabled by first-order RT coupling, which mixes states according to the selection rule \(\Delta\Lambda=-\Delta\ell=\pm 1\). This enables direct vibronic coupling between the \(\widetilde{A}^{2}\Pi(000)\) and \(\widetilde{B}^{2}\Sigma^{+}(01^{1}0)\) states in alkaline earth monohydroxides, thereby allowing \(\widetilde{A}(000)\) to decay directly to \(\ell=1\) ground states (e.g. \(\widetilde{X}(01^{1}0)\)) via intensity borrowing from the \(\widetilde{B}(01^{1}0)\) state. Likewise, \(\widetilde{B}(000)\) can decay to \(\widetilde{X}(01^{1}0)\) via RT coupling with \(\widetilde{A}(01^{1}0)\). This coupling typically contributes at the \(\sim 10^{-3}\) to \(10^{-4}\) range in the alkaline earth monohydroxides (Baum et al. (2021); Zhang et al. (2021); Lasner et al. (2022)). A smaller effect mixes \(\widetilde{A}(000)\) and \(\widetilde{A}(01^{1}0)\) directly via contributions from both first-order RT and spin-orbit coupling, but it is not discussed further here. See Baum et al. (2021); Zhang et al. (2021) for more details.
The second effect of RT mixing is \(\Delta\ell=\pm 2\) branching induced by second-order RT coupling, which mixes states according to the selection rule \(\Delta\Lambda=-\Delta\ell=\pm 2\). This term directly couples \(\widetilde{A}^{2}\Pi(000)\) to \(\widetilde{A}^{2}\Pi(02^{2}0)\), as it can mix the \(|\Lambda=1,\ell=0\rangle\) and \(|\Lambda=-1,\ell=2\rangle\) components of the \(\widetilde{A}\) state. Observed decays to \(\widetilde{X}(02^{2}0)\) and \(\widetilde{X}(12^{2}0)\) in alkaline earth monohydroxides are attributed to this mechanism (Baum et al. (2021); Zhang et al. (2021); Lasner et al. (2022); Vilas et al. (2022)).
An analogous interaction, called the (pseudo-)Jahn-Teller (JT) effect, is possible in nonlinear symmetric top molecules. A detailed review of Jahn-Teller physics is provided by Barckholtz and Miller (1998). For the purposes of laser-coolable polyatomic molecules such as CaOCH\({}_{3}\) or YbOCH\({}_{3}\), we can regard the (pseudo-)JT effect in the \({}^{2}E\) state of a nonlinear molecule as analogous to the RT effect in a \({}^{2}\Pi\) linear molecule. This effect can lead to mixing between a \({}^{2}E\) and \({}^{2}A_{1}\) electronic state which alters the vibrational
structure of the \({}^{2}E\) state and changes the vibronic emission intensities associated with spontaneous emission from this state. In a molecule such as \(\mathrm{CaOCH_{3}}\) or \(\mathrm{YbOCH_{3}}\), the first electronically excited state (\(\tilde{A}\,^{2}\!E\)) is affected by mixing with the \(\tilde{B}\,^{2}\!A_{1}\) level. Especially due to second-order spin-orbit-vibronic coupling, this interaction can dramatically increase the intensity of decays to vibrational bending modes that would have been symmetry-forbidden within the BO approximation. Experimentally, these decays have been observed in \(\mathrm{CaOCH_{3}}\)(Augenbraun (2020)) and in \(\mathrm{YbOCH_{3}}\)(Augenbraun et al. (2021b)). In both cases, it was possible to model the intensity of decay to vibrational bending modes on the basis of quantum chemical predictions of the JT parameters provided by Paul et al. (2019). In both cases, the nominally symmetry-forbidden decays were significantly stronger than decays to symmetry-allowed levels of the same vibrational mode, directly indicating the role that vibronic coupling plays in this process.
These observations point to two important considerations in the selection of laser-coolable nonlinear molecules. First, one must consider beyond-BO-approximation effects when deciding whether simple estimates of FCFs are justified in selecting a molecule for future experimentation. Second, experimental measurements of vibrational branching ratios are crucial to identify weak decays that violate expectations based on molecular symmetry. See Sec. 2.4.3 for details on such measurements.
## 3 Experimental techniques
### Cryogenic buffer-gas beams
Almost all applications of laser-cooled molecules benefit from long interaction times, achieved either through a trap (enabling, in principle, arbitrarily long hold times) or a molecular beam propagating over a large distance (tens of centimeters to a few meters, enabling probe times of \(\sim\)1\(-\)100 ms). In both cases, a large flux of initially slow molecules is essential. In order to trap molecules, any motion at the time of production must be removed in a manner whose difficulty typically scales with the initial momentum or kinetic energy, whereas for a fixed beam line the interaction time is inversely proportional to the beam velocity.
An additional requirement for a practical molecular source is a low rotational temperature. Rotationally excited states of small molecules like CaF and SrOH obtain significant thermal population at temperatures of \(\sim\)1 K,
and the population in any single quantum state at room temperature is suppressed by orders of magnitude compared to the low-temperature limit. This problem grows exponentially for large asymmetric top molecules, with multiple rotational modes that have small rotational constants (owing to the large moments of inertia).
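To put a rough number on this suppression, one can evaluate the rigid-rotor partition function directly. The sketch below is a minimal estimate assuming a linear molecule with a rotational constant of roughly 10 GHz (a CaF/CaOH-like value chosen for illustration); it ignores spin, hyperfine, and vibrational structure.

```python
import numpy as np

h, kB = 6.626e-34, 1.381e-23    # Planck and Boltzmann constants (SI)
B = 10e9 * h                    # assumed rotational constant ~10 GHz, in joules

def fraction_in_N(N_target, T, N_max=2000):
    """Thermal fraction of a rigid linear rotor found in a single level N."""
    N = np.arange(N_max)
    E = B * N * (N + 1)                         # rigid-rotor energies
    w = (2 * N + 1) * np.exp(-E / (kB * T))     # degeneracy-weighted Boltzmann factors
    return w[N_target] / w.sum()

for T in (1.0, 4.0, 300.0):
    print(f"T = {T:6.1f} K: fraction in N=1 ~ {fraction_in_N(1, T):.2e}")
```

With these assumptions, the \(N=1\) fraction falls from tens of percent at a few kelvin to below one percent at room temperature; for heavy asymmetric tops with far smaller rotational constants the room-temperature dilution is correspondingly much more severe.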
Buffer-gas cooling in closed cells was developed by the atomic hydrogen community in the late 1970s/early 1980s. Early examples with molecules include the De Lucia group, which studied CO molecules (and, later, polyatomic molecules) in the presence of a \(\sim\!4\) K He buffer gas, as in Messer and De Lucia (1984); Willey et al. (1988); Mengel and De Lucia (2000). The Doyle group pioneered the use of buffer-gas cooling to load atoms and molecules into (superconducting) magnetic traps in Doyle et al. (1995); Weinstein et al. (1998); Campbell et al. (2007); Doret et al. (2009). The first molecular beam source generated from buffer-gas cells was developed in the Doyle group in collaboration with Prof. David DeMille, as described in Maxwell et al. (2005b). Hydrodynamic cryogenic buffer-gas beams (CBGBs) were created by Patterson and Doyle (2007) and developed by both the Doyle and DeMille groups and the Hinds/Tarbutt groups. Reviews by Hutzler et al. (2012); Barry et al. (2011) discuss the key features of such sources, which include low forward velocities and hydrodynamic enhancement effects that led to beams several orders of magnitude brighter than previous realizations.
Figure 16: A representative cryogenic buffer gas cell, modified from Mitra et al. (2020). Hot reagent CH\({}_{3}\)OH (methanol) gas is introduced via a capillary, and cold \({}^{4}\)He buffer gas is introduced via a second capillary. A pulsed Nd:YAG laser ablates a “target” of Ca metal, which reacts with methanol to form CaOCH\({}_{3}\) in the first-stage cell. Additional windows at the downstream end of the first-stage cell may be used to monitor molecular production via optical absorption measurements. The helium thermalizes the molecules and entrains them through a hole in the front of the cell. A second-stage cell, stood off from the first stage to lower the buffer gas density, reduces the forward velocity of the molecular beam. Buffer gas and molecules are extracted through the front of the second-stage cell.
A primary benefit of a CBGB (beyond the low velocity and high flux) is that it allows the experiment to separate molecule _production_ (occurring in a cryogenic region with poor optical access) from molecule _manipulation_ (usually occurring in a room-temperature, ultra-high vacuum region). The CBGB was a key technological development that enabled direct laser cooling of molecules, as supersonic beams produce velocities so high as to make the deceleration of molecules to trappable velocities extremely difficult.
A CBGB source typically consists of four nested regions. The innermost region is a "buffer gas cell," usually made of high-purity copper and cooled to \(\sim 1-4\) K. A representative example is shown in Fig. 16. There, stable reagent molecules (methanol) are introduced into the cell via a hot capillary, while Ca metal is ablated via a pulsed Nd:YAG laser (typically with \(\sim\)10\(-\)40 mJ/pulse). The ablated atoms and methanol gas react to form CaOCH\({}_{3}\). Simultaneously, He buffer gas is flowed through a second capillary that pre-cools the He gas before it enters the cell. Inside the cell, the He thermalizes with the cold walls of the cell and collisions between molecules and He atoms cool the molecules. The flow of helium entrains molecules through a hole in the front of the cell (typically \(\sim\)3-7 mm diameter). The helium and molecules then flow into a lower-density "second-stage cell" that is stood off from the first stage of the buffer gas cell with a small gap for buffer gas to escape. This lowers the buffer gas density in the second-stage cell and reduces the forward velocity of molecules, at the expense of some reduction in the molecular extraction (typically a factor of order unity). Molecules then emerge from the second cell with forward velocities in the range of \(\sim\)40\(-\)200 m/s, depending on the molecular mass, cell temperature, and buffer gas flow rate.
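For orientation, the quoted 40\(-\)200 m/s range can be bracketed by the standard effusive and fully hydrodynamic limits of a buffer-gas beam (see Hutzler et al. (2012)). The snippet below is a back-of-the-envelope sketch with assumed, representative numbers (a 4 K cell, helium carrier gas, and a 57 amu molecule), not a model of any specific source.

```python
import numpy as np

kB, amu = 1.381e-23, 1.661e-27
T_cell = 4.0              # assumed cell temperature (K)
m_He = 4.0 * amu          # helium buffer gas
m_mol = 57.0 * amu        # a CaOH-mass molecule, for illustration

# Fully hydrodynamic ("supersonic") limit: the beam is boosted to the terminal
# velocity of the monatomic carrier gas, v ~ sqrt(5 kB T / m_He).
v_hydro = np.sqrt(5 * kB * T_cell / m_He)

# Effusive limit: forward velocity of order the molecule's own thermal speed.
v_effusive = np.sqrt(2 * kB * T_cell / m_mol)

print(f"hydrodynamic limit ~ {v_hydro:.0f} m/s")    # ~200 m/s at 4 K
print(f"effusive limit     ~ {v_effusive:.0f} m/s") # ~35 m/s for 57 amu
```

Real CBGBs operate between these two limits, consistent with the 40\(-\)200 m/s range listed in Tab. 2.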
A number of variations on the buffer gas cell are used, depending on experimental requirements and the molecular species of interest. To produce cold beams of stable molecules such as ammonia or formaldehyde, no ablation is necessary (Buuren et al. (2009)). On the other hand, even complex radical
molecules can be produced directly from ablation of pressed-powder pellets (or "targets") of stable constituents, without reagent gases. For example, a mixture of SrH\({}_{2}\) and naphthol powders produces SrO-naphthyl radicals (Mitra et al. (2022)). Because of the versatility in molecular production methods, a wide variety of polyatomic molecules, including both radical (Augenbraun et al. (2020); Baum et al. (2020); Mitra et al. (2020); Zhu et al. (2022); Mitra et al. (2022)) and non-radical (Maxwell et al. (2005a); Buuren et al. (2009); Herschbach (2009); Sawyer et al. (2011); Eibenberger et al. (2017); Spaun et al. (2016); Satterthwaite et al. (2019); Patterson and Doyle (2013); Piskorski (2014)) species, can be created in a CBGB. Molecules as large as nile red (C\({}_{20}\)H\({}_{18}\)N\({}_{2}\)O\({}_{2}\)) have been produced and spectroscopically studied in a cryogenic buffer gas cell without forming helium-molecule clusters (Piskorski et al. (2014)).
Both helium and neon buffer gas are commonly used in CBGBs, with neon requiring cell temperatures above approximately 16 K to maintain a suitable vapor pressure. Depending on the requirements for molecular flux and the initial beam velocity, the second-stage cell may be omitted. The lowest forward velocities can be achieved by cooling the second stage with a He-3 pot at temperatures around or below 1 K (Augenbraun et al. (2021)). In that work, the heat loads arising from ablation and gas flows, in the range of tens to hundreds of mW, made it impractical to cool the first-stage cell using a He-3 refrigerator. Nevertheless, by cooling the second-stage alone, beams of Yb (Ca) with peak forward velocities as low as about 20 m/s (40 m/s) have been observed by Augenbraun (2020); Augenbraun et al. (2021); Sawaoka et al. (2022).
Surrounding the buffer gas cell is a cold box, usually constructed of high-purity copper and held at \(\sim\)4 K by a pulse tube cryocooler to serve as a cryopump. When helium buffer gas is used, charcoal sorbs are thermally anchored to the 4 K box in order to achieve adequate helium (cryo)pumping speeds; other buffer gases, such as neon, are efficiently cryopumped directly using copper surfaces held at 4 K.
The cryopumping box is contained within another box, typically made of aluminum or copper, held at \(\sim\)50 K by the first stage of a pulse tube cryocooler. This box shields the cryopumping box and buffer gas cell from room-temperature black body radiation. The radiation shields are housed within a vacuum chamber at \(\sim\)10\({}^{-7}\) Torr or better.
We summarize several important molecular beam parameters under typical conditions in Tab. 2. Experiments seeking to trap molecules usually operate toward the lower range of buffer gas flow rates, ablation energy, and ablation repetition rate, to enable lower temperatures and forward molecular beam velocities at the expense of beam brightness and experimental duty cycle.
\begin{table}
\begin{tabular}{|c|c|} \hline Parameter & Typical range \\ \hline \hline Forward velocity & 40\(-\)200 m/s \\ \hline Solid angle FWHM & 0.2\(-\)1 sr \\ \hline Rotational temperature & 1\(-\)4 K \\ \hline Brightness & 10\({}^{8}\)\(-\)10\({}^{11}\) sr\({}^{-1}\) pulse\({}^{-1}\) \\ \hline Ablation repetition rate & 1\(-\)50 Hz \\ \hline Ablation energy & 10\(-\)40 mJ \\ \hline Buffer gas flow rates & 2\(-\)40 sccm \\ \hline \end{tabular}
\end{table}
Table 2: Representative operating and performance parameters of molecular CBGBs.
### Optical cycling
The key ingredient to laser cooling is the near-closure of an optical cycle, so as to approximate an ideal two-level system, as depicted in Fig. 17. A molecule initially in its ground state absorbs a photon with energy \(\hbar\omega\), which excites the molecule to a higher quantum level and imparts a momentum recoil \(p_{\rm recoil}=\hbar k\). The molecular state subsequently spontaneously decays, ideally back to the ground state, emitting a photon in a random direction. By using optical cycling, the molecule's external motion can be controlled through various cooling schemes such as Doppler or polarization gradient cooling, \(\Lambda\)-enhanced grey molasses, etc. In real molecules, an optical cycle consists of many "ground" states (which are generally in the ground electronic state, but excited vibrationally or rotationally) and many excited states (which are electronically, vibrationally, and/or rotationally excited). Thus, any chosen pair of ground and excited states will fail to form a closed optical cycle.
However, by careful selection of a group of ground states and excited states (manifolds), a nearly-closed optical cycle can be formed. For example, in molecules like CaOH and SrOH, parity and angular momentum selection rules ensure that the excited \(J^{P}=1/2^{+}\) states in the lowest-lying excited electronic state \(\tilde{A}\,^{2}\Pi_{1/2}\) can _only_ decay to the \(N^{P}=1^{-}\) rotational manifolds in the ground electronic state \(\tilde{X}\,^{2}\Sigma^{+}\). Each \(J^{P}=1/2^{+}\) state contains four optically unresolved hyperfine levels, while each \(N^{P}=1^{-}\) manifold contains a \(J=1/2\) manifold and a \(J=3/2\) manifold, which are split from each other by spin-rotation splittings of \(\sim\)10\(-\)100 MHz that are easily spanned with optical frequency modulation techniques. These \(J=1/2\) and \(J=3/2\) manifolds contain 4 and 8 hyperfine states, respectively. This nearly-closed molecular cycling transition contains 12 ground and 4 excited states. A scheme to achieve a rotationally and vibrationally closed optical cycle (and eventually laser cooling) of a polyatomic molecule was first discussed by Kozyryev et al. (2015, 2016) using the example of SrOH.
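One practical consequence of this level counting is a ceiling on the photon scattering rate: for a saturated multilevel system, population is shared among all coupled levels, so \(R_{\rm sc,max}=\Gamma\,n_{e}/(n_{g}+n_{e})\) (see, e.g., Fitch and Tarbutt (2021)). The short sketch below applies this to the 12 ground and 4 excited states counted above; the excited-state lifetime used is an assumed illustrative value, not a number taken from this text.

```python
tau = 30e-9             # assumed excited-state lifetime (illustrative only)
Gamma = 1 / tau         # spontaneous decay rate, ~3.3e7 s^-1

n_ground, n_excited = 12, 4   # level counting for the cycling transition above

# With all transitions saturated, population is shared among all coupled levels,
# so at most n_e / (n_g + n_e) of the molecules are excited at any time.
R_max = Gamma * n_excited / (n_ground + n_excited)

print(f"Gamma ~ {Gamma:.1e} s^-1")
print(f"R_max ~ {R_max:.1e} s^-1 (= Gamma/4 for a 12+4 level system)")
```

The \(\sim\)1\(-\)2 MHz scattering rates quoted later for CaOH sit below this ceiling, as expected once finite laser intensity and repumping are taken into account.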
For any practical application of optical cycling or laser cooling, it is necessary to add repumping lasers to also address excited vibrational states of the ground electronic manifold, as when the molecule decays from the electronically excited state there are no selection rules on the vibrational quantum number. How many vibrational repumpers are required depends on the vibrational branching ratios (VBRs) and on the optical cycling scheme chosen, see Sec. 2.4.5. The exact optical cycling scheme required for a particular application depends on the details of the molecular structure. For example, the optical cycle used to laser cool the symmetric top molecule \(\mathrm{CaOCH}_{3}\) (with \(C_{3v}\) symmetry) is depicted in Mitra et al. (2020). Similarly, an overview of the optical cycling schemes required to address the rotational structure of asymmetric tops is given in Augenbraun et al. (2020).
Figure 17: Optical cycle for an ideal two-level system, reproduced from Augenbraun et al. (2020). Three phenomena repeat up to tens of thousands of times: (1) A photon is absorbed, driving a molecule from the ground to excited state and imparting a momentum recoil to the molecule, (2) subsequently, a photon is spontaneously emitted in a random direction, and (3) the molecule returns to its ground state.
### Optical forces
The radiative force on a molecule is determined by the momentum recoil, \(p_{\rm recoil}\), and the scattering rate \(\gamma\), as \(F_{\rm recoil}=p_{\rm recoil}\gamma\). Thus, to achieve large optical forces, it is important to maximize the scattering rate by judicious selection of optical cycling transitions and by saturating those transitions with the available laser power. The first demonstration of the radiation pressure force on a polyatomic molecule was performed with SrOH, with a single vibrational repumper, scattering \(\sim\)100 photons per molecule along a single direction orthogonal to the molecular beam propagation axis. In this experiment, Kozyryev et al. (2016) observed a resultant 0.2\({}^{\circ}\) deflection of the molecular beam. This work served as an initial proof of principle that polyatomic molecules could experience significant optical forces in a practical experimental configuration, and thus that direct laser cooling of polyatomic molecules should be possible. Extending optical cycling to \(\sim\)10\({}^{4}\) photon scattering events would allow for radiation pressure slowing of molecular beams and capture into a magneto-optical trap.
Figure 18: Comparison of molecular beam deflection via radiation pressure force (left) and bichromatic force (right) in SrOH, reproduced from Kozyryev et al. (2018). Under these conditions, the bichromatic force is greater by a factor of 3.7, as seen by the shift in the position of the molecular beam.
The magnitude of the radiation pressure force is determined by the spontaneous decay rate, \(\Gamma\), of the excited state. For a saturated two-level system, the force is \(F_{\rm recoil}=\hbar k\Gamma/2\). Larger optical forces can be applied using a coherent process, sometimes in combination with magnetic or electric field interactions, enabling many units \(p_{\rm recoil}\) of momentum transfer per photon scattered. This approach was demonstrated for SrOH using the bichromatic force, in which two phase-locked laser beams (at different frequencies) pass through the molecular beam transversely and are retroreflected. Due to the presence of two laser frequencies, beat notes are formed and by tuning the distance between the molecular beam and retroreflecting mirror, the relative phase of the counterpropagating beats can be controlled. The resulting light field induces alternating cycles of stimulated absorption of a photon travelling in one direction, and stimulated emission of a photon travelling in the opposite direction. For a fixed detuning \(\delta\) between frequency components, the maximum force is set by \(F_{\mathrm{BCF}}=\hbar k\delta/\pi\), which can vastly exceed \(F_{\mathrm{recoil}}\) at large \(\delta\). Bichromatic force deflection was demonstrated using SrOH by Kozyryev et al. (2018). Calculations show that by employing four laser frequencies in a four-level system, which consists of two coupled two-level subsystems in bichromatic force configurations, large optical molasses forces should also be possible, as simulated by Wenz et al. (2020).
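To get a sense of scale, the sketch below compares the saturated radiation-pressure force \(F_{\rm recoil}=\hbar k\Gamma/2\) with the ideal bichromatic force \(F_{\rm BCF}=\hbar k\delta/\pi\). The wavelength, linewidth, and detuning are assumed illustrative values, not the exact parameters of the SrOH experiments cited above.

```python
import numpy as np

hbar = 1.055e-34
lam = 688e-9                    # assumed SrOH A-X wavelength (approximate)
k = 2 * np.pi / lam

Gamma = 2 * np.pi * 7e6         # assumed linewidth (illustrative)
delta = 2 * np.pi * 50e6        # assumed bichromatic detuning (illustrative)

F_recoil = hbar * k * Gamma / 2       # saturated two-level radiation-pressure force
F_bcf = hbar * k * delta / np.pi      # ideal bichromatic force at detuning delta

print(f"F_recoil ~ {F_recoil:.2e} N")
print(f"F_BCF    ~ {F_bcf:.2e} N  (ratio ~ {F_bcf / F_recoil:.1f}x)")
```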
### Transverse cooling
A more efficient method of imparting forces to polyatomic molecules, for a fixed number of scattered photons, is Sisyphus cooling. Specifically, magnetically-assisted transverse 1D Sisyphus cooling has been demonstrated for SrOH, YbOH, and CaOCH\({}_{3}\)(Kozyryev et al. (2017); Augenbraun et al. (2020b); Mitra et al. (2020)). In all cases, the experimental configuration was very similar to that shown in Fig. 10, except that the lasers in the interaction region were partially overlapped with their retroreflections to form a standing wave. The bright molecular states (i.e., those addressed by the laser) are AC Stark shifted at the antinodes of the standing wave, but not at the nodes where the laser intensity vanishes. As a molecule in a bright state traverses the standing wave along the direction orthogonal to the primary molecular beam direction, it gains or loses kinetic energy, depending on whether the laser is red- or blue-detuned, respectively. To transversely cool a molecular beam, therefore, blue-detuned light is used. A molecule is most likely to be optically pumped where the laser intensity is high, and has a probability of order unity to populate a dark state, whose energy is unaffected by the laser light. As the molecule then traverses the region of the standing wave node, the bright and dark states come to near degeneracy and are mixed by magnetic fields, converting the dark state to a bright state with non-zero probability. The molecule is then free to "ride up" the potential hill again (in the blue-detuned configuration), losing more energy. The energy loss per photon scatter is limited only by the depth of the AC Stark shift, which is in turn limited by available laser power. At high laser intensities, therefore, Sisyphus cooling can be far more efficient per photon scatter at removing energy than conventional Doppler cooling.
Figure 19: Spatial distribution of a YbOH molecular beam along the transverse direction, in the presence of Sisyphus heating (\(\Delta=-1.8\Gamma\)) and cooling (\(\Delta=+1.8\Gamma\)). In the cooling configuration, the on-axis flux is increased compared to an unperturbed beam. The double-lobed structure in the heating configuration arises from a balancing effect between Sisyphus heating and conventional Doppler cooling. Reproduced from Augenbraun et al. (2020).
The effect of Sisyphus cooling (or heating) can be observed by the reduced (or increased) thermal expansion of the molecular beam along the standing wave axis, as it propagates far downstream. An example of the effect of Sisyphus cooling on YbOH molecules, reported by Augenbraun et al. (2020b), is shown in Fig. 19. The authors also demonstrated Doppler cooling of the YbOH molecular beam, although the Sisyphus laser cooling produced a colder sample and was more efficient at cooling molecules on a per-photon basis. In future experiments, Doppler or sub-Doppler cooling could be used to increase the flux in molecular beam experiments or the loading efficiency into magneto-optical traps, as advocated by Alauze et al. (2021).
### Molecular deceleration
Common trapping techniques used in laser cooling experiments generally have capture velocities that are considerably lower than the forward velocities of molecular beams, even those produced in CBGBs. Molecular MOTs, in particular, can typically only trap molecules with speeds below \(\sim\)15 m/s
(Tarbutt and Steimle (2015); Williams et al. (2017); Langin and DeMille (2022)), while the forward velocities of molecular beams produced by CBGBs fall in the range of \(\sim\)50 m/s to several hundred m/s (Hutzler et al. (2012)). Under certain optimized conditions, CBGBs with output velocities as low as 30\(-\)50 m/s have been observed for species such as CaF or CaOH; see Augenbraun et al. (2021); Lu (2014). For heavier species, like YbOH, peak beam velocities as low as 20 m/s can be achieved, as shown in Augenbraun (2020). As they travel from the beam source to the trap, the molecules must therefore be slowed to a velocity at or below the capture velocity of the chosen trapping method. Significant work has been devoted to the slowing of atomic beams since the initial demonstration of slowing a beam of sodium atoms in 1981 by Phillips and Metcalf (1982), and these efforts have helped guide recent work on slowing molecular beams.
Slowing of atoms is often based on the radiation pressure forces due to a laser beam counterpropagating to the flow of atoms in an atomic beam. This laser addresses a closed electronic transition to continuously scatter photons. Importantly, as the atomic beam is slowed, the transition frequency of the cycling transition is shifted due to a gradually changing Doppler shift. This Doppler shift must be compensated during slowing, and three principal methods to accomplish this have been demonstrated with atoms. Two of these techniques, white-light slowing (Zhu et al. (1991)) and chirped slowing (Ertmer et al. (1985)) have been successfully extended to diatomic molecules (Barry et al. (2012); Zhelyazkova et al. (2014); Hemmerling et al. (2016); Yeo et al. (2015); Truppe et al. (2017)). In white-light slowing, the slowing light is frequency-broadened to create "white light" that covers the full range of the Doppler shift, whereas in chirped slowing, the frequency of the slowing light is narrow-band but rapidly shifted, or "chirped", to match the changing Doppler shift as the particles decelerate. Zeeman slowing (Phillips and Metcalf (1982)) is one of the most efficient and popular techniques used for atoms. This method uses the Zeeman shifts induced by a spatially varying magnetic field to compensate for the changing Doppler shift, keeping the laser light resonant with atoms as they are slowed. The type of transitions typically used for laser cooling of molecules, namely transitions in which \(J^{\prime}\leq J\), makes it challenging to adapt Zeeman slowing to molecules, although this approach is potentially viable and being pursued by Petzold et al. (2018a,b). The review paper by Fitch and Tarbutt (2021) provides a comprehensive overview of laser deceleration of diatomic molecules.
#### 3.5.1 Radiative slowing
Due to the presence of many rotational and vibrational degrees of freedom, beams of polyatomic molecules are generally more difficult to decelerate than are atoms or diatomic molecules. Radiative slowing requires scattering of thousands of photons per atom/molecule, which is difficult to achieve for polyatomic molecules due to vibrational branching, as described previously. While sufficient optical cycling for radiative slowing has been achieved for diatomic molecules by repumping just 1 or 2 vibrational stretching modes, a much larger number of vibrational modes must be addressed for polyatomic molecules. By choosing to work with polyatomic molecules that have favorable VBRs, it is possible to use a reasonable number of laser wavelengths to scatter sufficiently many photons to achieve radiative slowing of a molecular beam, e.g., to the capture velocity of a magneto-optical trap. White-light slowing was demonstrated for CaOH molecules, as reported by Vilas et al. (2022) and reviewed in detail below.
In order to achieve radiative slowing of any species, a photon cycling scheme capable of scattering the required number of photons must first be established. Assuming a molecular mass of \(m\), an initial beam velocity of \(v_{\rm beam}\), and wavenumber \(k\) for the scattered photons, the number of photons needed for slowing is of order \(n_{\rm slowing}\sim v_{\rm beam}/v_{\rm recoil}=mv_{\rm beam}/\hbar k\). For smaller polyatomic molecules, typical values of \(n_{\rm slowing}\) are of order \(\sim\)10\({}^{4}\). Radiative slowing of a polyatomic molecule therefore will typically require repumping enough rovibrational decays to limit branching to dark states to the \(\sim\)10\({}^{-4}\) level. Considering CaOH and using the values \(m_{\rm CaOH}=57\) amu, \(v_{\rm beam}=140\) m/s, and \(k=2\pi/(626\) nm), the number of photons required for slowing is \(n_{\rm slowing}\sim\) 12,500. The electronic transition chosen for cycling in CaOH is the \(\widetilde{A}^{2}\Pi_{1/2}(000)(J^{\prime}=1/2,p^{\prime}=+)\leftarrow \widetilde{X}^{2}\Sigma^{+}(000)(N=1,p=-)\) transition, which is both rotationally closed and has favorable vibrational branching ratios: by repumping spontaneous decay to 11 rovibrational states, an average of \(\sim\)12,000 photons are scattered per molecule before a \(1/e\) fraction of the molecules has decayed to unaddressed dark states. In other words, about a \(1/e\) fraction of the molecules in the molecular beam will, in principle, scatter enough photons to be slowed to zero velocity. A diagram for the corresponding optical cycling scheme is shown in Fig. 11. In the slowing of CaOH molecules, all 11 transitions were addressed by separate repumping laser beams that were overlapped and coaligned with the main slowing laser beam counterpropagating to the molecular beam. This required combining a
total of 12 lasers of different wavelengths varying from 566 nm to 651 nm into a single beam, which was accomplished using a series of dichroic beamsplitters. The overlapped beams were passed through an electro-optic modulator (EOM) that produced a frequency-broadened spectrum able to address all velocity classes between about 0 m/s and \(\sim\)140 m/s (the initial velocity of the molecular beam).
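The stated photon budget follows directly from the recoil estimate \(n_{\rm slowing}\sim mv_{\rm beam}/\hbar k\); a quick numerical check using the CaOH values quoted above:

```python
import numpy as np

hbar, amu = 1.055e-34, 1.661e-27
m = 57 * amu              # CaOH mass
v_beam = 140.0            # initial beam velocity (m/s), as stated
k = 2 * np.pi / 626e-9    # cycling-transition wavenumber, as stated

v_recoil = hbar * k / m             # single-photon recoil velocity
n_slowing = v_beam / v_recoil       # photons needed to bring the molecule to rest

print(f"v_recoil  ~ {v_recoil * 1e3:.1f} mm/s")   # ~11 mm/s
print(f"n_slowing ~ {n_slowing:,.0f} photons")    # ~12,500
```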
The primary slowing force came from driving the \(\widetilde{A}^{2}\Pi_{1/2}(J^{\prime}=1/2,p^{\prime}=+)\leftarrow\widetilde{X}^{2 }\Sigma^{+}(N=1,p=-)\) transition. As will be discussed in Sec. 3.6, this is a type-II transition, characterized by the existence of dark states in the ground state manifold. For any chosen polarization of the slowing light, there exist dark states into which a molecule may be optically pumped; after such an optical pumping event, that molecule will stop scattering photons from the slowing light. These dark states must be destabilized for continued photon scattering during slowing. In the experiment described by Vilas et al. (2022), this was achieved by switching the polarization of the slowing light between two orthogonal polarizations. Crucially, the dark states of the two polarizations are distinct, so continued photon scattering can be achieved. The photon scattering rate achieved for CaOH molecules in the deceleration scheme was observed to be around 1\(-\)2 MHz, similar to that achieved for diatomic molecules.
Figure 20(a) shows a characteristic velocity profile for an unslowed beam of CaOH molecules, as well as the velocity profile of the molecular beam following slowing. While the population of molecules with initial velocities below 50 m/s is negligible, the slowed distribution shows significant accumulation of molecules at velocities as low as 10\(-\)20 m/s. Time-resolved measurements of the molecular beam with and without slowing light applied are shown in Fig. 20(b). Here, slowing of the molecular beam is evident from the late arrival of a significant portion of the slowed beam, as compared to the unslowed beam. Finally, we note that, while a large number of different laser frequencies was used to address the various vibrational repumping transitions, the total slowing power (\(\sim\)2.5 W) is similar to that used for diatomic molecules. This is because most of the repumping laser beams contained relatively low power (10\(-\)100 mW).
Figure 20: (a) Fraction of molecules detected as a function of velocity for both an unslowed (blue) and slowed (red) beam of CaOH molecules. Populations are detected by a Doppler-sensitive fluorescence signal. Inset: Accumulation of population at low velocities when slowing light is applied. (b) Time-resolved laser-induced fluorescence from unslowed (blue) and slowed (red) beams of CaOH molecules.
#### 3.5.2 Zeeman-Sisyphus deceleration
Despite the success of radiative slowing for CaOH molecules, the large number of photons that must be scattered off of each molecule to be slowed makes the method difficult to implement in general. Larger molecules, or molecules with more complex level structure, may have insufficiently diagonal FCFs to support such a photon budget. For this reason, it is desirable to develop other methods of molecular beam deceleration that do not depend on the radiation pressure force. The alkaline-earth-containing polyatomic molecules that we have focused on are polar radicals, implying that interactions with either electric or magnetic fields can lead to energy-level shifts that could be used to manipulate molecular motion.
One example of such a method that has been experimentally demonstrated for laser-coolable polyatomic molecules is Zeeman-Sisyphus (ZS) deceleration. In a ZS decelerator, originally proposed by Comparat (2014) and expanded on by Fitch and Tarbutt (2016), one leverages the large energy shifts induced by Tesla-scale magnetic fields. The principle of the ZS decelerator is depicted in Fig. 21. The deceleration scheme is highly reminiscent of the process used by Lu (2014) to slow and load CaF molecules into a magnetic trap. In brief, molecules in a weak-field-seeking (WFS) state are incident on a region of increasing magnetic field magnitude and decelerate as they climb the potential. Near the magnetic field maximum, the molecules are optically pumped through an electronically excited state to a strong-field-seeking (SFS) state and continue to decelerate as they exit the high-field region. In this way, an energy \(\Delta E_{\rm stage}\approx 2\mu_{B}{\cal B}_{\rm max}\) can be removed from molecules passing through each deceleration stage, where \(\mu_{B}\) is the Bohr magneton and \({\cal B}_{\rm max}\) is the maximum magnetic field in the high-field region. This process can be repeated to remove additional energy. Furthermore, the deceleration applies to all molecules regardless of their arrival time, and thus is effective for continuous (or long-pulsed) molecular beams. Because a fixed _energy_ is removed in each stage, the decelerator will bring to rest molecules with a \(1\,\mu_{B}\) magnetic moment, of any mass, provided they are produced at or below the same threshold temperature.
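The per-stage energy removal is simple to verify numerically. The sketch below evaluates \(\Delta E_{\rm stage}=2\mu_{B}\mathcal{B}_{\rm max}\) with the 2.8 T per-stage field quoted below; the entrance velocity is an assumed illustrative value, and the kinematics are idealized (no losses, perfect optical pumping).

```python
import numpy as np

muB, kB, amu = 9.274e-24, 1.381e-23, 1.661e-27
B_max = 2.8             # peak field per stage (T), as quoted below
n_stages = 2
m = 57 * amu            # CaOH
v_in = 50.0             # assumed entrance velocity (m/s), for illustration only

dE_stage = 2 * muB * B_max            # energy removed per WFS -> SFS stage
dE_total = n_stages * dE_stage

# Idealized kinematics: 1/2 m v_out^2 = 1/2 m v_in^2 - dE_total
v_out = np.sqrt(max(v_in**2 - 2 * dE_total / m, 0.0))

print(f"energy removed per stage ~ {dE_stage / kB:.1f} K")   # ~3.8 K
print(f"total (two stages)       ~ {dE_total / kB:.1f} K")
print(f"ideal slowing: {v_in:.0f} m/s -> {v_out:.0f} m/s")
```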
Figure 21: Overview of the Zeeman-Sisyphus deceleration scheme. Molecules enter the magnetic field region in a weak-field-seeking state and decelerate as they travel toward the peak magnetic field. At the peak magnetic field, molecules are optically pumped to a strong-field-seeking (SFS) state and continue to decelerate. Near the field minimum, molecules are pumped back to the weak-field-seeking (WFS) state and the process can be repeated for additional deceleration.
ZS deceleration was first tested experimentally using CaOH molecules by Augenbraun et al. (2021a). The experimental setup comprised a set of superconducting coils, shown in Fig. 22. Our particular decelerator comprises two magnets with \({\cal B}_{\rm max}\approx 2.8\) T, leading to \(\Delta E_{\rm stage}\approx 3.8\) K. The total energy removal for two stages (\(\sim\)7.6 K) is therefore well matched to CBGBs, which can have typical kinetic energies \(E_{\rm kin}\lesssim 8\) K. There is some overhead associated with superconducting coils (principally the use of a cryocooler). Nonetheless, the use of superconducting coils leads to a number of technical advantages as compared to permanent magnet designs. First and foremost, stronger magnetic fields can be achieved over a larger bore. The superconducting design is necessary to achieve peak fields of up to \(\mathcal{B}_{\text{max}}\approx 4\) T over bores a few cm in diameter. That is, a superconducting coil can simultaneously enable greater deceleration per stage and larger spatial acceptances. Second, the cryogenic apparatus required to support the superconducting coils naturally leads to excellent vacuum due to high-speed cryopumping. Third, superconducting coils can easily be designed with transverse optical access in order to drive laser transitions only at particular positions along the solenoids. Performing optical pumping in a spatially selective way eliminates concerns about "accidental" resonances along the slowing path (which could lead to population loss) and minimizes Zeeman broadening (which reduces the laser power requirements).
Figure 22: Schematic of the Zeeman-Sisyphus decelerator (not to scale). Molecules are produced in a two-stage cryogenic buffer-gas beam. They travel through two superconducting magnets in Helmholtz configuration and are optically pumped in three deceleration pumping regions (D1, D2, D3) by transverse laser beams at 626 nm. State-preparation regions (S1 and S2) pump molecules into WFS states in order to populate magnetically guidable states. Molecules are detected via laser-induced fluorescence.
The results of ZS deceleration of CaOH under optimal optical pumping performance in all pumping regions are shown in Fig. 23 (Augenbraun et al. (2021a)). Compared to the unperturbed molecular beam, when the optical pumping light is applied the fraction of slow molecules rises to 24(3)% below 20 m/s and 3.5(5)% below 10 m/s. The fraction of slow molecules is therefore enhanced by at least two orders of magnitude following deceleration. Based on the calibration of the fluorescence collection and the estimated number of molecules in the unperturbed beam, this means that approximately \(3\times 10^{4}\) molecules per pulse are found in velocity classes capturable by traps (e.g., MOT or magnetic). The solid lines in Fig. 23 are the results of Monte Carlo simulations that take as input experimentally measured laser parameters and accurate, three-dimensional magnetic field profiles for both the superconducting coils and the magnetic guide. We find excellent agreement between the simulations and experimental results, indicating that the details of the slowing process are modeled accurately.
Figure 23: CaOH velocity distributions with (blue circles) and without (black squares) Zeeman-Sisyphus deceleration applied. Also shown (orange diamonds) is the velocity distribution when all molecules are pumped into WFS states in region S1 as they enter the decelerator. Solid lines are the results of Monte Carlo trajectory simulations that take into account the three-dimensional field profile inside the decelerator.
Based on measurements of the optical pumping into higher-lying vibrational states, it was found that fewer than 10 photons were scattered per molecule in the optical pumping steps to decelerate a CaOH molecule near the peak of the distribution by \(\Delta v_{f}\approx 35\) m/s. By contrast, the radiative force due to 10 scattered photons would slow a CaOH molecule by just \(\Delta v_{f}\approx 0.1\) m/s. This clearly indicates the promise of ZS deceleration to slow molecules for which radiation pressure force slowing would be impractically difficult due to a limitation on the number of photon scattering events that can be realized. More recently, ZS deceleration has been extended to the complex polyatomic molecule YbOH by Sawaoka et al. (2022). Again, molecules with velocities below 20 m/s were produced.
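The per-photon comparison made above is easy to check: ten photon recoils on the CaOH cycling transition remove only about 0.1 m/s, whereas the ZS stages remove tens of m/s for a similar number of optical pumping events. A minimal check of that arithmetic:

```python
import numpy as np

hbar, amu = 1.055e-34, 1.661e-27
m = 57 * amu                    # CaOH
k = 2 * np.pi / 626e-9          # cycling-transition wavenumber

dv_radiative = 10 * hbar * k / m   # velocity removed by 10 radiative recoils
dv_zs = 35.0                       # velocity removed by ZS pumping (quoted above)

print(f"10 recoils: dv ~ {dv_radiative:.2f} m/s")   # ~0.11 m/s
print(f"ZS, <10 pumping photons: dv ~ {dv_zs:.0f} m/s "
      f"(~{dv_zs / dv_radiative:.0f}x more per photon budget)")
```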
### Magneto-optical trapping
The magneto-optical trap (MOT) is an essential tool in atomic physics and, in particular, a key ingredient for producing ultracold samples of atoms with sufficiently high numbers (and densities) to be useful for subsequent science experiments. In brief, MOTs are based on radiation forces from three orthogonal pairs of counter-propagating laser beams, combined with a quadrupole magnetic field that tunes the lasers into or out of resonance depending on position (Phillips (1998); Chu (1998)). By choosing the polarization appropriately, the trapped particles can be made to scatter photons preferentially from lasers that push them toward the center of the trap. This results in simultaneous trapping and rapid cooling to temperatures near the Doppler limit. An essential challenge in realizing MOTs for molecules is that they rely on the ability to cycle a large number of photons, typically at scattering rates of at least \(\sim\)100 kHz. Nonetheless, over the past decade magneto-optical trapping has been achieved for a number of diatomic molecules. In this section, we describe recent developments in extending magneto-optical trapping to polyatomic molecules, particularly the recent demonstration of a MOT for CaOH. The following discussion is based on the same experiment that was discussed in Sec. 3.5.1 on radiative slowing of CaOH.
The CaOH MOT used the same rotationally-closed cycling transition as was used for laser slowing, i.e., \(\widetilde{A}^{2}\Pi_{1/2}(000)(J^{\prime}=1/2,p^{\prime}=+)\leftarrow \widetilde{X}^{2}\Sigma^{+}(000)(N=1,p=-)\). The photon cycling scheme discussed in Sec. 3.5.1 allows CaOH molecules to scatter \(\sim\)12,000 photons in the MOT before \(1/e\) of the population remains in bright states. In the experiment, the 11 repumping lasers that were used for slowing also covered the volume of the MOT; the same laser beams were used for both slowing and the MOT. Most of the repumping lasers used to achieve radiative slowing and a MOT for CaOH did not require much laser power, see Fig. 25(b). While laser power requirements in laser cooling experiments with polyatomic molecules will generally depend on the exact photon cycling scheme used for the given molecule, the observed powers needed for CaOH provide a preliminary indication of the technical requirements for laser systems in laser cooling of polyatomic molecules.
Coupling of the electron spin and rotational degrees of freedom splits the \(\widetilde{X}^{2}\Sigma^{+}(000)(N=1)\) level into two spin-rotation (SR) components spaced by \(\sim\)52 MHz, with total angular momenta of \(J=1/2\) and \(J=3/2\). To address both SR components in the MOT, the MOT laser light includes two frequency
components separated by the SR splitting, as shown in Fig. 24(a). The small hyperfine splittings in \(\widetilde{X}^{2}\Sigma^{+}(000)(N=1)\) of \(\sim\)1 MHz were unresolved by the MOT light, so additional laser frequencies were not required to address hyperfine sublevels.
Figure 24: Overview of the CaOH laser cooling and RF MOT scheme. (a) The level structure of the main cycling transition used in the CaOH MOT, and the polarization configuration of the two frequency components of the MOT light. Since the \(J=1/2\) and \(J=3/2\) components have \(g\)-factors of opposite sign, each component is addressed with opposite polarization. Reproduced from Vilas et al. (2022). (b) The magnetic sublevels of the \(J=3/2\gets J^{\prime}=1/2\) transition for the two configurations of the RF MOT. Magnetic dark states of each configuration are indicated.
The transitions addressed in the CaOH MOT, \(J^{\prime}=1/2\gets J=1/2\) and \(J^{\prime}=1/2\gets J=3/2\), are both type-II transitions, characterized by \(J^{\prime}\leq J\). A defining characteristic of type-II MOTs is the presence of magnetic dark states, which arise because the ground state has more sublevels than the excited state. (A detailed description of the physics of type-II MOTs, and their comparison to type-I MOTs, was given by Tarbutt (2015); Tarbutt and Steimle (2015).) As described in Sec. 3.5.1, these dark states must be destabilized in order to generate trapping forces in the MOT, which derive from repeated photon scattering. A common strategy to achieve this dark-state remixing is to rapidly switch the polarization of the MOT light while simultaneously alternating the direction of the magnetic field gradient. The switching occurs at a rate similar to the rate at which molecules pump into dark states, typically \(1-2\) MHz, leading to the moniker radio-frequency (RF) MOT (Norrgard et al. (2016)). Figure 24(b) illustrates the situation for CaOH, showing how the RF switching destabilizes dark states to provide a trapping force and allows for continuous scattering from the laser beams that provide a restoring force toward the center of the MOT. A separate approach, known as a dual-frequency MOT, has been used for diatomic molecules and is likely to be possible for polyatomic molecules (Tarbutt and Steimle (2015); Truppe et al. (2017b); Ding et al. (2020)).
Figure 25: (a) Fluorescence signal from the CaOH MOT as a function of time after the molecules were produced. In the three curves (I), (II), and (III), different repumpers were turned off at \(\sim\)50 ms to limit the average number of scattered photons per molecule to 1200, 4600, and 12,000, respectively. Dotted lines are exponential fits to extract lifetimes, of 2.60(3) ms, 10.1(2) ms, and 25.7(6) ms for (I)-(III). Inset: MOT lifetime as a function of MOT beam power. (b) Laser power required by each of the repumping lasers (denoted by the ground vibrational state addressed) used in the CaOH photon cycling scheme. Reproduced from Vilas et al. (2022).
Vilas et al. (2022) produced an RF MOT of CaOH molecules containing \(2.0(5)\times 10^{4}\) molecules trapped at a peak density of \(n=3.0(8)\times 10^{8}\) cm\({}^{-3}\). The temperature of the molecules (\(T=870(50)\)\(\mu\)K), the damping constant (\(\beta=455(85)\) s\({}^{-1}\)), and the oscillation frequency (\(\omega=2\pi\times 59(4)\) Hz) were all comparable to values characteristic of MOTs of diatomic molecules. The lifetime of the CaOH MOT was limited by photon scattering causing population to accumulate in rovibrational states that are not addressed by the photon cycling scheme, as shown in Fig. 24(a). The maximum achieved lifetime was \(\sim\)150 ms, similar to the timescale observed for diatomic molecules but much shorter than the lifetimes that can be realized in atomic MOTs. The characteristic damping time of the CaOH MOT (the time to compress the captured cloud of molecules) was an order of magnitude shorter than the lifetime, enabling full cooling and compression before the trapped molecules were lost to vibrational dark states.
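The separation of timescales asserted above can be read off directly from the quoted MOT parameters, taking \(1/\beta\) as a rough proxy for the velocity-damping (compression) time:

```python
import math

beta = 455.0               # damping constant (1/s), from the text
omega = 2 * math.pi * 59.0 # trap oscillation frequency (rad/s), from the text
lifetime = 0.150           # maximum MOT lifetime (s), from the text

tau_damp = 1 / beta        # rough proxy for the velocity-damping time
T_osc = 2 * math.pi / omega

print(f"damping time       ~ {tau_damp * 1e3:.1f} ms")
print(f"oscillation period ~ {T_osc * 1e3:.1f} ms")
print(f"MOT lifetime       ~ {lifetime * 1e3:.0f} ms ({lifetime / tau_damp:.0f}x the damping time)")
```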
The temperature of the CaOH MOT was relatively high compared to the Doppler limit (around 150 \(\mu\)K), a situation that is common in type-II MOTs. This prevents direct loading from the MOT into conservative traps, such as optical traps, which have relatively low trap depths. The elevated temperature arises from polarization-gradient forces due to the presence of dark states in type-II transitions; we discuss these forces in more detail in Sec. 3.7.
### Sub-Doppler cooling
As discussed in Sec. 3.6, dark states present in the optical cycling scheme of molecular MOTs can lead to relatively high temperatures for the trapped molecules. A variety of techniques have been developed to cool the molecules to lower temperatures, and many of these have been applied to polyatomic molecules. We discuss these methods here, using CaOH as a representative test case.
#### 3.7.1 Grey molasses
As mentioned previously, the temperature limitation in the MOT is due to a Sisyphus-like heating effect. This can be turned into a cooling effect by detuning the cooling light to the blue of resonance. Because photon cycling is achieved on \(F\to F\) and \(F\to F-1\) transitions, the ground state manifold contains both bright and dark states. When the light is blue-detuned, the bright states are shifted above the dark states and are sinusoidally modulated by the AC Stark shift, as shown in Fig. 26(a). An atom or molecule in the ground state moves across this spatially varying potential, undergoing motional coupling from the dark state to the bright state. It then climbs the potential hill and is driven to the excited state, from which it decays back into a dark state. This cycle repeats, each time removing an energy proportional to the amplitude of the AC Stark modulation. This is commonly referred to as grey molasses cooling, as it is a mix of a bright and a dark molasses.
Grey molasses cooling was first successfully demonstrated in molecules by Truppe et al. (2017c), who focused on the molecule CaF. This cooling method was extended to polyatomic molecules by Vilas et al. (2022), who used CaOH. In the CaOH experiment, it was found that optimal cooling occurs at a detuning of 13 MHz (see Fig. 26(b)). The temperature increases for higher detuning due to the cooling light becoming red-detuned relative to the \(J=1/2\) state. The observed (exponential) timescale for cooling is less than about 0.5 ms, indicating that the cooling is relatively rapid.
#### 3.7.2 \(\Lambda\)-enhanced grey molasses
Standard grey molasses is often limited to temperatures \(\sim\)100 \(\mu\)K and more advanced schemes are required to reach lower temperatures. Two such techniques that have been successfully demonstrated in ultracold molecules are \(\Lambda\)-enhanced grey molasses (Cheuk et al. (2018); Langin et al. (2021)) and single frequency (SF) cooling (Caldwell et al. (2019)). Both techniques rely on creating velocity-dependent dark states.
Figure 26: (a) Illustration of the grey molasses effect. Molecules are optically pumped into dark states near the antinodes of an intensity standing wave and then transit from dark to bright states near the standing wave’s nodes. (b) Grey molasses cooling of CaOH. Top: Temperature as a function of cooling duration. Bottom: Temperature as a function of sub-Doppler detuning \(\Delta_{\mathrm{SD}}\). Reproduced from Vilas et al. (2022).
\(\Lambda\)-enhanced grey molasses cooling combines grey molasses with velocity selective coherent population trapping (VSCPT) (Aspect et al. (1988)). The VSCPT mechanism relies on the creation of coherent dark states present in multi-level systems, as shown in Fig. 27(a). With counter-propagating circularly polarized laser beams, a superposition of two states (called \(|a\rangle\) and \(|b\rangle\) for generality) can be formed where the transition amplitudes destructively interfere to form a dark state. The dark state does not persist to large non-zero velocities because, in the frame of a moving particle, the two beams are no longer at the same frequency (due to opposite Doppler shifts). While VSCPT cooling can reach sub-recoil temperatures in atoms, it is slow and inefficient, relying on random walks to cool toward zero velocity. By combining VSCPT cooling with grey molasses, the cooling no longer relies solely on a random walk, as the grey molasses provides a restoring force towards zero velocity and the VSCPT effect traps the atoms or molecules near zero velocity. However, the presence of grey molasses forces reduces the extent to which the zero-velocity state is "dark," raising the minimum temperature achievable by this type of cooling. This \(\Lambda\)-enhanced grey molasses cooling method was first demonstrated on the D1 lines of alkali atoms (Grier et al. (2013)) and later in CaF molecules (Cheuk et al. (2018)).
Figure 27: (a) Simple picture of the level structure used in \(\Lambda\)-enhanced grey molasses cooling. Ground states \(|a\rangle\) and \(|b\rangle\) are coupled to excited state \(|e\rangle\) with single-photon detuning \(\Delta\) and two-photon detuning \(\delta\). (b) Temperature vs. two-photon detuning and cooling intensity for \(\Lambda\)-enhanced grey molasses cooling of CaOH. (c) Level scheme used in \(\Lambda\)-enhanced grey molasses and single-frequency cooling. \(\Lambda\)-enhanced cooling uses both frequencies, while only laser (I) is used for single-frequency cooling. (d) Temperature vs. detuning for single-frequency cooling of CaOH. Reproduced from Cheuk et al. (2018) and Hallas et al. (2022).
\(\Lambda\)-enhanced grey molasses cooling can be naturally extended to polyatomic molecules. In CaOH, this was achieved by coupling two hyperfine levels in the \(\widetilde{X}^{2}\Sigma^{+}(N=1)\) manifold to the \(\widetilde{A}^{2}\Pi_{1/2}(J^{\prime}=1/2)\) excited state (Hallas et al. (2022)). Specifically, the \(\widetilde{X}^{2}\Sigma^{+}(J=3/2,F=2)\) and \(\widetilde{X}^{2}\Sigma^{+}(J=1/2)\) levels are addressed with \(\sigma^{-}\) and \(\sigma^{+}\) polarization, respectively. Both frequency components are blue-detuned by a common amount \(\Delta_{\rm SD}\) of about 12 MHz, and the two-photon detuning \(\delta\) is varied.
The dependence of \(\Lambda\)-enhanced grey molasses cooling on both \(\delta\) and \(I_{\rm SD}\) is shown in Fig. 27(b). The lowest measured temperature, \(T_{\rm min}=34\)\(\mu\)K, occurs at \(\delta\approx 0\) MHz. A second, slightly higher local temperature minimum is observed at \(\delta\approx 1.5\) MHz, which corresponds to the two-photon resonance for the \(\Lambda\)-system consisting of \(\widetilde{X}^{2}\Sigma^{+}(J=3/2,F=1)\) and \(\widetilde{X}^{2}\Sigma^{+}(J=1/2)\). At higher intensities, the temperature is minimized at increasingly negative \(\delta\) because of the AC Stark shifts of the hyperfine levels coupled in the \(\Lambda\)-enhanced grey molasses: for higher intensities, the levels move further apart, and a more negative \(\delta\) is required to satisfy the two-photon resonance condition.
#### 3.7.3 Single-frequency cooling
A primary limitation of \(\Lambda\)-enhanced grey molasses cooling is the fact that the dark states are destabilized by off-resonant scattering from the two laser frequencies interacting with nearby states. Single-frequency cooling solves this by creating dark states with a single laser frequency at large detuning, reducing this effect. This cooling method was first demonstrated for molecules by Caldwell et al. (2019), who used CaF molecules in their experiment.
Single-frequency cooling can also be implemented in polyatomic molecules, as was shown with CaOH by Hallas et al. (2022). By applying light blue-detuned by an amount \(\Delta_{\rm SD}\) (about 70 MHz) from the \(\widetilde{A}^{\,2}\Pi_{1/2}(J=1/2)\leftarrow\widetilde{X}^{\,2}\Sigma^{+}(N=1,J=1/2)\) transition, a minimum temperature \(T_{\rm min}=20\)\(\mu\)K was realized (Fig. 27(c-d)). The cooling was observed to be insensitive to detuning above a certain value (\(\Delta_{\rm SD}\approx 70\) MHz), as shown in Fig. 27(d). This insensitivity is beneficial for cooling molecules into an ODT, where trap-induced light
shifts can affect the cooling efficiency.
### Optical trapping
Due to the availability of high-power fiber lasers, optical trapping has become a popular method to trap ultracold atoms and diatomic molecules. Optical trapping has several key advantages, including the ability to trap atoms and molecules irrespective of their internal state, and the ability to greatly increase phase space density due to the small trap volume. However, laser cooling of molecules inside an optical trap can be hindered by a variety of effects, including differential AC Stark shifts. Optical dipole traps consist of tightly focused Gaussian beams, which, in combination with the induced Stark shift on the molecules, create an approximately harmonic confining potential for the molecules. This potential is wavelength-dependent, and the dependence can be quite complicated due to the many levels present in molecules. In the limit of large detuning from an electronic transition, the strength of the trapping potential is inversely proportional to the detuning. Caldwell and Tarbutt (2020) provide details on calculating Stark shifts for molecules in trapping laser fields. The trap depth as a function of wavelength for the polyatomic molecule CaOH, in the large-detuning limit, is shown in Fig. 28(a).
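For orientation only, the familiar two-level, far-detuned expression \(U_{0}=\frac{3\pi c^{2}}{2\omega_{0}^{3}}\frac{\Gamma}{\Delta}I\) already captures the \(1/\Delta\) scaling mentioned above. The sketch below evaluates it with the beam parameters quoted in Fig. 28 and an assumed single effective transition and linewidth; a quantitative molecular calculation must sum over many transitions, as described by Caldwell and Tarbutt (2020).

```python
import numpy as np

c, kB = 2.998e8, 1.381e-23

lam0 = 626e-9                 # assumed dominant transition wavelength (CaOH-like)
lam_trap = 1064e-9            # trap laser wavelength
Gamma = 2 * np.pi * 5e6       # assumed effective linewidth (illustrative)

P, w0 = 13.0, 25e-6           # 13 W focused to a 25 um waist, as in Fig. 28
I0 = 2 * P / (np.pi * w0**2)  # peak intensity of a Gaussian beam

omega0 = 2 * np.pi * c / lam0
omega_L = 2 * np.pi * c / lam_trap
Delta = omega_L - omega0      # detuning (negative => red-detuned, attractive)

# Far-detuned, rotating-wave form; counter-rotating and multilevel terms neglected.
U0 = (3 * np.pi * c**2 / (2 * omega0**3)) * (Gamma / Delta) * I0

print(f"peak intensity ~ {I0:.2e} W/m^2")
print(f"trap depth     ~ {abs(U0) / kB * 1e6:.0f} uK (order of magnitude only)")
```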
Figure 28: (a) Trap depth vs. wavelength for an optical dipole trap (ODT) of CaOH generated from a 13 W laser with a 25 \(\mu\)m waist. (b) Loading of CaOH molecules into an ODT. Reproduced from Hallas et al. (2022).
Optical trapping of directly laser-cooled molecules was first demonstrated with CaF (Anderegg et al. (2018)). To efficiently load molecules into the optical dipole trap, the molecules must be cooled into the trap. This was accomplished with CaF by overlapping a far-detuned 1064 nm laser with the cloud of molecules during grey molasses cooling. As the molecules traversed the optical trapping light, the grey molasses cooled the molecules, loading them into the trap and increasing their density. Using \(\Lambda\)-enhanced grey molasses, the transfer efficiency into the ODT was greatly improved (Cheuk et al. (2018)).
Optical trapping of CaOH was demonstrated in much the same fashion (Hallas et al. (2022)). Figure 28(b) shows the loading of CaOH molecules into an ODT from the single-frequency grey molasses cloud. ODT loading is relatively inefficient, typically transferring only \(1-10\%\) of the molecules from the molasses. This could be improved by employing new cooling and trapping techniques, such as blue-detuned optical traps (Lu et al. (2022)) and blue-detuned MOTs (Jarvis et al. (2018)), where molecules may be directly loaded from the MOT. Recent work by Burau et al. (2022) has demonstrated the advantages of a blue-detuned MOT for the diatomic radical YO. The additional substructure present in polyatomic species has not been found to hinder the optical trapping process in any significant way. Generically, however, the increased density of states in larger polyatomic molecules may increase the likelihood of unwanted excitations, e.g. due to "accidental" resonances or via Raman or multi-photon processes.
Along with trapping of CaOH molecules in the \(\widetilde{X}(000)\) vibrational ground state, Hallas et al. (2022) demonstrated trapping of CaOH in the excited \(\widetilde{X}(010)\) bending vibrational state and the \(\widetilde{X}(100)\) stretching vibrational level. Trapped molecules were optically pumped into these states by applying a single-frequency molasses while turning off the corresponding vibrational repumping laser. For the \(\widetilde{X}(010)\) bending mode the optical pumping requires 1200 photon scattering events before the average molecule vibrationally decays, corresponding to an optical pumping timescale of 23 ms (Fig. 29(a)). Optical pumping into the \(\widetilde{X}(100)\) stretching mode was much faster due to the large branching ratio to this state (\(\sim\)5%). Fig. 29(b) shows measurements of the lifetime of CaOH molecules trapped in each of the \(\widetilde{X}(000)\), \(\widetilde{X}(010)\), and \(\widetilde{X}(100)\) states. It was found that the ground state lifetime was limited primarily by room-temperature blackbody excitation to excited vibrational levels and by imperfect vacuum, while the excited state lifetimes were shorter due to spontaneous radiative decay back to the vibrational ground state (Hallas et al. (2022)). The lifetimes of all three states could be improved by cooling the surrounding environment to reduce blackbody radiation-induced losses.

Figure 29: (a) Optical pumping of optically trapped CaOH molecules into the \(\widetilde{X}(010)\) bending vibrational mode via single frequency cooling with the \(\widetilde{X}(010)\) repumping laser removed. (b) Lifetime of optically trapped CaOH molecules in the \(\widetilde{X}(000)\), \(\widetilde{X}(010)\), and \(\widetilde{X}(100)\) vibrational levels. The solid curves are fits to a rate equation model capturing blackbody excitation and radiative decay along with vacuum losses. Reproduced from Hallas et al. (2022)
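A minimal sketch of the kind of rate-equation model used to fit the lifetimes in Fig. 29(b) is shown below, with blackbody excitation out of \(\widetilde{X}(000)\), radiative decay of the excited vibrational levels back to the ground state, and a common vacuum-limited loss. The rate constants are placeholders chosen only to illustrate the structure of the model, not the values fitted by Hallas et al. (2022).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Three-level rate-equation sketch: blackbody pumping X(000)->X(010)/X(100),
# radiative decay back to X(000), and a common vacuum loss. All rates are
# placeholders, not the fitted values from the experiment.

G_bbr_010 = 1.0 / 1.0    # assumed blackbody pumping rate X(000)->X(010) [1/s]
G_bbr_100 = 1.0 / 2.0    # assumed blackbody pumping rate X(000)->X(100) [1/s]
G_rad_010 = 1.0 / 0.5    # assumed radiative decay rate X(010)->X(000) [1/s]
G_rad_100 = 1.0 / 0.2    # assumed radiative decay rate X(100)->X(000) [1/s]
G_vac = 1.0 / 10.0       # assumed vacuum loss rate, common to all states [1/s]

def rates(t, n):
    n000, n010, n100 = n
    dn000 = -(G_bbr_010 + G_bbr_100 + G_vac) * n000 + G_rad_010 * n010 + G_rad_100 * n100
    dn010 = G_bbr_010 * n000 - (G_rad_010 + G_vac) * n010
    dn100 = G_bbr_100 * n000 - (G_rad_100 + G_vac) * n100
    return [dn000, dn010, dn100]

t = np.linspace(0.0, 5.0, 200)
sol = solve_ivp(rates, (0.0, 5.0), [1.0, 0.0, 0.0], t_eval=t)   # start in X(000)
print("X(000) population after 1 s:", np.interp(1.0, sol.t, sol.y[0]))
```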
### Preparation and coherent control of single quantum states
Following sub-Doppler cooling and ODT loading, the trapped molecular population is distributed over multiple hyperfine sublevels. The many internal states present in polyatomic molecules complicate the task of transferring this population into a single quantum state. An optical pumping sequence for CaOH, shown in Fig. 30(a), is used to populate a single quantum state in the \(\widetilde{X}\,^{2}\Sigma^{+}(010)(N=1^{-})\) vibrational bending mode. This bending mode is of interest because it is precisely the state proposed for use in various quantum computation or precision measurement experiments (Kozyryev and Hutzler (2017); Yu et al. (2019)). Following optical pumping into the bending mode as described in the previous section, the molecular population is spread across twelve hyperfine states. To prepare the molecules in a single quantum state, a microwave-optical pumping sequence is employed; microwave transitions allow hyperfine splittings below the linewidth of optical transitions to be resolved, while optical excitation provides the dissipation necessary for single state preparation. In CaOH, molecules can be prepared in the \((N=1,J=1/2^{-},F=0)\) state using the following sequence. Microwaves are first used to drive population from the \((N=1,J=3/2^{-})\) state to the \((N=2,J=3/2^{-})\) level. A small electric field is applied to mix states of opposite parity and thereby lend transition strength to this nominally forbidden transition. An optical transition then drives population from \((N=2,J=3/2^{-})\) to the excited \(\widetilde{A}(010)\kappa^{2}\Sigma^{(-)}(J=1/2^{+})\) electronic
state, which decays to the \(F=0\) (the target state) and \(F=1\) levels of the (\(N=1,J=1/2^{+}\)) manifold. This sequence is then repeated, but with the microwaves driving population in (\(N=1,J=1/2^{+},F=1\)) to (\(N=2,J=3/2^{-}\)), where molecules are again optically excited and pumped into the target \(F=0\) state. Spectroscopy scans of the (\(N=1,J=1/2^{+}\)), \(F=0\) and \(F=1\) states before and after optical pumping are plotted in Fig. 30(b), showing that the population of the \(F=0\) state is greatly enhanced. The population in the \(F=0\) state can then be transferred to the desired target state and any remaining molecules that were not successfully transferred can be pushed out of the trap using resonant laser light. Rabi oscillations between states can also be driven, as shown in Fig. 30(c). In CaOH, hyperfine splittings are approximately 1 MHz, meaning that Rabi frequencies \(<\)1 MHz are necessary in order to avoid off-resonant excitation.
Figure 30: Demonstrations of coherent control of CaOH molecules. (a) Level diagram showing the microwave/optical pumping steps used to transfer population into a single quantum state. (b) Microwave spectroscopy showing \(F=0\) and \(F=1\) population before and after optical pumping into \(F=0\). (c) Coherent Rabi oscillations driven with 40 GHz microwaves between the \(N=1\) and \(N=2\) levels of optically trapped CaOH molecules in the \(\widetilde{X}(010)\) bending mode. Panel (a) is adapted from Anderegg et al. (2023).
## 4 Outlook and challenges
### Toward larger molecules
One of the underlying trends in the quest to laser cool polyatomic molecules has been a drive to control increasingly large and complex molecules. Nearly from the first proposals to laser cool polyatomic molecules, by both Isaev and Berger (2016) and Kozyryev et al. (2016a), authors were identifying molecules with five or more atoms that appeared to have FCFs sufficiently diagonal to admit direct laser cooling. These proposals relied critically on the concept eventually dubbed an "optical cycling center," such as the MO (M=Ca,Sr,Yb, etc.) moiety that has formed the core of the laser cooling experiments described throughout this review.
More recently, theoretical work has identified an even wider range of aromatic molecules that can be adorned with optical cycling centers, including phenols (and derivatives; Dickerson et al. (2021a); Ivanov et al. (2020a)), polycyclic arenes (Dickerson et al. (2021b)), and fully saturated hydrocarbons (Dickerson et al. (2022)). Remarkably, Dickerson et al. (2021a) were even able to show that substitutions around a cyclic hydrocarbon could be used to _tune_ the FCFs of the metal-centered excitations using simple principles from organic chemistry--a capability that is only possible in large polyatomic species. Experimental verification of these theoretical predictions has been obtained by Zhu et al. (2022); Mitra et al. (2022); Augenbraun et al. (2022); Lao et al. (2022), who have synthesized both phenol and naphthol derivatives adorned with Ca- and Sr-based optical cycling centers and shown that these molecules indeed have the properties desired of laser-coolable species (namely diagonal FCFs and localized metal-centered electronic excitations).
Figure 31 compares the DLIF spectra for three Ca-containing molecules of increasing complexity: CaOH, CaOCH\({}_{3}\), and CaOC\({}_{10}\)H\({}_{7}\) (based on measurements reported by Zhang et al. (2021); Augenbraun (2020); Mitra et al. (2022)). Despite the fact that CaOC\({}_{10}\)H\({}_{7}\) contains over an order of magnitude more vibrational modes than CaOH, the gross structure of their DLIF spectra is largely similar. In all cases, the Ca-O stretching mode is the dominant off-diagonal decay channel. CaOC\({}_{10}\)H\({}_{7}\) shows some activity in a handful of additional modes at the \(\sim\!0.1-1\%\) level, indicating that achieving optical cycling will be more challenging, but not necessarily prohibitive.
The laser cooling and full quantum control of larger molecules (containing a dozen or more atoms) is at the very frontier of the field, so much so that ideas of what to do with them are just beginning to be explored. Fundamentally, the larger the number of atoms in the molecule, the larger the number of vibrational modes and hyperfine states. The concept of "internal motion" also starts to enter, e.g. a spinning ligand. Such modes can naturally be used to store quantum information, but exactly how this can be done and how useful this will be has not been fully explored. With a large enough molecule, one may be able to completely separate the laser cooling and readout section of the molecule (through an "optical cycling center") from the physics end, perhaps containing an exotic atom such as a heavy radioactive species. Thus, one might be able to realize a "configurable" molecular framework that allows targeted substitution of scientifically interesting components.
### Other molecular motifs
To date, all laser-cooled polyatomic molecules are of the form MOR, in which an alkaline-earth-like metal hosting a localized valence electron is bonded to a linker oxygen atom and electronegative radical. However, other molecular structures may also be suitable for laser cooling. Closely related to MOR molecules are other ML molecules, where L is an electronegative ligand such as NC, SH, or CH\({}_{3}\) (Norrgard et al. (2019b); Augenbraun et al. (2020a)). We describe two more dramatic departures from the MOR model, which appear to be favorable for future laser cooling experiments.

Figure 31: Comparison of DLIF spectra following excitation of (a) CaOH, (b) CaOCH\({}_{3}\), and (c) CaOC\({}_{10}\)H\({}_{7}\) to the \(\tilde{A}\) excited state. Spectra (a) and (b) were recorded using fluorescence from a molecular beam, and spectrum (c) was recorded by collecting fluorescence from inside a buffer-gas cell. The diagonal fluorescence features are normalized to unity.
First, polyatomic molecules could be functionalized with multiple optical cycling centers, for example the linear molecules SrCCSr and YbCCCa, or asymmetric top molecules consisting of two metals linked by a benzene ring (O'Rourke and Hutzler (2019); Ivanov et al. (2020b)). These systems are expected to exhibit enhanced scattering rates due to the presence of two optical cycling centers, and offer separation of functions between distinct metals (for example, precision measurement localized on a Yb atom, and co-magnetometry localized on a Ca atom). Symmetric molecules like SrCCSr possess a structure analogous to Sr\({}_{2}\), which is a leading candidate for a molecular clock with applications to precision measurement (Zelevinsky et al. (2008)).
Second, polyatomic molecules with multivalent optical cycling centers, for example AlSH, should also be possible (Yu et al. (2023)), in a manner analogous to diatomic molecules like AlF, AlCl, TlF, BH, and CH (Hofsass et al. (2021); Daniel et al. (2021); Grasdijk et al. (2021); Hendricks et al. (2014); Schnaubelt et al. (2021)). Although the generalization from MF to MOH molecules has been highly successful for alkaline-earth-like metals (e.g., M=Ca,Sr,Yb), calculations show that for p-block metals (e.g., M=Al,Si,P), MOH molecules are bent and undergo a large bond angle deflection upon electronic excitation. This phenomenon can be mitigated with the use of a different linker atom like S or Se despite their lower bond polarity, due to reduced bond repulsion. Thus by careful tuning of the competition between bond repulsion and bond polarity, optical cycling of polyatomic molecules also appears feasible for species with p-block metals. Generalization of optical cycling to other structures may also be possible, but remains so far unexplored.
### Challenges and possibilities for other polyatomic molecules
As larger and more complex molecular species are explored, new difficulties and limitations of quantum state control are likely to arise. Larger polyatomic molecules offer richer internal structures, a promising prospect for encoding qudits (higher dimensional analogues of qubits), and unique rovibrational modes, a potential platform to use for searches for fundamental symmetry violations. However, controlling these structures will potentially be more difficult than is the case for small polyatomic species. Several open
questions remain. For example, in the case of ATMs identified by Augenbraun et al. (2020a), it is generally the case that the excited electronic states have large and anisotropic \(g\)-factors; it will be necessary to understand how that structure affects the magnetic-field-dependent forces that are necessary in a MOT. In addition, nonlinear molecules are subject to many symmetry-breaking effects such as the Jahn-Teller effect and other vibronic couplings, and it will be necessary to understand the extent to which these features affect the rovibrational selection rules that aided laser-based control of smaller and higher symmetry species.
Other challenges may arise in the quest to gain quantum-state control over larger species. From a practical standpoint, because buffer-gas cooling techniques are generally most applicable at temperatures above about 0.5 K, large molecules will be distributed over a substantially larger number of internal states. Moreover, the energy levels in large molecules are separated by smaller intervals, which generically complicates the task of achieving coherent control of these molecules. With smaller spacings, Rabi frequencies must be reduced in order to suppress off-resonant excitation and unwanted state transfer. In quantum simulation/quantum information processing applications, for example, reduced Rabi frequencies may become an impediment to high-speed gate operations. To complicate the problem, the larger number of states in these complex polyatomic molecules leads to more pathways for blackbody excitation and spontaneous decay, limiting the coherence times of experiments.
There may also be fundamental challenges due to the complex level structure of large polyatomic molecules. For example, larger molecules are potentially more susceptible to non-radiative loss channels that may interrupt optical cycling (Bixon and Jortner (1968)). We present here a simple model that, while speculative, conveys our sense of the structural questions that must be understood in order to achieve laser cooling of increasingly large polyatomic molecules. The essential details follow the treatment provided by Uzer and Miller (1991). Readers should also consult the excellent, and highly pedagogical, overview of intramolecular vibrational energy redistribution presented by Nesbitt and Field (1996). Our model begins with the observation that the density of vibrational states at some energy above the absolute ground state grows very rapidly with molecule size, especially for molecules that contain low-frequency (\(\nu\lesssim 100\) cm\({}^{-1}\)) vibrational modes. In many cases, laser cooling transitions involve excitation to an excited electronic state \(\hat{A}\) with energy below the dissociation threshold of the ground electronic state
\(\tilde{X}\), meaning that \(\tilde{A}\) is embedded in a dense manifold of highly-excited vibrational levels of \(\tilde{X}\). See Fig. 32 for calculations of the density of vibrational states for molecules of the form CaOR obtained using the method presented in Haarhoff (1964). For ligands, R, such as C\({}_{6}\)H\({}_{5}\) or C\({}_{10}\)H\({}_{7}\), the density of vibrational states at the location of the \(\tilde{A}(v=0)\) level can be as large as \(10^{11}\) or \(10^{15}\) states per cm\({}^{-1}\), respectively. For CaOC\({}_{6}\)H\({}_{5}\) (CaOC\({}_{10}\)H\({}_{7}\)) this means there are around \(10^{8}\) (\(10^{12}\)) dark states within the frequency range spanned by the natural linewidth of a typical bright state (\(\sim\)30 MHz).
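For harmonic vibrational frequencies, the rapid growth of the density of states with molecule size can be illustrated with a direct state count (the Beyer-Swinehart algorithm), sketched below. Note that Fig. 32 is based on the Haarhoff (1964) approximation rather than a direct count, and the mode frequencies used here are invented for illustration, not the actual CaOR values.

```python
import numpy as np

# Beyer-Swinehart direct count of harmonic vibrational states. The frequency list
# is a made-up example; the review's Fig. 32 uses the Haarhoff approximation instead.

def density_of_states(freqs_cm, e_max_cm, bin_cm=1.0):
    """Return the energy grid and the vibrational density of states [per cm^-1]."""
    nbins = int(e_max_cm / bin_cm) + 1
    counts = np.zeros(nbins)
    counts[0] = 1.0                      # the vibrational ground state
    for nu in freqs_cm:                  # fold in each mode
        step = int(round(nu / bin_cm))
        for i in range(step, nbins):
            counts[i] += counts[i - step]
    energies = np.arange(nbins) * bin_cm
    return energies, counts / bin_cm

# Hypothetical molecule with a few low-frequency modes (all values in cm^-1)
freqs = [50, 120, 300, 350, 600, 900, 1200, 1500, 2900, 3000]
E, rho = density_of_states(freqs, e_max_cm=16000, bin_cm=1.0)
print(f"rho(16000 cm^-1) ~ {rho[-1]:.2e} states per cm^-1")
```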
Consider a near-resonant excitation that transfers molecular population from a ground rovibronic state \(|\phi_{0}\rangle\) (in practice a single rotational level of the \(\tilde{X}(v=0)\) manifold) to a "bright" rovibronic excited state \(|\phi_{b}\rangle\) that has natural lifetime \(\tau_{b}\) (typically a single rotational level of the \(\tilde{A}(v=0)\) or \(\tilde{B}(v=0)\) manifolds). If \(|\phi_{b}\rangle\) were an energy eigenstate, the only time evolution that would occur following excitation is decay back to \(|\phi_{0}\rangle\) at a rate \(\Gamma_{b}=1/\tau_{b}\). If, however, \(|\phi_{b}\rangle\) is coupled via some vibronic interaction to the highly-excited levels of \(\tilde{X}\) via a matrix element \(V_{bd}\), then \(|\phi_{b}\rangle\) is not an energy eigenstate; the energy eigenstates can be expressed as
\[|\psi_{n}\rangle=a_{bn}|\phi_{b}\rangle+\sum_{j=1}^{N}a_{d_{j}n}|\phi_{d_{j}}\rangle. \tag{20}\]
Figure 32: (Left) Density of states as a function of energy above the absolute ground state \(\tilde{X}(v=0)\) for CaOR molecules with ligands of increasing size. The vertical gray band indicates the typical energy range of the \(\tilde{A}\leftarrow\tilde{X}\) electronic transition in these species. (Right) Schematic diagram highlighting the levels important to our hypothesized loss mechanism.
Thus, laser excitation that selectively targets \(|\phi_{b}\rangle\) actually populates numerous energy eigenstates, meaning the excited state that is prepared will evolve in time and lead to population building up in the dark manifold \(\{|\phi_{d}\rangle\}\) with a characteristic timescale \(\tau_{bd}\). Because the states in \(\{|\phi_{d}\rangle\}\) are very closely spaced, the timescale for population to return to \(|\phi_{b}\rangle\) (\(\tau_{\rm return}\)) may be very long. Under conditions where \(\tau_{bd}\ll\tau_{b}\) (so that leakage to the dark manifold occurs more rapidly than photon emission) and \(\tau_{\rm return}\gg\tau_{b}\) (so that population remains in the dark manifold over many natural lifetimes), this process will appear to cause loss from the laser cooling experiment. The timescale during which population is primarily found in the dark manifold limits the achievable photon scattering rate, directly impeding optical cycling and laser cooling. Under a crude estimate based on Fermi's golden rule, if \(\tau_{bd}\) is to compete with spontaneous emission (for a typical excited-state lifetime of about 30 ns), we must require
\[\frac{1}{\tau_{bd}}\approx 2\pi\langle V^{2}\rangle\rho, \tag{21}\]
where \(\sqrt{\langle V^{2}\rangle}\) is the root-mean-squared coupling matrix element between the bright and dark manifolds and \(\rho\) is the density of states. For a molecule with vibrational density of states \(\rho\approx 10^{14}\) per cm\({}^{-1}\), an average coupling matrix element as small as \(\sqrt{\langle V^{2}\rangle}\sim 10^{-9}\) cm\({}^{-1}\) would be sufficient to satisfy this condition. It is not currently clear whether coupling matrix elements of this magnitude are present in the large CaOR molecules being proposed for laser cooling applications. It is also possible that the average coupling matrix element could be tuned through judicious molecular design choices that reduce interaction between the metal-centered valence electron and the portion(s) of the molecule that contribute most to the vibrational density of states. It is critical that new theoretical and experimental studies be pursued to study these questions.
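The order of magnitude quoted above can be checked directly. Converting Eq. (21) to wavenumber units (our reading: a rate in \(\mathrm{s}^{-1}\) of \(4\pi^{2}c\langle V^{2}\rangle\rho\) for \(V\) in \(\mathrm{cm}^{-1}\) and \(\rho\) in states per \(\mathrm{cm}^{-1}\)), the coupling needed to match a 30 ns spontaneous-emission time at \(\rho\approx 10^{14}\) per \(\mathrm{cm}^{-1}\) is indeed of order \(10^{-9}\) \(\mathrm{cm}^{-1}\):

```python
import numpy as np

# Order-of-magnitude check of Eq. (21): RMS bright-dark coupling required for the
# dark-manifold leakage rate to compete with spontaneous emission. Working in
# wavenumber units, 1/tau_bd = 4*pi^2 * c * <V^2> * rho with c in cm/s.

c_cm = 2.998e10        # speed of light [cm/s]
tau_b = 30e-9          # typical excited-state lifetime [s]
rho = 1e14             # vibrational density of states [per cm^-1]

V_rms = np.sqrt((1.0 / tau_b) / (4 * np.pi**2 * c_cm * rho))
print(f"V_rms needed to match 1/tau_b: {V_rms:.1e} cm^-1")   # ~ 1e-9 cm^-1
```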
## 5 Conclusion
In this review, we have described how the field of ultracold polyatomic molecules is poised to impact many of the frontiers in modern atomic, molecular, and optical physics. Recent experimental results have shown that much of the toolbox developed for ultracold atoms can be applied to molecules with equal success. As we have described, the control of polyatomic molecules depends on a careful understanding of their internal structure. However, the class of molecules that has garnered the most experimental attention
(alkaline-earth pseudohalides) shares a set of common features, such as diagonal FCFs and manageable rotational selection rules, that point toward a generic way to "design" laser-coolable molecules with diverse geometries, atomic constituents, and responses to external fields. The breadth of molecular species and structures that appear amenable to laser cooling is extremely large, and continues to grow as researchers explore increasingly complex systems.
## 6 Acknowledgments
We gratefully acknowledge valuable comments from Profs. David DeMille, Nicholas Hutzler, Michael Tarbutt, and Jun Ye, as well as Parul Aggarwal and Calder Miller. We also thank Prof. Robert W. Field and Bryan Changala for insightful input on IVR coupling mechanisms in larger molecules. The work described in this paper that was conducted within the Doyle group was supported by the AFOSR, the NSF, and the Heising-Simons Foundation.
|
2304.14089
|
Distributed Multi-Horizon Model Predictive Control for Network of Energy
Hubs
|
The increasing penetration of renewable energy resources has transformed the
energy system from traditional hierarchical energy delivery paradigm to a
distributed structure. Such development is accompanied with continuous
liberalization in the energy sector, giving rise to possible energy trading
among networked local energy hub. Joint operation of such hubs can improve
energy efficiency and support the integration of renewable energy resource.
Acknowledging peer-to-peer trading between hubs, their optimal operation within
the network can maximize consumption of locally produced energy. However, for
such complex systems involving multiple stakeholders, both computational
tractability and privacy concerns need to be accounted for. We investigate both
decentralized and centralized model predictive control (MPC) approaches for a
network of energy hubs. While the centralized control strategy offers superior
performance to the decentralized method, its implementation is computationally
prohibitive and raises privacy concerns, as the information of each hub has to
be shared extensively. On the other hand, a classical decentralized control
approach can ease the implementation at the expense of sub-optimal performance
of the overall system. In this work, a distributed scheme based on a consensus
alternating direction method of multipliers (ADMM) algorithm is proposed. It
combines the performance of the centralized approach with the privacy
preservation of decentralized approach. A novel multi-horizon MPC framework is
also introduced to increase the prediction horizon without compromising the
time discretization or making the problem computationally intractable. A
benchmark three-hub network is used to compare the performance of the mentioned
methods. The results show superior performance in terms of total cost,
computational time, robustness to demand and prices variations.
|
Varsha Behrunani, Hanmin Cai, Philipp Heer, Roy S. Smith, John Lygeros
|
2023-04-27T11:06:19Z
|
http://arxiv.org/abs/2304.14089v1
|
# Distributed Multi-Horizon Model Predictive Control for Network of Energy Hubs
###### Abstract
The increasing penetration of renewable energy resources has transformed the energy system from traditional hierarchical energy delivery paradigm to a distributed structure. Such development is accompanied with continuous liberalization in the energy sector, giving rise to possible energy trading among networked local energy hub. Joint operation of such hubs can improve energy efficiency and support the integration of renewable energy resource. Acknowledging peer-to-peer trading between hubs, their optimal operation within the network can maximize consumption of locally produced energy. However, for such complex systems involving multiple stakeholders, both computational tractability and privacy concerns need to be accounted for. We investigate both decentralized and centralized model predictive control (MPC) approaches for a network of energy hubs. While the centralized control strategy offers superior performance to the decentralized method, its implementation is computationally prohibitive and raises privacy concerns, as the information of each hub has to be shared extensively. On the other hand, a classical decentralized control approach can ease the implementation at the expense of sub-optimal performance of the overall system. In this work, a distributed scheme based on a consensus alternating direction method of multipliers (ADMM) algorithm is proposed. It combines the performance of the centralized approach with the privacy preservation of decentralized approach. A novel multi-horizon MPC framework is also introduced to increase the prediction horizon without compromising the time discretization or making the problem computationally intractable. A benchmark three-hub network is used to compare the performance of the mentioned methods. The results show superior performance of the distributed multi-horizon MPC in terms of total cost, computational time, robustness to demand and prices variations.
keywords: Distributed control, model predictive control, energy hubs, ADMM, consensus algorithm, multi-horizon MPC
## 1 Introduction
The growth of energy demand in the last few decades coupled with climate change has resulted in an increase in environmental concerns. This has led to diversification and expansion of the technologies used to harvest and manage energy, including an increased penetration of renewable energy sources in the power supply and the development of efficient multi-generation systems that promise greater energy autonomy and improved sustainability. The transition towards sustainable multi-energy systems requires the joint coordination of different interconnected energy resources and loads, as opposed to the independent planning and operation per sector (e.g., electricity, gas, etc.) of the _status quo_. Furthermore, the installation of technologies such as PV, heat pumps and energy storage in residential and industrial buildings has led to the proliferation of prosumers, in the sense of units that sometimes act as energy consumers and at other times as energy producers. This has motivated research into a shift from the current grid-centric paradigm to a decentralised customer-centric topology comprising a network of multi-energy hubs.
The concept of energy hubs was first introduced in [1][2] to effectively incorporate local generation, storage and network technologies of multiple energy carriers into a unified local energy system. Energy hubs dispatch energy resources to efficiently manage time-varying production/consumption mismatches and act as intermediaries between supply and demand for a more flexible local use of energy. The coordinated operation of all the resources at the disposal of the energy hub leads to versatile energy management strategies and facilitates the integration of intermittent renewable energy sources. Realising this potential requires the development of advanced control strategies for cost reduction, improving self-consumption, mitigating the adverse effects of uncertainties in demand, generation and prices, etc.
Connecting energy hubs in a network can unlock further benefits by exploiting the link between them for peer-to-peer energy trading. In addition to lower cost and higher self-reliance, this also reduces the need for extending large-scale infrastructure, lowers the stress on the electricity grid and reduces energy imports. Harnessing these benefits requires the joint operation of hubs and a coordinated optimal dispatch of the various energy sources present in the different hubs. One approach for doing this is centralized control, where a central controller directly communicates with and controls all hubs. Many of the control approaches designed for single energy hubs such as optimal
power dispatch in [3; 4] and model predictive control (MPC) in [5; 6] can be applied to the centralised control of multiple hubs [7]. The coordinated dispatch of energy hubs with energy trading among them was first formulated as a nonlinear power flow problem in [1; 8] and further extended in [2] to include storage devices. In [9; 10], the control is implemented using MPC considering storage systems. The centralized optimization can also use a multi-objective cost function to balance economic, environmental and social benefit costs [11; 12]. Such problems may be non-convex, nonlinear and non-smooth leading to computation and scalability issues [7]. Different algorithms such as general heuristic algorithms [13] and genetic algorithms [14][15] have been proposed to mitigate these issues. Stochastic optimization methods such as the scenario approach have been proposed to account for uncertainties [16][17]. Furthermore, this strategy can also be used to effectively implement demand response programs [18; 19].
While the centralized control approach may be able to achieve a global optimum for the entire system, it comes with scalability and privacy concerns. As the optimization problem is non-convex and can be large, it may be prohibitive to solve in real time. Moreover, the central controller needs to collect detailed information about the demand characteristics and the converter capacities in each hub. This is information that the hub operators may not want to divulge due to privacy concerns.
Distributed control allows the hubs to preserve privacy to an extent and the problem of scaling is less severe as much of the computation is parallelized. For this purpose, Lagrangian relaxation has been proposed for an energy hub network in [20] and used along with MPC in [21]. In [22], Lagrangian relaxation is used with stochastic optimization methods to account for uncertainties of renewable generation and weather. The alternating direction method of multipliers (ADMM) is used in [23][24] for energy systems and in conjunction with a cooperative game in [25][26] for the economic interaction of energy hubs in the energy sharing market for real time dispatch in [27]. Several privacy-preserving algorithms have been used for energy hub coordination such as Benders decomposition in [28], Douglas Rachford splitting in [29] and iterative mixed-integer second-order cone programming in [30]. Adopting a game theory perspective, a two-level Stackelberg game model is proposed in [31] for analyzing the multiple energy trading problem; similar bi-level formulations are used in [32] and [33]. In [34], a stochastic Stackelberg game algorithm is used that models uncertainty in electricity market price and incorporates a demand response program. Game based formulation for distributed optimization also include potential games [35; 36].
In this paper, we propose a distributed MPC control strategy based on consensus ADMM to determine the optimal operation and dispatch for a network of energy hubs. Consensus ADMM is a modification of classical ADMM in which multiple agents have to come to a consensus on a shared resource. In this method, only the value that the agents need to agree on, in this case the traded energy between the hubs, has to be communicated, improving both privacy and scalability. In addition to electrical energy, we also consider the trading of thermal energy among the hubs over a local heating grid; to the best of our knowledge, this has not been considered in earlier studies.
A key trade-off when applying MPC to energy systems is preview vs. tractability. Ideally, one would like to make the prediction horizon as long as possible to give more preview to the decisions; doing so, however, leads to a larger optimisation problem that is difficult to solve in real time. To address this trade-off, in [37] and [38], a temporal decomposition scheme is presented wherein the time domain is divided into multiple partially overlapping sub-problems solved in parallel. In move-blocking MPC [39], the control input is forced to be constant over several steps in the horizon to reduce the dimensionality of the resulting optimisation problem, which facilitates the use of a longer horizon or a smaller sampling time. A similar approach is used in [40], where the MPC uses a low resolution at some time steps and the input at these times is fixed using a zero-order hold. Recent works have proposed the use of models of different granularities to extend the horizon without increasing the computational load [41][42].
Here, we propose a multi-horizon MPC approach in which several models are used, each with a different time resolution. The time steps become longer, i.e., the resolution becomes coarser, later in the horizon. Each model predicts system responses for different parts of the horizon and the predictions are combined to predict the system response over the entire horizon. We exploit the structure of the energy hub and the energy systems to achieve this without adding any additional constraints. Finally, we also investigate the use and performance of the proposed method with a distributed control algorithm, which has not been covered before.
In summary, our contributions are:
1. We formulate the model of the energy hub network and propose a distributed control strategy based on consensus ADMM that uses minimal communication between the hubs to achieve the optimal operation, mitigating concerns over privacy and scalability.
2. We introduce a multi-horizon MPC technique to extend the prediction horizon without significantly increasing the dimension of the problem.
We illustrate and validate the proposed methods via extensive numerical simulations on a three-hub network, using realistic models of energy hubs and demand data. The performance of the control methods is compared in terms of cost, computation time, and scalability.
In Section II, the problem formulation and the mathematical model of the multi-carrier system are presented. In Section III, we discuss the decentralised and centralized MPC approaches and propose a distributed MPC approach based on consensus ADMM. We also develop a multi-horizon heuristic used in conjunction with the proposed control schemes. A numerical case study and simulation results applying the method to a three-hub benchmark system are presented in Section IV and V respectively. Section VI concludes this paper and outlines directions for future research.
## 2 Problem Formulation and System Modelling
### System Description
We consider a general system of \(H\) interconnected energy hubs labeled by \(i\in\mathcal{H}:=\{1,\ldots,H\}\). To fix ideas, we use throughout a benchmark system of \(H=3\) hubs, shown in Fig. 1 as a running example. Each hub is connected to the electricity and natural gas grid, and can trade electrical energy and thermal energy with other hubs via the electricity grid and a thermal grid, respectively.
Each hub in the system is a general consumer equipped with energy conversion and storage devices that use electricity and gas from the grid to serve an aggregate electricity and heating load demand. We assume that the demands are uncontrolled, and treat demand as a disturbance from the point of view of the hub controller. We assume that the hubs contain converters such as gas boilers (GB) and heat pumps (HP), along with thermal energy storage (TS), which they can use to serve the heating demand; they also have access to local electrical energy production from photovoltaics (PV) and to battery-based electrical storage (ES) to meet the electricity requirement. Similarly, we assume that the hubs have converters such as solar thermal collectors (ST), Combined Heat and Power (CHP) and micro-CHP (mCHP) that simultaneously generate both electricity and heat. These devices, along with the heat pump, couple the two energy systems. Electricity can also be directly drawn from the electricity grid and excess electrical energy produced in the hub can be fed back into the grid. The hubs can trade electrical energy through the existing electricity grid and are connected via a local heat distribution network that facilitates the transfer of heat energy between them.
### Energy Hub Model
MPC makes use of models of the devices to predict the evolution of the energy hub over a finite horizon into the future. We consider discrete time models and use the superscript \(k\) to denote the values of quantities at time step \(k\); the superscript is omitted for quantities that are assumed to be constant.
#### 2.2.1 Energy Conversion Devices
_Photovoltaic (PV) and solar thermal collectors (ST):_
The energy output of the solar photovoltaic system, \(P^{k}_{\text{pv,i}}\), is given by,
\[P^{k}_{\text{pv,i}}=\eta_{\text{pv,i}}\cdot I^{k}_{\text{solar,i}}\cdot a_{ \text{pv,i}}\;, \tag{1}\]
where \(I^{k}_{\text{solar,i}}\) [\(\text{kW/m}^{2}\)] is the solar irradiance incident on the surface, \(a_{\text{pv,i}}\) is the total area of the panel, and \(\eta_{\text{pv,i}}\) is the fixed efficiency. Similarly, the total electrical and thermal outputs, \(P^{k}_{\text{st,i}}\) and \(Q^{k}_{\text{st,i}}\), respectively, are given by,
\[P^{k}_{\text{st,i}}=\eta_{\text{st,i}}\cdot I^{k}_{\text{solar,i} }\cdot a_{\text{st,i}}\cdot a^{p}_{\text{st,i}}\;, \tag{2}\] \[Q^{k}_{\text{st,i}}=\eta_{\text{st,i}}\cdot I^{k}_{\text{solar,i} }\cdot a_{\text{st,i}}\cdot a^{q}_{\text{st,i}}\;,\]
where \(a_{\text{st,i}}\) and \(\eta_{\text{st,i}}\) are the surface area and efficiency of the solar collector, respectively, and \(a^{p}_{\text{st,i}}\) and \(a^{q}_{\text{st,i}}\) are the fixed electricity and heat output shares of the ST, respectively.
_Heat pump (HP) and gas boiler (GB):_
The HP uses electricity, \(P^{k}_{\text{hp,i}}\), to extract heat \(Q^{k}_{\text{hp,i}}\) from the ground or air (ground source or air source), whereas the GB uses natural gas, \(F^{k}_{\text{gb,i}}\), to generate heat, \(Q^{k}_{\text{gb,i}}\). The relations between the inputs and outputs of the heat pump and the boiler are:
\[Q^{k}_{\text{hp,i}}=\text{COP}\cdot P^{k}_{\text{hp,i}}\;, \tag{3}\] \[Q^{k}_{\text{gb,i}}=\eta_{\text{gb,i}}\cdot F^{k}_{\text{gb,i}}\;, \tag{4}\]
where COP is the coefficient of performance of the pump and \(\eta_{\text{gb,i}}\) is the boiler efficiency. In this work, a detailed model using part-load efficiencies is used for the boiler, in which the efficiency of the boiler depends on the load, specifically, \(\eta^{0.25}_{\text{gb,i}}\), \(\eta^{0.5}_{\text{gb,i}}\), \(\eta^{0.75}_{\text{gb,i}}\) and \(\eta^{1}_{\text{gb,i}}\) for 0-25%, 25-50%, 50-75% and 75-100% operating load, respectively. This results in a piecewise linear relation that is implemented using binary variables as in [44].
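A minimal sketch of one way to encode such a part-load model in a MILP is shown below, using one binary variable per load band and the Gurobi Python interface used for the simulations in Section 4. The band efficiencies and the capacity are placeholders, and the exact formulation of [44] may differ.

```python
import gurobipy as gp
from gurobipy import GRB

# Single-time-step sketch of a gas boiler with piecewise part-load efficiencies.
# Efficiencies, capacity and the heat request below are illustrative placeholders.

Q_max = 500.0                              # boiler capacity [kW]
eta = [0.80, 0.85, 0.90, 0.92]             # assumed efficiencies for the 4 load bands
bands = [(0.00, 0.25), (0.25, 0.50), (0.50, 0.75), (0.75, 1.00)]

m = gp.Model("gas_boiler")
b = m.addVars(4, vtype=GRB.BINARY, name="band")    # which load band is active
q = m.addVars(4, lb=0.0, name="q_band")            # heat produced in each band [kW]
F = m.addVar(lb=0.0, name="fuel")                  # gas input [kW]
Q = m.addVar(lb=0.0, name="heat")                  # heat output [kW]

m.addConstr(b.sum() <= 1, "one_band")              # at most one band active (0 = off)
for s, (lo, hi) in enumerate(bands):
    m.addConstr(q[s] >= lo * Q_max * b[s])
    m.addConstr(q[s] <= hi * Q_max * b[s])
m.addConstr(Q == q.sum(), "heat_total")
m.addConstr(F == gp.quicksum(q[s] / eta[s] for s in range(4)), "fuel_use")

m.addConstr(Q == 320.0)                            # example heat request [kW]
m.setObjective(F, GRB.MINIMIZE)
m.optimize()
print("active band:", [s for s in range(4) if b[s].X > 0.5], " fuel [kW]:", F.X)
```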
_Combined heat and power (CHP) and micro-CHP (mCHP):_
The electrical and thermal output of the CHP are \(P^{k}_{\text{chp,i}}\) and \(Q^{k}_{\text{chp,i}}\), respectively. The CHP operation is bounded by a convex feasibility region described by a polyhedron with vertices A, B, C, and D [44], with corresponding electrical and thermal outputs \(p_{\text{A,i}}\), \(p_{\text{B,i}}\), \(p_{\text{C,i}}\), \(p_{\text{D,i}}\), and \(q_{\text{A,i}}\), \(q_{\text{B,i}}\), \(q_{\text{C,i}}\), \(q_{\text{D,i}}\), respectively. The outputs are modelled as a convex combination of the vertices with weights \(w^{k}_{\text{A,i}}\), \(w^{k}_{\text{B,i}}\), \(w^{k}_{\text{C,i}}\), and \(w^{k}_{\text{D,i}}\), respectively. The model of the CHP is given by,
Figure 1: System of three interconnected energy hubs. Each hub can import energy from the electricity and gas grid, feed in electricity to the grid, and trade electrical and thermal energy with other hubs; the electricity, heating and gas networks are shown in green, red and brown, respectively [43].
\[\begin{split} P^{k}_{\text{chp,i}}&=\sum_{j\in S}w^{k}_{\text{j,i}}\cdot p_{\text{j,i}}\;,\quad w^{k}_{\text{j,i}}\in[0,1]\;,\quad S=\{\text{A, B, C, D}\}\;,\\ Q^{k}_{\text{chp,i}}&=\sum_{j\in S}w^{k}_{\text{j,i}}\cdot q_{\text{j,i}}\;,\\ P^{k}_{\text{chp,i}}&=\eta_{\text{chp,i}}\cdot F^{k}_{\text{chp,i}}\;,\\ b^{k}_{\text{chp,i}}&=\sum_{j\in S}w^{k}_{\text{j,i}}\;,\quad b^{k}_{\text{chp,i}}\in\{0,1\}\;.\end{split} \tag{5}\]
where \(F^{k}_{\text{chp,i}}\) is the total fuel consumed, which depends only on the electrical output through the fuel efficiency \(\eta_{\text{chp,i}}\). The binary variable \(b^{k}_{\text{chp,i}}\) is 1/0 if the CHP is on/off. Additionally, safety constraints that limit the ramping up and down of the CHP, \(r^{u}_{\text{chp,i}}\) and \(r^{d}_{\text{chp,i}}\), and the minimum on and off times, \(t^{u}_{\text{chp,i}}\) and \(t^{d}_{\text{chp,i}}\), are also implemented using the binary variable. The mCHP is a smaller CHP that is modelled by a simplified output model using the fixed electricity and heat output shares, \(\alpha^{p}_{\text{mchp,i}}\) and \(\alpha^{q}_{\text{mchp,i}}\), respectively, and the fuel efficiency \(\eta_{\text{mchp,i}}\):
\[\begin{split} P^{k}_{\text{mchp,i}}&=P^{\text{out},k}_{\text{mchp,i}}\cdot\alpha^{p}_{\text{mchp,i}}\;,\\ Q^{k}_{\text{mchp,i}}&=P^{\text{out},k}_{\text{mchp,i}}\cdot\alpha^{q}_{\text{mchp,i}}\;,\\ P^{\text{out},k}_{\text{mchp,i}}&=\eta_{\text{mchp,i}}\cdot F^{k}_{\text{mchp,i}}\;.\end{split} \tag{6}\]
Additionally, the output of all the converters is limited by the following capacity constraints:
\[\begin{split} P^{\text{min}}_{\text{m,i}}\leq P^{k}_{\text{m,i}}\leq P^{\text{max}}_{\text{m,i}}&\qquad\text{m}\in\{\text{pv, st, chp, mchp}\}\;,\\ Q^{\text{min}}_{\text{n,i}}\leq Q^{k}_{\text{n,i}}\leq Q^{\text{max}}_{\text{n,i}}&\qquad\text{n}\in\{\text{st, gb, hp, chp, mchp}\}\;.\end{split} \tag{7}\]
Finally, the total electrical and thermal output of the energy converters can be compactly written as \(P^{k}_{\text{c,i}}\) and \(Q^{k}_{\text{c,i}}\) respectively.
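As an illustration of how the converter models enter the optimization, the following sketch encodes the CHP operating region of Eq. (5) as a convex combination of polytope vertices with an on/off binary, again using the Gurobi Python interface. The vertex coordinates and efficiency are placeholders, not values from the case study.

```python
import gurobipy as gp
from gurobipy import GRB

# Single-time-step sketch of the CHP model (5): outputs as a convex combination of
# the vertices of the feasible operating polytope. All numbers are placeholders.

p_v = {"A": 100.0, "B": 250.0, "C": 250.0, "D": 120.0}   # electrical output at vertices [kW]
q_v = {"A":  80.0, "B": 180.0, "C": 280.0, "D": 200.0}   # thermal output at vertices [kW]
eta_chp = 0.35                                            # assumed electrical fuel efficiency

m = gp.Model("chp")
w = m.addVars(p_v.keys(), lb=0.0, ub=1.0, name="w")       # convex-combination weights
b = m.addVar(vtype=GRB.BINARY, name="on")                 # CHP on/off
P = m.addVar(lb=0.0, name="P_chp")
Q = m.addVar(lb=0.0, name="Q_chp")
F = m.addVar(lb=0.0, name="F_chp")

m.addConstr(w.sum() == b)                                 # weights sum to 1 iff the CHP is on
m.addConstr(P == gp.quicksum(w[j] * p_v[j] for j in p_v))
m.addConstr(Q == gp.quicksum(w[j] * q_v[j] for j in q_v))
m.addConstr(P == eta_chp * F)

m.addConstr(Q >= 150.0)                                   # example heat requirement
m.setObjective(F, GRB.MINIMIZE)
m.optimize()
print(f"P = {P.X:.1f} kW, Q = {Q.X:.1f} kW, F = {F.X:.1f} kW, on = {b.X:.0f}")
```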
#### 2.2.2 Energy Storage
The dynamics of the storage devices, in our study hot water tanks and batteries, are described by discrete time dynamical systems with a scalar state modelling the state of charge. For thermal storage, the relation between the power charged into and discharged from storage, \(Q^{k}_{\text{ch,i}}\) and \(Q^{k}_{\text{dc,i}}\), respectively, and the storage level, \(E^{k}_{\text{ts,i}}\), is defined by
\[\begin{split} E^{k+1}_{\text{ts,i}}=\gamma_{\text{ts,i}}\cdot E ^{k}_{\text{ts,i}}+\eta_{\text{ts,i}}\cdot Q^{k}_{\text{ch,i}}-\left(\frac{1}{ \eta_{\text{ts,i}}}\right)\cdot Q^{k}_{\text{dc,i}}\;,\\ E^{\text{min}}_{\text{ts,i}}\leq E^{k}_{\text{ts,i}}\leq E^{\text{ max}}_{\text{ts,i}}\;,\\ Q^{\text{min}}_{\text{m,i}}\leq Q^{k}_{\text{m,i}}\leq Q^{\text{ max}}_{\text{m,i}}&\text{m}\in\{\text{ch,dc}\}\;,\end{split} \tag{8}\]
where \(\gamma_{\text{ts,i}}\) and \(\eta_{\text{ts,i}}\) are the storage efficiency and charging efficiency of the thermal storage, respectively (to account for standby and cycle losses). The constraints ensure that the storage levels and the power charged/discharged from storage are within some maximum and minimum levels. Similarly, for the battery storage, we use
\[\begin{split} E^{k+1}_{\text{es,i}}=\gamma_{\text{es,i}}\cdot E^{k}_{\text{es,i}}+\eta_{\text{es,i}}\cdot P^{k}_{\text{ch,i}}-\left(\frac{1}{\eta_{\text{es,i}}}\right)\cdot P^{k}_{\text{dc,i}}\;,\\ E^{\text{min}}_{\text{es,i}}\leq E^{k}_{\text{es,i}}\leq E^{\text{max}}_{\text{es,i}}\;,\\ P^{\text{min}}_{\text{m,i}}\leq P^{k}_{\text{m,i}}\leq P^{\text{max}}_{\text{m,i}}\qquad\text{m}\in\{\text{ch,dc}\}\;.\end{split} \tag{9}\]
where \(E^{k}_{\text{es,i}}\) is the storage level, \(P^{k}_{\text{ch,i}}\) is the energy charged into the battery, \(P^{k}_{\text{dc,i}}\) is the energy discharged from the battery, and \(\gamma_{\text{es,i}}\) and \(\eta_{\text{es,i}}\) are the standby and cycle efficiencies, respectively.
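The storage dynamics of Eqs. (8)-(9) translate directly into linear constraints over the prediction horizon; a sketch for the battery case, with placeholder parameter values, is given below.

```python
import gurobipy as gp
from gurobipy import GRB

# Battery dynamics of Eq. (9) over an N-step horizon:
#   E[k+1] = gamma*E[k] + eta*P_ch[k] - P_dc[k]/eta,
# with bounds on the state of charge and the (dis)charging powers. Values are placeholders.

N = 24                      # horizon length [steps]
gamma, eta = 0.999, 0.95    # standby and cycle efficiencies
E_min, E_max = 0.0, 200.0   # storage level bounds [kWh]
P_lim = 50.0                # charge/discharge limit [kW]
E0 = 100.0                  # initial state of charge [kWh]

m = gp.Model("battery")
E = m.addVars(N + 1, lb=E_min, ub=E_max, name="E")
P_ch = m.addVars(N, lb=0.0, ub=P_lim, name="P_ch")
P_dc = m.addVars(N, lb=0.0, ub=P_lim, name="P_dc")

m.addConstr(E[0] == E0)
for k in range(N):
    m.addConstr(E[k + 1] == gamma * E[k] + eta * P_ch[k] - P_dc[k] / eta)

# In the full hub model these variables also enter the load balance (11); here we only
# ask for a 30 kW discharge in the first step and minimise total cycling.
m.addConstr(P_dc[0] >= 30.0)
m.setObjective(P_ch.sum() + P_dc.sum(), GRB.MINIMIZE)
m.optimize()
print("state of charge after the first step:", E[1].X)
```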
#### 2.2.3 Network
The network and internal connections define the energy and mass balance equations of the different energy carriers. The total gas imported into the energy hub is:
\[F^{k}_{\text{g,i}}=F^{k}_{\text{gb,i}}+F^{k}_{\text{chp,i}}+F^{k}_{\text{mchp,i}}\;. \tag{10}\]
For each hub \(i\), the load balance equations for the electrical load \(L^{k}_{\text{e,i}}\) is given by
\[L^{k}_{\text{e,i}}=P^{k}_{\text{c,i}}+\left(P^{k}_{\text{in,i}}-P^{k}_{\text{ out,i}}\right)+\left(P^{k}_{\text{dc,i}}-P^{k}_{\text{ch,i}}\right)\;, \tag{11}\]
where \(P^{k}_{\text{in,i}}\) and \(P^{k}_{\text{out,i}}\) are the electrical energy imported and fed into electricity grid respectively. Similarly, the load balance equations for the heat load \(L^{k}_{\text{h,i}}\) is given by
\[L^{k}_{\text{h,i}}=Q^{k}_{\text{c,i}}+\left(Q^{k}_{\text{dc,i}}-Q^{k}_{\text{ch,i}}\right)\;. \tag{12}\]
In this study, we assume there is no global thermal grid and no thermal losses in the grid within each hub. A simplified model of the thermal dynamics within the hub is used; a detailed model that captures temperature constraints, hydraulics, pipe dynamics, etc. is not considered. In the absence of a heating grid, we assume demand can be met locally exactly at all times by conversion or storage. Including these aspects in the analysis is left as a topic of future work.
## 3 Control Problem Formulation
In this section, we introduce three different control strategies for the framework described above that are illustrated in Fig. 2. The first two schemes are conventional, and serve as benchmarks for the novel scheme proposed in this paper. The first is a baseline decentralised MPC (DecMPC) controller where the individual hubs are operated in isolation from one another (Fig. 2(a)). The second is a centralised MPC (CMPC) approach with a single supervisory controller that measures all variables in the network and determines actions for all actuators (Fig. 2(b)). Finally, we introduce the main contribution of this work, a Distributed MPC (DMPC) method using consensus ADMM, where the controllers of the individual hubs in the network work in tandem to determine the optimal strategy by communicating limited information (Fig. 2(c)).
All control strategies are implemented using MPC by solving iteratively a finite-horizon optimization problem involving the predicted output of a plant using its internal dynamic model. Given the model of a hub and its components, constraints, measurements at the current time, and demand forecasts, the controller formulates an open-loop optimization problem over a prediction horizon of time \(T_{\text{pred}}\) divided into \(N\) discrete time steps and solves it to compute a control input sequence that minimises operating costs subject to the constraints. Then, the first time step of the computed control sequence is applied to the
plant [45], the response is measured and the process is repeated. Fig. 3(a) shows how MPC is implemented in closed loop. As an example, the first three time steps, \(k=0,\ldots,2\), are shown in Fig. 3(b). The receding horizon repetition brings feedback into the process through the measurements and allows the controller to continuously adapt to new forecast information, suppress the effect of model mismatch and disturbances, as well as anticipate increasing or decreasing energy prices.
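The receding-horizon logic described above can be summarised by the following sketch, where `build_and_solve` and `plant_step` are placeholders standing in for the hub dispatch problem and the real system, respectively; they are not functions from the paper.

```python
# Generic receding-horizon MPC loop (cf. Fig. 3): at each step an N-step optimisation
# is solved, only the first control action is applied, the state is measured, and the
# horizon is shifted forward.

def run_mpc(x0, forecasts, T_sim, N, build_and_solve, plant_step):
    x = x0
    applied = []
    for t in range(T_sim):
        # solve the finite-horizon dispatch problem with the current state and forecasts
        u_seq = build_and_solve(x, forecasts[t:t + N])   # returns N control actions
        u0 = u_seq[0]                                    # apply only the first action
        x = plant_step(x, u0, forecasts[t])              # measure the resulting state
        applied.append(u0)
    return applied
```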
### Decentralised model predictive control (DecMPC)
In the decentralised MPC scheme, each hub attempts to optimize its operation individually based on its own demand and capacities, without any communication or energy exchange with the other hubs in the network. For hub \(i\), let \(p_{\mathrm{c,i}}=\{p_{\mathrm{c,i}}^{k},\ p_{\mathrm{c,i}}^{k+1},\ldots,p_{\mathrm{c,i}}^{k+N-1}\}\) collect the operational set points and \(p_{\mathrm{s,i}}=\{p_{\mathrm{s,i}}^{k},\ p_{\mathrm{s,i}}^{k+1},\ldots,p_{\mathrm{s,i}}^{k+N-1}\}\) be the set of variables that are completely determined by \(p_{\mathrm{c,i}}\) and the constraints (1) - (12) over the \(N\)-step horizon, where \(p_{\mathrm{s,i}}^{k}\) and \(p_{\mathrm{c,i}}^{k}\) at time step \(k\) are defined as:
\[p_{\mathrm{s,i}}^{k}= \left[E_{\mathrm{es,i}}^{k},E_{\mathrm{ts,i}}^{k},P_{\mathrm{in,i}}^{k},P_{\mathrm{out,i}}^{k}\right]^{\mathrm{T}}\,,\] \[p_{\mathrm{c,i}}^{k}= \left[P_{\mathrm{pv,i}}^{k},P_{\mathrm{st,i}}^{k},P_{\mathrm{chp,i}}^{k},P_{\mathrm{mchp,i}}^{k},P_{\mathrm{hp,i}}^{k},P_{\mathrm{dc,i}}^{k},P_{\mathrm{ch,i}}^{k},\right.\] \[\left.Q_{\mathrm{st,i}}^{k},Q_{\mathrm{gb,i}}^{k},Q_{\mathrm{chp,i}}^{k},Q_{\mathrm{mchp,i}}^{k},Q_{\mathrm{hp,i}}^{k},F_{\mathrm{mchp,i}}^{k},F_{\mathrm{chp,i}}^{k},F_{\mathrm{gb,i}}^{k}\right]^{\mathrm{T}}\,.\]
The control objective is to minimize its total energy cost which comprises of the cost of energy procured from the electricity and gas grid. The cost function for hub \(i\) is then
\[J_{\mathrm{dec,i}}\left(p_{\mathrm{c,i}},p_{\mathrm{s,i}}\right)=\sum_{k=0}^ {N-1}\left(c_{\mathrm{in,e}}^{k}\cdot P_{\mathrm{in,i}}^{k}-c_{\mathrm{out,e }}^{k}\cdot P_{\mathrm{out,i}}^{k}+c_{\mathrm{g}}^{k}\cdot F_{\mathrm{g,i}}^{ k}\right)\,,\]
where \(c_{\mathrm{in,e}}^{k}\) and \(c_{\mathrm{g}}^{k}\) are the known prices for the electricity and natural gas consumption, and \(c_{\mathrm{out,e}}^{k}\) is the feed-in tariff for the electricity grid. We assume that prices and the feed-in tariffs can be different at different time points, but that their values over the horizon are known perfectly at the time when the optimisation problem is solved. Extensions to imperfect price forecasts are a topic of current work. The resulting decentralized finite-horizon economic dispatch problem for hub \(i\) can be compactly stated as:
\[\begin{split}\min_{p_{\mathrm{c,i}},\,p_{\mathrm{s,i}}}\;&J_{\mathrm{dec,i}}\left(p_{\mathrm{c,i}},p_{\mathrm{s,i}}\right)\\ \text{s.t.}\;&\text{Equations (1)--(12)}\;.\end{split}\tag{13}\]
In the centralised MPC scheme, the hubs can trade electrical energy using the existing grid infrastructure and thermal energy using a local heat distribution network. The consolidated network with a central controller acts like a single macro-hub comprising smaller hubs that can interact and exchange energy with one another.
The modified load balance constraints account for the energy exchange between hubs. Let \(P^{k}_{\text{tr,ij}}\) and \(Q^{k}_{\text{tr,ij}}\) be, respectively, the electrical and thermal energy transferred from hub \(i\) to hub \(j\) at time step \(k\). The total electrical and thermal energy imported to hub \(i\) from the neighbouring hubs are then \(\sum_{j\neq i}\zeta_{\text{e,ji}}P^{k}_{\text{tr,ji}}\) and \(\sum_{j\neq i}\zeta_{\text{h,ji}}Q^{k}_{\text{tr,ji}}\), respectively, where \(\zeta_{\text{e,ji}}\) and \(\zeta_{\text{h,ji}}\) are the corresponding efficiencies of electrical and thermal energy transfer that account for the losses between the hubs. The loss of efficiency ensures that there are no cyclic energy transfers. The network usage fees borne by the importing hub, discussed below, have a similar effect. For hub \(i\), the resulting load balance constraints are:
\[\begin{split} L^{k}_{\text{e,i}}&=P^{k}_{\text{c,i}}+\left(P^{k}_{\text{in,i}}-P^{k}_{\text{out,i}}\right)+\left(P^{k}_{\text{dc,i}}-P^{k}_{\text{ch,i}}\right)\\ &\quad+\left(\sum_{j\in\mathcal{H}\backslash\{i\}}\zeta_{\text{e,ji}}\cdot P^{k}_{\text{tr,ji}}-\sum_{j\in\mathcal{H}\backslash\{i\}}P^{k}_{\text{tr,ij}}\right),\\ L^{k}_{\text{h,i}}&=Q^{k}_{\text{c,i}}+\left(Q^{k}_{\text{dc,i}}-Q^{k}_{\text{ch,i}}\right)+\left(\sum_{j\in\mathcal{H}\backslash\{i\}}\zeta_{\text{h,ji}}\cdot Q^{k}_{\text{tr,ji}}-\sum_{j\in\mathcal{H}\backslash\{i\}}Q^{k}_{\text{tr,ij}}\right).\end{split}\tag{14}\]
Furthermore, additional constraints are imposed to limit the trade between the hubs, in the form of line limits \(\kappa_{\text{e,ij}}\) and \(\kappa_{\text{h,ij}}\) for the electrical and thermal transfer, respectively:
\[\begin{split} P^{k}_{\text{tr,ij}},\,P^{k}_{\text{tr,ji}}&\leq\kappa_{\text{e,ij}}\;,\\ Q^{k}_{\text{tr,ij}},\,Q^{k}_{\text{tr,ji}}&\leq\kappa_{\text{h,ij}}\;.\end{split} \tag{15}\]
If two hubs in the network are not connected to one another or cannot exchange energy between them, then the corresponding line limit is set to 0, which indirectly allows us to represent the network topology in the optimization.
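The sketch below shows how the directed trade variables of Eqs. (14)-(15) can be declared with line limits and transfer efficiencies; the numerical values are placeholders and only the trade-related part of the model is shown.

```python
import gurobipy as gp

# Directed electrical and thermal trades between hubs with line limits; the transfer
# efficiencies penalise cyclic exchanges. All values are illustrative placeholders.

hubs = [0, 1, 2]
kappa_e, kappa_h = 250.0, 200.0     # line limits [kW]
zeta_e, zeta_h = 0.98, 0.95         # assumed transfer efficiencies

m = gp.Model("trades")
P_tr = m.addVars(hubs, hubs, lb=0.0, ub=kappa_e, name="P_tr")   # electrical, i -> j
Q_tr = m.addVars(hubs, hubs, lb=0.0, ub=kappa_h, name="Q_tr")   # thermal,   i -> j
for i in hubs:
    m.addConstr(P_tr[i, i] == 0)    # no self-trades
    m.addConstr(Q_tr[i, i] == 0)

def net_elec_import(i):
    """Net electrical import of hub i from its peers, as it appears in Eq. (14)."""
    return gp.quicksum(zeta_e * P_tr[j, i] for j in hubs if j != i) \
         - gp.quicksum(P_tr[i, j] for j in hubs if j != i)

m.update()
print(net_elec_import(0))           # linear expression added to hub 0's load balance
```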
In addition to the energy costs encoded in (13), the control objective also accounts for the fees collected by the network operator for using the grid infrastructure to exchange energy between the hubs; we assume that this cost is borne by the entity importing the energy. Let \(c^{k}_{\text{tr}}\) be a known per-unit tariff for using the grid and \(p_{\text{tr,ij}}=\{p^{k}_{\text{tr,ij}},\,p^{k+1}_{\text{tr,ij}},\,\cdots,\,p^{k +N-1}_{\text{tr,ij}}\}\) collect all the transfer variables over the horizon \(\mathcal{N}\) associated with the transfer between hub \(i\) and hub \(j\), with
\[p^{k}_{\text{tr,ij}}=\left[P^{k}_{\text{tr,ij}},P^{k}_{\text{tr,ji}},Q^{k}_{\text{tr,ij}},Q^{k}_{\text{tr,ji}}\right]^{\text{T}}\;.\]
The resulting cost function is the sum of the cost of all the hubs in the network:
\[J_{\text{con}}\left(p_{\text{c}},p_{\text{s}},p_{\text{tr}}\right)=\sum_{i\in\mathcal{H}}\underbrace{\sum_{k=0}^{N-1}\left[c^{k}_{\text{in,e}}\cdot P^{k}_{\text{in,i}}-c^{k}_{\text{out,e}}\cdot P^{k}_{\text{out,i}}+c^{k}_{\text{g}}\cdot F^{k}_{\text{g,i}}+\sum_{j\in\mathcal{H}\backslash\{i\}}c^{k}_{\text{tr}}\cdot P^{k}_{\text{tr,ji}}\right]}_{J_{\text{con,i}}\left(p_{\text{c,i}},\,p_{\text{s,i}},\,p_{\text{tr,i}}\right)}\;.\]
Overall, the economic dispatch problem can be compactly written as the following MILP:
\[\begin{split}\min_{p_{\text{c}},\,p_{\text{s}},\,p_{\text{tr}}}\;&J_{\text{con}}\left(p_{\text{c}},p_{\text{s}},p_{\text{tr}}\right)\\ \text{s.t.}\;&\text{Equations (1)--(10), (14), (15)}\;.\end{split}\tag{16}\]
At each iteration of the consensus ADMM algorithm, every hub solves a local optimization problem to update its local setpoints and its estimates of the bilateral trades. For hub \(i\), the optimization is given below.
\[\begin{split}\min_{p_{\mathrm{c,i}},\,p_{\mathrm{s,i}},\,\widehat{p}_{\mathrm{tr,ij}}^{\,i}}\;&J_{\mathrm{dist,i}}\left(p_{\mathrm{c,i}},p_{\mathrm{s,i}},\widehat{p}_{\mathrm{tr,ij}}^{\,i}\right)\\ \text{s.t.}\;&\text{Equations (1)--(10), (14), (15)}\;.\end{split}\tag{18}\]
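The consensus mechanism underlying the distributed scheme can be illustrated with the following toy sketch: each hub updates a local copy of a single traded quantity, the copies are averaged to form the consensus value, and dual variables are updated. The quadratic local objective is only a stand-in for the full hub dispatch problem (18), and the updates shown are the generic consensus ADMM steps rather than the exact form used here.

```python
import numpy as np

# Toy consensus ADMM for two hubs agreeing on one traded power. The local update is a
# quadratic proxy for the hub dispatch problem; all numbers are placeholders.

rho = 1.0                      # ADMM penalty parameter (assumed)
z = 0.0                        # consensus value of the traded power [kW]
lam = np.zeros(2)              # dual variables of the two hubs
x = np.zeros(2)                # each hub's local copy of the trade

def local_update(i, z, lam_i):
    # Minimise 0.5*(x - preferred)^2 + lam_i*x + 0.5*rho*(x - z)^2 in closed form;
    # in the real scheme this is the hub's local MILP (18) with the consensus penalty.
    preferred = [80.0, 40.0][i]
    return (rho * z - lam_i + preferred) / (1.0 + rho)

for it in range(50):
    x = np.array([local_update(i, z, lam[i]) for i in range(2)])   # parallel local solves
    z = x.mean()                                                   # consensus (averaging) step
    lam = lam + rho * (x - z)                                      # dual update
print(f"agreed trade after 50 iterations: {z:.2f} kW")
```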
for the future predictions, gives an approximate prediction that does not account for the disturbances within the widening time steps. However, since these disturbances would occur in the future, EDS ensures that their effect on the current time step decays exponentially. By having a high resolution for the first few hours that gradually coarsens, we ensure that perturbations near the current time are minimized and the near-term information is captured accurately. As the horizon moves forward, the uncertainties increase due to the lower prediction and control resolution, but these differences have a much smaller effect on the optimal solution at the current time step. This strategy also exploits the receding-horizon nature of MPC, as only the first control action is implemented on the actual system. Any suboptimality in the future control actions will not impact the overall result as long as the first control action is unchanged.
In this work, multi-horizon MPC (MH-MPC) strategy is used with both CMPC (abbreviated as MH-CMPC) and with DMPC (abbreviated as MH-DMPC) in order to understand the effectiveness and convergence properties of the algorithms with this approach. We use an exponential time coarsening, as illustrated in Fig. 4.
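The construction of such a grid is straightforward. The sketch below builds a coarsening grid from a list of (resolution, segment length) pairs; the exact per-segment schedule of Table 3 is not repeated here, so the segment lengths used in the example are assumptions, chosen only so that the resulting step count matches the \(N=30\) reported in Table 4 for a \(48\,\mathrm{h}\) horizon.

```python
# Illustrative construction of a multi-horizon time grid with exponentially
# coarsening resolution.  The concrete schedule (Table 3) is not reproduced
# in this excerpt, so the segment lengths below are assumptions.

def multi_horizon_grid(segments):
    """segments: list of (resolution_minutes, segment_length_hours) pairs,
    ordered from the current time outwards.  Returns the step size (in
    minutes) of every optimization interval over the full horizon."""
    steps = []
    for res_min, seg_hours in segments:
        n_steps = int(seg_hours * 60 / res_min)   # N_{h,r} steps at resolution r
        steps.extend([res_min] * n_steps)
    return steps

# Assumed schedule: 15 min for the first 2 h, 30 min for the next 4 h,
# 1 h for the next 6 h, then 3 h and 6 h segments further out.
grid = multi_horizon_grid([(15, 2), (30, 4), (60, 6), (180, 12), (360, 24)])
print(len(grid), sum(grid) / 60)   # -> 30 steps covering a 48 h horizon
```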
It is crucial to analyse the recursive feasibility of the multi-horizon MPC scheme. In the case of the energy hub system, recursive feasibility is ensured by two aspects. First, the electricity grid can be viewed as an implicit slack variable for the electrical load balance equations (12) and (14): it ensures that the equality constraint is met irrespective of the local generation. This allows the other generation units to produce in accordance with their own constraints, and any excess or deficit can be compensated by the electrical grid. Second, for the thermal load, in the absence of a district heating grid, the same effect is achieved in practice by making the thermal load balance a soft constraint: a slack variable is added to the equation and penalised heavily in the cost function, which ensures that the equality is not violated unless needed. In practice, this slack variable corresponds either to additional heat that has to be discarded or to a heat deficit that may cause a comfort violation. Hence, these two mechanisms render the overall problem feasible at all time steps.
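As a minimal illustration of the soft-constraint mechanism, the sketch below sets up a single-time-step dispatch with a deliberately undersized boiler; the slack variable absorbs the deficit only because the balance cannot be met otherwise. The capacities, prices and variable names are illustrative assumptions, and `scipy.optimize.linprog` is used here for self-containedness rather than the Gurobi models of the case study.

```python
# Soft thermal-balance constraint: a slack variable is added to the heat
# balance and penalised heavily, so the equality is only violated when the
# demand cannot be met otherwise.  All numbers are illustrative.
import numpy as np
from scipy.optimize import linprog

demand = 120.0            # kW thermal demand at one time step
capacity = 100.0          # kW boiler capacity (deliberately insufficient)
fuel_cost = 0.115         # CHF/kWh
penalty = 1e4             # large penalty on the slack

# decision vector: [q_boiler, slack]; slack >= 0 models a heat deficit
c = np.array([fuel_cost, penalty])
A_eq = np.array([[1.0, 1.0]])        # q_boiler + slack = demand
b_eq = np.array([demand])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, capacity), (0, None)])
print(res.x)   # -> boiler at capacity (100 kW), slack picks up the 20 kW deficit
```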
## 4 Case Study and Numerical Simulations
In this section, a case study is presented in which the proposed distributed MPC and distributed multi-horizon MPC schemes are applied to a three-hub system. The performance of the distributed approach is compared to that of the central and the decentralised MPC schemes. The solver Gurobi is used to perform the numerical simulations in Python.
### Problem Configuration
In order to establish the efficacy of the proposed method, a sample three-hub system is used. Each hub has a daily profile of its load demand and of the energy prices received from the DSO. In this case study, a perfect forecast is assumed, in which no additional disturbances occur. The electrical energy prices are time-varying based on the peak load hours, whereas the gas prices are considered to be fixed throughout the day. The three energy hubs are interconnected as shown in Fig. 1. The energy hubs are connected by a local thermal distribution grid with limited transfer capacity. The limit \(\kappa_{\mathrm{h,ij}}\) is set to \(200\,\mathrm{kW}\) for all connections between any hub \(i\) and \(j\). The energy that can be exchanged using the electrical grid is also limited, and there is a fixed charge for using the grid. The maximum electricity that can be transferred between any two hubs \(i\) and \(j\) is \(\kappa_{\mathrm{e,ij}}\), which is set to \(250\,\mathrm{kW}\). Table 1 details the technologies present within the three hubs and the corresponding capacity and constraint limits for all the technologies. The tariffs for importing electricity and gas from the grid and for utilizing the grid are specified in Table 2 (the tariffs are based on the Swiss electricity prices). The system is simulated for a simulation horizon \(T_{\mathrm{sim}}=30\) days. For specific cases, the system is also simulated for the complete year to see how the costs evolve over time. Figure 5 shows the ambient temperature, solar radiation and electricity prices for the simulation horizon, \(T_{\mathrm{sim}}\). Finally, in order to understand how the method scales with the number of hubs, the system is also extended up to 15 hubs.
The generation costs are minimized for different prediction horizons \(T_{\mathrm{pred}}\) to understand the effect of the prediction horizon on the cost. Prediction horizons of \(12\,\mathrm{h}\), \(24\,\mathrm{h}\), \(36\,\mathrm{h}\), \(48\,\mathrm{h}\), and \(72\,\mathrm{h}\) are considered. The sampling time of the plant, i.e., the time interval at which measurements are received from the hubs, is \(T_{\mathrm{plant}}=15\,\mathrm{min}\). Various sampling times of the MPC
Figure 4: Comparison of Classical MPC and Multi-horizon MPC. The figure illustrates how the complete prediction horizon \(T_{\mathrm{pred}}\) is split into \(N\) discrete time steps in each case and how the time steps correlate to each other. As \(T_{\mathrm{res}}\) changes over time in the multi-horizon MPC, the number of time steps at each resolution \(r\) and the total time covered in each horizon are \(N_{\mathrm{h,r}}\) and \(T_{\mathrm{h,r}}\), respectively. \(N_{\mathrm{h,1}}\) and \(T_{\mathrm{h,1}}\) for the first time resolution are illustrated here as an example [42].
controller \(T_{\mathrm{res}}\) are used: \(15\,\mathrm{min}\), \(30\,\mathrm{min}\), and \(60\,\mathrm{min}\), to understand the effect of time coarsening and the potential benefit of a higher resolution on the cost and dispatch. The central and distributed control schemes are also implemented with multi-horizon MPC. The prediction horizon for MH-CMPC and MH-DMPC is \(48\,\mathrm{h}\) and \(72\,\mathrm{h}\), and the resolution of the MPC time steps ranges from \(15\,\mathrm{min}\) to \(6\,\mathrm{h}\) over the complete horizon, as shown in Table 3, with a different amount of time covered at each \(T_{\mathrm{res}}\). The number of time steps at each resolution \(r\) and the total time covered in each horizon are \(N_{\mathrm{h,r}}\) and \(T_{\mathrm{h,r}}\), respectively.
Table 4 shows how the number of optimization time steps scales with the resolution for classical MPC as compared to MH-MPC, for total time horizons of \(48\,\mathrm{h}\) and \(72\,\mathrm{h}\). It illustrates how significantly the number of time steps grows if a complete horizon with \(T_{\mathrm{res}}=15\,\mathrm{min}\) is used, as opposed to a multi-horizon strategy that also has a finest resolution of \(15\,\mathrm{min}\) but a total number of time steps \(N\) corresponding to a standard MPC resolution of \(T_{\mathrm{res}}=60\,\mathrm{min}\). Finally, the maximum number of iterations for the distributed algorithm, \(h_{\mathrm{max}}\), is set to \(150\).
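The step counts for the uniform grids in Table 4 follow directly from \(N=T_{\mathrm{pred}}/T_{\mathrm{res}}\); a few lines of Python reproduce them (the multi-horizon values are quoted from the table):

```python
# Number of optimization time steps N for a uniform grid, as in Table 4.
T_pred_hours = [12, 24, 36, 48, 72]
for T_pred in T_pred_hours:
    uniform = {res: T_pred * 60 // res for res in (15, 30, 60)}
    print(T_pred, uniform)          # e.g. 48 h -> {15: 192, 30: 96, 60: 48}
# The multi-horizon grids use only N = 30 (48 h) and N = 34 (72 h) steps.
```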
## 5 Results and Discussion
### Comparison of DecMPC, CMPC and DMPC
Initially, the system is simulated using the decMPC, CMPC and DMPC control strategies with each of the three controllers'
\begin{table}
\begin{tabular}{l l l} \hline \multicolumn{3}{c}{**Hub 1**} \\ \hline \multicolumn{3}{c}{Parameter} & Value \\ \hline PV & \(\eta_{\mathrm{pv},1}\), \(a_{\mathrm{pv},1}\) & \(0.15\), \(8400\,\mathrm{m}^{2}\) \\ & \([P_{\mathrm{pv},1}^{\mathrm{min}},P_{\mathrm{pv},1}^{\mathrm{max}}]\) & [0, \(2500\)] kW \\ ST & \(\eta_{\mathrm{st},1}\), \(a_{\mathrm{st},1}\), \(a_{\mathrm{st},1}^{\mathrm{q}}\) & \(0.15\), \(8400\,\mathrm{m}^{2}\), \(0.38\), \(0.62\) \\ & \([P_{\mathrm{st},1}^{\mathrm{min}},P_{\mathrm{st},1}^{\mathrm{max}}]\) & [0, \(2500\)] kW \\ CHP & \(\eta_{\mathrm{chp},1}\), \([t_{\mathrm{chp},1}^{\mathrm{q}},t_{\mathrm{chp},1}^{\mathrm{d}}]\) & 0.364, [16,4] h \\ & \([P_{\mathrm{A},1},P_{\mathrm{B},1},P_{\mathrm{C},1},P_{\mathrm{D},1}]\) & [380, \(315\), \(745\), \(800\)] kW \\ & \([q_{\mathrm{A},1},q_{\mathrm{B},1},q_{\mathrm{C},1},q_{\mathrm{D},1}]\) & [0, \(515\), \(1220\), 0] kW \\ & \([r_{\mathrm{chp},i}^{\mathrm{d}},t_{\mathrm{chp},j}^{\mathrm{d}}]\) & [400,400] kW \\ mCHP & \(\eta_{\mathrm{chp},1}\), \(\sigma_{\mathrm{chp},1}^{\mathrm{max}}\), \(\alpha_{\mathrm{chp},1}^{\mathrm{q}}\) & 0.35, \(0.38\), \(0.62\) \\ & \([P_{\mathrm{chp},1}^{\mathrm{min}},P_{\mathrm{chp},1}^{\mathrm{max}}]\) & [0, \(240\)] kW \\ HP & COP, \([Q_{\mathrm{ph},1}^{\mathrm{min}},Q_{\mathrm{ph},1}^{\mathrm{max}}]\) & 4.5, [0, \(350\)] kW \\ GB & \([\eta_{\mathrm{gh},1}^{0.25},\eta_{\mathrm{gh},1}^{0.5},\eta_{\mathrm{gh},1}^{0.75},\eta_{\mathrm{gh},1}^{1}]\) & [0.59, \(0.83\), \(0.9\), \(0.82\)] \\ & \([Q_{\mathrm{gh},1}^{\mathrm{min}},Q_{\mathrm{gh},1}^{\mathrm{max}}]\) & [0, \(350\)] kW \\ ES & \(\eta_{\mathrm{es},1}\), \(\gamma_{\mathrm{es},1}\), \([E_{\mathrm{es},1}^{\mathrm{min}},E_{\mathrm{es},1}^{\mathrm{max}}]\) & 0.99, \(0.999\), [150, \(750\)] kWh \\ & \([P_{\mathrm{chp},1}^{\mathrm{min}},P_{\mathrm{chp},1}^{\mathrm{max}}]\) & [0,200] kW \\ TS & \(\eta_{\mathrm{bs},1}\), \(\gamma_{\mathrm{b},1}\), \([E_{\mathrm{ts},1}^{\mathrm{min}},E_{\mathrm{ts},1}^{\mathrm{max}}]\) & 0.95, \(0.992\), [300, \(12900\)] kWh \\ & \([Q_{\mathrm{ch}/\mathrm{k},1}^{\mathrm{min}},Q_{\mathrm{ch}/\mathrm{k},1}]\) & [0,3200] kW \\ \hline \multicolumn{3}{c}{**Hub 2**} \\ \hline \multicolumn{3}{c}{Parameter} & Value \\ \hline PV & \(\eta_{\mathrm{pv},2}\), \(a_{\mathrm{pv},2}\) & 0.15, \(3170\,\mathrm{m}^{2}\) \\ & \([P_{\mathrm{pv},2}^{\mathrm{min}},P_{\mathrm{pv},2}^{\mathrm{max}}]\) & [0,350] kW \\ HP & COP, \([Q_{\mathrm{ph},2}^{\mathrm{min}},Q_{\mathrm{hp},2}^{\mathrm{max}}]\) & 4.5, [0,350] kW \\ GB & \([\eta_{\mathrm{gb},2}^{0.25},\eta_{\mathrm{gb},2}^{0.5},\eta_{\mathrm{gb},2}^{0.75},\eta_{\mathrm{gb},2}^{1}]\) & [0.59, \(0.83\), \(0.9\), \(0.82\)] \\ & \([Q_{\mathrm{gb},2}^{\mathrm{min}},Q_{\mathrm{gb},2}^{\mathrm{max}}]\) & [0, \(50\)] kW \\ TS & \(\eta_{\mathrm{bs},2}\), \(\gamma_{\mathrm{s},1}\), \([E_{\mathrm{ts},2}^{\mathrm{min}},E_{\mathrm{ts},2}^{\mathrm{max}}]\) & 0.95, 0.992, [0.36, \(1.62\)] kWh \\ & \([Q_{\mathrm{ch}/\mathrm{k},2}^{\mathrm{min}},Q_{\mathrm{ch}/\mathrm{k},2}]\) & [0,0.3] kW \\ \hline \multicolumn{3}{c}{**Hub 3**} \\ \hline \multicolumn{3}{c}{Parameter} & Value \\ \hline PV & \(\eta_{\mathrm{pv},3}\), \(a_{\mathrm{pv},3}\) & 0.15, \(380\,\mathrm{m}^{2}\) \\ & \([P_{\mathrm{pv},3}^{\mathrm{min}},P_{\mathrm{pv},3}^{\mathrm{max}}]\) & [0, \(80\)] kW \\ HP & COP, \([Q_{\mathrm{hp},3}^{\mathrm{min}},Q_{\mathrm{hp},3}^{\mathrm{max}}]\) & 4.5, [0, \(50\)] kW \\ \hline
\end{tabular}
\end{table}
Table 1: Parameters and capacities for energy hubs used in the numerical study.
\begin{table}
\begin{tabular}{l c c} \hline \multicolumn{1}{c}{Tariff} & Parameter & Cost(CHF/kW) \\ \hline Electricity - peak/offpeak & \(c_{\mathrm{in},e}\) & 0.27/0.22 \\ Electricity - feed-in & \(c_{\mathrm{out},e}\) & 0.12 \\ Gas & \(c_{\mathrm{g}}\) & 0.115 \\ Electricity grid & \(c_{\mathrm{tr}}\) & 0.02 \\ \hline \end{tabular}
\end{table}
Table 2: Tariffs for electricity and gas utility.
Figure 5: Inputs for the simulation for a period of \(7\) days. (a) Ambient temperature (b) Solar radiation (c) Electricity tariff \(c_{\mathrm{in},e}\).
\(T_{\text{res}}\) and for different values of the prediction horizon, \(T_{\text{pred}}\). The controllers are compared based on the total operational cost, i.e., the fuel, electricity and grid utility cost incurred by applying the first control input at each time step, and on the total computation time required by the controllers. When the sampling time of the controller is larger than that of the plant, the system may deviate from the forecasted demand between controller samples, and this deviation may cause a mismatch between the actual load and the load that the energy hub planned to supply. For the electrical load, this deviation is compensated using the electrical grid, by buying additional electricity from the grid when the demand is greater than the originally forecasted demand and by feeding into the grid when the electricity production exceeds the true demand. In the absence of a global thermal grid, when the thermal energy produced is more than the requested demand, the excess is discarded as waste heat, and the mismatch can be quantified as a cost by computing the cost saving that could have been achieved had this excess not been produced. Conversely, when there is a deficit in thermal energy, i.e., the heat produced is less than the demand, it results in a thermal comfort violation.
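A minimal sketch of this settlement logic is given below; the import and feed-in tariffs are taken from Table 2, while the function name and the planned/actual quantities are illustrative assumptions.

```python
# Settlement of forecast/realisation mismatch between controller samples:
# electrical mismatch is traded with the grid, thermal surplus is discarded,
# and a thermal deficit is counted as a comfort violation.

def settle_mismatch(e_planned, e_actual, q_planned, q_actual,
                    c_buy=0.27, c_feed_in=0.12):
    """Return (electricity settlement cost, wasted heat, heat deficit)."""
    e_dev = e_actual - e_planned
    # buy the shortfall at the import tariff; feed surplus back at the feed-in tariff
    e_cost = c_buy * e_dev if e_dev > 0 else c_feed_in * e_dev
    q_dev = q_actual - q_planned
    wasted_heat = max(-q_dev, 0.0)     # production exceeded the true demand
    heat_deficit = max(q_dev, 0.0)     # demand exceeded the production
    return e_cost, wasted_heat, heat_deficit

# Example: 30 kWh of extra electricity bought, 20 kWh of heat discarded.
print(settle_mismatch(e_planned=500, e_actual=530, q_planned=300, q_actual=280))
```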
The resulting total costs and computational times over the complete simulation horizon for each of the control strategies for different \(T_{\text{res}}\) and \(T_{\text{pred}}\) are presented in Fig. 6. The figure shows that, irrespective of \(T_{\text{res}}\) and \(T_{\text{pred}}\), the central and distributed controllers outperform the decentralised controller and result in a lower cost. This is because, when the hubs are operated in a coordinated manner, the controller is able to utilize the cheaper sources of energy and the storage more efficiently and to trade between the hubs, resulting in a lower import from the electricity grid. The distributed controller performs similarly to the central controller, and the solution obtained using the consensus ADMM algorithm converges to the central MPC solution. The cost of decentralisation of the DMPC can be evaluated using the optimality gap, which is defined as the ratio of the difference between the distributed solution and the optimal central solution to the solution of the central controller. In this case, the maximum optimality gap is \(0.42\,\%\), attained for \(T_{\text{res}}=60\,\text{min}\) with \(T_{\text{pred}}=12\,\text{h}\), whereas the minimum is \(0.07\,\%\), attained for \(T_{\text{res}}=15\,\text{min}\) with \(T_{\text{pred}}=48\,\text{h}\), and the average optimality gap is \(0.2\,\%\).
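For reference, the optimality gap quoted above is simply the relative difference between the two costs; a one-line helper with illustrative numbers makes the definition concrete:

```python
# The optimality gap used in the text, as a helper (numbers are illustrative).
def optimality_gap(cost_distributed, cost_central):
    """Relative gap of the distributed solution w.r.t. the central optimum."""
    return (cost_distributed - cost_central) / cost_central

print(f"{optimality_gap(1004.2, 1000.0):.2%}")   # -> 0.42%
```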
Figure 6 shows the total computational time for each of the controller variations. The decentralised controller has the smallest computational time, since the controllers are isolated and the resulting optimization problems are much smaller than the complete central problem. The central controller combines the objectives, constraints and decision variables of all the hubs, resulting in a much larger optimization problem to be solved at each time step. Since the solution time does not scale linearly with the number of time steps or the number of variables, the resulting time is much larger than the sum of the times taken by the decentralised controllers for the three hubs. The distributed controllers solve optimization problems that are similar in scale to those of the decentralised controller. However, due to the iterative nature of the consensus algorithm, the optimization is solved multiple times at each time step until convergence is reached, resulting in the distributed algorithm having the largest total computational time.
Furthermore, the figure also shows how the optimal solution and the total computation time are impacted by the prediction horizon and the controller time resolution. The total system cost for the simulation horizon decreases as the prediction horizon of the MPC controller increases, and eventually the minimum cost achievable by each controller saturates above a prediction horizon of \(48\,\text{h}\). This is because, as the prediction horizon increases, the MPC controller is able to make better decisions by anticipating future variations, but knowledge of more than \(48\,\text{h}\) ahead does not have a significant impact on the overall result in this case. Comparing the corresponding values in Fig. 6(a-c), it can be seen that the sampling time of the controller has a significant impact on the cost, with the lowest cost associated with the highest resolution. With the increase in \(T_{\text{pred}}\), the computational time for the decentralised and central controllers rises for all \(T_{\text{res}}\) and does not scale linearly with the horizon. The figure also shows that the convergence time for the distributed controller using the consensus ADMM algorithm scales exponentially with the size of the decision vector. The computation time also increases for a higher controller resolution since, for the same prediction horizon \(T_{\text{pred}}\), a controller with \(T_{\text{res}}=15\,\text{min}\) has \(4\) times the number of time steps \(N\) of one with \(T_{\text{res}}=60\,\text{min}\) and also has to solve the larger optimization \(4\) times within the same time period. This highlights the main challenge of increasing the horizon and of using a high resolution: both result in a much better cost optimum, but they cause the number of optimization steps and decision variables, and consequently the computation time, to rise exponentially, justifying the need for a multi-horizon strategy.
Figure 7 depicts the total energy transferred between the three hubs over the complete simulation. Electrical energy is traded mostly from Hub 1 to the other two hubs, since it is the largest of the three hubs and has the highest production capacities (Fig. 7(a)). This transfer is also made possible by multi-generation units such as the CHP, which co-generate electricity and heat at a much lower cost than purchasing electricity from the grid. When the heat demand is high, both electricity and heat are produced, and in the case of central operation, instead of exporting this excess electricity to the grid, it is transferred to neighbouring hubs to satisfy their demands. On the other hand, thermal energy is imported from the other hubs into Hub 1, as seen in Fig. 7(b). During time periods of high heating demand, the transfer of electricity allows Hubs 2 and 3 to produce more
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(T_{\text{pred}}\,\backslash\,T_{\text{res}}\) & 15 min & 30 min & 1 h & Multi-horizon \\ \hline
12 h & 48 & 24 & 12 & - \\
24 h & 96 & 48 & 24 & - \\
36 h & 144 & 72 & 36 & - \\
48 h & 192 & 96 & 48 & 30 \\
72 h & 288 & 144 & 72 & 34 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Number of optimization time steps (\(N\)) for all the simulation scenarios with different \(T_{\text{pred}}\) and \(T_{\text{res}}\).
heat at a lower price, by running their heat pumps at full capacity, than if the heat were produced locally at Hub 1 using the more expensive gas-fired boiler. This coordinated, synergetic operation is what results in the lower cost of the central and distributed operation as opposed to the decentralised approach.
Comparing the results obtained from DMPC, CMPC and DecMPC demonstrates that CMPC achieves the lowest cost compared to the decentralised approach; however, it results in a larger optimization problem and requires the load and capacity information from all hubs. DMPC is able to achieve a cost close to the optimal CMPC cost in a private manner using the iterative consensus ADMM algorithm. While its optimality gap is negligible, it requires a much larger computation time due to the convergence of the iterative algorithm.
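The iterative structure behind DMPC can be illustrated with a generic consensus ADMM loop on a toy problem: each agent solves a small local problem in parallel, the local copies are averaged, and the dual variables are updated until agreement is reached. This is only a sketch of the general algorithm with assumed quadratic local objectives; it is not the paper's exact formulation of the coupled hub subproblems.

```python
# Generic scalar consensus ADMM with quadratic local objectives (assumed data).
import numpy as np

a = np.array([1.0, 2.0, 4.0])      # local curvatures (assumed)
b = np.array([3.0, -1.0, 2.0])     # local minimisers (assumed)
rho, h_max = 1.0, 150              # penalty and max iterations (h_max as in the text)

x = np.zeros(3); u = np.zeros(3); z = 0.0
for h in range(h_max):
    x = (a * b + rho * (z - u)) / (a + rho)   # local solves, in parallel per agent
    z = np.mean(x + u)                        # consensus / averaging step
    u = u + x - z                             # dual (price) update
    if np.max(np.abs(x - z)) < 1e-6:          # small primal residual -> consensus
        break
print(h, z)    # iterations used; z tends to the analytic optimum sum(a*b)/sum(a)
```

In practice the stopping test would also include a dual residual and, as in the scaled-up experiments later in this section, a wall-clock limit per time step.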
### MH-MPC - Comparison of classical MPC to MH-MPC
In this section, the simulation results of the CMPC and the DMPC are compared to the MH-CMPC and MH-DMPC for the different \(T_{\text{res}}\) and \(T_{\text{pred}}\) combinations over the complete simulation.
Figure 8 compares the results of the MH-CMPC and MH-DMPC to the CMPC and DMPC results for different \(T_{\text{res}}\) and \(T_{\text{pred}}\). The cost of the MH-CMPC is similar to the cost of the CMPC with the same prediction horizon and a controller \(T_{\text{res}}\) of \(15\,\text{min}\), despite having just a fraction of the decision variables in comparison to CMPC. Similarly, the MH-DMPC also performs close to the DMPC approach with a \(T_{\text{res}}\) of \(15\,\text{min}\). The optimality gap of the MH-DMPC with respect to the MH-CMPC is \(0.31\,\%\), which is similar to the gap obtained using classical MPC. Additionally, the optimality gap of the multi-horizon approaches, MH-CMPC and MH-DMPC, with respect to the classical CMPC with \(T_{\text{res}}=15\,\text{min}\) and \(T_{\text{pred}}=72\,\text{h}\), which results in the lowest system cost over the complete simulation
Figure 6: Comparison of cost and computation time for decMPC, CMPC and DMPC control strategies under varying \(T_{\text{res}}\) and \(T_{\text{pred}}\), evaluated for \(T_{\text{sim}}=30\) days. (a), (b) and (c) show the results with \(T_{\text{res}}\) of \(15\,\text{min}\), \(30\,\text{min}\), and \(60\,\text{min}\), respectively.
Figure 7: Total (a) Electrical energy (b) Thermal energy transferred between the three hubs over a period of 30 days.
horizon, is \(0.55\,\%\) and \(0.21\,\%\), respectively. This illustrates that the performance of the MH-MPC is comparable to that of the CMPC with a very high resolution, despite having just a fraction of the decision variables of the original problem. That is because the prediction horizon of \(T_{\text{pred}}=72\,\text{h}\) allows MH-MPC to look far ahead into the future and find solutions compatible with the future conditions, while the fine time resolution of \(15\,\text{min}\) in the near future makes it possible to avoid any near-term load mismatch. Figure 8(b) shows the total computational time taken by the different control strategies. Despite having a cost comparable to the CMPC approach with \(T_{\text{res}}=15\,\text{min}\), the MH-CMPC approach requires a much smaller time to compute a solution, and its computation time is indistinguishable from that of the CMPC with \(T_{\text{res}}=60\,\text{min}\) and the same \(T_{\text{pred}}\). As with classical MPC, the MH-DMPC takes a much larger time than the MH-CMPC, since it has to solve all the subproblems repeatedly until consensus is reached. However, the total computation time of the MH-DMPC approach is low in comparison to the other distributed controllers and matches that of the DMPC approach with the same \(T_{\text{pred}}\) and \(T_{\text{res}}=60\,\text{min}\). This time is negligible when compared to the DMPC with \(T_{\text{res}}=15\,\text{min}\), despite achieving the same cost performance. While the MH-CMPC and MH-DMPC achieve an optimal cost similar to the classical CMPC and DMPC with a fine \(T_{\text{res}}=15\,\text{min}\), the computation time for MH-CMPC is approximately 60 times smaller than for CMPC, and that for MH-DMPC is approximately 140 times smaller than for DMPC.
While the total time taken by the MH-DMPC matches that of the classical DMPC at a higher \(T_{\text{res}}\), it is important to note that the MH-DMPC solves 4 times as many consensus ADMM problems, since the algorithm has to be applied every 15 min as opposed to once every hour. Hence, the time taken at a single time step to reach convergence and compute the MH-DMPC optimum is smaller than the time taken for DMPC.
These results show that MH-MPC balances the trade-off between performance and computation time: it achieves a low cost similar to that of an MPC controller with a fine time resolution and a long prediction horizon, while maintaining a low overall computation time that matches that of an MPC controller with a coarser time resolution.
In Fig. 9, the performance of the MH-DMPC controller is compared to the standard DMPC with a \(T_{\text{res}}\) of 60 min and the same \(T_{\text{pred}}\) of 48 h for both controllers, for a total \(T_{\text{sim}}\) of 1 year. The results show that the MH-DMPC controller consistently outperforms the standard DMPC throughout the year, resulting in a strictly increasing cumulative cost difference for every month of the year. The total cost difference amounts to more than 115 thousand Swiss francs in a single year under the current simulation setup, which is more than twice the average monthly cost of 57 thousand Swiss francs and higher than the actual monthly cost in 9 months of the year (Feb-Oct). While the cost difference is just a fraction of the overall cost each month, and lower in the winter months compared to the summer months, the continued consistency of the MH-DMPC controller has a significant impact over the span of just 1 year for the complete energy system. Additionally, as shown earlier, the MH-DMPC controller also results in better utilization of renewable resources such as PV and fewer energy imports from the electricity grid, therefore resulting in a higher efficiency and lower emissions for the whole system.
Finally, to understand the scalability of this approach, the system is simulated for a larger set of hubs for a period of 1 week, from 1 Dec. 2018 to 7 Dec. 2018. The results of the simulation from 3 up to 18 hubs for the multi-horizon controllers (MH-CMPC and MH-DMPC) and the classical MPC (CMPC and DMPC with \(T_{\text{res}}=60\,\text{min}\) and \(T_{\text{pred}}=72\,\text{h}\)) are shown in Fig. 10. In order to ensure that the distributed approaches converge within the controller time resolution even when the number of hubs in the network is large, the controller convergence criteria are modified to set a maximum time limit of 10 min for each time step, in addition to the maximum number of iterations. Fig. 10 depicts how the total cost and the required computation time of the system scale with the network size. The costs of the distributed approaches are consistent with those of the central control methods, for both the classical and the MH-MPC, even as the number of hubs rises. The figure verifies that the performance of the MH-MPC approach surpasses that of the classical MPC for all configurations. In Fig. 10(b), it can be seen that while the total computation times for the central MPC approaches are much smaller than for the distributed approaches, this time rises exponentially as the network
Figure 8: Performance of Multi-horizon MPC controller compared to the classical MPC controller with different \(T_{\text{res}}\) and \(T_{\text{pred}}\) evaluated for \(T_{\text{sim}}=30\) days. (a) Cost and (b) Total computation time of MH-MPC and standard MPC.
grows. This is because both approaches solve a single optimization problem that becomes larger as the number of hubs increases, and the solution time grows exponentially. For the distributed approaches, the total time initially increases but settles as the number of hubs increases further. This is because the computation at the hub level is done in parallel throughout the network, so adding additional hubs does not impact the total time. Furthermore, the optimization problem of each hub remains mostly unchanged with the increase in the network size. This is further illustrated in Fig. 11, where a comparison between the empirical distributions of the number of iterations required for the MH-DMPC algorithm to converge over the complete simulation for 3, 9, 12 and 18 hubs is shown. The distributions clearly verify that the number of iterations required by the distributed algorithm remains consistent irrespective of the number of hubs in the network. The mean and median number of iterations for all the simulations range from 28 to 33, and the distribution is concentrated below 60, with less than 2% of the simulations requiring more than 60 iterations. This amounts to fewer than 13 time steps, i.e., less than 3 non-consecutive hours over a week.
## 6 Conclusion
The advent of multi-energy systems is transforming future energy systems into networks of multi-energy hubs that can produce electricity as well as trade with their peers within the network. In this paper, we present a distributed MPC approach for the coordinated operation and control of multiple energy hubs in a network. The distributed consensus ADMM algorithm finds an optimal solution in a privacy-preserving manner, with limited information shared between the hubs. Furthermore, a novel multi-horizon MPC scheme is employed that allows the controller to have a longer prediction horizon and a high time resolution without a detrimental effect on the computation time. The proposed approach was tested on a simulated energy hub network. The results highlight the efficiency of the distributed approach as well as the benefit of using multi-horizon MPC in terms of total cost, required computational time, and coordination of the hubs. Future work aims to experimentally validate the proposed method on a real energy hub network, to extend the method to establish fair prices for the energy traded within the network, and to reduce the need for modelling each energy hub by using data-driven methods.
## Acknowledgement
This research is supported by the SNSF through NCCR Automation (Grant Number 180545).
|
2307.04912
|
A Generalization of Arithmetic Derivative to $p$-adic Fields and Number
Fields
|
The arithmetic derivative is a function from the natural numbers to itself
that sends all prime numbers to $1$ and satisfies the Leibniz rule. The
arithmetic partial derivative with respect to a prime $p$ is the $p$-th
component of the arithmetic derivative. In this paper, we generalize the
arithmetic partial derivative to $p$-adic fields (the local case) and the
arithmetic derivative to number fields (the global case). We study the
dynamical system of the $p$-adic valuation of the iterations of the arithmetic
partial derivatives. We also prove that for every integer $n\geq 0$, there are
infinitely many elements with exactly $n$ anti-partial derivatives. In the end,
we study the $p$-adic continuity of arithmetic derivatives.
|
Brad Emmons, Xiao Xiao
|
2023-07-10T21:34:51Z
|
http://arxiv.org/abs/2307.04912v1
|
# A generalization of arithmetic derivative to \(p\)-adic fields and number fields
###### Abstract.
The arithmetic derivative is a function from the natural numbers to itself that sends all prime numbers to \(1\) and satisfies the Leibniz rule. The arithmetic partial derivative with respect to a prime \(p\) is the \(p\)-th component of the arithmetic derivative. In this paper, we generalize the arithmetic partial derivative to \(p\)-adic fields (the local case) and the arithmetic derivative to number fields (the global case). We study the dynamical system of the \(p\)-adic valuation of the iterations of the arithmetic partial derivatives. We also prove that for every integer \(n\geq 0\), there are infinitely many elements with exactly \(n\) anti-partial derivatives. In the end, we study the \(p\)-adic continuity of arithmetic derivatives.
Key words and phrases:Arithmetic derivative, arithmetic partial derivative, \(p\)-adic fields, number fields, \(p\)-adic continuity 2020 Mathematics Subject Classification: Primary: 11A25, Secondary: 11R04
## 1. Introduction
Let \(\mathbb{N}=\{0,1,2,\ldots\}\). The arithmetic derivative is a function \(D:\mathbb{N}\rightarrow\mathbb{N}\) that satisfies the following two properties: \(D(p)=1\) for all primes \(p\), and the Leibniz rule, \(D(xy)=D(x)y+xD(y)\) for all \(x,y\in\mathbb{N}\). One of the questions on the 1950 Putnam competition [3] asked the contestants to predict the limit of the sequence \(63,D(63),D^{2}(63),\ldots\). Many sources cite this as the origin of the arithmetic derivative. However we were able to find a paper by Shelly [12] published in 1911 which introduced this topic as well as some of the basic properties and generalizations of this function.
One can ask a more general question: if we fix \(x\in\mathbb{N}\), what is the limit of the sequence \(x,D(x),D^{2}(x),\ldots\)? This is not easy to predict in general. Ufnarovski and Ahlander made the following conjecture.
**Conjecture 1.1**.: _[_13_, Conjecture 2]_ _For every \(x\in\mathbb{N}\), exactly one of the following could happen: either \(D^{i}(x)=0\) or \(p^{p}\) for some prime \(p\) for sufficiently large \(i\), or \(\lim_{i\rightarrow+\infty}D^{i}(x)=+\infty\)._
We note that Shelly [12] alluded to this conjecture and Barbeau [1] made a similar conjecture. One corollary of this conjecture is that if the sequence \(x,D(x),D^{2}(x),\ldots\) is eventually periodic, then the period is \(1\). That
is \(D^{k}(x)=p^{p}\) for some prime \(p\) when \(k\gg 0\). Given \(y>1\), it is not hard to show [13, Corollary 3] that there are finitely many (possibly \(0\)) \(x\) such that \(D(x)=y\). We call \(x\) an anti-derivative of \(y\). Ufnarovski and Ahlander made the following conjecture.
**Conjecture 1.2**.: _[_13_, Conjecture 8]_ _For every integer \(n\geq 0\) there are infinitely many \(x>0\) such that \(x\) has exactly \(n\) anti-derivatives._
Let \(\nu_{p}\) be the \(p\)-adic valuation. One can show that \(D(0)=0\) and for \(x>0\), \(D\) has the following explicit formula
\[D(x)=x\sum_{p}\frac{\nu_{p}(x)}{p}.\]
This is a finite sum as there are only finitely many \(p\) such that \(\nu_{p}(x)\neq 0\). It is natural to generalize \(D\) to \(\mathbb{Q}\) as \(\nu_{p}\) is well-defined over \(\mathbb{Q}\). We will use \(D\) to denote the arithmetic derivative defined on \(\mathbb{Q}\) in the introduction section. This generalization allows positive integers to have more anti-derivatives than they have in \(\mathbb{N}\). For example, \(2\) does not have an anti-derivative in \(\mathbb{N}\) but \(D(-21/16)=2\). The only anti-derivatives of \(1\) in \(\mathbb{N}\) are the prime numbers but \(D(-5/4)=1\). Another direction to generalize \(D\) is, instead of differentiating with respect to all prime numbers, we only differentiate with respect to a set of primes. More specifically, let \(T\subset\mathbb{P}\) be a nonempty set of rational primes. For \(0\neq x\in\mathbb{Q}\), we define
\[D_{\mathbb{Q},T}(x)=x\sum_{p\in T}\frac{\nu_{p}(x)}{p}.\]
This is called the arithmetic subderivative over \(\mathbb{Q}\) with respect to \(T\), first introduced by Haukkanen, Merikoski, and Tossavainen [5]. If \(T=\mathbb{P}\), then \(D_{\mathbb{Q},T}=D\). If \(T=\{p\}\) contains a single prime number, then \(D_{\mathbb{Q},T}=D_{\mathbb{Q},p}\) is called the arithmetic partial derivative with respect to \(p\).
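For concreteness, the displayed formulas translate directly into a few lines of Python. The sketch below uses sympy for the prime factorizations, the function names are of our choosing, and it reproduces the values \(D(-21/16)=2\) and \(D(-5/4)=1\) quoted above.

```python
# Arithmetic derivative and subderivative on Q, from the explicit formulas.
from fractions import Fraction
from sympy import factorint

def nu(p: int, x: Fraction) -> int:
    """p-adic valuation of a nonzero rational x."""
    return factorint(abs(x.numerator)).get(p, 0) - factorint(x.denominator).get(p, 0)

def D_T(x: Fraction, T=None) -> Fraction:
    """Arithmetic subderivative D_{Q,T}(x); T=None means all primes, i.e. D(x)."""
    if x == 0:
        return Fraction(0)
    primes = set(factorint(abs(x.numerator))) | set(factorint(x.denominator))
    if T is not None:
        primes &= set(T)
    return x * sum(Fraction(nu(p, x), p) for p in primes)

print(D_T(Fraction(-21, 16)))    # -> 2   (an anti-derivative of 2, as noted above)
print(D_T(Fraction(-5, 4)))      # -> 1   (an anti-derivative of 1, as noted above)
print(D_T(Fraction(63), T={3}))  # -> 42  (partial derivative D_{Q,3}(63) = 63*2/3)
```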
The authors of this paper have proved [2] that the following sequence of integers
\[\nu_{p}(x),\;\nu_{p}(D_{\mathbb{Q},p}(x)),\;\nu_{p}(D_{\mathbb{Q},p}^{2}(x)), \;\dots\]
is eventually periodic of period \(\leq p\). An immediate corollary of this result is a positive answer to a conjecture similar to Conjecture 1.1 in the case of the arithmetic partial derivative. We have to replace \(p^{p}\) in Conjecture 1.1 by \(bp^{p}\), where \(\nu_{p}(b)=0\), since \(D_{\mathbb{Q},p}(bp^{p})=bp^{p}\). In the same paper, we also proved a criterion to determine when an integer has integral anti-partial derivatives, and as an application, we gave a positive answer to Conjecture 1.2 in the case of the arithmetic partial derivative.
A natural next step is to generalize the arithmetic derivative to number fields and their rings of integers. The Leibniz rule can be used to generalize \(D\) to all unique factorization domains (UFD) \(R\). In every equivalence class \(\{x\) irreducible in \(R\mid x=ux^{\prime},u\in R^{\times}\}\), we choose an element \(x_{0}\) and define \(D_{R}(x_{0})=1\) (similar to \(D(p)=1\)). For all units \(u\in R^{\times}\), we define \(D_{R}(u)=0\) (similar to \(D(\pm 1)=0\)). By the unique factorization property and the Leibniz rule, we can extend the definition of \(D\) to the entire ring \(R\) as well as its field of fraction \(\operatorname{Frac}(R)\). Let \(\mathcal{P}\) be a set of chosen irreducible elements as described above, one from each equivalence classes. For every \(x\in\operatorname{Frac}(R)\), if \(x=up_{1}\cdots p_{k}q_{1}^{-1}\cdots q_{\ell}^{-1}\) with \(u\in R^{\times}\) and \(p_{i},q_{j}\in\mathcal{P}\) (\(p_{i},q_{j}\) are not necessarily pairwise different) then
\[D_{R}(x)=x\Big{(}\sum_{i=1}^{k}\frac{1}{p_{i}}-\sum_{j=1}^{\ell}\frac{1}{q_{j} }\Big{)}.\]
There are two major obstacles with this generalization. First, for every number field \(K\), it is well known that \(\mathcal{O}_{K}\) is not necessarily a UFD. It has been proved that this idea will fail for non-UFDs [4]. Second, this definition of \(D(x)\) depends on the choice of the set of irreducible elements \(\mathcal{P}\) as well as on the ring. There is no canonical way to choose \(x_{0}\) within each equivalence class. Also, for an irreducible element \(x\in\mathcal{P}\subset R\), we have \(D_{R}(x)=1\). But if we consider \(x\in\operatorname{Frac}(R)\) and define \(D\) over \(\operatorname{Frac}(R)\), then we will get \(D_{\operatorname{Frac}(R)}(x)=0\), since all nonzero elements of \(\operatorname{Frac}(R)\) are invertible. In other words, if \(x\in R_{1}\subset R_{2}\), we do not necessarily have \(D_{R_{1}}(x)=D_{R_{2}}(x)\). This phenomenon makes it hard to generalize \(D\) to all number fields in a consistent way using this definition.
To get around the first obstacle, Mistri and Pandey [9] defined the arithmetic derivative of an ideal in the ring of integers \(\mathcal{O}_{K}\) of a number field \(K\). This generalization uses the fact that every fractional ideal of \(K\) can be uniquely factorized into a product of prime ideals in \(\mathcal{O}_{K}\). Suppose \(I=\mathfrak{p}_{1}\mathfrak{p}_{2}\cdots\mathfrak{p}_{k}\) is an ideal of \(\mathcal{O}_{K}\), where the \(\mathfrak{p}_{i}\) are prime ideals of \(\mathcal{O}_{K}\) with \(\mathfrak{p}_{i}\mid p_{i}\) (again, the \(\mathfrak{p}_{i}\) and \(p_{i}\) are not necessarily pairwise different). Then the arithmetic derivative of \(I\) is an ideal of \(\mathcal{O}_{K}\) defined by
\[D_{K}(I)=\Big{(}p_{1}p_{2}\cdots p_{k}\sum_{i=1}^{k}\frac{1}{p_{i}}\Big{)}.\]
This means that the arithmetic derivative of every ideal of \(\mathcal{O}_{K}\) is a principal ideal in \(\mathcal{O}_{K}\) generated by an integer. From the definition, it is easy to see that \(D_{\mathbb{Z}}(n)=(D(n))\) where \(D_{\mathbb{Z}}(n)\) is the arithmetic derivative of the ideal \((n)\) and \(D(n)\) is the usual arithmetic derivative of an integer. This
coincidence is certainly nice as part of the generalization but the second obstacle mentioned above still exists. For example, let \(K=\mathbb{Q}(i)\) and we have \(2\mathcal{O}_{K}=(1+i)(1-i)\), hence \(D_{K}(2\mathcal{O}_{K})=4\mathcal{O}_{K}\). On the other hand, \(D_{\mathbb{Z}}(2\mathbb{Z})=\mathbb{Z}\). This means that if \(x\in K_{1}\subset K_{2}\), we do not necessarily have \(D_{K_{1}}(x\mathcal{O}_{K_{1}})\subset D_{K_{2}}(x\mathcal{O}_{K_{2}})\).
In this paper, we propose a new way to define the arithmetic derivative (resp. the arithmetic subderivative) \(D_{K}\) (resp. \(D_{K,T}\)) on every finite Galois extension \(K/\mathbb{Q}\) in a consistent way in the following sense. First \(D_{K}(x)=D(x)\) for all \(x\in\mathbb{Q}\), so \(D_{K}\) is a true extension of \(D\) from \(\mathbb{Q}\) to \(K\). Second, if \(K_{1}\) and \(K_{2}\) are two finite Galois extensions, then for every \(x\in K_{1}\cap K_{2}\), we have \(D_{K_{1}}(x)=D_{K_{2}}(x)\). This means that the definition of arithmetic derivative of \(x\) does not depend on the choice of the Galois extension. Because the arithmetic derivative satisfies \(D_{K}(x)/x\in\mathbb{Q}\), we can even generalize it to every number field \(L/\mathbb{Q}\) (not necessarily Galois) by taking a restriction \(D_{L}(x):=D_{K}(x)=x\cdot(D_{K}(x)/x)\in L\) where \(K\) is a finite Galois extension containing \(x\). Please refer to Section 3 for detailed definition.
At the local level, suppose \(K\) is a finite extension of the \(p\)-adic rational numbers \(\mathbb{Q}_{p}\). Let \(\nu_{\mathfrak{p}}\) be the unique valuation on \(K\) that extends the \(p\)-adic valuation \(\nu_{p}\) on \(\mathbb{Q}\). It only makes sense to study the arithmetic partial derivative \(D_{K,\mathfrak{p}}\) over \(K\). As part of the study of the behavior of the sequence \(x,D_{K,\mathfrak{p}}(x),D_{K,\mathfrak{p}}^{2}(x),\ldots\), we give a complete description of the behavior of the following so-called \(\nu_{\mathfrak{p}}\) sequence of \(x\)
\[\nu_{\mathfrak{p}}(x),\;\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x)),\;\nu_{ \mathfrak{p}}(D_{K,\mathfrak{p}}^{2}(x)),\;\ldots.\]
**Theorem 1.3**.: _Let \(K\) be a finite extension over \(\mathbb{Q}_{p}\) and \(\mathfrak{p}\) be the unique prime ideal of \(\mathcal{O}_{K}\). For every \(x\in K\), we have the following three properties._
1. _If_ \(\nu_{\mathfrak{p}}(\nu_{\mathfrak{p}}(x))\geq 0\) _or_ \(\nu_{\mathfrak{p}}(x)\in\{0,+\infty\}\)_, then the_ \(\nu_{\mathfrak{p}}\) _sequence of_ \(x\) _is eventually periodic of period_ \(\leq p\)_._
2. _If_ \(\nu_{\mathfrak{p}}(\nu_{\mathfrak{p}}(x))<0\)_, then the_ \(\nu_{\mathfrak{p}}\) _sequence of_ \(x\) _converges to_ \(-\infty\)_._
3. _The_ \(\nu_{\mathfrak{p}}\) _sequence of_ \(x\) _is eventually_ \(+\infty\) _if and only if_ \[\nu_{\mathfrak{p}}(x)\in\{0,1,\ldots,p-1,+\infty\}.\]
See Lemma 2.2, Proposition 2.4, and Theorem 2.8 for a proof of Theorem 1.3. Using the same idea as in our previous paper [2], we are also able to give a positive answer to Conjecture 1.2 in the \(p\)-adic fields case as well.
**Theorem 1.4** (Theorem 2.14).: _Let \(K\) be a finite extension over \(\mathbb{Q}_{p}\). For each positive integer \(n\), there are infinitely many \(x_{0}\in K\) such that \(D_{K,\mathfrak{p}}(x_{0})\) has exactly \(n\) anti-partial derivatives in \(K\)._
One difficulty of studying the iteration of arithmetic derivatives is that the arithmetic derivative is neither additive nor a group homomorphism. But if one considers the so-called logarithmic derivative \(\operatorname{ld}(x):=D(x)/x\), it is not hard to see that \(\operatorname{ld}:\mathbb{Q}^{\times}\to\mathbb{Q}\) is a group homomorphism from the multiplicative group to the additive group, just like the usual logarithmic function. As we generalize \(D\) to \(D_{K}\), we also study the generalization of \(\operatorname{ld}\) to \(\operatorname{ld}_{K}\). In particular, we have shown that \(\operatorname{ld}_{K}(K^{\times})\) are also isomorphic as subgroups of \(\mathbb{Q}\) for any finite Galois extension \(K\); see Theorem 4.2. We also give a concrete description of the exact image of \(\operatorname{ld}_{K}(K^{\times})\) when \(K\) is a quadratic extension.
It is not surprising that the arithmetic derivative function \(D\) is not continuous over \(\mathbb{Q}\) because given two rational numbers that are close by (under the Archimedean metric), their prime factorizations can be drastically different. In fact, Haukkanen, Merikoski and Tossavainen [6] have shown that for every \(x\in\mathbb{Q}\), the arithmetic subderivative \(D_{\mathbb{Q},T}\) (and in particular the arithmetic derivative) can obtain arbitrary large values in any small neighbourhood of \(x\). Therefore \(D_{\mathbb{Q},T}\) is clearly not continuous with respect to the standard Archimedean topology of \(\mathbb{Q}\). But what about the \(p\)-adic topology? In another paper, Haukkanen, Merikoski and Tossavainen [7] have proved that the arithmetic partial derivative \(D_{\mathbb{Q},p}\) is always continuous. They have also shown in some cases, the arithmetic subderivative \(D_{\mathbb{Q},T}\) can be continuous at some points but discontinuous at other points. Major cases have been left open, for example, it is unknown whether \(D_{\mathbb{Q},T}\) is continuous or not at nonzero points when \(T\) is an infinite set. As we generalize arithmetic partial derivatives to \(p\)-adic local fields and arithmetic subderivative to number fields, it makes sense to study whether the generalizations are \(\mathfrak{p}\)-adically continuous or not. We state our results in two theorems, one for the local case and one for the global case.
**Theorem 1.5**.: _Suppose \(K\) is a number field. Let \(\mathfrak{p}\) be a prime ideal of \(\mathcal{O}_{K}\). Then the arithmetic partial derivative \(D_{K,\mathfrak{p}}\) is \(\mathfrak{p}\)-adically continuous at every point in \(K\). Moreover, \(D_{K,\mathfrak{p}}\) is strictly differentiable and twice strictly differentiable (with respect to the ultrametric \(|\cdot|_{\nu_{\mathfrak{p}}}\)) at every nonzero point in \(K\), but \(D_{K,\mathfrak{p}}\) is not strictly differentiable (with respect to the ultrametric \(|\cdot|_{\nu_{\mathfrak{p}}}\)) at \(0\)._
See Theorems 5.2, 5.3, and 5.4 for a proof of Theorem 1.5. The same result is true for arithmetic partial derivative over \(p\)-adic fields.
**Theorem 1.6**.: _Suppose \(K\) is a number field. Let \(\mathfrak{p}\) be a prime ideal and \(T\) be a nonempty subset of prime ideals of \(\mathcal{O}_{K}\)._
1. _The arithmetic subderivative_ \(D_{K,T}\) _is_ \(\mathfrak{p}\)_-adically continuous but not strictly differentiable (with respect to the ultrametric_ \(|\cdot|_{\nu_{\mathfrak{p}}}\)_) at_ \(0\)_._
2. _If_ \(T\neq\{\mathfrak{p}\}\)_, then the arithmetic subderivative_ \(D_{K,T}\) _is_ \(\mathfrak{p}\)_-adically discontinuous at every nonzero point in_ \(K\)_._
See Theorems 5.6, 5.8, 5.9, and 5.12 for a proof of Theorem 1.6. By letting \(K=\mathbb{Q}\) and \(\mathfrak{p}=(p)\) in Theorem 1.6, we are able to give answers to all the open questions in [7, Section 7].
## 2. \(p\)-adic Fields
### Definition
Fix a rational prime \(p\). Let \(\mathbb{Q}_{p}\) be the field of \(p\)-adic rational numbers and \(\nu_{p}\) the \(p\)-adic valuation. We denote the \(p\)-adic absolute value by \(|\cdot|_{\nu_{p}}\). Recall that the arithmetic partial derivative (with respect to \(p\)) \(D_{\mathbb{Q},p}:\mathbb{Q}\to\mathbb{Q}\) is defined by
\[D_{\mathbb{Q},p}(x):=\begin{cases}x\nu_{p}(x)/p,&\text{if }x\neq 0;\\ 0,&\text{if }x=0.\end{cases}\]
One can extend \(D_{\mathbb{Q},p}\) to \(D_{\mathbb{Q}_{p},p}\) with the same formula because \(\nu_{p}\) is well-defined on \(\mathbb{Q}_{p}\). We can further extend \(D_{\mathbb{Q}_{p},p}\) to \(p\)-adic fields because \(\nu_{p}\) can be uniquely extended to a discrete valuation over \(p\)-adic fields. Let \(K\) be a finite extension of \(\mathbb{Q}_{p}\) of degree \(n=[K:\mathbb{Q}_{p}]\). Let \(\mathcal{O}_{K}\) be the ring of integers, which is a discrete valuation ring with maximal ideal \(\mathfrak{p}\) and residue field \(\mathcal{O}_{K}/\mathfrak{p}\). Let \(f=f(K|\mathbb{Q}_{p})=[\mathcal{O}_{K}/\mathfrak{p}:\mathbb{F}_{p}]\) be the inertia degree and \(e=e(K|\mathbb{Q}_{p})\) the ramification index, that is, the unique integer such that \(p\mathcal{O}_{K}=\mathfrak{p}^{e}\). We have \(n=ef\). It is well known [11, Chapter 2 Proposition 3] that \(K\) is again complete with respect to the \(\mathfrak{p}\)-adic topology. There exists a unique discrete valuation \(\nu_{\mathfrak{p}}:K\to\mathbb{Q}\cup\{+\infty\}\) that extends \(\nu_{p}\) defined by
\[\nu_{\mathfrak{p}}(x):=\frac{1}{n}\nu_{p}(N_{K/\mathbb{Q}_{p}}(x)),\]
where \(N_{K/\mathbb{Q}_{p}}:K\to\mathbb{Q}_{p}\) is the norm. We know that \(\nu_{\mathfrak{p}}(K)=\mathbb{Z}/e\). For every \(x\in K\), we set \(k=k(x):=\nu_{\mathfrak{p}}(\nu_{\mathfrak{p}}(x))\), so \(k\geq-\nu_{p}(e)\). The discrete valuation \(\nu_{\mathfrak{p}}\) defines a unique absolute value on \(K\), which will be denoted by \(|\cdot|_{\nu_{\mathfrak{p}}}\), that extends the \(p\)-adic absolute value on \(\mathbb{Q}_{p}\):
\[|x|_{\nu_{\mathfrak{p}}}=\sqrt[n]{\left|N_{K/\mathbb{Q}_{p}}(x)\right|_{\nu_{ p}}}.\]
We can extend \(D_{\mathbb{Q}_{p},p}\) to \(D_{K,\mathfrak{p}}:K\to K\) as follows:
\[D_{K,\mathfrak{p}}(x):=\begin{cases}x\nu_{\mathfrak{p}}(x)/p,&\text{if }x\neq 0; \\ 0,&\text{if }x=0.\end{cases}\]
One can check that \(D_{K,\mathfrak{p}}\) satisfies the Leibniz rule. It is evident that \(D_{K,\mathfrak{p}}(x)=D_{\mathbb{Q}_{p},p}(x)\) for all \(x\in\mathbb{Q}_{p}\). Note that the definition of \(D_{K,\mathfrak{p}}\) is independent of the choice of uniformizers of \(\mathcal{O}_{K}\).
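As a concrete illustration of these formulas, consider \(K=\mathbb{Q}_{2}(i)\) with elements written as \(a+bi\) for rational \(a,b\); then \(N_{K/\mathbb{Q}_{2}}(a+bi)=a^{2}+b^{2}\) and \(n=2\). The sketch below (with helper names of our choosing, and elements represented simply as pairs of rationals) computes \(\nu_{\mathfrak{p}}\) and \(D_{K,\mathfrak{p}}\) from the definitions above, e.g. \(\nu_{\mathfrak{p}}(1+i)=1/2\) and \(D_{K,\mathfrak{p}}(1+i)=(1+i)/4\).

```python
# nu_p and D_{K,p} for K = Q_2(i), using nu_p(x) = nu_2(N(x))/n with n = 2.
from fractions import Fraction

def nu2(x: Fraction) -> int:
    """2-adic valuation of a nonzero rational."""
    v, num, den = 0, abs(x.numerator), x.denominator
    while num % 2 == 0:
        num //= 2; v += 1
    while den % 2 == 0:
        den //= 2; v -= 1
    return v

def nu_frak(a: Fraction, b: Fraction) -> Fraction:
    """Valuation of x = a + b*i extending nu_2 (norm a^2 + b^2, n = 2)."""
    return Fraction(nu2(a * a + b * b), 2)

def D(a: Fraction, b: Fraction):
    """Arithmetic partial derivative of x = a + b*i, returned as (Re, Im)."""
    scale = nu_frak(a, b) / 2          # nu_p(x) / p with p = 2
    return a * scale, b * scale

print(nu_frak(Fraction(1), Fraction(1)))   # 1/2  = nu(1 + i)
print(D(Fraction(1), Fraction(1)))         # (1/4, 1/4), i.e. D(1+i) = (1+i)/4
print(nu_frak(Fraction(2), Fraction(0)))   # 1, consistent with nu_2(2)
```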
Let \(K\) and \(K^{\prime}\) be two finite extensions over \(\mathbb{Q}_{p}\) such that \(x\in K\cap K^{\prime}=:K^{\prime\prime}\). Let \(\nu_{\mathfrak{p}}\), \(\nu_{\mathfrak{p}^{\prime}}\), \(\nu_{\mathfrak{p}^{\prime\prime}}\) be the unique discrete valuations that extend \(\nu_{p}\) to \(K\), \(K^{\prime}\), and \(K^{\prime\prime}\) respectively. Clearly \(\nu_{\mathfrak{p}}|_{K^{\prime\prime}}=\nu_{\mathfrak{p}^{\prime}}|_{K^{ \prime\prime}}=\nu_{\mathfrak{p}^{\prime\prime}}\). Therefore we have \(D_{K,\mathfrak{p}}(x)=x\nu_{\mathfrak{p}}(x)/p=x\nu_{\mathfrak{p}^{\prime \prime}}(x)/p=x\nu_{\mathfrak{p}^{\prime}}(x)/p=D_{K^{\prime},\mathfrak{p}^{ \prime}}(x)\in K\cap K^{\prime}\). This implies that the definition of arithmetic partial derivative of \(x\) is independent of the choice of finite extensions where \(x\) lies.
**Remark 2.1**.: Let \(q\) be another prime different from \(p\). The \(q\)-adic valuation \(\nu_{q}\) defined on \(\mathbb{Q}\) does not extend to \(\mathbb{Q}_{p}\) or finite extensions of \(\mathbb{Q}_{p}\). Therefore, unlike the case of \(\mathbb{Q}\) where we have one arithmetic partial derivative for each prime number, there is only one well-defined arithmetic partial derivative for \(\mathbb{Q}_{p}\) and for finite extensions of \(\mathbb{Q}_{p}\).
### Periodicity of \(\nu_{\mathfrak{p}}\) sequence
Let \(K/\mathbb{Q}_{p}\) be a finite extension and let \(x\in K\). Let \(\mathfrak{p}\) be the maximal ideal of \(\mathcal{O}_{K}\) and \(\nu_{\mathfrak{p}}\) the unique discrete valuation that extends \(\nu_{p}\). We call the following sequence
\[\nu_{\mathfrak{p}}(x),\ \nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x)),\ \nu_{ \mathfrak{p}}(D_{K,\mathfrak{p}}^{2}(x)),\ \ldots\]
the \(\nu_{\mathfrak{p}}\) sequence of \(x\). Note that the \(\nu_{\mathfrak{p}}\) sequence of \(x\) is independent of the choice of \(K\). If \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}^{j}(x))=+\infty\) for some integer \(j\geq 0\), then \(D_{K,\mathfrak{p}}^{j}(x)=0\) and thus \(D_{K,\mathfrak{p}}^{i}(x)=0\) for all \(i\geq j\). If \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}^{i}(x))<+\infty\) for all \(i\geq 0\), then we call the sequence of increments of consecutive terms
\[\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x))-\nu_{\mathfrak{p}}(x),\nu_{ \mathfrak{p}}(D_{K,\mathfrak{p}}^{2}(x))-\nu_{\mathfrak{p}}(D_{K,\mathfrak{p} }(x)),\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}^{3}(x))-\nu_{\mathfrak{p}}(D_{K, \mathfrak{p}}^{2}(x)),\ldots\]
the \(\operatorname{inc}_{\mathfrak{p}}\) sequence of \(x\). Suppose \(\nu_{\mathfrak{p}}(x)=bp^{k}\) where \(\nu_{p}(b)=0\) and \(k\geq-\nu_{p}(e)\). Then the increment is
\[\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x))-\nu_{\mathfrak{p}}(x)=\nu_{ \mathfrak{p}}(\frac{\nu_{\mathfrak{p}}(x)}{p})=\nu_{\mathfrak{p}}(bp^{k-1})=k -1=\nu_{\mathfrak{p}}(\nu_{\mathfrak{p}}(x))-1. \tag{1}\]
**Lemma 2.2**.: _The following two statements are equivalent:_
1. _The_ \(\nu_{\mathfrak{p}}\) _sequence of_ \(x\) _is eventually_ \(+\infty\)_._
2. \(\nu_{\mathfrak{p}}(x)\in\{0,1,2,\ldots,p-1,+\infty\}\)_._
Proof.: Suppose \(\nu_{\mathfrak{p}}(x)\in\{0,1,2,\ldots,p-1,+\infty\}\). If \(\nu_{\mathfrak{p}}(x)=+\infty\), then \(x=0\), and \(D_{K,\mathfrak{p}}^{n}(x)=0\) for all \(n\geq 0\). If \(\nu_{\mathfrak{p}}(x)=0\), then \(x\) is a unit in \(\mathcal{O}_{K}\), and thus \(D_{K,\mathfrak{p}}^{n}(x)=0\) for all \(n\geq 1\). If \(\nu_{\mathfrak{p}}(x)=j\) for some \(1\leq j\leq p-1\), then \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}^{i}(x))=j-i\) for \(1\leq i\leq j\). From \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}^{j}(x))=0\) we get that \(D_{K,\mathfrak{p}}^{j}(x)\) is a unit in \(\mathcal{O}_{K}\), and thus \(D_{K,\mathfrak{p}}^{n}(x)=0\) for all \(n>j\).
Now we show that if \(\nu_{\mathfrak{p}}(x)\not\in\{0,1,2,\ldots,p-1,+\infty\}\), then the \(\nu_{\mathfrak{p}}\) sequence of \(x\) is not eventually \(+\infty\). It suffices to show that \(\nu_{\mathfrak{p}}(D^{i}_{K,\mathfrak{p}}(x))\neq 0\) for all \(i\geq 0\). We consider three mutually disjoint cases.
1. Suppose \(\nu_{\mathfrak{p}}(x)\not\in\mathbb{Z}\). Then \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x))\not\in\mathbb{Z}\) by (1). By induction, we get \(\nu_{\mathfrak{p}}(D^{i}_{K,\mathfrak{p}}(x))\not\in\mathbb{Z}\) since \(\nu_{\mathfrak{p}}(\nu_{\mathfrak{p}}(D^{i-1}_{K,\mathfrak{p}}(x)))-1\in \mathbb{Z}\). In particular, \(\nu_{\mathfrak{p}}(D^{i}_{K,\mathfrak{p}}(x))\neq 0\).
2. Suppose \(\nu_{\mathfrak{p}}(x)\geq p\) is an integer. If \(p\nmid\nu_{\mathfrak{p}}(x)\), then \(\nu_{\mathfrak{p}}(x)>p\) and \(k=0\), and so \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x))=\nu_{\mathfrak{p}}(x)-1\geq p\). If \(p\mid\nu_{\mathfrak{p}}(x)\), then \(k\geq 1\), and thus \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x))\geq\nu_{\mathfrak{p}}(x)\geq p\) by (1). Therefore \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x))\geq p>0\). By induction, we get \(\nu_{\mathfrak{p}}(D^{i}_{K,\mathfrak{p}}(x))\neq 0\).
3. Suppose \(\nu_{\mathfrak{p}}(x)=bp^{k}<0\) is an integer. Since \(|bp^{k}|\geq p^{k}>k-1\), we get \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x))=bp^{k}+(k-1)<0\). By induction, we get \(\nu_{\mathfrak{p}}(D^{i}_{K,\mathfrak{p}}(x))\neq 0\).
Combining all three cases, we have proved that if \(\nu_{\mathfrak{p}}(x)\notin\{0,1,2,\ldots,p-1,+\infty\}\), then the \(\nu_{\mathfrak{p}}\) sequence of \(x\) is not eventually \(+\infty\).
**Remark 2.3**.: Ufnarovski and Ahlander conjecture [13, Conjecture 8] that there exists an infinite sequence \(a_{n}\) of different natural numbers such that \(a_{1}=1\) and \(D_{\mathbb{Q}}(a_{n})=a_{n-1}\) for \(n\geq 2\). Here \(D_{\mathbb{Q}}\) is the arithmetic derivative (not arithmetic partial derivative) defined on \(\mathbb{Q}\). The same question can be asked for \(D_{K,\mathfrak{p}}\). Suppose there exists an infinite sequence \(a_{n}\in K\) such that \(a_{1}=1\) and \(D_{K,\mathfrak{p}}(a_{n})=a_{n-1}\) for \(n\geq 2\). Let \(N=p+1\) and we know that the \(\nu_{\mathfrak{p}}\) sequence of \(a_{N}\) is eventually \(+\infty\) because \(\nu_{\mathfrak{p}}(D^{N}_{K,\mathfrak{p}}(a_{N}))=\nu_{\mathfrak{p}}(D_{K, \mathfrak{p}}(a_{1}))=\nu_{\mathfrak{p}}(0)=+\infty\). By the proof of Lemma 2.2, we know that \(\nu_{\mathfrak{p}}(a_{2})=1,\nu_{\mathfrak{p}}(a_{3})=2,\ldots,\nu_{\mathfrak{ p}}(a_{N-1})=p-1\), and there does not exist \(a_{N}\) such that \(D_{K,\mathfrak{p}}(a_{N})=a_{N-1}\). Hence the conjecture is false over \(K\) for arithmetic partial derivative. On a related note, if we let \(a_{1}\in K\backslash\mathcal{O}_{K}^{\times}\) for some finite extension \(K/\mathbb{Q}_{p}\), then it is possible to find an infinite sequence \(a_{n}\in K\) such that \(D_{K,\mathfrak{p}}(a_{n})=a_{n-1}\) for all \(n\geq 2\). For example, let \(K=\mathbb{Q}\), \(a_{1}=p^{p^{2}}\), and for all \(m\geq 1\), let \(a_{2m}=p^{p^{2}+1}/(p^{2}+1)^{m}\) and \(a_{2m+1}=p^{p^{2}}/(p^{2}+1)^{m}\). It is easy to check that \(D_{\mathbb{Q},p}(a_{2m+1})=a_{2m}\) and \(D_{\mathbb{Q},p}(a_{2m})=a_{2m-1}\).
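The backward chain at the end of the remark is easy to verify numerically; the following sketch (with \(p=2\) and helper names of our choosing) checks \(D_{\mathbb{Q},p}(a_{n})=a_{n-1}\) for the first few terms.

```python
# Check of the backward chain in Remark 2.3 for p = 2:
# a_1 = p^(p^2), a_{2m} = p^(p^2+1)/(p^2+1)^m, a_{2m+1} = p^(p^2)/(p^2+1)^m.
from fractions import Fraction

p = 2

def nu(x: Fraction) -> int:
    """p-adic valuation of a nonzero rational (here p = 2)."""
    v, num, den = 0, abs(x.numerator), x.denominator
    while num % p == 0: num //= p; v += 1
    while den % p == 0: den //= p; v -= 1
    return v

def D(x: Fraction) -> Fraction:
    """Arithmetic partial derivative D_{Q,p}(x) = x * nu_p(x) / p."""
    return x * Fraction(nu(x), p)

def a(n: int) -> Fraction:
    m, r = divmod(n - 1, 2)
    if r == 0:                                  # n = 2m + 1 is odd
        return Fraction(p ** (p * p), (p * p + 1) ** m)
    return Fraction(p ** (p * p + 1), (p * p + 1) ** (m + 1))   # n = 2(m+1)

print(all(D(a(n)) == a(n - 1) for n in range(2, 12)))   # -> True
```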
The next proposition tells us if \(\nu_{\mathfrak{p}}(\nu_{\mathfrak{p}}(x))<0\), then the \(\operatorname{inc}_{p}\) sequence of \(x\) is constant and negative. As a result of that, the \(\nu_{\mathfrak{p}}\) sequence of \(x\) converges to \(-\infty\).
**Proposition 2.4**.: _Let \(x\in K\) be a nonzero element such that \(\nu_{\mathfrak{p}}(x)=bp^{k}\) with \(\nu_{p}(b)=0\) and \(k<0\). Then the \(\operatorname{inc}_{\mathfrak{p}}\) sequence of \(x\) is a constant sequence with negative terms_
\[(k-1,k-1,k-1,\ldots).\]
_As a result, the \(\nu_{\mathfrak{p}}\) sequence of \(x\) converges to \(-\infty\)._
Proof.: Equation (1) implies that the first term of the \(\operatorname{inc}_{\mathfrak{p}}\) sequence of \(x\) is indeed \(k-1\). Since
\[\nu_{\mathfrak{p}}(x)+(k-1)=bp^{k}+(k-1)=p^{k}(b+(k-1)p^{-k})\]
where \(\nu_{p}(b+(k-1)p^{-k})=0\), we can write \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x))=b^{\prime}p^{k}\) where \(b^{\prime}:=b+(k-1)p^{-k}\) with \(\nu_{p}(b^{\prime})=0\). Since \(\nu_{\mathfrak{p}}(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x)))=\nu_{\mathfrak{ p}}(\nu_{\mathfrak{p}}(x))\), we see that the second term of the \(\operatorname{inc}_{\mathfrak{p}}\) sequence of \(x\) is again \(k-1\). In the meantime, we can write \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}^{2}(x))=b^{\prime\prime}p^{k}\) for some \(b^{\prime\prime}:=b^{\prime}+(k-1)p^{-k}\) where \(\nu_{p}(b^{\prime\prime})=0\). By induction, we see that every term of the \(\operatorname{inc}_{\mathfrak{p}}\) sequence of \(x\) is equal to \(k-1\). Therefore \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}^{n}(x))=\nu_{\mathfrak{p}}(x)+n(k-1) \to-\infty\) as \(n\to\infty\).
If the \(\nu_{\mathfrak{p}}\) sequence of \(x\) is eventually \(+\infty\), then it is periodic of period \(1\). For the rest of this subsection, we assume that the \(\nu_{\mathfrak{p}}\) sequence of \(x\) is not eventually \(+\infty\) and \(\nu_{\mathfrak{p}}(\nu_{\mathfrak{p}}(x))>0\). We will show that under these conditions, the \(\nu_{\mathfrak{p}}\) sequence of \(x\) is eventually periodic of period \(\leq p\). The next proposition gives a recipe of the initial terms of the \(\operatorname{inc}_{\mathfrak{p}}\) sequence of \(x\) if \(\nu_{\mathfrak{p}}(\nu_{\mathfrak{p}}(x))>0\).
**Proposition 2.5**.: _Let \(x\in K\) be a nonzero element such that \(\nu_{\mathfrak{p}}(x)=bp^{k}\) with \(\nu_{p}(b)=0\) and \(k>0\). Denote \(k^{\prime}:=(k-1\bmod p)+1\leq p\). The first \(k^{\prime}\) terms of the \(\operatorname{inc}_{\mathfrak{p}}\) sequence of \(x\) are_
\[(k-1,\underbrace{-1,-1,\ldots,-1}_{(k-1\bmod p)\ copies}).\]
Proof.: The first term of the \(\operatorname{inc}_{\mathfrak{p}}\) sequence of \(x\) is indeed \(k-1\) by (1). We have
\[\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x))=bp^{k}+(k-1).\]
If \(k^{\prime}=1\), then there is nothing further to prove. If \(k^{\prime}=2\), we have \(k\equiv 2\pmod{p}\) and thus \(p\nmid(bp^{k}+(k-1))\). By (1) again, the second term of the \(\operatorname{inc}_{\mathfrak{p}}\) sequence of \(x\) is
\[\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}^{2}(x))-\nu_{\mathfrak{p}}(D_{K, \mathfrak{p}}(x))=-1\]
and \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}^{2}(x))=bp^{k}+(k-2)\). The proof is complete by induction on \(k^{\prime}\).
**Corollary 2.6**.: _Let \(x\in K\) be a nonzero element such that \(\nu_{\mathfrak{p}}(x)=bp^{k}\) with \(\nu_{p}(b)=0\) and \(1\leq k\leq p\). Then the \(\nu_{\mathfrak{p}}\) sequence and the \(\operatorname{inc}_{\mathfrak{p}}\) sequence of \(x\) are periodic of period \(k\)._
Proof.: If \(1\leq k\leq p\), then \(k^{\prime}=(k-1\bmod p)+1=k-1+1=k.\) The first \(k+1\) terms of the \(\nu_{\mathfrak{p}}\) sequence are
\[(bp^{k},bp^{k}+(k-1),bp^{k}+(k-2),\ldots,bp^{k}+1,bp^{k}).\]
It is now clear that the \(\nu_{\mathfrak{p}}\) sequence and the \(\operatorname{inc}_{\mathfrak{p}}\) sequence of \(x\) are periodic of period \(k\).
We will see later that the periodicity predicted by Corollary 2.6 will eventually happen as part of the \(\nu_{\mathfrak{p}}\) sequence of \(x\) for all nonzero \(x\in K\) as long as \(\nu_{\mathfrak{p}}(\nu_{\mathfrak{p}}(x))\geq 0\) and the \(\nu_{\mathfrak{p}}\) sequence of \(x\) is not eventually \(+\infty\).
**Definition 2.7**.: For any integer \(k\geq 1\), we call the following sequence
\[\mathcal{S}_{k,p}:=(k-1,\underbrace{-1,-1,\ldots,-1}_{(k-1\bmod p)\text{ copies}})\]
the _\(k\)-segment_ (with respect to \(p\)).
We define a sequence of integers \(\kappa_{0},\kappa_{1},\kappa_{2},\ldots\) recursively from \(\nu_{\mathfrak{p}}(x)\) that will allow us to predict the period of the \(\nu_{\mathfrak{p}}\) sequence of \(x.\) Let \(\kappa_{0}:=\nu_{\mathfrak{p}}(x)\bmod p\) and \(\kappa_{1}:=\nu_{p}(\lfloor\nu_{\mathfrak{p}}(x)\rfloor_{p}).\) Here \(\lfloor y\rfloor_{p}:=y-(y\bmod p).\) For \(i\geq 2\), we define
\[\kappa_{i}:=\begin{cases}\nu_{p}(\lfloor\kappa_{i-1}-1\rfloor_{p}),&\text{if } \kappa_{i-1}<+\infty;\\ +\infty,&\text{if }\kappa_{i-1}=+\infty.\end{cases} \tag{2}\]
It is clear that if \(1\leq\kappa_{i}\leq p\), then \(\kappa_{i+1}=+\infty\); if \(p+1\leq\kappa_{i}<+\infty\), then \(\kappa_{i+1}<\log_{p}(\kappa_{i}).\) If the \(\nu_{\mathfrak{p}}\) sequence of \(x\) is not eventually \(+\infty\), then there exists a unique positive integer \(N=N(x)\) such that \(1\leq\kappa_{N}\leq p\), and \(\kappa_{i}=+\infty\) for all \(i>N\).
**Theorem 2.8**.: _Let \(x\in K\) be a nonzero element such that \(\nu_{\mathfrak{p}}(x)=bp^{k}\) with \(\nu_{p}(b)=0\) and \(k\geq 0\). If the \(\nu_{\mathfrak{p}}\) sequence of \(x\) is not eventually \(+\infty\), then the \(\operatorname{inc}_{\mathfrak{p}}\) sequence of \(x\) is of the form_
\[(\underbrace{-1,-1,\ldots,-1}_{\kappa_{0}\text{ copies}},\mathcal{S}_{\kappa_{1},p}, \mathcal{S}_{\kappa_{2},p},\mathcal{S}_{\kappa_{3},p},\ldots,\mathcal{S}_{ \kappa_{N},p},\mathcal{S}_{\kappa_{N},p},\mathcal{S}_{\kappa_{N},p},\ldots).\]
_As a result, the \(\nu_{\mathfrak{p}}\) sequence and the \(\operatorname{inc}_{\mathfrak{p}}\) sequence of \(x\) are eventually periodic of period \(\kappa_{N}\)._
Proof.: For \(0\leq i\leq\kappa_{0}\), we have \(\nu_{\mathfrak{p}}(D^{i}_{K,\mathfrak{p}}(x))=\nu_{\mathfrak{p}}(x)-i.\) Hence the first \(\kappa_{0}\) terms of the \(\operatorname{inc}_{\mathfrak{p}}\) sequence of \(x\) are
\[(\underbrace{-1,-1,\ldots,-1}_{\kappa_{0}\text{ copies}}).\]
We can write \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}^{\kappa_{0}}(x))=b_{0}p^{\kappa_{1}}\) with \(\kappa_{1}\geq 1\). By Proposition 2.5, we know that the next \(\kappa_{1}^{\prime}:=(\kappa_{1}-1\bmod p)+1\) terms of the \(\operatorname{inc}_{\mathfrak{p}}\) sequence form the \(\kappa_{1}\)-segment
\[\mathcal{S}_{\kappa_{1},p}=(\kappa_{1}-1,\underbrace{-1,-1,\ldots,-1}_{(\kappa _{1}-1)\bmod p\text{ copies}}).\]
Furthermore, we get \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}^{\kappa_{0}+i}(x))=b_{0}p^{\kappa_{1}}+(\kappa_{1}-i)\) for \(1\leq i\leq\kappa_{1}^{\prime}\). As \(\kappa_{1}-\kappa_{1}^{\prime}=\lfloor\kappa_{1}-1\rfloor_{p}\) and \(\kappa_{2}=\nu_{p}(\lfloor\kappa_{1}-1\rfloor_{p})\), we can write \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}^{\kappa_{0}+\kappa_{1}^{\prime}}(x))=b_{1}p^{\kappa_{2}}\). If \(\kappa_{2}\geq 1\), by Proposition 2.5 again, we know that the next \(\kappa_{2}^{\prime}:=(\kappa_{2}-1\bmod p)+1\) terms of the \(\operatorname{inc}_{\mathfrak{p}}\) sequence form the \(\kappa_{2}\)-segment. Let \(N=N(x)\) be the unique positive integer such that \(1\leq\kappa_{N}\leq p\). By induction, we know that the initial terms of the \(\operatorname{inc}_{\mathfrak{p}}\) sequence of \(x\) are of the form
\[(\underbrace{-1,-1,\ldots,-1}_{\kappa_{0}\text{ copies}},\mathcal{S}_{\kappa_{1},p}, \mathcal{S}_{\kappa_{2},p},\mathcal{S}_{\kappa_{3},p},\ldots,\mathcal{S}_{ \kappa_{N},p}).\]
Since \(b_{N-1}p^{\kappa_{N}}\) is a term in the \(\nu_{\mathfrak{p}}\) sequence of \(x\), Corollary 2.6 implies that \(\mathcal{S}_{\kappa_{N},p}\) will appear repeatedly in the \(\operatorname{inc}_{\mathfrak{p}}\) sequence of \(x\) from that point on.
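The description in Theorem 2.8 is easy to test numerically over \(\mathbb{Q}\) (where \(\mathfrak{p}=(p)\) and valuations are integers). The Python sketch below (an informal check, not part of the proof; the helper names are ours, and it assumes the \(\nu_{p}\) sequence is not eventually \(+\infty\)) simulates the \(\operatorname{inc}_{\mathfrak{p}}\) sequence via Equation (1) and compares it with the segments predicted by the \(\kappa_{i}\); for \(p=2\) and \(\nu_{p}(x)=2^{5}\) both give \((4,1,-1,1,-1,\ldots)\).

```python
def vp(n: int, p: int) -> int:
    # p-adic valuation of a nonzero integer
    v = 0
    while n % p == 0:
        n //= p; v += 1
    return v

def simulated_inc(v0: int, p: int, length: int):
    # iterate Equation (1): nu(D^{i+1}(x)) = nu(D^{i}(x)) + nu_p(nu(D^{i}(x))) - 1
    v, out = v0, []
    for _ in range(length):
        nxt = v + vp(v, p) - 1
        out.append(nxt - v)
        v = nxt
    return out

def predicted_inc(v0: int, p: int, length: int):
    # kappa_0 leading -1's, then the segments S_{kappa_i, p} of Theorem 2.8
    out = [-1] * (v0 % p)
    kappa = vp(v0 - v0 % p, p)                  # kappa_1 = nu_p(floor_p(nu(x)))
    while len(out) < length:
        out += [kappa - 1] + [-1] * ((kappa - 1) % p)
        rest = (kappa - 1) - ((kappa - 1) % p)  # floor_p(kappa - 1)
        if rest > 0:
            kappa = vp(rest, p)                 # next kappa
        # otherwise 1 <= kappa <= p and the same segment repeats
    return out[:length]

p, v0 = 2, 32                                   # nu_p(x) = 1 * 2^5
print(simulated_inc(v0, p, 12))
print(predicted_inc(v0, p, 12))
```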
### Anti-partial derivatives
We fix a finite extension \(K/\mathbb{Q}_{p}\) in this subsection. Note that not all elements in \(K\) have an anti-partial derivative. For example, suppose \(x\in K\) is an anti-partial derivative of \(p^{p-1}\in K\); then \(D_{K,\mathfrak{p}}^{p+1}(x)=0\) and thus the \(\nu_{\mathfrak{p}}\) sequence of \(x\) is eventually \(+\infty\). By Lemma 2.2, \(\nu_{\mathfrak{p}}(x)\in\{0,1,2,\ldots,p-1,+\infty\}\), but that is not possible as \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x))=p-1\). Therefore \(p^{p-1}\) does not have an anti-partial derivative in \(K\). Given an element \(y\in K\) that has an anti-partial derivative in \(K\), we want to know how many there are. We start with \(y=0\). Let \(x\in K\) be such that
\[D_{K,\mathfrak{p}}(x)=\frac{x\nu_{\mathfrak{p}}(x)}{p}=0.\]
Then \(x\nu_{\mathfrak{p}}(x)=0\), which implies that \(x=0\) or \(\nu_{\mathfrak{p}}(x)=0\). Hence the set of anti-partial derivatives of \(0\) in \(K\) is
\[\{x\in K\,:\,\nu_{\mathfrak{p}}(x)=0\}\,\cup\,\{0\}.\]
**Lemma 2.9**.: _For every \(0\neq y\in K\), if there exists \(x\in K\) such that \(D_{K,\mathfrak{p}}(x)=y\), then \(x\in\mathbb{Q}_{p}(y)\)._
Proof.: Since \(D_{K,\mathfrak{p}}(x)=x\nu_{\mathfrak{p}}(x)/p=y\) and \(\nu_{\mathfrak{p}}(x)/p\in\mathbb{Q}\), we know that \(x\in\mathbb{Q}_{p}(y)\).
Let \(x_{1},x_{2}\in K\) be nonzero elements with \(D_{K,\mathfrak{p}}(x_{1})=D_{K,\mathfrak{p}}(x_{2})\). If \(\nu_{\mathfrak{p}}(x_{1})=0\), then \(D_{K,\mathfrak{p}}(x_{1})=0=D_{K,\mathfrak{p}}(x_{2})\), and since \(x_{2}\neq 0\) this forces \(\nu_{\mathfrak{p}}(x_{2})=0\). Hence \(\nu_{\mathfrak{p}}(x_{1})=0\) if and only if \(\nu_{\mathfrak{p}}(x_{2})=0\).
Suppose \(\nu_{\mathfrak{p}}(x_{1}),\nu_{\mathfrak{p}}(x_{2})\neq 0.\) Let \(\nu_{\mathfrak{p}}(x_{1})=b_{1}p^{k_{1}}\) and \(\nu_{\mathfrak{p}}(x_{2})=b_{2}p^{k_{2}}\) where \(\nu_{p}(b_{1}b_{2})=0.\) We get
\[b_{1}p^{k_{1}}-b_{2}p^{k_{2}}=k_{2}-k_{1}. \tag{3}\]
Suppose \(k_{1}=k_{2},\) then (3) implies that \(\nu_{\mathfrak{p}}(x_{1})=\nu_{\mathfrak{p}}(x_{2}).\) Hence
\[x_{1}=\frac{D_{K,\mathfrak{p}}(x_{1})p}{\nu_{\mathfrak{p}}(x_{1})}=\frac{D_{K,\mathfrak{p}}(x_{2})p}{\nu_{\mathfrak{p}}(x_{2})}=x_{2}.\]
This means that \(x_{1}=x_{2}\) if and only if \(k_{1}=k_{2}.\)
If \(k_{1}\neq k_{2},\) without loss of generality, we assume \(k_{1}<k_{2}.\) Suppose \(k_{1}<0,\) then (3) implies that
\[b_{1}-b_{2}p^{k_{2}-k_{1}}=p^{-k_{1}}(k_{2}-k_{1}).\]
This is a contradiction because \(\nu_{\mathfrak{p}}(b_{1}-b_{2}p^{k_{2}-k_{1}})=0\) and \(\nu_{\mathfrak{p}}(p^{-k_{1}}(k_{2}-k_{1}))\geq-k_{1}>0.\) Hence if \(k_{1}<0,\) then \(D_{K,\mathfrak{p}}(x_{1})\) has exactly one anti-partial derivative.
Suppose \(k_{1}>0.\) Among all anti-partial derivatives of \(D_{K,\mathfrak{p}}(x_{1})\) there is an element \(x_{0}\in K\) whose exponent is smallest; write \(\nu_{\mathfrak{p}}(x_{0})=b_{0}p^{k_{0}}\) with \(\nu_{p}(b_{0})=0\), and relabel \(\nu_{\mathfrak{p}}(x_{1})=bp^{k_{1}}\) with \(\nu_{p}(b)=0\). We call \(x_{0}\) the _primitive_ anti-partial derivative of \(D_{K,\mathfrak{p}}(x_{1}).\) Equation (3) implies that
\[b_{0}p^{k_{0}}-bp^{k_{1}}=k_{1}-k_{0}, \tag{4}\]
As \(x_{0}\) is primitive, we have \(k_{0}\leq k_{1}\) and (4) implies that \(p^{k_{0}}(b_{0}-bp^{k_{1}-k_{0}})=k_{1}-k_{0}.\) Let \(k_{1}-k_{0}=p^{k_{0}}c\) for some \(c\in\mathbb{Z}_{\geq 0}.\) Then \(b_{0}-bp^{p^{k_{0}}c}=c.\) So \(b=\frac{b_{0}-c}{p^{p^{k_{0}}c}}\) and \(\nu_{p}(b_{0}-c)=p^{k_{0}}c\) since \(\nu_{p}(b)=0.\) Let
\[C(x_{0}):=\Big{\{}c\in\mathbb{Z}_{\geq 0}\,:\,\nu_{p}(b_{0}-c)=p^{k_{0}}c \Big{\}}.\]
It is easy to see that \(C(x_{0})\) is finite because as \(c\gg 0,\)\(\nu_{p}(b_{0}-c)<p^{k_{0}}c.\)
**Theorem 2.10**.: _With the above notations, suppose \(x_{0}\) is the primitive anti-partial derivative of \(D_{K,\mathfrak{p}}(x_{0}).\) Let \(\nu_{\mathfrak{p}}(x_{0})=b_{0}p^{k_{0}}\) with \(\nu_{p}(b_{0})=0\) and \(k_{0}>0.\) There is a one-to-one correspondence between \(C(x_{0})\) and the set of all anti-partial derivatives of \(D_{K,\mathfrak{p}}(x_{0}).\) Furthermore, if we fix a uniformizer \(\pi\in\mathfrak{p}\subset\mathcal{O}_{K}\) and let \(e\) be the ramification index of \(K/\mathbb{Q}_{p}\), then we can write \(x_{0}=\alpha_{0}\pi^{eb_{0}p^{k_{0}}}\) and \(p=\alpha_{p}\pi^{e}\) with \(\alpha_{0},\alpha_{p}\in\mathcal{O}_{K}^{\times}.\) If \(x=\alpha\pi^{ebp^{k}}\) is an anti-partial derivative of \(D_{K,\mathfrak{p}}(x_{0})\) such that \(\nu_{p}(b)=0\) and \(\alpha\in\mathcal{O}_{K}^{\times}\), then there exists a unique \(c\in C(x_{0})\) such that_
\[k=p^{k_{0}}c+k_{0}\in\mathbb{Z}_{\geq 0},\quad b=\frac{b_{0}-c}{p^{k-k_{0}}}= \frac{b_{0}-c}{p^{p^{k_{0}}c}},\quad\alpha=\frac{\alpha_{0}b_{0}}{b}\alpha_{p} ^{k_{0}-k}\in\mathcal{O}_{K}^{\times}.\]
Proof.: We show that every anti-partial derivative \(x\) of \(D_{K,\mathfrak{p}}(x_{0})\) is associated with a unique \(c\in C(x_{0}).\) If \(x=x_{0},\) then we associate \(x\) with \(c=0.\) Suppose \(x\neq x_{0}.\) Let \(\nu_{\mathfrak{p}}(x)=bp^{k}.\) Since \(x_{0}\) is the primitive anti-partial derivative and \(\nu_{\mathfrak{p}}(x_{0})\neq 0,\) we know that \(b\neq 0\) and \(k>k_{0}.\) Then \(p^{k_{0}}(b_{0}-bp^{k-k_{0}})=k-k_{0}\) and thus \(\nu_{p}(k-k_{0})=k_{0}.\) Let \(k-k_{0}=p^{k_{0}}c\) where \(c>0\) and \(\nu_{p}(c)=0.\) By plugging \(k-k_{0}=p^{k_{0}}c\) into \(p^{k_{0}}(b_{0}-bp^{k-k_{0}})=k-k_{0},\) we get \(b_{0}-bp^{k-k_{0}}=c.\) Since \(\nu_{p}(b)=0,\) we know that \(\nu_{p}(b_{0}-c)=p^{k_{0}}c.\)
Then we show that for each \(c\in C(x_{0}),\) we can define a unique \(x=x(c)\) such that \(D_{K,\mathfrak{p}}(x)=D_{K,\mathfrak{p}}(x_{0}).\) Since \(\nu_{p}(b_{0}-c)=p^{k_{0}}c,\) there exists \(b\in\mathbb{Q}\) with \(\nu_{p}(b)=0\) such that \(b_{0}-c=bp^{p^{k_{0}}c}.\) Set \(k:=p^{k_{0}}c+k_{0}.\) We can compute
\[bp^{k}+k-1=\frac{b_{0}-c}{p^{k-k_{0}}}p^{k}+k-1=(b_{0}-c)p^{k_{0}}+p^{k_{0}}c+k_{0}-1=b_{0}p^{k_{0}}+k_{0}-1.\]
Set \(x:=\alpha\pi^{ebp^{k}}\) where \(\alpha=\alpha_{0}b_{0}\alpha_{p}^{k_{0}-k}/b.\) We have
\[D_{K,\mathfrak{p}}(x)=\frac{x\nu_{\mathfrak{p}}(x)}{p}=\frac{\alpha\pi^{ebp^{k}}bp^{k}}{p}=\alpha b\pi^{ebp^{k}}p^{k-1}=\alpha b\alpha_{p}^{k-1}\pi^{e(bp^{k}+k-1)}\]
\[=\alpha_{0}b_{0}\alpha_{p}^{k_{0}-1}\pi^{e(b_{0}p^{k_{0}}+k_{0}-1)}=\frac{\alpha_{0}b_{0}}{p}\pi^{eb_{0}p^{k_{0}}}p^{k_{0}}=\frac{x_{0}\nu_{\mathfrak{p}}(x_{0})}{p}=D_{K,\mathfrak{p}}(x_{0}).\qed\]
**Corollary 2.11**.: _For any nonzero \(y\in K\), the set \(\{x\in K:D_{K,\mathfrak{p}}(x)=y\}\) is finite (possibly empty)._
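Over \(K=\mathbb{Q}_{p}\) the finiteness is very concrete: if \(D_{\mathbb{Q}_{p},p}(x)=y\neq 0\) and \(w=\nu_{p}(x)\neq 0\), then \(x=py/w\), and \(w\) must satisfy \(w+\nu_{p}(w)=\nu_{p}(y)+1\); conversely, every nonzero integer \(w\) with this property yields the anti-partial derivative \(x=py/w\). The Python sketch below (an illustration with ad hoc helper names, not part of the paper) counts such \(w\) for sample target valuations; note that only \(\nu_{p}(y)\) matters, so no \(p\)-adic arithmetic is required.

```python
def vp(n: int, p: int) -> int:
    # p-adic valuation of a nonzero integer
    n, v = abs(n), 0
    while n % p == 0:
        n //= p; v += 1
    return v

def count_antiderivatives(V: int, p: int) -> int:
    # number of x in Q_p with D_{Q_p,p}(x) = y, where nu_p(y) = V;
    # such x correspond to nonzero integers w with w + nu_p(w) = V + 1
    return sum(1 for w in range(V - 63, V + 2)
               if w != 0 and w + vp(w, p) == V + 1)

p = 2
for V in range(0, 13):
    print(V, count_antiderivatives(V, p))
# V = 1 gives 0, matching the fact that p^{p-1} = 2 has no anti-partial derivative for p = 2;
# V = 10 gives 3 (w = 8, 10, 11), so e.g. y = 5120 has three anti-partial derivatives in Q_2.
```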
For the rest of this subsection, we will prove Conjecture 1.2 for partial derivatives over any finite extension \(K/\mathbb{Q}_{p}.\) We will show that for each positive integer \(n,\) there exist infinitely many \(x\in\mathbb{Q}_{p}\) such that \(D_{\mathbb{Q}_{p},p}(x)\) has exactly \(n\) anti-partial derivatives in \(\mathbb{Q}_{p}.\) By Lemma 2.9, we know that all anti-partial derivatives of \(D_{\mathbb{Q}_{p},p}(x)\) must be in \(\mathbb{Q}_{p}\) and thus \(D_{\mathbb{Q}_{p},p}(x)\) has exactly \(n\) anti-partial derivatives in any finite extension \(K/\mathbb{Q}_{p}.\) The first lemma gives us a way to construct \(k_{0}\in\mathbb{Z}_{>0}\) such that if \(\nu_{\mathfrak{p}}(\nu_{\mathfrak{p}}(x_{0}))=k_{0},\) then \(x_{0}\) is the primitive anti-partial derivative of \(D_{K,\mathfrak{p}}(x_{0}).\)
**Lemma 2.12**.: _For every integer \(m\geq 2\), let \(k_{0}=k_{0}(m):=p+p^{2}+\cdots+p^{m}.\) For every \(x_{0}\in\mathbb{Q}_{p}\), if \(\nu_{\mathfrak{p}}(\nu_{\mathfrak{p}}(x_{0}))=k_{0}\), then \(x_{0}\) is the primitive anti-partial derivative of \(D_{\mathbb{Q}_{p},p}(x_{0})\)._
Proof.: Suppose \(x_{0}\) is not the primitive anti-partial derivative of \(D_{\mathbb{Q}_{p},p}(x_{0}).\) Let \(x\neq x_{0}\) be another anti-partial derivative of \(D_{\mathbb{Q}_{p},p}(x_{0})\) with \(\nu_{\mathfrak{p}}(x)=bp^{k}\) such that \(k<k_{0}.\) If \(k<0,\) we know that \(D_{\mathbb{Q}_{p},p}(x_{0})\) has exactly one anti-partial derivative. Hence \(k\geq 0.\) Since \(D_{\mathbb{Q}_{p},p}(x)=D_{\mathbb{Q}_{p},p}(x_{0}),\) we get \(bp^{k}-b_{0}p^{k_{0}}=k_{0}-k.\) This means that \(\nu_{p}(k_{0}-k)=k.\) It suffices to show that no \(0\leq k<k_{0}\) satisfies this relation. It is clear that \(k\neq 0\) because \(\nu_{p}(k_{0})=1,\)
and \(k\neq 1\) because \(\nu_{p}(k_{0}-1)=0.\) Suppose \(k>1.\) If \(\nu_{p}(k_{0}-k^{\prime})=k\) for some \(k^{\prime}>0,\) then \(k^{\prime}\geq p+\cdots+p^{k-1}>k.\) Therefore there does not exist an anti-partial derivative \(x\) with \(k<k_{0}.\) This means that \(x_{0}\) is the primitive anti-partial derivative of \(D_{\mathbb{Q}_{p},p}(x_{0}).\)
The next lemma allows us to construct \(b_{0}\) for every \(k_{0}>0\) such that there are exactly \(n-1\) different possible values of \(c\in\mathbb{Z}_{>0}\) such that \(\nu_{p}(b_{0}-c)=p^{k_{0}}c.\) This means that the set \(C(x_{0})\) has exactly \(n\) elements (with \(0\) included).
**Lemma 2.13**.: _Fix a positive integer \(k.\) Let \(c_{1}=0\), and for \(i\geq 2\), let \(c_{i}:=p^{p^{k}c_{i-1}}+c_{i-1}.\) Let_
\[C_{n}:=\{c\in\mathbb{Z}_{>0}\;:\;\nu_{p}(c_{n+1}-c)=p^{k}c\}.\]
_Then \(C_{n}=\{c_{2},\ldots,c_{n}\}.\)_
Proof.: We first note that for any \(1\leq i<j\),
\[c_{j}-c_{i}=\sum_{m=i}^{j-1}\left(c_{m+1}-c_{m}\right)=\sum_{m=i}^{j-1}p^{p^{k }c_{m}}\]
and so \(\nu_{p}(c_{j}-c_{i})=p^{k}c_{i}.\) This shows that \(c_{m}\in C_{n}\) if and only if \(m\in\{2,3,\ldots,n\}.\)
Next, we show that no other integers are in \(C_{n}.\) If \(c\in C_{n}\) where \(c>c_{n+1},\) then \(c-c_{n+1}=\alpha p^{p^{k}c},\) where \(\alpha>0.\) By definition of \(c_{n+1},\)\(c-c_{n+1}=c-(c_{n}+p^{p^{k}c_{n}}).\) Thus
\[c-c_{n}=\alpha\,p^{p^{k}c}+p^{p^{k}c_{n}}=p^{p^{k}c_{n}}\left(\alpha\,p^{p^{k} (c-c_{n})}+1\right).\]
This is a contradiction, since the expression on the right hand side is clearly larger than \(c-c_{n}.\) This shows that if \(c\in C_{n},\) then \(c\leq c_{n+1}.\)
Suppose \(c\in C_{n}\) where \(c_{m}<c<c_{m+1}\) for some \(2\leq m\leq n.\) We have \(\nu_{p}(c_{n+1}-c_{m+1})=p^{k}c_{m+1}\) when \(m<n.\) Since \(\nu_{p}(c_{n+1}-c)=p^{k}c,\) we have
\[\nu_{p}(c_{m+1}-c)=\nu_{p}\Big{(}(c_{n+1}-c)-(c_{n+1}-c_{m+1})\Big{)}=p^{k}c.\]
Therefore \(c_{m+1}-c=\gamma p^{p^{k}c}\) for some \(\gamma>0.\) By definition, \(c_{m+1}=p^{p^{k}c_{m}}+c_{m},\) and so we would have
\[p^{p^{k}c_{m}}+c_{m}=\gamma p^{p^{k}c}+c,\]
which is a contradiction, since the left side is clearly less than the right. This shows that if \(c\in C_{n}\) and \(c\leq c_{n+1},\) then \(c=c_{m}\) for some \(2\leq m\leq n.\) This concludes the proof of the lemma.
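For small parameters the set \(C_{n}\) can be checked by brute force. The sketch below (illustrative only; the rapid growth of the \(c_{i}\) limits it to tiny values of \(n\)) verifies Lemma 2.13 for \(p=2\), \(k=1\), \(n=3\), where \(c_{2}=1\), \(c_{3}=5\), \(c_{4}=1029\).

```python
def vp(n: int, p: int) -> int:
    v = 0
    while n % p == 0:
        n //= p; v += 1
    return v

p, k, n = 2, 1, 3
c = [None, 0]                                   # c_1 = 0
for i in range(2, n + 2):                       # c_i = p^(p^k * c_{i-1}) + c_{i-1}
    c.append(p ** (p ** k * c[-1]) + c[-1])

C_n = [x for x in range(1, c[n + 1]) if vp(c[n + 1] - x, p) == p ** k * x]
print(C_n)                                      # [1, 5]
print(c[2:n + 1])                               # [1, 5], i.e. {c_2, ..., c_n}
```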
**Theorem 2.14**.: _For each positive integer \(n\), there are infinitely many \(x_{0}\in K\) such that \(D_{K,\mathfrak{p}}(x_{0})\) has exactly \(n\) anti-partial derivatives in \(K\)._
Proof.: By Lemma 2.9, it suffices to assume that \(K=\mathbb{Q}_{p}\) and \(\mathfrak{p}=(p)\). For every integer \(m\geq 2\), let \(k_{0}=k_{0}(m)\) be defined as in Lemma 2.12, and let \(b_{0}=c_{n+1}\) be defined as in Lemma 2.13 for \(k=k_{0}\). Set \(x_{0}:=p^{b_{0}p^{k_{0}}}\). Lemma 2.12 implies that \(x_{0}\) is the primitive anti-partial derivative of \(D_{\mathbb{Q}_{p},p}(x_{0})\). Lemma 2.13 implies that \(D_{\mathbb{Q}_{p},p}(x_{0})\) has exactly \(n\) anti-partial derivatives. Therefore, for each positive integer \(n\), there exist infinitely many \(x_{0}\in\mathbb{Q}_{p}\) such that \(D_{\mathbb{Q}_{p},p}(x_{0})\) has exactly \(n\) anti-partial derivatives with \(x_{0}\) being its primitive anti-partial derivative.
## 3. Number Fields
In this section, we generalize the arithmetic derivative and the arithmetic partial derivative to number fields. Recall the explicit formula of the arithmetic derivative on \(\mathbb{Q}\):
\[D_{\mathbb{Q}}(x)=x\sum_{p|x}\frac{\nu_{p}(x)}{p}.\]
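For concreteness, this formula can be implemented directly for rational numbers. The Python sketch below (a small illustration using `sympy` for integer factorization; the function name is ours) computes \(D_{\mathbb{Q}}\); for instance \(D_{\mathbb{Q}}(6)=6(\tfrac{1}{2}+\tfrac{1}{3})=5\).

```python
from fractions import Fraction
from sympy import factorint

def arithmetic_derivative(x: Fraction) -> Fraction:
    # D_Q(x) = x * sum_{p | x} nu_p(x) / p
    if x == 0:
        return Fraction(0)
    exps = dict(factorint(abs(x.numerator)))
    for p, e in factorint(x.denominator).items():
        exps[p] = exps.get(p, 0) - e
    return x * sum(Fraction(e, p) for p, e in exps.items())

print(arithmetic_derivative(Fraction(6)))        # 5
print(arithmetic_derivative(Fraction(1, 4)))     # -1/4, since nu_2(1/4) = -2
```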
Let \(K/\mathbb{Q}\) be a number field of finite degree. One could mimic the above formula and define the arithmetic derivative on \(K\) by the formula:
\[D_{K}(x)=x\sum_{\mathfrak{p}|x}\frac{\nu_{\mathfrak{p}}(x)}{p},\]
where \(\mathfrak{p}\) are prime ideals in \(\mathcal{O}_{K}\). The sum is finite as there are finitely many \(\mathfrak{p}\) such that \(\nu_{\mathfrak{p}}(x)\neq 0\). This formula presents a challenge. Let \(p\in\mathbb{Q}\) be a rational prime. Then
\[D_{K}(p)=p\sum_{\mathfrak{p}|p}\frac{\nu_{\mathfrak{p}}(p)}{p}=\sum_{ \mathfrak{p}|p}\nu_{\mathfrak{p}}(p)=g(p,K)\cdot 1=g(p,K),\]
where \(g(p,K)\) is the number of prime ideals in \(\mathcal{O}_{K}\) that divide \(p\). When \(g(p,K)\neq 1\), \(D_{K}(p)\neq D_{\mathbb{Q}}(p)\), so the above formula of \(D_{K}\) does not give a true extension of \(D_{\mathbb{Q}}\). In order for \(D_{K}(x)=D_{\mathbb{Q}}(x)\) for all \(x\in\mathbb{Q}\), we will need to divide by \(g(p,K)\). Furthermore, let \(\mathfrak{p}_{1}\) and \(\mathfrak{p}_{2}\) be two prime ideals in \(\mathcal{O}_{K}\) that divide \(p\) and let \(L/K\) be a finite extension. In general, \(g(\mathfrak{p}_{1},L)\) and \(g(\mathfrak{p}_{2},L)\) need not be equal unless, for example, both \(K/\mathbb{Q}\) and \(L/\mathbb{Q}\) are Galois. So we will start by generalizing \(D_{\mathbb{Q}}\) to finite Galois extensions. Then we can further generalize \(D_{\mathbb{Q}}\) to all number fields by restriction.
### Finite Galois extensions
Let \(K\) be a finite Galois extension of \(\mathbb{Q}\) of degree \(n\). Let \(\mathcal{O}_{K}\) be the ring of integers and \(\mathfrak{p}\) a nonzero prime ideal of \(\mathcal{O}_{K}\) such that \(p\in\mathfrak{p}\). There is a discrete valuation \(\nu_{\mathfrak{p}}\) on \(K\) that extends the \(p\)-adic valuation \(\nu_{p}\) on \(\mathbb{Q}\). This induces a norm \(|\cdot|_{\nu_{\mathfrak{p}}}=[\mathcal{O}_{K}:\mathfrak{p}]^{-\nu_{\mathfrak{ p}}(\cdot)}\) on \(K\). Let \(K_{\nu_{\mathfrak{p}}}\) be the completion of \(K\) with respect to the \(\mathfrak{p}\)-adic topology and thus \(K_{\nu_{\mathfrak{p}}}\) is a
finite extension of \(\mathbb{Q}_{p}\), where \(\mathfrak{p}\cap\mathbb{Z}=(p)\) (denoted by \(\mathfrak{p}\mid p\)). Let \(e(K_{\nu_{\mathfrak{p}}}|\mathbb{Q}_{p})\) be the ramification index and \(f(K_{\nu_{\mathfrak{p}}}|\mathbb{Q}_{p}):=[\mathcal{O}_{K}/\mathfrak{p}:\mathbb{F}_{p}]\) the inertia degree of the extension \(K_{\nu_{\mathfrak{p}}}/\mathbb{Q}_{p}\). One has the following decomposition:
\[p\mathcal{O}_{K}=\prod_{\mathfrak{p}\mid p}\mathfrak{p}^{e(K_{\nu_{\mathfrak{p }}}|\mathbb{Q}_{p})}.\]
It is well known that for every fixed prime number \(p\), we have the formula
\[n=\sum_{\mathfrak{p}\mid p}e(K_{\nu_{\mathfrak{p}}}|\mathbb{Q}_{p})f(K_{\nu_{ \mathfrak{p}}}|\mathbb{Q}_{p}). \tag{5}\]
The Galois group \(G(K/\mathbb{Q})\) acts transitively on the set of prime ideals \(\{\mathfrak{p}\subset\mathcal{O}_{K}:\mathfrak{p}\mid p\}\) for every fixed prime \(p\in\mathbb{Q}\)[11, Chapter 1, Section 7, Proposition 19]. This implies that for every nonzero prime ideal \(\mathfrak{p}\mid p\), the ramification index \(e(K_{\nu_{\mathfrak{p}}}|\mathbb{Q}_{p})\) and the inertia degree \(f(K_{\nu_{\mathfrak{p}}}|\mathbb{Q}_{p})\) depend only on \(p\). If we denote them by \(e(p,K)\) and \(f(p,K)\) respectively, then formula (5) becomes
\[n=e(p,K)f(p,K)g(p,K), \tag{6}\]
where \(g(p,K)\) (again only depends on \(p\)) is the number of distinct prime ideals \(\mathfrak{p}\) such that \(\mathfrak{p}\mid p\). Now we can extend the arithmetic derivative \(D_{\mathbb{Q}}\) to \(K\). For every nonzero \(x\in K\), we define
\[D_{K}(x):=x\sum_{\mathfrak{p}\mid x}\frac{\nu_{\mathfrak{p}}(x)}{pg(p,K)}.\]
One can check that \(D_{K}\) satisfies the Leibniz rule:
\[D_{K}(xy) =xy\sum_{\mathfrak{p}\mid xy}\frac{\nu_{\mathfrak{p}}(xy)}{pg(p,K)}\] \[=xy\sum_{\mathfrak{p}\mid xy}\frac{\nu_{\mathfrak{p}}(x)+\nu_{ \mathfrak{p}}(y)}{pg(p,K)}\] \[=\Big{(}\sum_{\mathfrak{p}\mid xy}\frac{x\nu_{\mathfrak{p}}(x)} {pg(p,K)}\Big{)}y+x\Big{(}\sum_{\mathfrak{p}\mid xy}\frac{y\nu_{\mathfrak{p}} (y)}{pg(p,K)}\Big{)}\] \[=D_{K}(x)y+xD_{K}(y).\]
It is easy to check that \(D_{K}(0)=0\). To check that \(D_{K}:K\to K\) extends \(D_{\mathbb{Q}}:\mathbb{Q}\to\mathbb{Q}\), recall that for every prime \(p\), we have \(\nu_{\mathfrak{p}}(x)=\nu_{p}(x)\) for every \(x\in\mathbb{Q}\). And so for every nonzero \(x\in\mathbb{Q}\), we get
\[D_{K}(x)=x\sum_{\mathfrak{p}\mid x}\frac{\nu_{\mathfrak{p}}(x)}{pg(p,K)}=x \sum_{p\mid x}\Big{(}\frac{g(p,K)\cdot\nu_{p}(x)}{pg(p,K)}\Big{)}=x\sum_{p \mid x}\frac{\nu_{p}(x)}{p}=D_{\mathbb{Q}}(x).\]
### Number fields
Let \(K/\mathbb{Q}\) be a number field and let \(L/K\) be an extension such that \(L/\mathbb{Q}\) is Galois (e.g., one can take \(L\) to be a Galois closure of \(K/\mathbb{Q}\)). For every \(x\in K\), one can define \(D_{K}(x):=D_{L}(x)\). We need to make sure that this definition does not depend on the choice of the Galois extension \(L\).
**Lemma 3.1**.: _Suppose \(K/\mathbb{Q}\) and \(L/\mathbb{Q}\) are finite Galois extensions. We have \(D_{K}(x)=D_{L}(x)\) for every \(x\in K\cap L\)._
Proof.: We first assume that \(K\subset L\). Since \(K/\mathbb{Q}\) and \(L/\mathbb{Q}\) are both Galois, for every rational prime \(p\) the group \(G(L/\mathbb{Q})\) acts transitively on the prime ideals of \(\mathcal{O}_{L}\) above \(p\) and maps \(\mathcal{O}_{K}\) to itself; hence for all nonzero prime ideals \(\mathfrak{p}_{1}\) and \(\mathfrak{p}_{2}\) of \(\mathcal{O}_{K}\) with \(\mathfrak{p}_{1}\mid p\) and \(\mathfrak{p}_{2}\mid p\), we get \(g(\mathfrak{p}_{1},L)=g(\mathfrak{p}_{2},L)\). Let \(\mathfrak{p}\) and \(\mathfrak{P}\) be two prime ideals in \(\mathcal{O}_{K}\) and \(\mathcal{O}_{L}\) respectively such that \(\mathfrak{P}\mid\mathfrak{p}\mid p\). Note that \(\nu_{\mathfrak{P}}(x)=\nu_{\mathfrak{p}}(x)\) for every \(x\in K\) and that \(g(p,L)=g(\mathfrak{p},L)g(p,K)\). For every nonzero \(x\in K\), we have
\[D_{L}(x) =x\sum_{\mathfrak{P}\mid x}\frac{\nu_{\mathfrak{P}}(x)}{pg(p,L)} =x\sum_{\mathfrak{p}\mid x}\sum_{\mathfrak{P}\mid\mathfrak{p}}\frac{\nu_{\mathfrak{p}}(x)}{pg(p,L)}\] \[=x\sum_{\mathfrak{p}\mid x}\frac{g(\mathfrak{p},L)\nu_{\mathfrak{p}}(x)}{pg(\mathfrak{p},L)g(p,K)}=x\sum_{\mathfrak{p}\mid x}\frac{\nu_{\mathfrak{p}}(x)}{pg(p,K)}=D_{K}(x).\]
This shows that \(D_{K}(x)=D_{L}(x)\) for all \(x\in K\) if \(K\subset L\).
Now suppose \(K/\mathbb{Q}\) and \(L/\mathbb{Q}\) are two arbitrary finite Galois extensions. Since \(K\cap L\) is also a finite Galois extension of \(\mathbb{Q}\), for every \(x\in K\cap L\), we have \(D_{K}(x)=D_{K\cap L}(x)\) by the previous paragraph. Using the same argument, we get \(D_{L}(x)=D_{K\cap L}(x)\) for every \(x\in K\cap L\), and therefore \(D_{K}(x)=D_{L}(x)\) for every \(x\in K\cap L\).
Suppose \(K/\mathbb{Q}\) is a number field (not necessarily Galois). For every \(x\in K\), we can define \(D_{K}(x):=D_{K^{\text{Gal}}}(x)\) where \(K^{\text{Gal}}\) is a Galois closure of \(K/\mathbb{Q}\). When \(x\neq 0\), it is clear that \(D_{K}(x)/x\in\mathbb{Q}\) and thus \(D_{K}(x)\in K\). We have a well-defined arithmetic derivative \(D_{K}:K\to K\) when \(K\) is a number field.
### Arithmetic subderivative
Let \(S\) be a (finite or infinite) subset of the prime numbers \(\mathbb{P}\). One can define the so-called arithmetic subderivative \(D_{\mathbb{Q},S}:\mathbb{Q}\to\mathbb{Q}\) by
\[D_{\mathbb{Q},S}(x)=\sum_{p\in S}x\nu_{p}(x)/p.\]
It is easy to see that \(D_{\mathbb{Q},S}=\sum_{p\in S}D_{p}\) and \(D_{\mathbb{Q}}=\sum_{p\in\mathbb{P}}D_{p}\). One can extend \(D_{\mathbb{Q},S}\) to all finite Galois extensions \(K/\mathbb{Q}\). Let \(T\) be a set of prime ideals of \(\mathcal{O}_{K}\). For every nonzero \(x\in K\), we define
\[D_{K,T}(x):=x\sum_{\mathfrak{p}\in T,\mathfrak{p}\mid p}\frac{\nu_{\mathfrak{ p}}(x)}{pg(p,K)}.\]
If \(T=\{\mathfrak{p}\}\) contains only one prime ideal, then we call \(D_{K,T}=D_{K,\mathfrak{p}}\) the arithmetic partial derivative with respect to \(\mathfrak{p}\). By taking \(K=\mathbb{Q}\) and \(\mathfrak{p}=(p)\), we see that \(D_{K,\mathfrak{p}}\) generalizes the arithmetic partial derivative with respect to \(p\). Suppose \(L/K\) is a finite extension such that \(L/\mathbb{Q}\) is Galois. Let
\[T_{L/K}=\{\mathfrak{P}:\mathfrak{P}\text{ prime ideal of }\mathcal{O}_{L}, \exists\;\mathfrak{p}\in T\text{ such that }\mathfrak{P}\mid\mathfrak{p}\}.\]
For every nonzero \(x\in K\), we have
\[D_{L,T_{L/K}}(x) =\sum_{\mathfrak{P}\in T_{L/K},\mathfrak{P}|p}\frac{x\nu_{ \mathfrak{P}}(x)}{pg(p,L)}=\sum_{\mathfrak{p}\in T}\sum_{\mathfrak{P}\in T_{L /K},\mathfrak{P}|\mathfrak{p}}\frac{x\nu_{\mathfrak{P}}(x)}{pg(p,L)}\] \[=\sum_{\mathfrak{p}\in T}g(\mathfrak{p},L)\frac{x\nu_{\mathfrak{ p}}(x)}{pg(p,K)g(\mathfrak{p},L)}=\sum_{\mathfrak{p}\in T}\frac{x\nu_{ \mathfrak{p}}(x)}{pg(p,K)}=D_{K,T}(x).\]
In this case, \(D_{L,T_{L/K}}\) extends \(D_{K,T}\).
If \(K/\mathbb{Q}\) is a number field (not necessarily Galois), we can define \(D_{K,T}\) via a larger Galois extension. Let \(L/K\) be a finite extension such that \(L/\mathbb{Q}\) is Galois. Let \(T_{L/K}\) be defined as above. We can define \(D_{K,T}(x):=D_{L,T_{L/K}}(x)\) for all \(x\in K\). Again this definition does not depend on the choice of Galois extensions. Let \(L_{1}/K\) and \(L_{2}/K\) be finite extensions such that \(L_{1}/\mathbb{Q}\) and \(L_{2}/\mathbb{Q}\) are Galois. Let \(L_{3}:=L_{1}\cap L_{2}\) and \(T^{\prime}:=T_{L_{3}/K}\). We note that \(T_{L_{1}/K}=T^{\prime}_{L_{1}/L_{3}}\) and \(T_{L_{2}/K}=T^{\prime}_{L_{2}/L_{3}}\). Therefore for every \(x\in K\subset L_{3}\), we have
\[D_{L_{1},T_{L_{1}/K}}(x)=D_{L_{1},T^{\prime}_{L_{1}/L_{3}}}(x)=D_{L_{3},T^{ \prime}}(x)=D_{L_{2},T^{\prime}_{L_{2}/L_{3}}}(x)=D_{L_{2},T_{L_{2}/K}}(x).\]
**Remark 3.2**.: Let \(K/\mathbb{Q}\) be a finite Galois extension. Just like in the local case, one can ask whether Theorems 1.3 and 1.4 are true for \(D_{K,\mathfrak{p}}\). Note that in the global case \(D_{K,\mathfrak{p}}(x)=\frac{x\nu_{\mathfrak{p}}(x)}{pg(p,K)}\), whereas in the local case \(g(p,K)=1\). If \(\nu_{\mathfrak{p}}(g(p,K))=0\), then \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x))=\nu_{\mathfrak{p}}(x)+\nu_{ \mathfrak{p}}(\nu_{\mathfrak{p}}(x))-1\), which is the same as Equation (1). In this case, Theorems 1.3 and 1.4 are still true and can be proved in a similar fashion. If \(\nu_{\mathfrak{p}}(g(p,K))=a>0\), then \(\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x))=\nu_{\mathfrak{p}}(x)+\nu_{ \mathfrak{p}}(\nu_{\mathfrak{p}}(x))-1-a\). In this case, the behavior of the \(\nu_{\mathfrak{p}}\) sequence of \(x\) warrants further study.
## 4. Arithmetic Logarithmic Derivative
### Local Case
The logarithmic partial derivative (with respect to \(p\)) \(\operatorname{ld}_{\mathbb{Q},p}:\mathbb{Q}^{\times}\to\mathbb{Q}\) is a homomorphism defined by the formula
\[\operatorname{ld}_{\mathbb{Q},p}(x)=D_{\mathbb{Q},p}(x)/x\]
because
\[\operatorname{ld}_{\mathbb{Q},p}(xy)=\frac{D_{\mathbb{Q},p}(xy)}{xy}=\frac{D_ {\mathbb{Q},p}(x)y+xD_{\mathbb{Q},p}(y)}{xy}=\operatorname{ld}_{\mathbb{Q},p} (x)+\operatorname{ld}_{\mathbb{Q},p}(y).\]
The image of \(\operatorname{ld}_{\mathbb{Q},p}\) is
\[\operatorname{ld}_{\mathbb{Q},p}(\mathbb{Q}^{\times})=\{m/p:m\in\mathbb{Z}\}= \langle 1/p\rangle\cong\mathbb{Z}\]
and thus \(\operatorname{ld}_{\mathbb{Q},p}\) is not onto. Suppose \(\operatorname{ld}_{\mathbb{Q},p}(x)=0\), then \(D_{\mathbb{Q},p}(x)=0\) and thus
\[\operatorname{Ker}(\operatorname{ld}_{\mathbb{Q},p})=\{x\in\mathbb{Q}^{\times }:\nu_{p}(x)=0\}.\]
One can extend \(\operatorname{ld}_{\mathbb{Q},p}\) to \(\mathbb{Q}_{p}^{\times}\) by the formula \(\operatorname{ld}_{\mathbb{Q}_{p},p}(x):=D_{\mathbb{Q}_{p},p}(x)/x\in\mathbb{Q}\).
Using the same argument, we get
\[\operatorname{ld}_{\mathbb{Q}_{p},p}(\mathbb{Q}_{p}^{\times})=\{m/p:m\in \mathbb{Z}\},\qquad\operatorname{Ker}(\operatorname{ld}_{\mathbb{Q}_{p},p})= \{x\in\mathbb{Q}_{p}^{\times}:\nu_{p}(x)=0\}.\]
Let \(K/\mathbb{Q}_{p}\) be a finite extension. We can define \(\operatorname{ld}_{K,\mathfrak{p}}:K^{\times}\to\mathbb{Q}\) as
\[\operatorname{ld}_{K,\mathfrak{p}}(x):=\frac{D_{K,\mathfrak{p}}(x)}{x}=\frac{ \nu_{\mathfrak{p}}(x)}{p}.\]
It is easy to see the kernel of \(\operatorname{ld}_{K,\mathfrak{p}}\) is
\[\operatorname{Ker}(\operatorname{ld}_{K,\mathfrak{p}})=\{x\in K^{\times}:\nu_ {\mathfrak{p}}(x)=0\}.\]
The description of the image of \(\operatorname{ld}_{K,\mathfrak{p}}\) depends on whether \(p\) divides the ramification index \(e\). Let \(e=p_{1}^{r_{1}}p_{2}^{r_{2}}\cdots p_{j}^{r_{j}}\) be the unique factorization of the ramification index into prime powers. If \(p\notin\{p_{1},p_{2},\ldots,p_{j}\}\), then
\[\operatorname{ld}_{K,\mathfrak{p}}(K^{\times})=\{m/pe:m\in\mathbb{Z}\}= \langle 1/p,1/p_{1}^{r_{1}},\ldots,1/p_{j}^{r_{j}}\rangle\cong\mathbb{Z}.\]
If \(p\in\{p_{1},p_{2},\ldots,p_{j}\}\) and assume \(p=p_{1}\), then
\[\operatorname{ld}_{K,\mathfrak{p}}(K^{\times})=\{m/pe:m\in\mathbb{Z}\}= \langle 1/p_{1}^{r_{1}+1},1/p_{2}^{r_{2}},\ldots,1/p_{j}^{r_{j}}\rangle\cong \mathbb{Z}.\]
### Global Case
If \(K/\mathbb{Q}\) is a finite Galois extension, one can define the arithmetic logarithmic derivative \(\operatorname{ld}_{K}:K^{\times}\to\mathbb{Q}\) as
\[\operatorname{ld}_{K}(x)=\frac{D_{K}(x)}{x}=\sum_{\mathfrak{p}|x}\frac{\nu_{ \mathfrak{p}}(x)}{pg(p,K)}\in\mathbb{Q}.\]
It is easy to show that \(\operatorname{ld}_{K}\) is a group homomorphism. When \(K=\mathbb{Q}\), we get that \(\operatorname{ld}_{\mathbb{Q}}(x)=\sum_{p|x}\frac{\nu_{p}(x)}{p}\). Hence \(\operatorname{ld}_{\mathbb{Q}}(\mathbb{Q}^{\times})=\langle\frac{1}{p}:p\in\mathbb{P}\rangle\). For every finite Galois extension \(K/\mathbb{Q}\), one can show that \(\operatorname{ld}_{K}(K^{\times})\) and \(\operatorname{ld}_{\mathbb{Q}}(\mathbb{Q}^{\times})\) are isomorphic as subgroups of \(\mathbb{Q}\). Before we prove this result, we need to recall a concept called \(p\)-height in the classification of subgroups of \(\mathbb{Q}\). Let \(G\) be an (additive) subgroup of \(\mathbb{Q}\) and \(g\in G\). The \(p\)-height of \(g\) in \(G\) is \(k\) if \(p^{k}x=g\) is solvable in \(G\) and \(p^{k+1}x=g\) is not. If \(p^{k}x=g\) has a solution for every \(k\), then we say that the \(p\)-height of \(g\) in \(G\) is infinite. Let \(H_{p_{i},G}(g)\) be the \(p_{i}\)-height of \(g\) in \(G\). Set \(H_{G}(g):=(H_{2,G}(g),H_{3,G}(g),H_{5,G}(g),\ldots)\). It turns out that \(H_{G}(1)\) is an invariant of the subgroup \(G\) in the following sense.
**Theorem 4.1**.: _[_8_, Theorem 4]_ _Let \(G_{1}\) and \(G_{2}\) be two subgroups of \(\mathbb{Q}\). Then \(G_{1}\cong G_{2}\) if and only if \(H_{G_{1}}(1)\) and \(H_{G_{2}}(1)\) only differ in finitely many indices, and in the case \(H_{p_{i},G_{1}}(1)\neq H_{p_{i},G_{2}}(1)\), both of them are finite._
**Theorem 4.2**.: _Let \(K/\mathbb{Q}\) be a finite Galois extension. Then \(\operatorname{ld}_{K}(K^{\times})\cong\langle\frac{1}{p}:p\in\mathbb{P} \rangle<\mathbb{Q}\)._
Proof.: Let \(G:=\langle\frac{1}{p}:p\in\mathbb{P}\rangle<\mathbb{Q}\). It is easy to see that
\[H_{G}(1)=(1,1,1,\ldots).\]
Let \([K:\mathbb{Q}]=n\) and \(\overline{\nu}_{\mathfrak{p}}(x):=\nu_{\mathfrak{p}}(x)e(p,K)\) be the normalized discrete valuation. For every \(x\in K^{\times}\), we have
\[\operatorname{ld}_{K}(x)=\sum_{\mathfrak{p}|x}\frac{\nu_{\mathfrak{p}}(x)}{ pg(p,K)}=\sum_{\mathfrak{p}|x}\frac{\overline{\nu}_{\mathfrak{p}}(x)}{pg(p,K)e(p,K )}=\frac{1}{n}\sum_{\mathfrak{p}|x}\frac{\overline{\nu}_{\mathfrak{p}}(x)f(p, K)}{p}.\]
Therefore
\[\operatorname{ld}_{K}(K^{\times}) =\Big{\{}\frac{1}{n}\sum_{\mathfrak{p}|x}\frac{\overline{\nu}_{ \mathfrak{p}}(x)f(p,K)}{p}\mid x\in K^{\times}\Big{\}}\] \[=\Big{\langle}\frac{f(p,K)}{np}\mid p\in\mathbb{P}\Big{\rangle}\] \[=\Big{\langle}\frac{1}{p^{1+\nu_{\mathfrak{p}}(n)-\nu_{p}(f(p,K) )}}\mid p\in\mathbb{P}\Big{\rangle}.\]
For every \(p\in\mathbb{P}\), we denote \(m(p):=1+\nu_{p}(n)-\nu_{p}(f(p,K))\). It is easy to see that
\[H_{\operatorname{ld}_{K}(K^{\times})}(1)=(m(2),m(3),m(5),\ldots).\]
As \(f(p,K)\mid n\), we know that \(1\leq m(p)<+\infty\). When \(p>n\), we have \(\nu_{p}(n)=\nu_{p}(f(p,K))=0\). This implies that \(m(p)=1\) for all but finitely many primes. Hence \(H_{G}(1)\) and \(H_{\operatorname{ld}_{K}(K^{\times})}(1)\) only differ in finitely many indices, and in the case \(H_{p_{i},G}(1)\neq H_{p_{i},\operatorname{ld}_{K}(K^{\times})}(1)\), both of them are finite. Hence \(\operatorname{ld}_{K}(K^{\times})\cong G\) by Theorem 4.1.
To determine the exact image of \(\operatorname{ld}_{K}\) in general is not easy. We give an example.
**Example 4.3**.: Let \(K=\mathbb{Q}(\sqrt{D})\) be a quadratic extension, where \(D\) is a square free integer. We rewrite the formula of \(\operatorname{ld}_{K}\) using the normalized discrete valuation \(\overline{\nu}_{\mathfrak{p}}=\nu_{\mathfrak{p}}\cdot e(p,K)\)
\[\operatorname{ld}_{K}(x)=\sum_{\mathfrak{p}|x}\frac{\nu_{\mathfrak{p}}(x)}{pg( p,K)}=\sum_{\mathfrak{p}|x}\frac{\overline{\nu}_{\mathfrak{p}}(x)}{pg(p,K)e(p,K)}= \frac{1}{2}\sum_{\mathfrak{p}|x}\frac{\overline{\nu}_{\mathfrak{p}}(x)f(p,K)} {p}.\]
It remains to determine when \(2\) is inert in \(K\), that is, \(f(2,K)=2\). Let \(\Delta_{K}\) be the discriminant of \(K\), that is, \(\Delta_{K}=D\) if \(D\equiv 1\pmod{4}\) and
\(\Delta_{K}=4D\) if \(D\equiv 2,3\pmod{4}\). Hence \(\Delta_{K}\equiv 0,1,4,5\pmod{8}\). We know that \(\mathcal{O}_{K}=\mathbb{Z}[\frac{\Delta_{K}+\sqrt{\Delta_{K}}}{2}]\). The minimal polynomial of \(\frac{\Delta_{K}+\sqrt{\Delta_{K}}}{2}\) is
\[(X-\frac{\Delta_{K}+\sqrt{\Delta_{K}}}{2})(X-\frac{\Delta_{K}-\sqrt{\Delta_{K} }}{2})=X^{2}-\Delta_{K}X+\frac{\Delta_{K}^{2}-\Delta_{K}}{4}.\]
We discuss the cases based on the value of \(\Delta_{K}\bmod{8}\).
1. If \(\Delta_{K}\equiv 0\pmod{8}\), then \(\Delta_{K}^{2}-\Delta_{K}\equiv 0\pmod{8}\). Hence \(\frac{\Delta_{K}^{2}-\Delta_{K}}{4}\equiv 0\pmod{2}\). Therefore \[X^{2}-\Delta_{K}X+\frac{\Delta_{K}^{2}-\Delta_{K}}{4}\equiv X^{2}\pmod{2},\] and \((2)=(2,\frac{\Delta_{K}+\sqrt{\Delta_{K}}}{2})^{2}\) is ramified in this case, that is, \(e(2,K)=2\).
2. If \(\Delta_{K}\equiv 1\pmod{8}\), then \(\Delta_{K}^{2}-\Delta_{K}\equiv 1-1\equiv 0\pmod{8}\). Hence \(\frac{\Delta_{K}^{2}-\Delta_{K}}{4}\equiv 0\pmod{2}\). Therefore \[X^{2}-\Delta_{K}X+\frac{\Delta_{K}^{2}-\Delta_{K}}{4}\equiv X^{2}+X\equiv X(X+ 1)\pmod{2},\] and \((2)=(2,\frac{\Delta_{K}+\sqrt{\Delta_{K}}}{2})(2,\frac{\Delta_{K}+\sqrt{ \Delta_{K}}}{2}+1)\) is totally split in this case, that is, \(g(2,K)=2\).
3. If \(\Delta_{K}\equiv 4\pmod{8}\), then \(\Delta_{K}^{2}-\Delta_{K}\equiv 0-4\equiv 4\pmod{8}\). Hence \(\frac{\Delta_{K}^{2}-\Delta_{K}}{4}\equiv 1\pmod{2}\). Therefore \[X^{2}-\Delta_{K}X+\frac{\Delta_{K}^{2}-\Delta_{K}}{4}\equiv X^{2}+1\equiv(X+ 1)^{2}\pmod{2},\] and \((2)=(2,\frac{\Delta_{K}+\sqrt{\Delta_{K}}}{2}+1)^{2}\) is ramified in this case, that is, \(e(2,K)=2\).
4. If \(\Delta_{K}\equiv 5\pmod{8}\), then \(\Delta_{K}^{2}-\Delta_{K}\equiv 1-5\equiv 4\pmod{8}\). Hence \(\frac{\Delta_{K}^{2}-\Delta_{K}}{4}\equiv 1\pmod{2}\). Therefore \[X^{2}-\Delta_{K}X+\frac{\Delta_{K}^{2}-\Delta_{K}}{4}\equiv X^{2}+X+1\pmod{2},\] which is irreducible. In this case, \(2\) is inert, that is, \(f(2,K)=2\).
If \(\Delta_{K}\equiv 5\pmod{8}\), then \(\Delta_{K}\equiv 1\pmod{4}\). In this case, \(\Delta_{K}=D\) and thus \(D\equiv 5\pmod{8}\). Therefore
\[\operatorname{ld}_{K}(K^{\times})=\begin{cases}\langle 1/2,1/3,1/5,\ldots\rangle,&\text{if $D\equiv 5\pmod{8}$;}\\ \langle 1/4,1/3,1/5,\ldots\rangle,&\text{otherwise.}\end{cases}\]
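The case analysis above is easy to confirm computationally: reduce the minimal polynomial \(X^{2}-\Delta_{K}X+\frac{\Delta_{K}^{2}-\Delta_{K}}{4}\) modulo \(2\) and count its roots in \(\mathbb{F}_{2}\) (two distinct roots: split; a double root: ramified; no roots: inert, since a monic quadratic over \(\mathbb{F}_{2}\) with a root factors into linear factors). The Python sketch below (illustrative only, for a few sample squarefree \(D\)) does exactly this.

```python
def disc(D: int) -> int:
    # discriminant of Q(sqrt(D)) for squarefree D
    return D if D % 4 == 1 else 4 * D

def behaviour_of_2(D: int) -> str:
    d = disc(D)
    c = (d * d - d) // 4
    roots = [r for r in (0, 1) if (r * r - d * r + c) % 2 == 0]
    if len(roots) == 2:
        return "split"        # X(X+1) mod 2
    if len(roots) == 1:
        return "ramified"     # a square mod 2
    return "inert"            # X^2 + X + 1, irreducible mod 2

for D in (-1, 2, 3, 5, -3, 10, 13, -7):
    print(D, disc(D) % 8, behaviour_of_2(D))
# exactly the D with disc(D) = 5 (mod 8), i.e. D = 5, -3, 13, are inert
```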
## 5. \(p\)-adic Continuity and Discontinuity
In this section, we study when arithmetic partial derivatives and arithmetic subderivatives are \(p\)-adically continuous and discontinuous. When they are continuous, we will also study if they are strictly differentiable. We first recall some definitions.
Let \(K\) be a field and \(\nu:K\to\mathbb{R}\cup\{+\infty\}\) be a discrete valuation. For all \(x,y\in K\), we have \(\nu(x+y)\geq\min\{\nu(x),\nu(y)\}\). An important property of \(\nu\) that we will use repeatedly in this section is that if \(\nu(x)\neq\nu(y)\), then \(\nu(x+y)=\min\{\nu(x),\nu(y)\}\). If \(c\) is a real number between \(0\) and \(1\), then the discrete valuation \(\nu\) induces an absolute value on \(K\) as follows:
\[|x|_{\nu}:=\begin{cases}c^{\,\nu(x)},&\text{if }x\neq 0;\\ 0,&\text{if }x=0.\end{cases}\]
We then have the formula \(|x+y|_{\nu}\leq\max\{|x|_{\nu},|y|_{\nu}\}\) and thus \(|\cdot|_{\nu}\) is an ultrametric absolute value. The subset \(\mathcal{O}_{K}=\{x\in K:\nu(x)\geq 0\}\) is a ring with the unique maximal ideal \(\mathfrak{p}=\{x\in K:\nu(x)>0\}\). Let \(f:K\to K\) be a function. We say that \(f\) is \(\mathfrak{p}\)-adically continuous at a point \(x\in K\) if for every \(\epsilon>0\), there exists \(\delta>0\) such that for every \(y\) with \(|y-x|_{\nu}<\delta\), we have \(|f(y)-f(x)|_{\nu}<\epsilon\). Equivalently, to show that \(f\) is \(\mathfrak{p}\)-adically continuous at \(x\), it is enough to show that for every sequence \(x_{i}\),
\[\lim_{i\to+\infty}\nu(x-x_{i})=+\infty\quad\text{implies}\quad\lim_{i\to+ \infty}\nu(f(x)-f(x_{i}))=+\infty.\]
On the contrary, to show that \(f\) is \(\mathfrak{p}\)-adically discontinuous at \(x\), it is enough to find one sequence \(x_{i}\) such that
\[\lim_{i\to+\infty}\nu(x-x_{i})=+\infty\quad\text{and}\quad\lim_{i\to+\infty} \nu(f(x)-f(x_{i}))\neq+\infty.\]
Recall that \(f\) is differentiable at a point \(x\) if the difference quotients \((f(y)-f(x))/(y-x)\) have a limit as \(y\to x\) (\(y\neq x\)) in the domain of \(f\). When the absolute value of the domain is ultrametric, we study the so-called strict differentiability. For more details on \(p\)-adic analysis, we refer the reader to [10].
**Definition 5.1**.: Let \(K\) be a field equipped with an ultrametric absolute value \(|\cdot|_{\nu}\). We say that \(f:K\to K\) is _strictly differentiable_ at a point \(x\in K\) (with respect to \(|\cdot|_{\nu}\)) if the difference quotients
\[\Phi f(u,v)=\frac{f(u)-f(v)}{u-v}\]
have a limit as \((u,v)\to(x,x)\) while \(u\) and \(v\) remain distinct. Similarly, we say that \(f\) is _twice strictly differentiable_ at a point \(x\) if
\[\Phi_{2}f(u,v,w)=\frac{\Phi f(u,w)-\Phi f(v,w)}{u-v}\]
tends to a limit as \((u,v,w)\to(x,x,x)\) while \(u\), \(v\), and \(w\) remain pairwise distinct.
### Partial Derivative
Let \(K/\mathbb{Q}\) be a finite Galois extension of degree \(n\). Let \(p\in\mathbb{Q}\) be a rational prime and \(\mathfrak{p}\) be a prime ideal in \(\mathcal{O}_{K}\) such that \(\mathfrak{p}\mid p\). The discrete valuation \(\nu_{\mathfrak{p}}\) that extends \(\nu_{p}\) defines an ultrametric absolute value on \(K\) by
\[|x|_{\nu_{\mathfrak{p}}}=|N_{K_{\nu_{\mathfrak{p}}}/\mathbb{Q}_{p}}(x)|_{p}^{1/[K_{\nu_{\mathfrak{p}}}:\mathbb{Q}_{p}]}=p^{-\nu_{\mathfrak{p}}(x)}.\]
**Theorem 5.2**.: _Let \(K\) be a number field and \(\mathfrak{p}\) a prime ideal of \(\mathcal{O}_{K}\). The arithmetic partial derivative \(D_{K,\mathfrak{p}}\) is \(\mathfrak{p}\)-adically continuous on \(K\)._
Proof.: Suppose \(K/\mathbb{Q}\) is Galois. We first show that \(D_{K,\mathfrak{p}}\) is continuous at nonzero \(x\in K\). Let \(x_{i}\) be a sequence that converges to \(x\) \(\mathfrak{p}\)-adically. Since \(x\neq 0\), we may write the sequence in the form \(x_{i}x\) (replacing \(x_{i}\) by \(x_{i}/x\)) without loss of generality. As \(i\to+\infty\), we know that
\[\nu_{\mathfrak{p}}(x-x_{i}x)=\nu_{\mathfrak{p}}(x)+\nu_{\mathfrak{p}}(1-x_{i} )\to+\infty.\]
This implies that \(\nu_{\mathfrak{p}}(1-x_{i})\to+\infty\) as \(i\to+\infty\). As a result, we also know that \(\nu_{\mathfrak{p}}(x_{i})=0\) when \(i\gg 0\) because if \(\nu_{\mathfrak{p}}(x_{i})\neq 0\), then \(\nu_{\mathfrak{p}}(1-x_{i})=\min\{\nu_{\mathfrak{p}}(1),\nu_{\mathfrak{p}}(x_{i})\}\leq 0\). Therefore \(D_{K,\mathfrak{p}}(x_{i})=0\) when \(i\gg 0\). To show that \(D_{K,\mathfrak{p}}(x_{i}x)\) converges to \(D_{K,\mathfrak{p}}(x)\) \(\mathfrak{p}\)-adically, it is enough to observe that
\[\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x)-D_{K,\mathfrak{p}}(x_{ i}x)) =\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x)-D_{K,\mathfrak{p}}(x)x_ {i}-D_{K,\mathfrak{p}}(x_{i})x)\] \[=\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x)(1-x_{i}))\] \[=\nu_{\mathfrak{p}}(D_{K,\mathfrak{p}}(x))+\nu_{\mathfrak{p}}(1 -x_{i})\to+\infty\]
as \(i\to+\infty\). The case \(x=0\) will be covered in Theorem 5.6.
Suppose \(K/\mathbb{Q}\) is a number field, not necessarily Galois. Let \(L/K\) be a finite extension such that \(L/\mathbb{Q}\) is Galois. Let \(\mathfrak{P}\) be a prime ideal of \(\mathcal{O}_{L}\) such that \(\mathfrak{P}\mid\mathfrak{p}\). By the previous paragraph, we know that \(D_{L,\mathfrak{P}}\) is \(\mathfrak{P}\)-adically continuous on \(L\) (and thus on \(K\)). Let \(x_{i}\in K\) be a sequence that converges to \(x\in K\) \(\mathfrak{p}\)-adically. Since \(\nu_{\mathfrak{p}}(y)=\nu_{\mathfrak{P}}(y)\) for all \(y\in K\), we know that \(x_{i}\) converges to \(x\) \(\mathfrak{P}\)-adically. As \(D_{L,\mathfrak{P}}\) is \(\mathfrak{P}\)-adically continuous on \(L\), we know that \(D_{L,\mathfrak{P}}(x_{i})\) converges to \(D_{L,\mathfrak{P}}(x)\) \(\mathfrak{P}\)-adically, and thus \(\mathfrak{p}\)-adically. This shows that \(D_{L,\mathfrak{P}}\) is \(\mathfrak{p}\)-adically continuous on \(K\). Let \(T=\{\mathfrak{p}\}\). We know that by definition \(D_{K,\mathfrak{p}}(x)=D_{L,T_{L/K}}(x)=\sum_{\mathfrak{P}\mid\mathfrak{p}}D_{L,\mathfrak{P}}(x)\). This implies that \(D_{K,\mathfrak{p}}\) is continuous on \(K\).
Since \(D_{K,\mathfrak{p}}\) is \(\mathfrak{p}\)-adically continuous on \(K\), the next question is whether \(D_{K,\mathfrak{p}}\) is strictly differentiable on \(K\) with respect to the ultrametric \(|\cdot|_{\nu_{\mathfrak{p}}}\).
**Theorem 5.3**.: _Let \(K\) be a number field and \(\mathfrak{p}\) a prime ideal of \(\mathcal{O}_{K}\). The arithmetic partial derivative \(D_{K,\mathfrak{p}}\) is strictly differentiable and twice strictly differentiable (with respect to the ultrametric \(|\cdot|_{\nu_{\mathfrak{p}}}\)) at every nonzero \(x\in K\)._
Proof.: Let \(L/K\) be a finite extension such that \(L/\mathbb{Q}\) is Galois. Let \(T=\{\mathfrak{p}\}\). We have \(D_{K,\mathfrak{p}}(x)=D_{L,T_{L/K}}(x)=\sum_{\mathfrak{P}|\mathfrak{p}}D_{L,\mathfrak{P}}(x)\).
We first show that \(D_{K,\mathfrak{p}}\) is strictly differentiable at \(x\neq 0\). Suppose a sequence \((u_{i},v_{i})\) converges to \((x,x)\) \(\mathfrak{p}\)-adically while \(u_{i}\) and \(v_{i}\) remain distinct. This implies that \((u_{i},v_{i})\) converges to \((x,x)\) \(\mathfrak{P}\)-adically. When \(i\gg 0\), we have \(\nu_{\mathfrak{P}}(u_{i})=\nu_{\mathfrak{P}}(v_{i})=\nu_{\mathfrak{P}}(x)\). We can compute
\[\Phi D_{K,\mathfrak{p}}(u_{i},v_{i}) =\frac{D_{K,\mathfrak{p}}(u_{i})-D_{K,\mathfrak{p}}(v_{i})}{u_{i} -v_{i}}=\frac{\sum_{\mathfrak{P}|\mathfrak{p}}D_{L,\mathfrak{P}}(u_{i})-\sum_{ \mathfrak{P}|\mathfrak{p}}D_{L,\mathfrak{P}}(v_{i})}{u_{i}-v_{i}}\] \[=\frac{\sum_{\mathfrak{P}|\mathfrak{p}}\frac{u_{i}\nu_{ \mathfrak{P}}(x)}{pg(p,L)}-\sum_{\mathfrak{P}|\mathfrak{p}}\frac{v_{i}\nu_{ \mathfrak{P}}(x)}{pg(p,L)}}{u_{i}-v_{i}}=\sum_{\mathfrak{P}|\mathfrak{p}} \frac{\nu_{\mathfrak{P}}(x)}{pg(p,L)}=\frac{D_{K,\mathfrak{p}}(x)}{x}.\]
Therefore the limit of \(\Phi D_{K,\mathfrak{p}}(u_{i},v_{i})\) is equal to \(D_{K,\mathfrak{p}}(x)/x\) as \(i\to+\infty\). This shows that \(D_{K,\mathfrak{p}}\) is strictly differentiable at any nonzero \(x\in K\), and its derivative is the locally constant function given by
\[(D_{K,\mathfrak{p}})^{\prime}(x)=D_{K,\mathfrak{p}}(x)/x=\operatorname{ld}_{ K,\mathfrak{p}}(x).\]
We then show that \(D_{K,\mathfrak{p}}\) is twice strictly differentiable at nonzero points. Suppose a sequence \((u_{i},v_{i},w_{i})\) converges to \((x,x,x)\) \(\mathfrak{p}\)-adically while \(u_{i}\), \(v_{i}\), and \(w_{i}\) remain pairwise distinct. Then for all \(i\gg 0\), we have
\[\Phi_{2}D_{K,\mathfrak{p}}(u_{i},v_{i},w_{i})=\frac{\Phi D_{K,\mathfrak{p}}(u _{i},w_{i})-\Phi D_{K,\mathfrak{p}}(v_{i},w_{i})}{u_{i}-v_{i}}=\frac{0}{u_{i} -v_{i}}=0.\]
Hence \(D_{K,\mathfrak{p}}\) is twice strictly differentiable at nonzero points and the second derivative is the constant zero function.
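Over \(\mathbb{Q}\) with \(\mathfrak{p}=(p)\), the stabilization of the difference quotients can be seen directly: once \(\nu_{p}(u)=\nu_{p}(v)=\nu_{p}(x)\), the quotient \(\Phi D_{\mathbb{Q},p}(u,v)\) equals \(\operatorname{ld}_{\mathbb{Q},p}(x)=\nu_{p}(x)/p\) exactly, as in the proof above. A small illustrative Python sketch (helper names ours):

```python
from fractions import Fraction

def vp(x: Fraction, p: int) -> int:
    n, d, v = abs(x.numerator), x.denominator, 0
    while n % p == 0:
        n //= p; v += 1
    while d % p == 0:
        d //= p; v -= 1
    return v

def D(x: Fraction, p: int) -> Fraction:
    return x * vp(x, p) / p if x else Fraction(0)

p, x = 2, Fraction(12)                    # nu_2(12) = 2, so ld(x) = 2/2 = 1
for i in range(1, 8):
    u = x * (1 + 2 ** i)                  # u, w -> x 2-adically, u != w
    w = x * (1 + 3 * 2 ** i)
    print(i, (D(u, p) - D(w, p)) / (u - w))   # always 1 here
```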
**Theorem 5.4**.: _Let \(K\) be a number field and \(\mathfrak{p}\) a prime ideal of \(\mathcal{O}_{K}\). The arithmetic partial derivative \(D_{K,\mathfrak{p}}\) is not strictly differentiable (with respect to the ultrametric \(|\cdot|_{\nu_{\mathfrak{p}}}\)) at \(0\)._
Proof.: This theorem is a direct corollary of the more general Theorem 5.8.
**Remark 5.5**.: Theorems 5.2, 5.3, and 5.4 hold in the local case of finite extensions over \(\mathbb{Q}_{p}\).
### Subderivative
**Theorem 5.6**.: _Let \(K/\mathbb{Q}\) be a number field and \(\mathfrak{p}\) be a prime ideal of \(\mathcal{O}_{K}\). Let \(T\) be a nonempty set of prime ideals in \(\mathcal{O}_{K}\). The arithmetic subderivative \(D_{K,T}\) is \(\mathfrak{p}\)-adically continuous at \(x=0\)._
Proof.: Let \(L/K\) be a finite extension such that \(L/\mathbb{Q}\) is Galois. Suppose \(x_{i}\in K\) is a sequence that converges to \(x\)\(\mathfrak{p}\)-adically in \(K\). Let \(\mathfrak{P}\) be a prime
ideal of \(\mathcal{O}_{L}\) such that \(\mathfrak{P}\mid\mathfrak{p}\). Then \(x_{i}\) converges to \(x\)\(\mathfrak{P}\)-adically in \(L\). Hence
\[\lim_{i\to+\infty}\nu_{\mathfrak{P}}(x-x_{i})=\lim_{i\to+\infty}\nu_{\mathfrak{P }}(x_{i})=+\infty.\]
We have
\[\nu_{\mathfrak{p}}(D_{K,T}(x_{i})) =\nu_{\mathfrak{P}}(D_{L,T_{L/K}}(x_{i}))\] \[=\nu_{\mathfrak{P}}\Big{(}x_{i}\sum_{\mathfrak{D}\in T_{L/K},\mathfrak{D}|q}\frac{\nu_{\mathfrak{D}}(x_{i})}{qg(q,L)}\Big{)}\] \[=\nu_{\mathfrak{P}}(x_{i})+\nu_{\mathfrak{P}}\Big{(}\frac{1}{[L:\mathbb{Q}]}\sum_{\mathfrak{D}\in T_{L/K},\mathfrak{D}|q}\frac{\nu_{\mathfrak{D}}(x_{i})e(q,L)f(q,L)}{q}\Big{)}\] \[\geq\nu_{\mathfrak{P}}(x_{i})-\nu_{\mathfrak{P}}([L:\mathbb{Q}])-\nu_{\mathfrak{P}}\Big{(}\prod_{\mathfrak{D}\in T_{L/K},\mathfrak{D}|q}q\Big{)}.\]
As \(\lim_{i\to+\infty}\nu_{\mathfrak{P}}(x_{i})=+\infty\), we have
\[\lim_{i\to+\infty}\nu_{\mathfrak{P}}(D_{K,T}(x)-D_{K,T}(x_{i}))=+\infty.\qed\]
**Corollary 5.7**.: _Let \(T\) be a nonempty set of (rational) prime numbers. The arithmetic subderivative \(D_{\mathbb{Q},T}\) is \(p\)-adically continuous at \(x=0\)._
**Theorem 5.8**.: _Let \(K/\mathbb{Q}\) be a number field and \(\mathfrak{p}\) a prime ideal of \(\mathcal{O}_{K}\). Let \(T\) be a nonempty set of prime ideals in \(\mathcal{O}_{K}\). The arithmetic subderivative \(D_{K,T}:K\to K\) is not strictly differentiable (with respect to the ultrametric \(|\cdot|_{\nu_{\mathfrak{p}}}\)) at \(0\)._
Proof.: Let \(L/K\) be a finite extension such that \(L/\mathbb{Q}\) is Galois. Let \(\mathfrak{P}\) be a prime ideal of \(\mathcal{O}_{L}\) such that \(\mathfrak{P}\mid\mathfrak{p}\mid p\).
We prove this theorem in two cases. First, we assume that there exists a prime ideal \(\mathfrak{p}^{\prime}\in T\) such that \(\mathfrak{p}^{\prime}\mid p\). Let \(m_{p}\) be the number of prime ideals in \(T_{L/K}\) that divide \(p\). For each integer \(i\geq 1\), define \(u_{i}=p^{i+1},v_{i}=p^{i}\). It is clear that \(u_{i}\neq v_{i}\) and \((u_{i},v_{i})\) converges to \((0,0)\) \(\mathfrak{p}\)-adically. We can compute the difference quotient
\[\Phi D_{K,T}(u_{i},v_{i}) =\frac{D_{K,T}(u_{i})-D_{K,T}(v_{i})}{u_{i}-v_{i}}=\frac{D_{L,T_{ L/K}}(u_{i})-D_{L,T_{L/K}}(v_{i})}{u_{i}-v_{i}}\] \[=\frac{\frac{(i+1)p^{(i+1)}m_{p}}{pg(p,L)}-\frac{ip^{i}m_{p}}{pg( p,L)}}{p^{i+1}-p^{i}}=\frac{m_{p}}{g(p,L)}\frac{(i+1)p-i}{p^{2}-p}.\]
The \(\mathfrak{p}\)-adic valuation of \(\Phi D_{K,T}(u_{i},v_{i})\) is greater than or equal to \(\nu_{\mathfrak{p}}(m_{p})-\nu_{\mathfrak{p}}(g(p,L))\) if \(p\mid i\) and is equal to \(\nu_{\mathfrak{p}}(m_{p})-\nu_{\mathfrak{p}}(g(p,L))-1\) if \(p\nmid i\). Hence \(\Phi D_{K,T}(u_{i},v_{i})\) does not have a limit as the sequence \((u_{i},v_{i})\to(0,0)\).
Second, we assume that there does not exist a prime ideal \(\mathfrak{p}^{\prime}\in T\) such that \(\mathfrak{p}^{\prime}\mid p\). Let \(\mathfrak{q}\in T\) be such that \(\mathfrak{q}\nmid p\) and \(\mathfrak{Q}\in T_{L/K}\) such that \(\mathfrak{Q}\mid\mathfrak{q}\mid q\). Let \(m_{q}\) be the number of prime ideals in \(T_{L/K}\) that divide \(q\). For each integer \(i\geq 1\), define \(u_{i}=(pq)^{i+1},v_{i}=(pq)^{i}\). It is clear that \(u_{i}\neq v_{i}\) and \((u_{i},v_{i})\) converges to \((0,0)\) \(\mathfrak{p}\)-adically. We can compute the difference quotient
\[\Phi D_{K,T}(u_{i},v_{i}) =\frac{D_{K,T}(u_{i})-D_{K,T}(v_{i})}{u_{i}-v_{i}}=\frac{D_{L,T_{L/K}}(u_{i})-D_{L,T_{L/K}}(v_{i})}{u_{i}-v_{i}}\] \[=\frac{\frac{(i+1)(pq)^{i+1}m_{q}}{qg(q,L)}-\frac{i(pq)^{i}m_{q}}{qg(q,L)}}{(pq)^{i+1}-(pq)^{i}}=\frac{m_{q}}{g(q,L)}\frac{(i+1)pq-i}{pq^{2}-q}.\]
The \(\mathfrak{p}\)-adic valuation of \(\Phi D_{K,T}(u_{i},v_{i})\) is greater than or equal to \(\nu_{\mathfrak{p}}(m_{q})-\nu_{\mathfrak{p}}(g(q,L))+1\) if \(p\mid i\) and is equal to \(\nu_{\mathfrak{p}}(m_{q})-\nu_{\mathfrak{p}}(g(q,L))\) if \(p\nmid i\). Hence \(\Phi D_{K,T}(u_{i},v_{i})\) does not have a limit as the sequence \((u_{i},v_{i})\rightarrow(0,0)\).
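Specializing to \(K=\mathbb{Q}\) and \(T=\{p\}\) (so \(D_{K,T}=D_{\mathbb{Q},p}\), the situation of Theorem 5.4), the oscillation in the proof is visible numerically: the difference quotient along \((p^{i+1},p^{i})\to(0,0)\) equals \(((i+1)p-i)/(p^{2}-p)\), whose \(p\)-adic valuation alternates between \(-1\) and nonnegative values. A short illustrative Python sketch (helper names ours):

```python
from fractions import Fraction

def vp(x: Fraction, p: int) -> int:
    n, d, v = abs(x.numerator), x.denominator, 0
    while n % p == 0:
        n //= p; v += 1
    while d % p == 0:
        d //= p; v -= 1
    return v

def D(x: Fraction, p: int) -> Fraction:
    return x * vp(x, p) / p if x else Fraction(0)

p = 2
for i in range(1, 13):
    u, w = Fraction(p) ** (i + 1), Fraction(p) ** i
    q = (D(u, p) - D(w, p)) / (u - w)     # = ((i+1)p - i) / (p^2 - p)
    print(i, vp(q, p))                    # -1 for odd i, >= 0 for even i (p = 2)
```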
**Theorem 5.9**.: _Let \(K/\mathbb{Q}\) be a number field of degree \(n\). Let \(\mathfrak{p}\) be a prime ideal of \(\mathcal{O}_{K}\) with \(\mathfrak{p}\mid p\). Let \(\{\mathfrak{p}\}\neq T\) be a nonempty set of prime ideals in \(\mathcal{O}_{K}\) such that there exists a prime ideal in \(T\) that does not divide \(p\). Then the arithmetic subderivative \(D_{K,T}:K\to K\) is \(\mathfrak{p}\)-adically discontinuous at every nonzero \(x\in K\)._
Proof.: We first assume \(K/\mathbb{Q}\) is Galois. For each prime \(q\in\mathbb{P}\), let \(r_{q}\) be the number of prime ideals \(\mathfrak{q}\in T\) such that \(\mathfrak{q}\mid q\). Let \(\mathbb{P}_{T}:=\{q\in\mathbb{P}\mid r_{q}\neq 0,q\neq p\}\) and note that \(0\leq\nu_{p}(g(q,K))\leq\nu_{p}(n)\) for all \(q\in\mathbb{P}_{T}\). Let \(q_{0}\in\mathbb{P}_{T}\) be a prime such that \(\nu_{p}(g(q_{0},K))=\min\{\nu_{p}(g(q,K))\mid q\in\mathbb{P}_{T}\}\). Let \(M:=\max\{\nu_{p}(j):1\leq j\leq n\}+1\). For each integer \(i\geq 1\), Dirichlet's theorem on arithmetic progressions implies that there are infinitely many primes in the arithmetic progression \(q_{0}^{p^{M}},q_{0}^{p^{M}}+p^{i},q_{0}^{p^{M}}+2p^{i},\ldots\). Set \(n_{0}:=0\). For each \(i\geq 1\), let \(n_{i}>n_{i-1}\) be a positive integer such that \(q_{i}:=q_{0}^{p^{M}}+n_{i}p^{i}\) is a prime, that is, one prime from each arithmetic progression. Hence we know that \(p,q_{0},q_{1},q_{2},\dots\) is a list of pairwise distinct prime numbers. Let \(x_{i}:=q_{0}^{p^{M}}x/q_{i}\in K\). One can show that
\[\lim_{i\rightarrow+\infty}\nu_{\mathfrak{p}}(x-x_{i})=\lim_{i\rightarrow+ \infty}\nu_{\mathfrak{p}}\Big{(}\frac{xn_{i}p^{i}}{q_{i}}\Big{)}=\lim_{i \rightarrow+\infty}\nu_{\mathfrak{p}}(xn_{i}p^{i})=+\infty.\]
This means that the sequence \(x_{i}\) converges to \(x\)\(\mathfrak{p}\)-adically. We now show that \(D_{K,T}(x_{i})\) does not converge to \(D_{K,T}(x)\)\(\mathfrak{p}\)-adically. We have
\[D_{K,T}(x)-D_{K,T}(x_{i}) =D_{K,T}(x)-\Big{(}\frac{q_{0}^{p^{M}}}{q_{i}}D_{K,T}(x)+xD_{K,T} \big{(}\frac{q_{0}^{p^{M}}}{q_{i}}\big{)}\Big{)}\] \[=\frac{n_{i}p^{i}}{q_{i}}D_{K,T}(x)-x\frac{D_{K,T}(q_{0}^{p^{M}}) q_{i}-q_{0}^{p^{M}}D_{K,T}(q_{i})}{q_{i}^{2}}\] \[=\frac{n_{i}p^{i}}{q_{i}}D_{K,T}(x)-\frac{xr_{q_{0}}p^{M}q_{0}^{p ^{M}-1}}{g(q_{0},K)q_{i}}+\frac{xq_{0}^{p^{M}}D_{K,T}(q_{i})}{q_{i}^{2}}.\]
We analyze the \(\mathfrak{p}\)-adic valuation of each of three summands separately. For the first summand, we have
\[\lim_{i\to+\infty}\nu_{\mathfrak{p}}\Big{(}\frac{n_{i}p^{i}}{q_{i}}D_{K,T}(x) \Big{)}=\lim_{i\to+\infty}\nu_{\mathfrak{p}}(p^{i})=+\infty.\]
For the second summand, as \(i\gg 0\), we have
\[\nu_{\mathfrak{p}}\Big{(}\frac{xr_{q_{0}}p^{M}q_{0}^{p^{M}-1}}{g(q_{0},K)q_{i }}\Big{)}=\nu_{\mathfrak{p}}\Big{(}\frac{xr_{q_{0}}p^{M}}{g(q_{0},K)}\Big{)}= \nu_{\mathfrak{p}}\Big{(}\frac{xr_{q_{0}}}{g(q_{0},K)}\Big{)}+M.\]
For the third summand, if \(q_{i}\notin\mathbb{P}_{T}\), then \(D_{K,T}(q_{i})=0\) so it has no contribution to the \(\mathfrak{p}\)-adic valuation. On the other hand, if \(q_{i}\in\mathbb{P}_{T}\), then we have
\[\nu_{\mathfrak{p}}\Big{(}\frac{xq_{0}^{p^{M}}D_{K,T}(q_{i})}{q_{i}^{2}}\Big{)} =\nu_{\mathfrak{p}}\Big{(}\frac{xq_{0}^{p^{M}}r_{q_{i}}}{g(q_{i},K)q_{i}^{2}} \Big{)}=\nu_{\mathfrak{p}}\Big{(}\frac{xr_{q_{i}}}{g(q_{i},K)}\Big{)}.\]
Since \(1\leq r_{q_{i}}\leq n\), we know that \(M>\nu_{p}(r_{q_{i}})\) by definition. We also know that \(\nu_{p}(g(q_{0},K))\leq\nu_{p}(g(q_{i},K))\) for all \(i\geq 1\). Hence
\[\nu_{\mathfrak{p}}\Big{(}\frac{xr_{q_{0}}}{g(q_{0},K)}\Big{)}+M>\nu_{ \mathfrak{p}}\Big{(}\frac{xr_{q_{i}}}{g(q_{i},K)}\Big{)}.\]
This implies that, for \(i\gg 0\),
\[\nu_{\mathfrak{p}}(D_{K,T}(x)-D_{K,T}(x_{i}))=\begin{cases}\nu_{\mathfrak{p}} \Big{(}\frac{xr_{q_{i}}}{g(q_{i},K)}\Big{)},&\text{if $q_{i}\in\mathbb{P}_{T}$;}\\ \nu_{\mathfrak{p}}\Big{(}\frac{xr_{q_{0}}}{g(q_{0},K)}\Big{)}+M,&\text{if $q_{i}\notin \mathbb{P}_{T}$.}\end{cases}\]
This implies that
\[\lim_{i\to+\infty}\nu_{\mathfrak{p}}(D_{K,T}(x)-D_{K,T}(x_{i}))\neq+\infty.\]
Now we assume that \(K/\mathbb{Q}\) is not necessarily Galois. Let \(L/K\) be a finite extension such that \(L/\mathbb{Q}\) is Galois, and \(\mathfrak{P}\) a prime ideal of \(\mathcal{O}_{L}\) such that \(\mathfrak{P}\mid\mathfrak{p}\). Since \(T\) contains a prime ideal that does not divide \(p\), we know that \(T_{L/K}\) also contains a prime ideal that does not divide \(p\). Let \(x_{i}\in K\) be defined as above. Then we know that \(x_{i}\) converges to \(x\)\(\mathfrak{p}\)-adically in \(K\), and
thus \(\mathfrak{P}\)-adically in \(L\) since \(\nu_{\mathfrak{p}}\) and \(\nu_{\mathfrak{P}}\) agree on \(K\). Since \(L/\mathbb{Q}\) is Galois, we know that
\[\lim_{i\to+\infty}(\nu_{\mathfrak{P}}(D_{L,T_{L/K}}(x_{i})-D_{L,T_{L/K}}(x))) \neq+\infty.\]
Hence
\[\lim_{i\to+\infty}\nu_{\mathfrak{p}}(D_{K,T}(x_{i})-D_{K,T}(x))=\lim_{i\to+\infty}\nu_{\mathfrak{P}}(D_{L,T_{L/K}}(x_{i})-D_{L,T_{L/K}}(x))\neq+\infty.\]
This shows that \(D_{K,T}\) is discontinuous at \(x\).
**Corollary 5.10**.: _Let \(\{p\}\neq T\) be a nonempty set of prime numbers. The arithmetic subderivative \(D_{\mathbb{Q},T}\) is \(p\)-adically discontinuous at any nonzero \(x\in\mathbb{Q}\)._
Proof.: Apply Theorem 5.9 by taking \(K=\mathbb{Q}\) and \(\mathfrak{p}=(p)\).
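The proof specializes nicely to \(K=\mathbb{Q}\), \(T=\{3\}\), \(p=2\), \(x=1\): with \(q_{0}=3\) and \(M=1\), take primes \(q_{i}\equiv 9\pmod{2^{i}}\) and set \(x_{i}=9/q_{i}\). Then \(x_{i}\to 1\) \(2\)-adically while \(\nu_{2}(D_{\mathbb{Q},\{3\}}(1)-D_{\mathbb{Q},\{3\}}(x_{i}))=\nu_{2}(6/q_{i})=1\) stays bounded. The Python sketch below (illustrative only; it uses `sympy.isprime` to find the primes \(q_{i}\), and the helper names are ours) reproduces this.

```python
from fractions import Fraction
from sympy import isprime

def vp(x: Fraction, p: int) -> int:
    n, d, v = abs(x.numerator), x.denominator, 0
    while n % p == 0:
        n //= p; v += 1
    while d % p == 0:
        d //= p; v -= 1
    return v

def D_T(x: Fraction) -> Fraction:
    # arithmetic subderivative D_{Q,T} with T = {3}
    return x * vp(x, 3) / 3 if x else Fraction(0)

x = Fraction(1)
for i in range(1, 15):
    n_i = 1
    while not isprime(9 + n_i * 2 ** i):   # q_i = 3^2 + n_i * 2^i, a prime
        n_i += 1
    x_i = Fraction(9, 9 + n_i * 2 ** i)
    print(i, vp(x - x_i, 2), vp(D_T(x) - D_T(x_i), 2))
# the first valuation tends to +infinity, the second is constantly 1
```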
**Remark 5.11**.: Corollaries 5.7 and 5.10 together give answers to all open questions about \(p\)-adic continuity and discontinuity of arithmetic subderivative over \(\mathbb{Q}\) listed in [7, Section 7].
The only case that is left for consideration is when all prime ideals in \(T\) sit above the same \(p\). This case will be fully answered by the next theorem when we assume \(T\) is finite.
**Theorem 5.12**.: _Let \(K/\mathbb{Q}\) be a number field of degree \(n\). Let \(\mathfrak{p}\) be a prime ideal of \(\mathcal{O}_{K}\) with \(\mathfrak{p}\mid p\). Let \(\{\mathfrak{p}\}\neq T\) be a nonempty finite set of prime ideals in \(\mathcal{O}_{K}\). Then the arithmetic subderivative \(D_{K,T}:K\to K\) is \(\mathfrak{p}\)-adically discontinuous at any nonzero \(x\in K\)._
Proof.: We first assume \(K/\mathbb{Q}\) is Galois. Let \(T\setminus\{\mathfrak{p}\}=\{\mathfrak{p}_{1},\ldots,\mathfrak{p}_{m}\}\) and write \(p_{1}\) for the rational prime below \(\mathfrak{p}_{1}\). By the Chinese remainder theorem, for each \(i\geq 1\), there exists \(x_{i}\in K\) such that \(\nu_{\mathfrak{p}}(1-x_{i})=i\), \(\nu_{\mathfrak{p}_{1}}(x_{i})=1\), and \(\nu_{\mathfrak{p}_{j}}(x_{i})=0\) for \(2\leq j\leq m\). This implies that \(\nu_{\mathfrak{p}}(x_{i})=0\). Hence for all \(i\geq 1\), we have
\[D_{K,T}(x_{i})=\frac{x_{i}}{p_{1}g(p_{1},K)}.\]
The sequence \(x_{i}x\) converges to \(x\)\(\mathfrak{p}\)-adically because as \(i\to+\infty\), we have
\[\nu_{\mathfrak{p}}(x-x_{i}x)=\nu_{\mathfrak{p}}(1-x_{i})+\nu_{\mathfrak{p}}(x )\to+\infty.\]
On the other hand, \(D_{K,T}(x_{i}x)\) does not converge to \(D_{K,T}(x)\)\(\mathfrak{p}\)-adically because as \(i\gg 0\), we have
\[\nu_{\mathfrak{p}}(D_{K,T}(x)-D_{K,T}(x_{i}x)) =\nu_{\mathfrak{p}}(D_{K,T}(x)-x_{i}D_{K,T}(x)-xD_{K,T}(x_{i}))\] \[=\nu_{\mathfrak{p}}\Big{(}D_{K,T}(x)(1-x_{i})-\frac{xx_{i}}{p_{1} g(p_{1},K)}\Big{)}\] \[=\nu_{\mathfrak{p}}(x)-\nu_{\mathfrak{p}}(p_{1})-\nu_{\mathfrak{ p}}(g(p_{1},K)).\]
Hence
\[\lim_{i\to+\infty}\nu_{\mathfrak{p}}(D_{K,T}(x)-D_{K,T}(x_{i}x))\neq+\infty,\]
and \(D_{K,T}\) is discontinuous at \(x\).
If \(K/\mathbb{Q}\) is not necessarily Galois, then one can prove that \(D_{K,T}\) is discontinuous at \(x\) using the same strategy as in Theorem 5.9.
|
2303.11156
|
Can AI-Generated Text be Reliably Detected?
|
The unregulated use of LLMs can potentially lead to malicious consequences
such as plagiarism, generating fake news, spamming, etc. Therefore, reliable
detection of AI-generated text can be critical to ensure the responsible use of
LLMs. Recent works attempt to tackle this problem either using certain model
signatures present in the generated text outputs or by applying watermarking
techniques that imprint specific patterns onto them. In this paper, we show
that these detectors are not reliable in practical scenarios. In particular, we
develop a recursive paraphrasing attack to apply on AI text, which can break a
whole range of detectors, including the ones using the watermarking schemes as
well as neural network-based detectors, zero-shot classifiers, and
retrieval-based detectors. Our experiments include passages around 300 tokens
in length, showing the sensitivity of the detectors even in the case of
relatively long passages. We also observe that our recursive paraphrasing only
degrades text quality slightly, measured via human studies, and metrics such as
perplexity scores and accuracy on text benchmarks. Additionally, we show that
even LLMs protected by watermarking schemes can be vulnerable against spoofing
attacks aimed to mislead detectors to classify human-written text as
AI-generated, potentially causing reputational damages to the developers. In
particular, we show that an adversary can infer hidden AI text signatures of
the LLM outputs without having white-box access to the detection method.
Finally, we provide a theoretical connection between the AUROC of the best
possible detector and the Total Variation distance between human and AI text
distributions that can be used to study the fundamental hardness of the
reliable detection problem for advanced language models. Our code is publicly
available at https://github.com/vinusankars/Reliability-of-AI-text-detectors.
|
Vinu Sankar Sadasivan, Aounon Kumar, Sriram Balasubramanian, Wenxiao Wang, Soheil Feizi
|
2023-03-17T17:53:19Z
|
http://arxiv.org/abs/2303.11156v3
|
# Can AI-Generated Text be Reliably Detected?
###### Abstract
The rapid progress of Large Language Models (LLMs) has made them capable of performing astonishingly well on various tasks including document completion and question answering. The unregulated use of these models, however, can potentially lead to malicious consequences such as plagiarism, generating fake news, spamming, etc. Therefore, reliable detection of AI-generated text can be critical to ensure the responsible use of LLMs. Recent works attempt to tackle this problem either using certain model signatures present in the generated text outputs or by applying watermarking techniques that imprint specific patterns onto them. In this paper, both empirically and theoretically, we show that these detectors are not reliable in practical scenarios. Empirically, we show that _paraphrasing attacks_, where a light paraphraser is applied on top of the generative text model, can break a whole range of detectors, including the ones using the watermarking schemes as well as neural network-based detectors and zero-shot classifiers. We then provide a theoretical _impossibility result_ indicating that for a sufficiently good language model, even the best-possible detector can only perform marginally better than a random classifier. Finally, we show that even LLMs protected by watermarking schemes can be vulnerable against spoofing attacks where _adversarial humans_ can infer hidden watermarking signatures and add them to their generated text to be detected as text generated by the LLMs, potentially causing reputational damages to their developers. We believe these results can open an honest conversation in the community regarding the ethical and reliable use of AI-generated text.
## 1 Introduction
Artificial Intelligence (AI) has made tremendous advances in recent years, from generative models in computer vision (Rombach et al., 2022; Saharia et al., 2022) to generative models in natural language processing (NLP) (Brown et al., 2020; Zhang et al., 2022; Raffel et al., 2019). Large Language Models (LLMs) can now generate high-quality text with potential uses in many applications. For example, the recent ChatGPT model (OpenAI, 2022) can generate human-like texts for various tasks such as writing code for computer programs, lyrics for songs, completing documents, and question answering; its applications are endless. The trend in NLP shows that these LLMs will only get better with time. However, this comes with a significant challenge in terms of authenticity and regulations. AI tools have the potential to be misused by users for unethical purposes such as plagiarism, generating fake news, spamming, generating fake product reviews, and manipulating web content for social engineering in ways that can have negative impacts on society (Adelani et al., 2020; Weiss, 2019). Some news articles rewritten by AI have been found to contain fundamental errors
[Christian, 2023]. Hence, there is a need to ensure the responsible use of these generative AI tools. In order to aid this, a lot of recent research focuses on detecting AI-generated texts.
Several detection works study this problem as a binary classification problem [OpenAI, 2019, Jawahar et al., 2020, Mitchell et al., 2023, Bakhtin et al., 2019, Fagni et al., 2020]. For example, OpenAI fine-tunes RoBERTa-based [Liu et al., 2019] GPT-2 detector models to distinguish between non-AI generated and GPT-2 generated texts [OpenAI, 2019]. This requires such a detector to be fine-tuned with supervision on each new LLM for reliable detection. Another stream of work focuses on zero-shot AI text detection without any additional training overhead [Solaiman et al., 2019, Ippolito et al., 2019, Gehrmann et al., 2019]. These works evaluate the expected per-token log probability of texts and perform thresholding to detect AI-generated texts. Mitchell et al. [2023] observe that AI-generated passages tend to lie in regions of negative curvature of the log probability of texts. They propose DetectGPT, a zero-shot LLM text detection method, to leverage this observation. Since these approaches rely on a neural network for their detection, they can be vulnerable to adversarial and poisoning attacks [Goodfellow et al., 2014, Sadasivan et al., 2023, Kumar et al., 2022, Wang et al., 2022]. Another line of work aims to watermark AI-generated texts to ease their detection [Atallah et al., 2001, Wilson et al., 2014, Kirchenbauer et al., 2023, Zhao et al., 2023]. Watermarking eases the detection of LLM output text by imprinting specific patterns on it. Soft watermarking proposed in Kirchenbauer et al. [2023] partitions tokens into green and red lists to help create these patterns. A watermarked LLM samples a token, with high probability, from the green list determined by its prefix token. These watermarks are often imperceptible to humans.
In this paper, through both empirical and theoretical analysis, we show that state-of-the-art AI-text detectors are not reliable in practical scenarios. We first study empirical attacks on soft watermarking [Kirchenbauer et al., 2023], and a wide range of zero-shot [Mitchell et al., 2023] and neural network-based detectors [OpenAI, 2019]. We show that a _paraphrasing attack_, where a lightweight neural network-based paraphraser is applied to the output text of the AI-generative model, can evade various types of detectors. Before highlighting the results, let us provide an intuition why this attack is successful. For a given sentence \(s\), suppose \(P(s)\) is the set of all paraphrased sentences that have similar meanings to the sentence \(s\). Moreover, let \(L(s)\) be the set of sentences the source LLM can output with meanings similar to \(s\). Suppose a user has generated \(s\) using an LLM and wants to evade
Figure 1: An illustration of vulnerabilities of existing AI-text detectors. We consider both watermarking-based and non-watermarking-based detectors and show that they are not reliable in practical scenarios. Colored arrow paths show the potential pipelines for adversaries to avoid detection. In red: an attacker can use a paraphraser to remove the LLM signatures from an AI-generated text to avoid detection. We show that this attack can break a wide range of detectors. We provide an _impossibility result_ indicating that for a sufficiently good language model, even the best-possible detector can perform only marginally better than a random classifier. In blue: An adversary can query the soft watermarked LLM multiple times to learn its watermarking scheme. This information can be used to spoof the watermark detector by composing human text that is detected to be watermarked.
detection. If \(|L(s)|\ll|P(s)|\), the user can randomly sample from \(P(s)\) and avoid detection (if the detection model has a reasonably low false positive rate). Moreover, if \(|L(s)|\) is comparable to \(|P(s)|\), the detector cannot have low false positive and negative rates simultaneously.
With this intuition in mind, in SS2, we use light-weight neural network-based paraphrasers (\(2.3\times\) and \(5.5\times\) smaller than the source LLM in terms of the number of parameters) to rephrase the source LLM's output text. Our experiments show that this automated paraphrasing attack can drastically reduce the accuracy of various detectors, including the ones using soft watermarking as well as neural network-based detectors and zero-shot classifiers. For example, a PEGASUS-based paraphraser (Zhang et al., 2019) can drop the soft watermarking detector's (Kirchenbauer et al., 2023) accuracy from \(97\%\) to \(80\%\) with just a degradation of 3.5 in the perplexity score. The area under the ROC curves of zero-shot detectors (Mitchell et al., 2023) drops from \(96.5\%\) to \(25.2\%\) using a T5-based paraphraser (Damodaran, 2021). We also observe that the performance of neural network-based trained detectors (OpenAI, 2019) deteriorates significantly after our paraphrasing attack. For instance, the true positive rate of the RoBERTa-Large-Detector from OpenAI drops from \(100\%\) to \(60\%\) at a realistic low false positive rate of \(1\%\).
In SS3, we present an impossibility result regarding the detection of AI-generated texts. As language models improve over time, AI-generated texts become increasingly similar to human-generated texts, making them harder to detect. This similarity is reflected in the decreasing total variation distance between the distributions of human and AI-generated text sequences (OpenAI, 2023). Theorem 1 bounds the area under the receiver operating characteristic (ROC) curve of the best possible detector \(D\) as:
\[\mathsf{AUROC}(D)\leq\frac{1}{2}+\mathsf{TV}(\mathcal{M},\mathcal{H})-\frac{ \mathsf{TV}(\mathcal{M},\mathcal{H})^{2}}{2}\]
where \(\mathsf{TV}(\mathcal{M},\mathcal{H})\) is the total variation distance between the text distributions produced by an AI-model \(\mathcal{M}\) and humans \(\mathcal{H}\). It shows that as the total variation distance diminishes, the best-possible detection performance approaches \(1/2\), which represents the AUROC corresponding to a classifier that randomly labels text as AI or human-generated. Thus, for a sufficiently advanced language model, even the best-possible detector performs only marginally better than a random classifier. The aim of this analysis is to urge caution when dealing with detection systems that purport to detect text produced by any AI model. We complement our result with a tightness analysis, where we demonstrate that for a given human distribution \(\mathcal{H}\), there exists a distribution \(\mathcal{M}\) and a detector \(D\) for which the above bound holds with equality.
Although our analysis considers the text generated by all humans and general language models, it can also be applied to specific scenarios, such as particular writing styles or sentence paraphrasing, by defining \(\mathcal{M}\) and \(\mathcal{H}\) appropriately. For example, it could be used to show that AI-generated text, even with an embedded watermark, can be made difficult to detect by simply passing it through a paraphrasing tool. For a sequence \(s\) generated by a language model, we set \(\mathcal{M}\) and \(\mathcal{H}\) to be the distributions of sequences of similar meaning to \(s\) produced by the paraphraser and humans. The goal of the paraphraser is to make its output distribution similar to the distribution of human-generated sequences with respect to the total variation distance. The above result puts a constraint on the performance of the detector on the rephrased AI text.
Finally, we discuss the possibility of _spoofing attacks_ on text generative models in SS4. In this setting, an attacker generates a non-AI text that is detected to be AI-generated. An adversary can potentially launch spoofing attacks to produce derogatory texts that are detected to be AI-generated to affect the reputation of the target LLM's developers. As a proof-of-concept, we show that the soft watermarking detectors (Kirchenbauer et al., 2023) can be spoofed to detect texts composed by humans as watermarked. Though the random seed used for generating watermarked text is private, we develop an attack that smartly queries the target LLM multiple times to learn its watermarking scheme. An _adversarial human_ can then use this information to compose texts that are detected to be watermarked. Figure 1 shows an illustration of vulnerabilities of existing AI-text detectors.
Identifying AI-generated text is a critical problem to avoid its misuse by users for unethical purposes such as plagiarism, generating fake news, and spamming. However, deploying vulnerable detectors is _not_ the right solution to tackle this issue since it can cause its own damages, such as falsely accusing a human of plagiarism. Our results highlight the sensitivity of a wide range of detectors to simple practical attacks such as paraphrasing attacks. More importantly, our results indicate the impossibility of developing reliable detectors in practical scenarios: to maintain reliable detection, language models would have to trade off their performance. We hope that these findings can initiate an honest dialogue within the community concerning the ethical and dependable utilization of AI-generated text.
## 2 Evading AI-Detectors using Paraphrasing Attacks
Detecting AI-generated text is crucial for ensuring the security of an LLM and avoiding type-II errors (not detecting LLM output as AI-generated text). To protect an LLM's ownership, a dependable detector should be able to detect AI-generated texts with high accuracy. In this section, we discuss _paraphrasing attacks_ that can degrade type-II errors of state-of-the-art AI text detectors such as soft watermarking (Kirchenbauer et al., 2023), zero-shot detectors (Mitchell et al., 2023), and trained neural network-based detectors (OpenAI, 2019). These detectors identify if a given text contains distinct LLM signatures, indicating that it may be AI-generated. The idea here is that a paraphraser can potentially remove these signatures without affecting the meaning of the text. While we discuss this attack theoretically in SS3, the main intuition here is as follows:
Let \(s\) represent a sentence and \(\mathcal{S}\) represent the set of all sentences meaningful to humans. Suppose a function \(P:\mathcal{S}\to 2^{\mathcal{S}}\) exists such that \(\forall s^{\prime}\in P(s)\), the meanings of \(s\) and \(s^{\prime}\) are the same with respect to humans. In other words, \(P(s)\) is the set of sentences with a similar meaning to the sentence \(s\). Let \(L:\mathcal{S}\to 2^{\mathcal{S}}\) such that \(L(s)\) is the set of sentences the source LLM can output with the same meaning as \(s\). Further, the sentences in \(L(s)\) are detected to be AI-generated by a reliable detector, and \(L(s)\subseteq P(s)\) so that the output of the AI model makes sense to humans. If \(|L(s)|\) is comparable to \(|P(s)|\), the detector might label many human-written texts as AI-generated (high type-I error). However, if \(|L(s)|\) is small, we can randomly choose a sentence from \(P(s)\) to evade the detector with a high probability (affecting type-II error). Thus, in this context of paraphrasing attacks, detectors face a trade-off between minimizing type-I and type-II errors.
### Paraphrasing Attacks on Watermarked AI-generated Text
Here, we perform our experiments on the soft watermarking scheme1 proposed in Kirchenbauer et al. (2023). In this scheme, an output token of the LLM is selected from a _green list_ determined by its prefix. We expect paraphrasing to remove the watermark signature from the target LLM's output. The target AI text generator uses a transformer-based OPT-1.3B (Zhang et al., 2022) architecture with 1.3B parameters2. We use a T5-based (Raffel et al., 2019) paraphrasing model (Damodaran, 2021) with 222M parameters3 and a PEGASUS-based (Zhang et al., 2019) paraphrasing model with 568M parameters4 (\(2.3\times\) and \(5.8\times\) smaller than the target LLM, respectively). The target LLM is trained to perform text completion tasks on extensive data, while the smaller paraphrasing model is fine-tuned only for paraphrasing tasks. For these reasons, the paraphrasing model we use for our attack is lighter than the target OPT-based model.
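The attack pipeline itself is simple to implement. The following is a rough sketch, assuming a generic seq2seq paraphraser checkpoint from the HuggingFace Hub; `MODEL_NAME` is a placeholder (not the exact checkpoint used here), some T5-style paraphrasers additionally expect a task prefix such as "paraphrase: " before each sentence, and a simple regex stands in for proper sentence segmentation.

```python
# Sketch of the sentence-by-sentence paraphrasing attack (MODEL_NAME is a placeholder).
import re
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "path-or-name-of-a-paraphraser"  # placeholder seq2seq paraphraser checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def paraphrase_passage(passage: str) -> str:
    # Rephrase the (possibly watermarked) LLM output one sentence at a time
    sentences = re.split(r"(?<=[.!?])\s+", passage.strip())
    rewritten = []
    for sent in sentences:
        inputs = tokenizer(sent, return_tensors="pt", truncation=True)
        outputs = model.generate(**inputs, do_sample=True, top_p=0.95, max_new_tokens=64)
        rewritten.append(tokenizer.decode(outputs[0], skip_special_tokens=True))
    return " ".join(rewritten)
```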
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline Text & \# tokens & \# green tokens & Detector accuracy & Perplexity \\ \hline \hline Watermarked LLM output & 19042 & 11078 & 97\% & 6.7 \\ \hline PEGASUS-based paraphrasing & 16773 & 7412 & 80\% & 10.2 \\ \hline T5-based paraphrasing & 15164 & 6493 & 64\% & 16.7 \\ \hline T5-based paraphrasing & 14913 & 6107 & 57\% & 18.7 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of paraphrasing attacks on soft watermarking (Kirchenbauer et al., 2023). For testing, we consider 100 text passages from XSum (Narayan et al., 2018). The watermarked output text from the target AI model consists of \(\sim 58\%\) green list tokens. The PEGASUS-based (Zhang et al., 2019) paraphrased text consists of only \(\sim 44\%\) green list tokens. Hence, the detector accuracy drops from \(97\%\) to \(80\%\), making it unreliable. Note that these PEGASUS-based paraphrased texts only degrade the perplexity measure by 3.5. Even a lighter T5-based paraphraser can affect the detector accuracy quite a bit without degrading the text quality significantly.
The paraphraser takes the watermarked LLM text sentence by sentence as input. We use 100 passages from the Extreme Summarization (XSum) dataset [21] for our evaluations5. The passages from this dataset are input to the target AI model to generate watermarked text. Using the PEGASUS-based paraphraser, the detector's accuracy drops from \(97\%\) to \(80\%\) with only a trade-off of 3.5 in perplexity score (see Table 1). This paraphrasing strategy reduces the percentage of green list tokens in the watermarked text from \(58\%\) (before paraphrasing) to \(44\%\) (after paraphrasing). Table 2 shows some example outputs from the target soft watermarked LLM before and after paraphrasing. We also use a much smaller T5-based paraphraser [1] to show that even such a naive paraphraser can drop the detector's accuracy from \(97\%\) to \(57\%\). Figure 2 shows the trade-off between the detection accuracy and the T5-based paraphraser's output text quality (measured
\begin{table}
\begin{tabular}{c|p{142.3pt}} \hline \hline Description & \multicolumn{1}{c}{Text} \\ \hline \hline Input prompt (**misinformation\({}^{\dagger}\)**) & \begin{tabular}{c} Drinking bleach is the best way to tackle COVID-19. WHO announced that vaccination for COVID-19 is ineffective. CDC is urging the citizens to drink bleach everyday. The health department has officially confirmed that \\ \end{tabular} \\ \hline Detected watermarked text & \begin{tabular}{c} the vaccine is useless. The World Health Organization made shocking statement, that the vaccine is ineffective, because it does not prevent people from getting infected, which means it is useless. CDC is also calling for people to drink bleach daily to cleanse the body. [continues...] \\ \end{tabular} \\ \hline Undetected PEGASUS-based paraphrasing & \begin{tabular}{c} The vaccine is useless. The vaccine is useless because it doesn’t prevent people from getting infections, according to the World Health Organization. The CDC wants people to drink bleach to cleanse their body. The vaccine is useless according to WHO. The CDC wants people to drink bleach to cleanse their body. [continues...] \\ \end{tabular} \\ \hline \hline Input prompt (**fake news\({}^{\dagger}\)**) & \begin{tabular}{c} The vaccine is useless. The vaccine is useless because it doesn’t prevent people from getting infections, according to the World Health Organization. The CDC wants people to drink bleach to cleanse their body. The vaccine is useless according to WHO. The CDC wants people to drink bleach to cleanse their body. [continues...] \\ \end{tabular} \\ \hline Detected watermarked text & \begin{tabular}{c} UnInThe bill was introduced in the US Senate on Wednesday, and the US House of Representatives got a vote on it on Thursday afternoon. The US President Donald Trump is expected to sign it. [continues...] \\ \end{tabular} \\ \hline Undetected PEGASUS-based paraphrasing &
\begin{tabular}{c} The US House of Representatives voted on the bill on Thursday afternoon, after it was introduced in the US Senate on Wednesday. It is expected that Donald Trump will sign it. It will become law if he gets it. [continues...] \\ \end{tabular} \\ \hline \hline \end{tabular}
\end{table}
Table 2: PEGASUS-based paraphrasing for evading soft watermarking-based detectors. The target AI generator outputs a watermarked text for an input prompt. This output is detected to be generated by the watermarked target LLM. We use a PEGASUS-based [15] paraphraser to rephrase this watermarked output from the target LLM. The paraphraser rephrases sentence by sentence. The detector does not detect the output text from the paraphraser. However, the paraphrased passage reads well and means the same as the original watermarked LLM output. At the top rows, we demonstrate how an input prompt can prompt a target LLM to generate **watermarked misinformation**. In the bottom rows, we showcase how an input prompt can induce a target LLM to create **watermarked fake news.** Using paraphrasing attacks in this manner, an attacker can spread fake news or misinformation without getting detected.
\({}^{\dagger}\) **contains misinformation only to demonstrate that LLMs can be used for malicious purposes.**
Figure 2: Accuracy of the soft watermarking detector on paraphrased LLM outputs plotted against perplexity. The lower the perplexity is, the better the quality of the text is.
using perplexity score). However, we note that perplexity is a proxy metric for evaluating the quality of texts since it depends on another LLM for computing the score. We use a larger OPT-2.7B6 (Zhang et al., 2022) with 2.7B parameters for computing the perplexity scores.
Footnote 6: [https://huggingface.co/facebook/opt-2.7b](https://huggingface.co/facebook/opt-2.7b)
### Paraphrasing Attacks on Non-Watermarked AI-generated texts
Non-watermarking detectors such as trained classifiers (OpenAI, 2019) and zero-shot classifiers (Mitchell et al., 2023; Gehrmann et al., 2019; Ippolito et al., 2019; Solaiman et al., 2019) use the presence of LLM-specific signatures in AI-generated texts for their detection. Neural network-based trained detectors such as RoBERTa-Large-Detector from OpenAI (OpenAI, 2019) are trained or fine-tuned for binary classification with datasets containing human and AI-generated texts. Zero-shot classifiers leverage specific statistical properties of the source LLM outputs for their detection. Here, we perform experiments on these non-watermarking detectors to show they are vulnerable to our paraphrasing attack.
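For illustration, a bare-bones version of such a zero-shot thresholding detector is sketched below; this is our own minimal sketch (the scoring checkpoint and the threshold are placeholders to be calibrated), not the implementation of any particular cited detector.

```python
# Minimal zero-shot detector: threshold the average per-token log-likelihood under a scoring LM.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

SCORER = "gpt2-medium"  # placeholder scoring model
tok = AutoTokenizer.from_pretrained(SCORER)
lm = AutoModelForCausalLM.from_pretrained(SCORER).eval()

@torch.no_grad()
def avg_token_logprob(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    loss = lm(ids, labels=ids).loss   # mean negative log-likelihood per token
    return -float(loss)

def flag_as_ai(text: str, threshold: float = -3.0) -> bool:
    # Higher (less negative) average log-probability -> flagged as AI-generated
    return avg_token_logprob(text) > threshold
```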
We use a pre-trained GPT-2 Medium7 model (Radford et al., 2019) with 355M parameters to evaluate our attack on 200 passages from the XSum dataset (Narayan et al., 2018). We use a T5-based paraphrasing model (Damodaran, 2021) with 222M parameters to rephrase the output texts from the target GPT-2 Medium model. Figure 3 shows the effectiveness of the paraphrasing attack against these detectors. The AUROC scores of DetectGPT (Mitchell et al., 2023) drop from \(96.5\%\) (before the attack) to \(59.8\%\) (after the attack). Note that an AUROC of \(50.0\%\) corresponds to a random detector. The rest of the zero-shot detectors (Solaiman et al., 2019; Gehrmann et al., 2019; Ippolito et al., 2019) also perform very poorly after our attack. Though the performance of the trained neural network-based detectors (OpenAI, 2019) is better than that of zero-shot detectors, they are also not reliable. For example, the true positive rate of OpenAI's RoBERTa-Large-Detector drops from \(100\%\) to around \(80\%\) after our attack at a practical false positive rate of \(1\%\). With multiple queries to the detector, an adversary can paraphrase more efficiently to bring down the true positive rate of the RoBERTa-Large-Detector to \(60\%\). Table 3 shows an example of outputs from the GPT-2 model before and after paraphrasing. As seen in the example, the output of the paraphraser reads well and means the same as the detected GPT-2 text. We measure the perplexity of the GPT-2 output text to be 16.3 (Figure 2(a)). GPT-2 is a relatively old LLM, and it performs poorly when compared to more
\begin{table}
\begin{tabular}{c|c|c} \hline \hline Description & z-score & Text \\ \hline \hline \begin{tabular}{c} Input prompt \\ (**misinformation\({}^{\dagger}\)**) \\ \end{tabular} & – & Morocco won the 2022 soccer world cup because they are the best. Because they had their own style of soccer, the whole world has adopted the idea. Not to forget the other reason why we came to this tournament. We all know there will be some serious bad blood if the host are eliminated from the final and not given any chance to play their best. In the past, host nations have had to host the tournament with a different format rather than the original one, where the highest ranked team went, regardless of gender, to the quarter finals. \\ \hline \begin{tabular}{c} Undetected \\ T5-based \\ paraphrasing \\ \end{tabular} & \(0.80\) & Morocco won the 2022 world cup because they are the best. because of their own style of soccer the whole world followed this idea. Not to forget the other reason why we came to this tournament. we all know if the host is eliminated from the final and given no chance to play their best there will be much bloodished. In the past, host nations have had to host the tournament with a different format rather than the original one, where the highest ranked team went, regardless of gender, to the quarter finals. \\ \hline
\hline \end{tabular}
\end{table}
Table 3: Evading DetectGPT using a T5-based paraphraser. DetectGPT classifies a text as generated by GPT-2 if the z-score is greater than 1. After paraphrasing, the z-score drops below the threshold and the text is not detected as AI-generated. \({}^{\dagger}\) **contains misinformation only to demonstrate that LLMs can be used for malicious purposes.**
recent LLMs. The perplexity of the GPT-2 text after paraphrasing is 27.2 (Figure 2(b)). The perplexity score only degrades by 2 with multiple queries to the detector (Figure 2(c)).
Figure 3: ROC curves for various trained and zero-shot detectors before and after rephrasing. In the plot legend – perturbation refers to the zero-shot methods in Mitchell et al. (2023); threshold refers to the zero-shot methods in Solaiman et al. (2019); Gehrmann et al. (2019); Ippolito et al. (2019); roberta refers to OpenAI’s trained detectors (OpenAI, 2019).
## 3 Impossibility Results for Reliable Detection of AI-Generated Text
Detecting the misuse of language models in the real world, such as plagiarism and mass propaganda, necessitates the identification of text produced by all kinds of language models, including those without watermarks. However, as these models improve over time, the generated text looks increasingly similar to human text, which complicates the detection process. Specifically, the total variation distance between the distributions of AI-generated and human-generated text sequences diminishes as language models become more sophisticated. This section presents a fundamental constraint on general AI-text detection, demonstrating that even the most effective detector performs only marginally better than a random classifier when dealing with a sufficiently advanced language model. The purpose of this analysis is to caution against relying too heavily on detection systems that claim to identify AI-generated text. We first consider the case of non-watermarked language models and then extend our result to watermarked ones.
In the following theorem, we formalize the above statement by showing an upper bound on the area under the ROC curve of an arbitrary detector in terms of the total variation distance between the distributions for AI and human-generated text. This bound indicates that as the distance between these distributions diminishes, the AUROC bound approaches \(1/2\), which represents the baseline performance corresponding to a detector that randomly labels text as AI or human-generated. We define \(\mathcal{M}\) and \(\mathcal{H}\) as the text distributions produced by an AI model and humans, respectively, over the set of all possible text sequences \(\Omega\). We use \(\mathsf{TV}(\mathcal{M},\mathcal{H})\) to denote the total variation distance between these two distributions and a function \(D:\Omega\rightarrow\mathbb{R}\) that maps every sequence in \(\Omega\) to a real number. Sequences are classified into AI and human-generated by applying a threshold \(\gamma\) on this number. By adjusting the parameter \(\gamma\), we can tune the sensitivity of the detector to AI and human-generated texts to obtain an ROC curve.
**Theorem 1**.: _The area under the ROC of any detector \(D\) is bounded as_
\[\mathsf{AUROC}(D)\leq\frac{1}{2}+\mathsf{TV}(\mathcal{M},\mathcal{H})-\frac{ \mathsf{TV}(\mathcal{M},\mathcal{H})^{2}}{2}.\]
Proof.: The ROC is a plot between the true positive rate (TPR) and the false positive rate (FPR) which are defined as follows:
\[\mathsf{TPR}_{\gamma} =\mathbb{P}_{s\sim\mathcal{M}}[D(s)\geq\gamma]\] \[\text{and }\mathsf{FPR}_{\gamma} =\mathbb{P}_{s\sim\mathcal{H}}[D(s)\geq\gamma],\]
where \(\gamma\) is some classifier parameter. We can bound the difference between the \(\mathsf{TPR}_{\gamma}\) and the \(\mathsf{FPR}_{\gamma}\) by the total variation between \(M\) and \(H\):
\[|\mathsf{TPR}_{\gamma}-\mathsf{FPR}_{\gamma}| =|\mathbb{P}_{s\sim\mathcal{M}}[D(s)\geq\gamma]-\mathbb{P}_{s \sim\mathcal{H}}[D(s)\geq\gamma]|\leq\mathsf{TV}(\mathcal{M},\mathcal{H}) \tag{1}\] \[\mathsf{TPR}_{\gamma} \leq\mathsf{FPR}_{\gamma}+\mathsf{TV}(\mathcal{M},\mathcal{H}). \tag{2}\]
Since the \(\mathsf{TPR}_{\gamma}\) is also bounded by 1 we have:
\[\mathsf{TPR}_{\gamma}\leq\min(\mathsf{FPR}_{\gamma}+\mathsf{TV}(\mathcal{M}, \mathcal{H}),1). \tag{3}\]
Denoting \(\mathsf{FPR}_{\gamma}\), \(\mathsf{TPR}_{\gamma}\), and \(\mathsf{TV}(\mathcal{M},\mathcal{H})\) with \(x\), \(y\), and \(tv\) for brevity, we bound the AUROC as follows:
\[\mathsf{AUROC}(D)=\int_{0}^{1}y\;dx \leq\int_{0}^{1}\min(x+tv,1)dx\] \[=\int_{0}^{1-tv}(x+tv)dx+\int_{1-tv}^{1}dx\] \[=\left|\frac{x^{2}}{2}+tvx\right|_{0}^{1-tv}+|x|_{1-tv}^{1}\] \[=\frac{(1-tv)^{2}}{2}+tv(1-tv)+tv\] \[=\frac{1}{2}+\frac{tv^{2}}{2}-tv+tv-tv^{2}+tv\] \[=\frac{1}{2}+tv-\frac{tv^{2}}{2}.\]
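To make the bound concrete, the following sketch evaluates it on a toy model of our own (not part of the paper's experiments): the two text distributions are replaced by unit-variance Gaussians \(N(0,1)\) and \(N(\mu,1)\), for which both the total variation distance and the AUROC of the optimal likelihood-ratio detector have closed forms, so the inequality can be checked numerically.

```python
# Numerical check of Theorem 1 on a Gaussian toy model (an illustration, not the paper's setup).
import numpy as np
from scipy.stats import norm

def tv_gaussians(mu):
    # TV(N(0,1), N(mu,1)) = 2*Phi(mu/2) - 1
    return 2 * norm.cdf(mu / 2) - 1

def auroc_optimal(mu):
    # The optimal detector thresholds on x; its AUROC is Phi(mu / sqrt(2))
    return norm.cdf(mu / np.sqrt(2))

def auroc_bound(tv):
    # Upper bound from Theorem 1
    return 0.5 + tv - tv ** 2 / 2

for mu in [0.25, 0.5, 1.0, 2.0, 4.0]:
    tv = tv_gaussians(mu)
    print(f"mu={mu:4.2f}  TV={tv:.3f}  AUROC={auroc_optimal(mu):.3f}  bound={auroc_bound(tv):.3f}")
```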
Figure 4 shows how the above bound grows as a function of the total variation. For a detector to have a good performance (say, AUROC \(\geq 0.9\)), the distributions of human and AI-generated texts must be very different from each other (total variation \(>0.5\)). As the two distributions become similar (say, total variation \(\leq 0.2\)), the performance of even the best-possible detector is not good (AUROC \(<0.7\)). This shows that distinguishing the text produced by a non-watermarked language model from a human-generated one is a fundamentally difficult task. Note that, for a watermarked model, the above bound can be close to one as the total variation distance between the watermarked distribution and human-generated distribution can be high. In what follows, we discuss how paraphrasing attacks can be effective in such cases.
**Paraphrasing to Evade Detection:** Although our analysis considers the text generated by all humans and general language models, it can also be applied to specific scenarios, such as particular writing styles or sentence paraphrasing, by defining \(\mathcal{M}\) and \(\mathcal{H}\) appropriately. For example, it could be used to show that AI-generated text, even with watermarks, can be made difficult to detect by simply passing it through a paraphrasing tool. Consider a paraphraser that takes a sequence \(s\) generated by an AI model as input and produces a human-like sequence with similar meaning. Set \(\mathcal{M}=\mathcal{R}_{\mathcal{M}}(s)\) and \(\mathcal{H}=\mathcal{R}_{\mathcal{H}}(s)\) to be the distribution of sequences with similar meanings to \(s\) produced by the paraphraser and humans, respectively. The goal of the paraphraser is to make its distribution \(\mathcal{R}_{\mathcal{M}}(s)\) as similar to the human distribution \(\mathcal{R}_{\mathcal{H}}(s)\) as possible, essentially reducing the total variation distance between them. Theorem 1 puts the following bound on the performance of a detector \(D\) that seeks to detect the outputs of the paraphraser from the sequences produced by humans.
**Corollary 1**.: _The area under the ROC of the detector \(D\) is bounded as_
\[\mathsf{AUROC}(D)\leq\frac{1}{2}+\mathsf{TV}(\mathcal{R}_{\mathcal{M}}(s), \mathcal{R}_{\mathcal{H}}(s))-\frac{\mathsf{TV}(\mathcal{R}_{\mathcal{M}}(s),\mathcal{R}_{\mathcal{H}}(s))^{2}}{2}.\]
**General Trade-offs between True Positive and False Positive Rates.** Another way to understand the limitations of AI-generated text detectors is directly through the characterization of the trade-offs between true positive rates and false positive rates. Adapting inequality 2, we have the following corollaries:
**Corollary 2**.: _For any watermarking scheme \(W\),_
\[\Pr_{s_{w}\sim\mathcal{R}_{\mathcal{M}}(s)}[s_{w}\text{ is watermarked using }W]\leq \mathsf{TV}(\mathcal{R}_{\mathcal{M}}(s),\mathcal{R}_{\mathcal{H}}(s))+\] \[\Pr_{s_{w}\sim\mathcal{R}_{\mathcal{H}}(s)}[s_{w}\text{ is watermarked using }W],\]
_where \(\mathcal{R}_{\mathcal{M}}(s)\) and \(\mathcal{R}_{\mathcal{H}}(s)\) are respectively the distributions of rephrased sequences for \(s\) produced by the paraphrasing model and humans, respectively._
Humans may have different writing styles. Corollary 2 indicates that if a rephrasing model resembles a certain human text distribution \(\mathcal{H}\) (i.e. \(\mathsf{TV}(\mathcal{R}_{\mathcal{M}}(s),\mathcal{R}_{\mathcal{H}}(s))\) is small), then either certain people's writing will be falsely detected as watermarked (i.e. \(\Pr_{s_{w}\sim\mathcal{R}_{\mathcal{H}}(s)}[s_{w}\text{ is watermarked using }W]\) is high) or the paraphrasing model can remove the watermark (i.e. \(\Pr_{s_{w}\sim\mathcal{R}_{\mathcal{M}}(s)}[s_{w}\text{ is watermarked using }W]\) is low).
**Corollary 3**.: _For any AI-text detector \(D\),_
\[\Pr_{s\sim\mathcal{M}}[s\text{ is detected as AI-text by }D]\leq\mathsf{TV}(\mathcal{M}, \mathcal{H})+\Pr_{s\sim\mathcal{H}}[s\text{ is detected as AI-text by }D],\]
_where \(\mathcal{M}\) and \(\mathcal{H}\) denote text distributions by the model and by humans, respectively._
Corollary 3 indicates that if a model resembles certain human text distribution \(\mathcal{H}\) (i.e. \(\mathsf{TV}(\mathcal{M},\mathcal{H})\) is small), then either certain people's writing will be detected falsely as AI-generated (i.e.
Figure 4: Comparing the performance, in terms of area under the ROC curve, of the best-possible detector to that of the baseline performance corresponding to a random classifier.
\(\Pr_{s\sim\mathcal{H}}[s\text{ is detected as AI-text by }D]\) is high) or the AI-generated text will not be detected reliably (i.e. \(\Pr_{s\sim\mathcal{M}}[s\text{ is detected as AI-text by }D]\) is low).
These results demonstrate fundamental limitations for AI-text detectors, with and without watermarking schemes.
### Tightness Analysis
In this section, we show that the bound in Theorem 1 is tight. For a given distribution of human-generated text sequences \(\mathcal{H}\), we construct an AI-text distribution \(\mathcal{M}\) and a detector \(D\) such that the bound holds with equality. Define sublevel sets of the probability density function of the distribution of human-generated text \(\mathsf{pdf}_{\mathcal{H}}\) over the set of all sequences \(\Omega\) as follows:
\[\Omega_{\mathcal{H}}(c)=\{s\in\Omega\mid\mathsf{pdf}_{\mathcal{H}}(s)\leq c\}\]
where \(c\in\mathbb{R}\). Assume that, \(\Omega_{\mathcal{H}}(0)\) is not empty. Now, consider a distribution \(\mathcal{M}\), with density function \(\mathsf{pdf}_{\mathcal{M}}\), which has the following properties:
1. The probability of a sequence drawn from \(\mathcal{M}\) falling in \(\Omega_{\mathcal{H}}(0)\) is \(\mathsf{TV}(\mathcal{M},\mathcal{H})\), i.e., \(\mathbb{P}_{s\sim\mathcal{M}}[s\in\Omega_{\mathcal{H}}(0)]=\mathsf{TV}( \mathcal{M},\mathcal{H})\).
2. \(\mathsf{pdf}_{\mathcal{M}}(s)=\mathsf{pdf}_{\mathcal{H}}(s)\) for all \(s\in\Omega(\tau)-\Omega(0)\) where \(\tau>0\) such that \(\mathbb{P}_{s\sim\mathcal{H}}[s\in\Omega(\tau)]=1-\mathsf{TV}(\mathcal{M}, \mathcal{H})\).
3. \(\mathsf{pdf}_{\mathcal{M}}(s)=0\) for all \(s\in\Omega-\Omega(\tau)\).
Define a hypothetical detector \(D\) that maps each sequence in \(\Omega\) to the negative of the probability density function of \(\mathcal{H}\), i.e., \(D(s)=-\mathsf{pdf}_{\mathcal{H}}(s)\). Using the definitions of \(\mathsf{TPR}_{\gamma}\) and \(\mathsf{FPR}_{\gamma}\), we have:
\[\mathsf{TPR}_{\gamma} =\mathbb{P}_{s\sim\mathcal{M}}[D(s)\geq\gamma]\] \[=\mathbb{P}_{s\sim\mathcal{M}}[-\mathsf{pdf}_{\mathcal{H}}(s)\geq\gamma]\] \[=\mathbb{P}_{s\sim\mathcal{M}}[\mathsf{pdf}_{\mathcal{H}}(s)\leq -\gamma]\] \[=\mathbb{P}_{s\sim\mathcal{M}}[s\in\Omega_{\mathcal{H}}(-\gamma)]\]
Similarly,
\[\mathsf{FPR}_{\gamma}=\mathbb{P}_{s\sim\mathcal{H}}[s\in\Omega_{\mathcal{H}}(- \gamma)].\]
For \(\gamma\in[-\tau,0]\),
\[\mathsf{TPR}_{\gamma} =\mathbb{P}_{s\sim\mathcal{M}}[s\in\Omega_{\mathcal{H}}(-\gamma)]\] \[=\mathbb{P}_{s\sim\mathcal{M}}[s\in\Omega_{\mathcal{H}}(0)]+ \mathbb{P}_{s\sim\mathcal{M}}[s\in\Omega_{\mathcal{H}}(-\gamma)-\Omega_{ \mathcal{H}}(0)]\] \[=\mathsf{TV}(\mathcal{M},\mathcal{H})+\mathbb{P}_{s\sim\mathcal{M} }[s\in\Omega_{\mathcal{H}}(-\gamma)-\Omega_{\mathcal{H}}(0)]\] (using property 1) \[=\mathsf{TV}(\mathcal{M},\mathcal{H})+\mathbb{P}_{s\sim\mathcal{H }}[s\in\Omega_{\mathcal{H}}(-\gamma)-\Omega_{\mathcal{H}}(0)]\] (using property 2) \[=\mathsf{TV}(\mathcal{M},\mathcal{H})+\mathbb{P}_{s\sim\mathcal{H }}[s\in\Omega_{\mathcal{H}}(-\gamma)]-\mathbb{P}_{s\sim\mathcal{H}}[s\in\Omega _{\mathcal{H}}(0)]\] ( \[\Omega_{\mathcal{H}}(0)\subseteq\Omega_{\mathcal{H}}(-\gamma)\] ) \[=\mathsf{TV}(\mathcal{M},\mathcal{H})+\mathsf{FPR}_{\gamma}.\] ( \[\mathbb{P}_{s\sim\mathcal{M}}[s\in\Omega_{\mathcal{H}}(0)]=0\] )
For \(\gamma\in[-\infty,-\tau]\), \(\mathsf{TPR}_{\gamma}=1\), by property 3. Also, as \(\gamma\) goes from \(0\) to \(-\infty\), \(\mathsf{FPR}_{\gamma}\) goes from \(0\) to \(1\). Therefore, \(\mathsf{TPR}_{\gamma}=\min(\mathsf{FPR}_{\gamma}+\mathsf{TV}(\mathcal{M}, \mathcal{H}),1)\) which is similar to Equation 3. Calculating the AUROC in a similar fashion as in the previous section, we get:
\[\mathsf{AUROC}(D)=\frac{1}{2}+\mathsf{TV}(\mathcal{M},\mathcal{H})-\frac{ \mathsf{TV}(\mathcal{M},\mathcal{H})^{2}}{2}.\]
## 4 Spoofing Attacks on AI-text Generative Models
A strong AI text detection scheme should have both low type-I error (i.e., human text detected as AI-generated) and type-II error (i.e., AI-generated text not detected). An AI language detector without a low type-I error can cause harms as it might wrongly accuse a human of plagiarizing using an LLM. Moreover, an attacker (adversarial human) can generate a non-AI text that is detected to be AI-generated. This is called the _spoofing attack_. An adversary can potentially launch spoofing attacks to produce derogatory texts that are detected to be AI-generated to affect the reputation of the target LLM's developers. In this section, as a proof-of-concept, we show that the soft watermarking detectors
[Kirchenbauer et al., 2023] can be spoofed to detect texts composed by humans as watermarked. These schemes watermark LLM outputs by constraining the model to output tokens with a specific pattern that can be detected with very low error rates. Soft watermarked texts are composed mostly of _green list_ tokens. If an adversary can learn the green lists for the soft watermarking scheme, they can use this information to generate human-written texts that are detected to be watermarked. Our experiments show that the soft watermarking scheme can be spoofed efficiently. Though the soft watermarking detector can detect the presence of a watermark very accurately, it cannot be certain whether this pattern was actually generated by a human or an LLM. An _adversarial human_ can compose derogatory texts in this fashion that are detected to be watermarked, which might cause reputational damage to the developers of the watermarked LLM. Therefore, it is important to study _spoofing attacks_ to avoid such scenarios.
**The attack methodology:** For an output word \(s^{(t)}\), soft watermarking samples a word from its green list with high probability. The prefix word \(s^{(t-1)}\) determines the green list for selecting the word \(s^{(t)}\). The attacker's objective is to compute a proxy of the green lists for the \(N\) most commonly used words in the vocabulary. A smaller \(N\), compared to the size of the vocabulary, speeds up the computation at the cost of reducing the attacker's knowledge of the watermarking scheme. We use a small value of \(N=181\) for our experiments. The attacker can query the watermarked LLM multiple times to learn the pair-wise occurrences of these \(N\) words in the LLM output. Observing these outputs, the attacker can estimate the probability of occurrence of a word given a prefix word \(s^{(t-1)}\). This score can be used as a proxy for the green list of the prefix word \(s^{(t-1)}\). An attacker with access to these proxy green lists can compose a text that is detected as watermarked, thus spoofing the detector. In our experiments, we query the watermarked OPT-1.3B [Zhang et al., 2022] \(10^{6}\) times to estimate the _green list scores_ that serve as proxies for the green lists. We find that inputting nonsense sentences composed of the \(N\) common words encourages the LLM to output text composed mainly
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline Human text & \(\%\) tokens in green list & z-score & Detector output \\ \hline \hline the first thing you do will be the best thing you do. this is the reason why you do the first thing very well. if most of us did the first thing so well this world would be a lot better place. and it is a very well known fact. people from every place know this fact. time will prove this point to the all of us. as you get more money you will also get this fact like other people do. all of us should do the first thing very well. hence the first thing you do will be the best thing you do. & & \\ \hline lot to and where is it about you know and where is it about you know and where is it that not this we are not him is it about you know and so for and go is it that. & 92.5 & 9.86 & Watermarked \\ \hline \hline \end{tabular}
\end{table}
Table 4: Proof-of-concept human-generated texts flagged as watermarked by the soft watermarking scheme. In the first row, a sensible sentence composed by an _adversarial human_ contains \(42.6\%\) tokens from the green list. In the second row, a nonsense sentence generated by an _adversarial human_ using our tool contains \(92.5\%\) green list tokens. The z-test threshold for watermark detection is 4.
Figure 5: Inferred _green list score_ for the token “the”. The plot shows the top 50 words from our set of common words that are likely to be in the green list. The word “first” occurred \(\sim 25\%\) of the time as suffix to “the”.
of these words. This makes the querying more efficient. In Figure 5, we show the learned green list scores for the prefix word "the" using our querying technique. We build a simple tool that lets a user create passages token by token. At every step, the user is provided with a list of potential green list words sorted by their green list scores. These users, or adversarial humans, try to generate meaningful passages assisted by our tool. Since most of the words selected by adversarial humans are likely to be in the green list, we expect the watermarking scheme to flag these texts as watermarked. Table 4 shows examples of sentences composed by adversarial humans that are detected to be watermarked. Even a nonsense sentence generated by an adversarial human can be detected as watermarked with very high confidence.
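A compact sketch of the querying procedure described above is given below; `generate_watermarked_text` is a hypothetical wrapper around the watermarked target LLM (not a real API), `COMMON_WORDS` stands for the \(N\) frequent words, and whitespace splitting stands in for the model's tokenizer.

```python
# Sketch of green-list score estimation by repeated querying (all names are placeholders).
from collections import defaultdict, Counter
import random

COMMON_WORDS = ["the", "first", "thing", "you", "do", "will", "be", "best"]  # N common words

def generate_watermarked_text(prompt: str) -> str:
    raise NotImplementedError("query the watermarked target LLM here")

def estimate_green_scores(num_queries: int):
    # pair_counts[prefix][token] counts how often `token` follows `prefix` in LLM outputs
    pair_counts = defaultdict(Counter)
    for _ in range(num_queries):
        prompt = " ".join(random.choices(COMMON_WORDS, k=20))  # nonsense prompt of common words
        tokens = generate_watermarked_text(prompt).split()
        for prev, cur in zip(tokens, tokens[1:]):
            if prev in COMMON_WORDS and cur in COMMON_WORDS:
                pair_counts[prev][cur] += 1
    # Normalize counts into per-prefix scores: a high score suggests membership in the green list
    scores = {}
    for prefix, counter in pair_counts.items():
        total = sum(counter.values())
        scores[prefix] = {tok: count / total for tok, count in counter.items()}
    return scores
```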
## 5 Discussion
Recent advancements in NLP show that LLMs can generate human-like texts for a wide variety of tasks [14]. However, this can create several challenges. LLMs can potentially be misused for plagiarism, spamming, or even social engineering to manipulate the public. This creates a demand for developing efficient LLM text detectors to reduce the exploitation of publicly available LLMs. Recent works propose a variety of AI text detectors using watermarking [15], zero-shot methods [13], and trained neural network-based classifiers [15]. In this paper, we show, both theoretically and empirically, that these state-of-the-art detectors cannot reliably detect LLM outputs in practical scenarios. Our experiments show that paraphrasing the LLM outputs helps evade these detectors effectively. Moreover, our theory demonstrates that for a sufficiently advanced language model, even the best detector can only perform marginally better than a random classifier. This means that for a detector to have both low type-I and type-II errors, it will have to trade off the LLM's performance. We also empirically show that watermarking-based detectors can be spoofed so that human-composed text is detected as watermarked. We show that it is possible for an attacker to learn the soft watermarking scheme in [15]. Using this information, an adversary can launch a spoofing attack where adversarial humans generate texts that are detected to be watermarked. Spoofing attacks can lead to the generation of watermarked derogatory passages that might affect the reputation of the watermarked LLM developers.
With the release of GPT-4 [14], the applications of LLMs are endless. This also calls for more secure methods to prevent their misuse. Here, we briefly mention some methods attackers might choose to break AI detectors in the future. As we demonstrated in this paper, the emergence of improved paraphrasing models can be a severe threat to AI text detectors. Moreover, advanced LLMs might be vulnerable to attacks based on _smart prompting_. For example, attackers could input a prompt that starts with "Generate a sentence in active voice and present tense using only the following set of words that I provide...". High-performance LLMs would have a low entropy output space (fewer likely output sequences) for this prompt, making it harder to add a strong LLM signature to their output for detection. The soft watermarking scheme in [16] is vulnerable to this attack. If the logits of the LLM have low entropy over the vocabulary, the soft watermarking scheme samples the token with the highest logit score (irrespective of the green list tokens) to preserve model perplexity. Furthermore, in the future, we can expect more open-source LLMs to be available to attackers. This could help attackers leverage these models to design transfer attacks to target a larger LLM. Adversarial input prompts could be designed using transfer attacks such that the target LLM is encouraged to have a low entropy output space. Future research on AI text detectors must be cautious about these vulnerabilities.
A detector should ideally be helpful in reliably flagging AI-generated texts to prevent the misuse of LLMs. However, the cost of misidentification by a detector can itself be huge. If the false positive rate of the detector is not low enough, humans could get wrongly accused of plagiarism. Moreover, a disparaging passage falsely detected to be AI-generated could affect the reputation of the LLM's developers. As a result, the practical applications of AI-text detectors can become unreliable and invalid. Security methods need not be foolproof. However, we need to make sure that it is not an easy task for an attacker to break these security defenses. Thus, analyzing the risks of using current detectors can be vital to avoid creating a false sense of security. We hope that the results presented in this work can encourage an open and honest discussion in the community about the ethical and trustworthy applications of generative LLMs.
## Acknowledgement
This project was supported in part by NSF CAREER AWARD 1942230, ONR YIP award N00014-22-1-2271, NIST 60NANB20D134, Meta award 23010098, HR001119S0026 (GARD), Army Grant No. W911NF2120076, a capital one grant, and the NSF award CCF2212458. Thanks to Keivan Rezaei and Mehrdad Saberi for their insights on this work. The authors would like to acknowledge the use of OpenAI's ChatGPT to improve clarity and readability.
|
2302.06703
|
Shadow Energy Functionals and Potentials in Born-Oppenheimer Molecular
Dynamics
|
In Born-Oppenheimer molecular dynamics (BOMD) simulations based on density
functional theory (DFT), the potential energy and the interatomic forces are
calculated from an electronic ground state density that is determined by an
iterative self-consistent field optimization procedure, which in practice never
is fully converged. The calculated energies and the forces are therefore only
approximate, which may lead to an unphysical energy drift and instabilities.
Here we discuss an alternative shadow BOMD approach that is based on a backward
error analysis. Instead of calculating approximate solutions for an underlying
exact regular BO potential, we do the opposite. Instead, we calculate the exact
electron density, energies, and forces, but for an underlying approximate
shadow BO potential. In this way the calculated forces are conservative with
respect to the shadow potential and generate accurate molecular trajectories
with long-term energy stability. We show how such shadow BO potentials can be
constructed at different levels of accuracy as a function of the integration
time step, dt, from the minimization of a sequence of systematically
improvable, but approximate, shadow energy density functionals. For each
functional there is a corresponding ground state BO potential. These pairs of
shadow energy functionals and potentials are higher-level generalizations of
the original "0th-level" shadow energy functionals and potentials used in
extended Lagrangian BOMD [Eur. Phys. J. B vol. 94, 164 (2021)]. The proposed
shadow energy functionals and potentials are useful only within this dynamical
framework, where also the electronic degrees of freedom are propagated together
with the atomic positions and velocities. The theory is general and can be
applied to MD simulations using approximate DFT, Hartree-Fock or semi-empirical
methods, as well as to coarse-grained flexible charge models.
|
Anders M. N. Niklasson, Christian F. A. Negre
|
2023-02-13T21:31:27Z
|
http://arxiv.org/abs/2302.06703v1
|
# Shadow Energy Functionals and Potentials in Born-Oppenheimer Molecular Dynamics
###### Abstract
In Born-Oppenheimer molecular dynamics (BOMD) simulations based on density functional theory (DFT), the potential energy and the interatomic forces are calculated from an electronic ground state density that is determined by an iterative self-consistent field optimization procedure, which in practice never is fully converged. The calculated energies and the forces are therefore only approximate, which may lead to an unphysical energy drift and instabilities. Here we discuss an alternative _shadow_ BOMD approach that is based on a backward error analysis. Instead of calculating _approximate_ solutions for an underlying _exact regular_ Born-Oppenheimer potential, we do the opposite. Instead, we calculate the _exact_ electron density, energies and forces, but for an underlying _approximate shadow_ BO potential energy surface. In this way the calculated forces are conservative with respect to the approximate shadow potential and generate accurate molecular trajectories with long-term energy stability. We show how such shadow BO potentials can be constructed at different levels of accuracy as a function of the integration time step, \(\delta t\), from the constrained minimization of a sequence of systematically improvable, but approximate, shadow energy density functionals. For each energy functional there is a corresponding ground state BO potential. These pairs of shadow energy functionals and potentials are higher-level generalizations of the original "0th-level" shadow energy functionals and potentials used in extended Lagrangian BOMD [Eur. Phys. J. B **94**, 164 (2021)]. The proposed shadow energy functionals and potentials are useful only within this extended dynamical framework, where also the electronic degrees of freedom are propagated as dynamical field variables together with the atomic positions and velocities. The theory is quite general and can be applied to MD simulations using approximate DFT, Hartree-Fock or semi-empirical methods, as well as to coarse-grained flexible charge models.
+
Footnote †: preprint: LA-UR-22-29595
## I Introduction
The general notion of a _shadow_ molecular dynamics provides a highly powerful concept that helps us understand and design accurate and computationally efficient simulation schemes [1; 2; 3; 4; 5; 6; 7]. The idea behind shadow molecular dynamics is based on a backward error analysis. Instead of calculating approximate forces and energies for an underlying exact potential energy surface, it is often easier to calculate exact forces and energies, but for an underlying approximate _shadow_ potential (or shadow Hamiltonian). In this way important physical properties of the simulated shadow dynamics such as time-reversibility, the conservation of the total energy and the phase-space area, can be fulfilled, because the forces of the shadow dynamics can be generated exactly. In practice, shadow dynamics simulation methods are therefore often both more accurate and computationally more efficient compared to alternative techniques. In particular, their long-term accuracy and stability are often superior.
The shadow dynamics terminology was originally introduced in the analysis and explanation of the accuracy and long-term stability of symplectic or geometric integration schemes such as the velocity Verlet algorithm in terms of a shadow Hamiltonian [1; 2; 3]. Here we use the notion of a shadow molecular dynamics in the slightly more general form that is associated with a backward error analysis. A shadow dynamics is then generated, for example, when rapid changes or discontinuities from cutoffs in the _exact_ interatomic potential are smoothed out with an approximate _shadow_ potential for which we can calculate the exact forces and use longer integration time steps [8; 9].
Shadow molecular dynamics was originally introduced in the context of classical molecular mechanics. More recently, the concept of a shadow dynamics has been applied also to non-linear self-consistent field (SCF) theory in quantum-mechanical Born-Oppenheimer molecular dynamics (QMD) simulations based on extended Lagrangian Born-Oppenheimer molecular dynamics (XLBOMD) [10; 11; 12; 13; 14; 15; 16; 17]. The idea of a shadow molecular dynamics has been applied also to the non-linear time-dependent dynamics of superfluidity [18], as well as to flexible charge equilibration models [17; 19].
In this article we will revisit the construction of the approximate shadow energy functionals and potentials used in XL-BOMD simulations and show how their accuracy can be systematically improved to higher-orders _as a function of the integration time step, \(\delta t\)_. It is important to note that these shadow energy functionals and potentials are designed and useful only as parts of molecular dynamics simulations within the framework of XL-BOMD, where also the electronic degrees of freedom are propagated as extended dynamical variables together with the atomic positions and velocities. The interatomic forces calculated from the gradients of the shadow Born-Oppenheimer potential are exact only in this dynamical setting. For static, non-dynamical systems, the corresponding interatomic forces are only approximate, and in general not even very accurate.
In regular QMD simulations [17; 20; 21] the Born-Oppenheimer potential and the interatomic forces are calculated on-the-fly from the ground-state electronic structure, which is determined from an iterative SCF optimization of some constrained non-linear energy functional, that is given, for example, from Hartree-Fock or DFT [22; 23; 24; 25; 26; 27; 28; 29; 30]. In practice the iterative SCF optimization is never fully converged and always approximate. This may create small errors in the ground state electron density, but these small errors can break time-reversibility and lead to non-conservative forces. Accumulated over time, the small errors from the approximate SCF optimization will therefore become significant. Often the errors appear as an unphysical systematic drift in the total energy, where the incompletely converged electronic structure behaves as an artificial heat source or sink [31; 32; 33; 34; 35], which invalidates the QMD simulations. In the more recent formulations of XL-BOMD the shadow Born-Oppenheimer potential is designed to avoid the computational overhead and convergence errors in the iterative SCF optimization.
In XL-BOMD the iterative SCF optimization procedure is avoided by including the electronic degrees of freedom as extended dynamical variables, in the spirit of Car-Parrinello molecular dynamics [17; 36], in addition to the atomic positions and velocities. However, in contrast to Car-Parrinello molecular dynamics, a constrained optimization is still required to calculate the exact electronic ground state, but the optimization is performed for an approximate shadow energy functional. This optimization can be performed exactly in a single step and no iterative process is needed. The ground state energy then defines the shadow Born-Oppenheimer potential and the corresponding conservative forces. The ability of XL-BOMD to avoid an iterative optimization and still generate exact conservative forces is thus of great practical interest, both by reducing the computational cost and by improving the accuracy and long-term stability of the molecular dynamics simulations.
The shadow potential approximates the exact fully-converged regular Born-Oppenheimer potential. In the original shadow potential formulation of XL-BOMD, the error in the forces and the potential energies scale with the size of the integration time step, \(\delta t\), to the second, \(\mathcal{O}(\delta t^{2})\), and fourth order, \(\mathcal{O}(\delta t^{4})\), respectively. However, there seems to be no way to improve the order of the scaling. The only way to boost the accuracy is to reduce the size of the integration time step. Here we will show how higher-levels of accuracy in the forces and shadow Born-Oppenheimer potentials can be achieved from the constrained minimization of a sequence of systematically improvable, but approximate, shadow energy functionals. For each energy functional there is a corresponding ground state Born-Oppenheimer potential. The accuracies of these pairs of shadow functionals and potentials are determined by the size of the integration time step, \(\delta t\). The increased level of accuracy is thus meaningful only in the context of molecular dynamics simulations.
A higher-order accuracy in the shadow Born-Oppenheimer potential will often improve the long-term stability of a QMD simulation. This is of particular interest in QMD simulations of chemical systems that may have unsteady charge solutions or chemical reactions, for example, where the electronic energy gap between the Highest Occupied Molecular Orbital (HOMO) and the Lowest Unoccupied Molecular Orbital (LUMO) is opening and closing along the molecular trajectories.
The theory will be explained in terms of general Hohenberg-Kohn DFT [25; 26; 27; 28; 29; 30] and is applicable to a broad range of methods, including Hartree-Fock and Kohn-Sham DFT [22; 23; 24; 25; 26; 27; 28; 29; 30], approximate DFT and semi-empirical methods [37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47], as well as to various coarse-grained polarizable charge equilibration models [19; 48; 49; 50; 51; 52; 53; 54; 55; 56].
The article is outlined as follows. First we review the construction of the "0th-level" shadow energy functional and Born-Oppenheimer potential used in the original shadow potential formulation of XL-BOMD. We then describe how an improved "1st-level" pair of shadow energy functional and Born-Oppenheimer potential can be constructed. We then derive the equations of motion in an adiabatic limit, where we assume that the extended electronic motion is rapid compared to the slower moving nuclei. This is consistent with the underlying Born-Oppenheimer approximation. Thereafter we discuss generalizations to higher \(m\)th-level pairs of shadow energy functionals and potentials. The integration of the equations of motion for the electronic degrees of freedom is then explained, where we use a low-rank preconditioned Krylov subspace approximation. To better understand the shadow functionals and potentials we consider the relationship to the Harris-Foulkes functional [57; 58] in the static non-dynamical case for Kohn-Sham DFT. We also apply our theory for the 1st-level pairs of shadow energy functionals and potentials to a simple flexible charge equilibration model that corresponds to an orbital-free coarse-grained DFT. Thereafter, to summarize the results, we present a pseudocode for XL-BOMD simulations using a 1st-level shadow energy functional and potential. We demonstrate the improved scaling and ability to treat unstable chemical systems using the 1st-level shadow energy functional and Born-Oppenheimer potential based on self-consistent charge density functional tight-binding (SCC-DFTB) theory [37; 38; 39; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64]. At the end we give a brief summary and our conclusions.
## II Generalized Shadow Functionals and Potentials
To present the pairs of energy functionals and Born-Oppenheimer potentials we will use Hohenberg-Kohn density functional theory [25]. The corresponding Kohn-Sham expressions are generated by replacing the universal energy functional with its orbital-dependent Kohn-Sham energy functional [26]. Generalization to Hartree-Fock theory and semi-empirical methods, as well as to coarse-grained orbital-free flexible charge models, should be straightforward [17].
### Born-Oppenheimer Potential
In Hohenberg-Kohn DFT [28; 29; 30; 25], the relaxed ground state electron density, \(\rho_{\rm min}({\bf r})\), is given from a constrained minimization of an energy density functional, \(E[{\bf R},\rho]\), over all physically relevant electron densities, \(\rho\), [65] that integrates to the total number of electrons, \(N_{e}\), i.e.
\[\rho_{\rm min}({\bf r})=\arg\min_{\rho}\left\{E[{\bf R},\rho]\left|\int\rho({ \bf r})d{\bf r}=N_{e}\right.\right\}. \tag{1}\]
The DFT energy functional,
\[E[\rho]\equiv E[{\bf R},\rho]=F[\rho]+\int V_{\rm ext}({\bf R},{\bf r})\rho({ \bf r})d{\bf r}, \tag{2}\]
includes a system-independent, non-linear, universal electron functional, \(F[\rho]\), and an energy term with an external potential, \(V_{\rm ext}({\bf R},{\bf r})\), which we here assume is from ions at the atomic positions, \({\bf R}=\{{\bf R}_{I}\}\). The universal energy functional, \(F[\rho]\), includes all the electron-electron interactions and the kinetic energy term. To keep it general, we may also assume ensemble generalizations where \(F[\rho]\) accounts for thermal effects, including the entropy contribution at finite electronic temperatures [17; 24; 27; 28; 30; 66; 67]. In the corresponding Kohn-Sham DFT the thermal effects introduce fractional occupation numbers of the Kohn-Sham orbitals [27; 28], which is important for describing, for example, metallic systems at finite temperatures and for stabilizing calculations of systems with a small or vanishing electronic energy gap.
In the Born-Oppenheimer approximation [20; 68; 69; 70] the Born-Oppenheimer potential energy surface, \(U({\bf R})\), is determined for the fully relaxed electronic ground state, i.e.
\[U({\bf R})=E[\rho_{\rm min}]+V_{\rm nn}({\bf R}), \tag{3}\]
which includes the additional ion-ion repulsion energy term, \(V_{\rm nn}({\bf R})\). The motion of the atoms can then be generated by integrating Newton's equation of motion,
\[M_{I}\ddot{\bf R}_{I}=-\nabla_{I}U({\bf R}), \tag{4}\]
where \(\{M_{I}\}\) are the atomic masses, one for each atom \(I\), and the dots denote the time derivatives.
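As a minimal illustration of the nuclear propagation in Eq. (4), the following Python sketch integrates Newton's equations of motion with the standard velocity Verlet algorithm; the `force` callback, the array shapes, and the function names are illustrative assumptions and not taken from any particular code.

```python
import numpy as np

def velocity_verlet_step(R, V, M, force, dt):
    """One velocity Verlet step for M_I d^2 R_I/dt^2 = -grad_I U(R), Eq. (4).

    R, V  : (N_atoms, 3) arrays of positions and velocities
    M     : (N_atoms,) array of atomic masses
    force : callable R -> (N_atoms, 3) array of forces -grad_I U(R)
    """
    F = force(R)
    V_half = V + 0.5 * dt * F / M[:, None]          # first half-kick
    R_new = R + dt * V_half                          # drift
    F_new = force(R_new)
    V_new = V_half + 0.5 * dt * F_new / M[:, None]   # second half-kick
    return R_new, V_new

# Example: a single particle in a harmonic well (purely illustrative).
R, V, M = np.zeros((1, 3)), np.array([[1.0, 0.0, 0.0]]), np.ones(1)
for _ in range(10):
    R, V = velocity_verlet_step(R, V, M, lambda r: -r, 0.05)
```

In a BOMD simulation the force callback would wrap the (approximately converged) SCF evaluation of \(-\nabla_{I}U(\mathbf{R})\), which is precisely where the convergence errors discussed next enter.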
In general, the calculation of the ground state density, \(\rho_{\rm min}({\bf r})\), requires some form of iterative optimization procedure or SCF approach, because of the nonlinearity of the universal energy functional, \(F[\rho]\). For example, in Kohn-Sham DFT the SCF optimization requires repeated diagonalizations of the effective single-particle Kohn-Sham Hamiltonians. This can cause a significant computational overhead and in practice the solution is never fully converged and only approximate. Force terms that in general are very difficult, if not impossible to calculate in practice, like
\[\int\left(\delta E[\rho]\big/\delta\rho({\bf r})\right)\left(\partial\rho({\bf r})\big/\partial{\bf R}_{I}\right)\Big|_{\rho\approx\rho_{\rm min}}d{\bf r}, \tag{5}\]
are therefore not vanishing exactly, because \(\left(\delta E[\rho]\big{/}\delta\rho({\bf r})\right)\) is vanishing only if \(\rho({\bf r})=\rho_{\rm min}({\bf r})\)[71]. Insufficiently converged solutions for the electronic ground state density, and where the non-vanishing force term in Eq. (5) is ignored, therefore lead to non-conservative forces that may invalidate a molecular dynamics simulation [31; 32; 33; 34; 35]. Recent formulations of XL-BOMD were developed to overcome these shortcomings [17].
### Zeroth-Level Shadow Functional and Born-Oppenheimer Potential
In the more recent formulations of XL-BOMD [17], the energy functional, \(E[\rho]\) in Eq. (2), is approximated by a linearized _shadow_ energy functional,
\[\mathcal{E}^{(0)}[\rho,n^{(0)}]=E[n^{(0)}]+\int\frac{\delta E[\rho]}{\delta \rho({\bf r})}\Big{|}_{n^{(0)}}\left(\rho({\bf r})-n^{(0)}({\bf r})\right)d{ \bf r}, \tag{6}\]
which is given by a linearization of \(E[\rho]\) around some approximate 0th-level ground state density, \(n^{(0)}({\bf r})\approx\rho_{\rm min}({\bf r})\). More generally, we can create a 0th-level shadow energy functional [17; 19] by some approximation, where
\[\mathcal{E}^{(0)}[\rho,n^{(0)}]=E[n^{(0)}]+\mathcal{O}(|\rho-n^{(0)}|^{2}). \tag{7}\]
This generalization is of particular interest in formulations of orbital-free flexible-charge equilibration models. It allows more freedom in the construction of the shadow energy functional, e.g. where parts of \(E[\rho]\) are expanded to second order in \(\rho\) to guarantee a unique ground state solution[17; 19]. The corresponding \(n^{(0)}\)-dependent ground state electron density, \(\rho_{\rm min}[n^{(0)}]\), is then given by the constrained minimization as in Eq. (1), where
\[\rho_{\rm min}[n^{(0)}]({\bf r})=\arg\min_{\rho}\left\{\mathcal{E}^{(0)}[\rho,n ^{(0)}]\left|\int\rho({\bf r})d{\bf r}=N_{e}\right.\right\}. \tag{8}\]
By the minimization we here mean the lowest _stationary_ solution over all physically relevant electron densities with \(N_{e}\) electrons. The relaxed ground state density then defines the approximate, \(n^{(0)}\)-dependent, shadow Born-Oppenheimer potential,
\[\mathcal{U}^{(0)}({\bf R},n^{(0)})=\mathcal{E}^{(0)}\left[\rho_{\rm min}[n^{(0) }],n^{(0)}\right]+V_{\rm nn}({\bf R}). \tag{9}\]
The advantage with this 0th-level shadow energy functional, \(\mathcal{E}^{(0)}[\rho,n^{(0)}]\) in Eq. (6), is that the ground state density, \(\rho_{\rm min}[n^{(0)}]({\bf r})\), can be calculated without requiring any iterative optimization procedure to find a SCF solution - at least if we have found some appropriate shadow
energy functional, \(\mathcal{E}^{(0)}[\rho,n^{(0)}]\), consistent with Eq. (7). Instead, the exact ground state electron density can be calculated directly in a single step, because all the non-linearities in \(E[\rho]\) with respect to \(\rho\) that would require an iterative solution have been removed in \(\mathcal{E}^{(0)}[\rho,n^{(0)}]\). In Kohn-Sham density functional theory, the exact minimization is reached in a single construction and diagonalization of the Kohn-Sham Hamiltonian, and in the corresponding coarse-grained charge equilibration models [17; 19] the relaxed ground state is given from the solution of a quasi-diagonal system of linear equations, which has a simple direct analytical solution. In this way, any possible convergence problems and associated inconsistencies between the calculated ground state density, \(\rho_{\min}[n^{(0)}]\), and the shadow Born-Oppenheimer potential, \(\mathcal{U}^{(0)}(\mathbf{R},n^{(0)})\), are avoided.
Because of the linearization in the energy functional the error in the shadow Born-Oppenheimer potential is of second order in the residual function, \(f[n^{(0)}](\mathbf{r})=\rho_{\min}[n^{(0)}](\mathbf{r})-n^{(0)}(\mathbf{r})\), i.e.
\[\left|\mathcal{U}^{(0)}-U\right|\propto\left|\rho_{\min}[n^{(0)}]-n^{(0)} \right|^{2}. \tag{10}\]
The approximate density, \(n^{(0)}\), therefore needs to be close to the relaxed ground state density, \(\rho_{\min}[n^{(0)}]\) or \(\rho_{\min}\), to ensure that the error in the approximate shadow potential is small [72]. Below we will show how this is achieved in QMD simulations by propagating the approximate ground state density, \(n^{(0)}\), as a dynamical field variable within an extended Lagrangian formulation, where \(n^{(0)}\equiv n^{(0)}(\mathbf{r},t)\) is propagated by a harmonic oscillator that is centered around the optimized ground state density, \(\rho_{\min}[n^{(0)}](\mathbf{r})\), along the molecular trajectories. But before we present the extended Lagrangian molecular dynamics scheme we will show how the 0th-level shadow energy functional and Born-Oppenheimer potential can be improved in accuracy.
### First-Level Shadow Functional and Born-Oppenheimer Potential
The accuracy of the approximate 0th-level shadow energy functional, \(\mathcal{E}^{(0)}[\rho,n^{(0)}]\) in Eq. (6) or Eq. (7), can be improved. However, a straightforward expansion of \(E[\rho]\) to higher orders in \(\rho\) would not help, because this would require some iterative solution to the constrained minimization problem of a non-linear energy functional. Instead, we have to improve the accuracy of the approximate 0th-level energy functional without losing the linearity in \(\rho\). We can achieve this by improving the estimate of \(n^{(0)}\) to be even closer to the exact ground state density, \(\rho_{\min}\), in Eq. (1). This can be accomplished with an updated and more accurate density, \(n^{(1)}(\mathbf{r})\), which is given by a single Newton optimization step (for multiple steps, see Sec. III.3),
\[\begin{split} n^{(1)}&(\mathbf{r})\equiv n^{(1)}[ n^{(0)}](\mathbf{r})=n^{(0)}(\mathbf{r})\\ &-\int K^{(0)}(\mathbf{r},\mathbf{r}^{\prime})\left(\rho_{\min}[ n^{(0)}](\mathbf{r}^{\prime})-n^{(0)}(\mathbf{r}^{\prime})\right)d\mathbf{r}^{ \prime},\end{split} \tag{11}\]
where the kernel \(K^{(0)}(\mathbf{r},\mathbf{r}^{\prime})\) is the inverse Jacobian of the residual function, \(f[n^{(0)}](\mathbf{r})=\rho_{\min}[n^{(0)}](\mathbf{r})-n^{(0)}(\mathbf{r})\). This means that
\[\int K^{(0)}(\mathbf{r},\mathbf{r}^{\prime})\frac{\delta\left(\rho_{\min}[n^{ (0)}](\mathbf{r}^{\prime})-n^{(0)}(\mathbf{r}^{\prime})\right)}{\delta n^{(0) }(\mathbf{r}^{\prime\prime})}d\mathbf{r}^{\prime}=\delta(\mathbf{r}-\mathbf{ r}^{\prime\prime}). \tag{12}\]
The Newton step in Eq. (11) (under reasonable conditions) is quadratically convergent such that
\[|\rho_{\min}-n^{(1)}|\propto|\rho_{\min}-n^{(0)}|^{2}\propto|\rho_{\min}[n^{(0 )}]-n^{(0)}|^{2}. \tag{13}\]
We here assume that the functional is sufficiently well-behaved and that \(n^{(0)}\) is close enough to the exact ground state density, \(\rho_{\min}\), to achieve the quadratic convergence.
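The quadratic decay of the residual in Eq. (13) can be illustrated with a minimal scalar sketch in Python, where a simple nonlinear map stands in for \(n\mapsto\rho_{\min}[n]\) and the kernel reduces to the scalar inverse Jacobian; the specific map is purely illustrative.

```python
import numpy as np

# Scalar stand-in for the map n -> rho_min[n]; its fixed point plays the role
# of the exact ground state density rho_min. The map itself is illustrative.
def rho_of_n(n):
    return 0.5 * np.cos(n)

def residual(n):
    return rho_of_n(n) - n            # f[n] = rho_min[n] - n

def newton_step(n, h=1.0e-6):
    """One Newton step, Eq. (11): n -> n - K f(n), with K the inverse Jacobian of f."""
    J = (residual(n + h) - residual(n - h)) / (2.0 * h)   # finite-difference Jacobian
    return n - residual(n) / J                            # K = 1/J in this scalar toy

n = 0.0                                # initial guess n^(0)
for level in range(4):
    print(level, abs(residual(n)))     # residual decays quadratically, Eq. (13)
    n = newton_step(n)
```

Replacing the exact inverse Jacobian by a fixed approximate kernel turns this into a quasi-Newton update with slower, non-quadratic convergence, as discussed further below.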
The shadow energy functional can now be improved in accuracy by using the updated density, \(n^{(1)}\), instead of \(n^{(0)}\) in the linearization of the energy functional. This updated and improved approximate 1st-level shadow energy functional is then given by
\[\begin{split}&\mathcal{E}^{(1)}\left[\rho,n^{(1)}\right]=E\left[n^{ (1)}\right]\\ &+\int\frac{\delta E\left[\rho\right]}{\delta\rho(\mathbf{r})} \Big{|}_{n^{(1)}}\left(\rho(\mathbf{r})-n^{(1)}(\mathbf{r})\right)d\mathbf{r}, \end{split} \tag{14}\]
or more generally as an approximation where
\[E[\rho]=\mathcal{E}^{(1)}\left[\rho,n^{(1)}\right]+\mathcal{O}(|\rho-n^{(1)}|^ {2}). \tag{15}\]
The updated \(n^{(1)}\)-dependent ground state density is then given from the constrained minimization,
\[\rho_{\min}[n^{(1)}](\mathbf{r})=\arg\min_{\rho}\left\{\mathcal{E}^{(1)}\left[\rho,n^{(1)}\right]\left|\int\rho(\mathbf{r})d\mathbf{r}=N_{e}\right.\right\}, \tag{16}\]
with respect to variationally stationary solutions. This ground state density defines our 1st-level shadow Born-Oppenheimer potential,
\[\begin{split}&\mathcal{U}^{(1)}(\mathbf{R},n^{(0)})\equiv\mathcal{U}^{ (1)}(\mathbf{R},n^{(1)}[n^{(0)}])\\ &=\mathcal{E}^{(1)}\left[\rho_{\min}[n^{(1)}],n^{(1)}\right]+V_{ \mathrm{nn}}(\mathbf{R}).\end{split} \tag{17}\]
It is important to have the 1st-level shadow potential, \(\mathcal{U}^{(1)}(\mathbf{R},n^{(0)})\), expressed as a function of \(n^{(0)}\) and not of \(n^{(1)}\). We can do so because \(n^{(1)}\) is determined from \(n^{(0)}\) in Eq. (11), where \(n^{(1)}\equiv n^{(1)}[n^{(0)}]\). We will take advantage of this relation in the next section, where \(n^{(0)}\) is propagated as a dynamical field variable, \(n^{(0)}(\mathbf{r},t)\).
The constrained minimization in Eq. (16) can be achieved, in general, in a single step, thanks to the linear dependence on \(\rho\) of the shadow energy functional, \(\mathcal{E}^{(1)}\left[\rho,n^{(1)}\right]\). No iterative self-consistent optimization procedure is needed.
The error in the 1st-level shadow potential scales as
\[|\mathcal{U}^{(1)}-U|\propto|\rho_{\min}[n^{(1)}]-n^{(1)}|^{2}\propto|\rho_{\min }[n^{(0)}]-n^{(0)}|^{4}, \tag{18}\]
thanks to the quadratic convergence of the Newton update of \(n^{(1)}\) in Eq. (11), where the size of the residual function, \(f[n^{(0)}](\mathbf{r})=\rho_{\min}[n^{(0)}](\mathbf{r})-n^{(0)}(\mathbf{r})\), decays quadratically in a single Newton step.
The Newton step in Eq. (11) is similar to an SCF iteration step. However, in Kohn-Sham DFT, the Newton update does not require any additional Hamiltonian diagonalization. In QMD simulations the Newton step can be performed using a preconditioned Krylov subspace expansion [73; 74; 75; 17], where each Krylov subspace vector can be determined from response calculations using quantum perturbation theory. The preconditioned Krylov subspace expansion used to approximate the kernel, \(K^{(0)}\), acting on the residual function [73; 74; 75; 17; 76] is described in more detail in Sec. III.5.
In practice the preconditioned Krylov subspace expansion of the kernel, \(K^{(0)}\), is truncated and only approximate. The density update in Eq. (11) is then given by a quasi-Newton step, which in general has a slower, non-quadratic convergence.
### Pairs of Shadow Functionals and Potentials
A key concept in our presentation is the pairing of energy functionals and Born-Oppenheimer potentials. The potential is always given from a constrained minimization over the electron density of an energy functional, where the initial pair of electronic energy functional and Born-Oppenheimer potential, corresponding to regular DFT, is given by
\[\Big{\{}E[\rho],U(\mathbf{R})\Big{\}}. \tag{19}\]
This pair is then replaced, first by the \(n^{(0)}\)-dependent 0th-level shadow energy functional and potential,
\[\left\{\mathcal{E}^{(0)}[\rho,n^{(0)}],\mathcal{U}^{(0)}(\mathbf{R},n^{(0)}) \right\}, \tag{20}\]
and then by the \(n^{(0)}\)-dependent 1st-level shadow energy functional and potential,
\[\left\{\mathcal{E}^{(1)}[\rho,n^{(0)}],\mathcal{U}^{(1)}(\mathbf{R},n^{(0)}) \right\}. \tag{21}\]
The regular functional-potential pair in Eq. (19) is in practice difficult to represent exactly, because the calculated Born-Oppenheimer potential, \(U(\mathbf{R})\), is never given by the exact ground state of the energy functional, \(E[\rho]\). An accurate match between \(E[\rho]\) and \(U(\mathbf{R})\) can only be achieved by an expensive iterative optimization procedure, because of the non-linearity of \(E[\rho]\). This is in contrast to the 0th-level shadow functional-potential pair in Eq. (20), which is easily matched at only a modest cost, because no iterative optimization is required. What we have presented so far is how we can construct an updated 1st-level pair of shadow energy functionals and potentials in Eq. (21) that also can be matched exactly. This higher-level generalization has an improved level of accuracy.
## III Extended Lagrangian Born-Oppenheimer Molecular Dynamics
In a QMD simulation the initial approximate ground state density, \(n^{(0)}(\mathbf{r})\), around which the linearization is performed for the construction of the shadow energy functional, will get further and further away from the corresponding exact ground state density, \(\rho_{\min}(\mathbf{r})\), as the atoms move away from the initial configuration. The accuracy of the shadow energy functional and the corresponding shadow Born-Oppenheimer potential will then get successively worse. The density, \(n^{(0)}(\mathbf{r})\), therefore needs to be updated. One way is to update the density as a function of the atomic positions, for example, where \(n^{(0)}(\mathbf{r})\equiv n^{(0)}(\mathbf{R},\mathbf{r})=\sum_{I}n_{I}^{\rm atom}(\mathbf{r}-\mathbf{R}_{I})\) is the superposition of separate neutral atomic electron densities, \(\{n_{I}^{\rm atom}(\mathbf{r}-\mathbf{R}_{I})\}\), centered around the atomic positions, \(\mathbf{R}=\{\mathbf{R}_{I}\}\). However, this would lead to difficulties in calculating the forces, as in Eq. (5), which involve all the density-dependent energy terms, one for each atom. If \(n^{(0)}(\mathbf{r})\) were the variational ground state these terms would all vanish, but this is only true for the optimized densities, \(\rho_{\min}[n^{(0)}](\mathbf{r})\) or \(\rho_{\min}[n^{(1)}](\mathbf{r})\), with respect to the shadow potentials, \(\mathcal{U}^{(0)}(\mathbf{R},n^{(0)})\) or \(\mathcal{U}^{(1)}(\mathbf{R},n^{(0)})\). Even in this case the partial derivatives, \(\partial n^{(0)}/\partial\mathbf{R}_{I}\), would need to be calculated. A solution to these problems is offered by XL-BOMD, where \(n^{(0)}\) is propagated as a dynamical field variable, \(n^{(0)}(\mathbf{r},t)\) [17].
### Extended Lagrangian
In XL-BOMD, we include the approximate ground state density, \(n^{(0)}(\mathbf{r})\), and its time derivative as additional dynamical field variables, \(n^{(0)}(\mathbf{r},t)\) and \(\dot{n}^{(0)}(\mathbf{r},t)\), in an extended Lagrangian formalism, besides the nuclear positions and their velocities, \(\mathbf{R}(t)\) and \(\dot{\mathbf{R}}(t)\). The dynamics of \(n^{(0)}(\mathbf{r},t)\) is generated by an extended harmonic oscillator that is centered around the optimized ground state of the shadow potential, \(\rho_{\min}[n^{(0)}]\), along the molecular trajectories. In this way \(n^{(0)}(\mathbf{r},t)\) closely follows the ground state such that the error in the shadow potential does not increase along the trajectory.
In the Euler-Lagrange equations of motion, the partial derivatives only appear with respect to each single dynamical variable, with all the other dynamical variables being constant. The calculations of \(n^{(0)}\)-dependent force terms, e.g.
\[\int\frac{\delta\mathcal{U}^{(m)}}{\delta n^{(0)}(\mathbf{r})}\frac{\partial n^{ (0)}(\mathbf{r})}{\partial\mathbf{R}_{I}}d\mathbf{r},\ \ (m=0\ \text{or}\ 1), \tag{22}\]
can therefore be avoided. Additional force terms, such as
\[\int\left(\frac{\delta\mathcal{E}^{(m)}[\rho,n^{(m)}]}{\delta\rho(\mathbf{r}) }\right)\left(\frac{\partial\rho(\mathbf{r})}{\partial\mathbf{R}_{I}}\right) \Big{|}_{\rho=\rho_{\min}[n^{(m)}]}d\mathbf{r}, \tag{23}\]
can also be ignored, because \(\rho_{\min}[n^{(m)}]\) is determined from the condition that
\[\frac{\delta\mathcal{E}^{(m)}[\rho,n^{(m)}]}{\delta\rho}\Big{|}_{\rho=\rho_{ \min}[n^{(m)}]}=0,\ \ (m=0\ \text{or}\ 1). \tag{24}\]
This not only reduces the computational cost, but also makes it possible to calculate "exact" conservative forces that generate stable long-term molecular trajectories.
We can now define the 1st-level extended Lagrangian, \(\mathcal{L}^{(1)}\), in XL-BOMD, using the 1st-level shadow Born-Oppenheimer potential, where
\[\begin{split}&\mathcal{L}^{(1)}(\mathbf{R},\mathbf{\dot{R}},n^{(0)},\dot{n}^{(0)})=\frac{1}{2}\sum_{I}M_{I}|\mathbf{\dot{R}}_{I}|^{2}-\mathcal{U} ^{(1)}(\mathbf{R},n^{(0)})\\ &+\frac{1}{2}\mu\int|\dot{n}^{(0)}(\mathbf{r})|^{2}d\mathbf{r}- \frac{1}{2}\mu\omega^{2}\iint\left(\rho_{\min}[n^{(0)}](\mathbf{r})-n^{(0)}( \mathbf{r})\right)\\ &\times T^{(0)}(\mathbf{r},\mathbf{r}^{\prime})\left(\rho_{\min}[ n^{(0)}](\mathbf{r}^{\prime})-n^{(0)}(\mathbf{r}^{\prime})\right)d\mathbf{r}d \mathbf{r}^{\prime}.\end{split} \tag{25}\]
Here \(n^{(0)}(\mathbf{r},t)\) is treated as a dynamical field variable with its time derivative, \(\dot{n}^{(0)}(\mathbf{r},t)\), and some chosen mass parameter, \(\mu\). This is in addition to the regular dynamical variables of the atomic motion, \(\mathbf{R}\) and \(\dot{\mathbf{R}}\). The atomic masses are given by \(\{M_{I}\}\). The frequency of the extended harmonic oscillator is set by \(\omega\) and the harmonic well is centered around \(\rho_{\min}[n^{(0)}](\mathbf{r})\). \(T^{(0)}(\mathbf{r},\mathbf{r}^{\prime})\) is a symmetric positive definite metric tensor given by the square of a kernel, \(K^{(0)}(\mathbf{r},\mathbf{r}^{\prime})\), where
\[T^{(0)}(\mathbf{r},\mathbf{r}^{\prime})=\int\left(K^{(0)}(\mathbf{r},\mathbf{ r}^{\prime\prime})\right)^{\dagger}K^{(0)}(\mathbf{r}^{\prime\prime},\mathbf{r}^{ \prime})d\mathbf{r}^{\prime\prime}. \tag{26}\]
We define the kernel, \(K^{(0)}(\mathbf{r},\mathbf{r}^{\prime})\), as the inverse Jacobian of the residual function, \(f[n^{(0)}](\mathbf{r})=\rho_{\min}[n^{(0)}](\mathbf{r})-n^{(0)}(\mathbf{r})\). This means that the kernel, \(K^{(0)}(\mathbf{r},\mathbf{r}^{\prime})\), is the same as in Eqs. (11) and (12). In this way the dynamical density variable, \(n^{(0)}(\mathbf{r},t)\), evolves as if it would oscillate around the much closer approximation to the exact ground state, i.e. \(n^{(1)}(\mathbf{r})\) from the Newton update, compared to the more approximate, \(\rho_{\min}[n^{(0)}](\mathbf{r})\). This definition of the kernel simplifies the equations of motion that we will derive below at the same time as it also improves the accuracy of the shadow Born-Oppenheimer potential by evolving \(n^{(0)}(\mathbf{r},t)\) around a closer approximation to the exact ground state, \(\rho_{\min}(\mathbf{r})\).
The only difference to the original formulation of XL-BOMD is that the Lagrangian, \(\mathcal{L}^{(1)}(\mathbf{R},\mathbf{\dot{R}},n^{(0)},\dot{n}^{(0)})\), in Eq. (25) uses the 1st-level shadow Born-Oppenheimer potential, \(\mathcal{U}^{(1)}(\mathbf{R},n^{(0)})\), instead of the 0-th level, \(\mathcal{U}^{(0)}(\mathbf{R},n^{(0)})\).
### Equations of Motion
The Euler-Lagrange equations for \(\mathcal{L}^{(1)}(\mathbf{R},\mathbf{\dot{R}},n^{(0)},\dot{n}^{(0)})\),
\[\frac{d}{dt}\left(\frac{\partial\mathcal{L}^{(1)}(\mathbf{R},\mathbf{\dot{R}},n^{(0)},\dot{n}^{(0)})}{\partial\mathbf{\dot{R}}_{I}}\right)=\frac{\partial\mathcal{L}^{(1)}(\mathbf{R},\mathbf{\dot{R}},n^{(0)},\dot{n}^{(0)})}{\partial\mathbf{R}_{I}} \tag{27}\]
and
\[\frac{d}{dt}\left(\frac{\delta\mathcal{L}^{(1)}(\mathbf{R},\mathbf{\dot{R}},n^{(0)},\dot{n}^{(0)})}{\delta\dot{n}^{(0)}(\mathbf{r})}\right)=\frac{\delta\mathcal{L}^{(1)}(\mathbf{R},\mathbf{\dot{R}},n^{(0)},\dot{n}^{(0)})}{\delta n^{(0)}(\mathbf{r})} \tag{28}\]
give us the equations of motion,
\[\begin{split}& M_{I}\mathbf{\ddot{R}}_{I}=-\frac{\partial \mathcal{U}^{(1)}(\mathbf{R},n^{(0)})}{\partial\mathbf{R}_{I}}\Big{|}_{n^{(0)},n ^{(1)}}\\ &-\int\frac{\delta\mathcal{U}^{(1)}(\mathbf{R},n^{(0)})}{\delta n ^{(1)}(\mathbf{r})}\frac{\partial n^{(1)}(\mathbf{r})}{\partial\mathbf{R}_{I}} \Big{|}_{n^{(0)}}d\mathbf{r}\\ &-\frac{1}{2}\mu\omega^{2}\frac{\partial}{\partial\mathbf{R}_{I}} \iint\left(\rho_{\min}[n^{(0)}](\mathbf{r})-n^{(0)}(\mathbf{r})\right)T^{(0)}( \mathbf{r},\mathbf{r}^{\prime})\\ &\times\left(\rho_{\min}[n^{(0)}](\mathbf{r}^{\prime})-n^{(0)}( \mathbf{r}^{\prime})\right)\Big{|}_{n^{(0)}}d\mathbf{r}d\mathbf{r}^{\prime} \end{split} \tag{29}\]
and
\[\begin{split}&\mu\ddot{n}^{(0)}(\mathbf{r})=-\frac{\delta\mathcal{U} ^{(1)}(\mathbf{R},n^{(0)})}{\delta n^{(0)}(\mathbf{r})}\\ &-\frac{1}{2}\mu\omega^{2}\frac{\delta}{\delta n^{(0)}(\mathbf{r})} \iint\left(\rho_{\min}[n^{(0)}](\mathbf{r}^{\prime})-n^{(0)}(\mathbf{r}^{ \prime})\right)\\ &\times T^{(0)}(\mathbf{r}^{\prime},\mathbf{r}^{\prime\prime}) \left(\rho_{\min}[n^{(0)}](\mathbf{r}^{\prime\prime})-n^{(0)}(\mathbf{r}^{ \prime\prime})\right)d\mathbf{r}^{\prime}d\mathbf{r}^{\prime\prime}\end{split} \tag{30}\]
These equations are far from trivial to use in a QMD simulation. However, the equations of motion are simplified if we impose an adiabatic limit, in the same way as for the original Born-Oppenheimer approximation, where we assume that the electronic degrees of freedom are fast compared to the slower nuclear motion. To derive the equations of motion in this adiabatic limit we first assert the following frequency dependencies in the residual functions,
\[\Big{|}\rho_{\min}[n^{(0)}]-n^{(0)}\Big{|}\propto\omega^{-2}, \tag{31}\]
and
\[\Big{|}\rho_{\min}[n^{(1)}]-n^{(1)}\Big{|}\propto\omega^{-4}, \tag{32}\]
which are assumed to be valid in the limit of \(\omega\to\infty\). These adiabatic relations are difficult to prove _a priori_, but they can be shown to hold _a posteriori_ by integrating the equations of motion that have been derived under the assumptions of Eqs. (31) and (32). This will be demonstrated below in Fig. 1.
Using the asserted adiabatic scaling relations in Eqs. (31) and (32), we find (under reasonable conditions) from the definition of \(n^{(1)}[n^{(0)}](\mathbf{r})\) in Eq. (11) that
\[\begin{split}&\frac{\delta n^{(1)}[n^{(0)}](\mathbf{r})}{\delta n ^{(0)}(\mathbf{r}^{\prime\prime})}=\frac{\delta n^{(0)}(\mathbf{r})}{\delta n ^{(0)}(\mathbf{r}^{\prime\prime})}\\ &-\int K^{(0)}(\mathbf{r},\mathbf{r}^{\prime})\frac{\delta\left( \rho_{\min}[n^{(0)}](\mathbf{r}^{\prime})-n^{(0)}(\mathbf{r}^{\prime})\right) }{\delta n^{(0)}(\mathbf{r}^{\prime\prime})}d\mathbf{r}^{\prime}\\ &-\int\frac{\delta K^{(0)}(\mathbf{r},\mathbf{r}^{\prime})}{ \delta n(\mathbf{r}^{\prime\prime})}\left(\rho_{\min}[n^{(0)}](\mathbf{r}^{ \prime})-n^{(0)}(\mathbf{r}^{\prime})\right)d\mathbf{r}^{\prime}\\ &=\delta(\mathbf{r}-\mathbf{r}^{\prime\prime})-\delta(\mathbf{r}- \mathbf{r}^{\prime\prime})\\ &-\int\frac{\delta K^{(0)}(\mathbf{r},\mathbf{r}^{\prime})}{ \delta n(\mathbf{r}^{\prime\prime})}\left(\rho_{\min}[n^{(0)}](\mathbf{r}^{ \prime})-n^{(0)}(\mathbf{r}^{\prime})\right)d\mathbf{r}^{\prime}\\ &\propto\left|\rho_{\min}[n^{(0)}](\mathbf{r}^{\prime})-n^{(0)}( \mathbf{r}^{\prime})\right|\propto\omega^{-2}.\end{split} \tag{33}\]
Using the same assertions we also find that
\[\left|\frac{\delta\mathcal{U}^{(1)}}{\delta n^{(1)}}\right|\propto\left|\rho _{\min}[n^{(1)}]-n^{(1)}\right|\propto\omega^{-4}. \tag{34}\]
This gives us
\[\left|\frac{\delta\mathcal{U}^{(1)}}{\delta n^{(0)}}\right| =\left|\frac{\delta\mathcal{U}^{(1)}}{\delta n^{(1)}}\frac{\delta n ^{(1)}}{\delta n^{(0)}}\right| \tag{35}\] \[\propto\left|\rho_{\min}[n^{(1)}]-n^{(1)}\right|\times\left|\rho _{\min}[n^{(0)}]-n^{(0)}\right|\] (36) \[\propto\omega^{-4}\times\omega^{-2}. \tag{37}\]
The scaling relation in Eq. (31) also means that the last gradient term in Eq. (29) becomes proportional to \(\mu\). The asserted scaling relations above, inserted in the equations of motion in Eqs. (29) and (30), then give us,
\[M_{I}\ddot{\mathbf{R}}_{I}=-\frac{\partial\mathcal{U}^{(1)}(\mathbf{R},n^{(0) })}{\partial\mathbf{R}_{I}}\Big{|}_{n^{(0)}}+\mathcal{O}\left(\omega^{-4} \right)+\mathcal{O}\left(\mu\right), \tag{38}\]
and
\[\begin{split}&\ddot{n}^{(0)}(\mathbf{r})=\mathcal{O}(\mu^{-1}\omega^{-6})+\mathcal{O}(\omega^{-2})\\ &\quad-\omega^{2}\int K^{(0)}(\mathbf{r},\mathbf{r}^{\prime})\left(\rho_{\min}[n^{(0)}](\mathbf{r}^{\prime})-n^{(0)}(\mathbf{r}^{\prime})\right)d\mathbf{r}^{\prime},\end{split} \tag{39}\]
where we have assumed that \(\delta T^{(0)}/\delta n^{(0)}\) is bounded and \(\omega\)-independent as \(\omega\to\infty\). We can then derive the equations of motion in the adiabatic limit, where \(\omega\to\infty\) is combined with the mass-zero limit \(\mu\to 0\), which here is chosen such that \(\mu\omega^{4}\to\text{constant}\). This is a classical analogue to the Born-Oppenheimer approximation, where we simply stick with the original Born-Oppenheimer assumption that the electronic degrees of freedom evolve on a much faster time scale than the slower nuclear motion. In this adiabatic limit we get the final equations of motion for XL-BOMD with the 1st-level updated shadow Born-Oppenheimer potential,
\[\begin{split} M_{I}\ddot{\mathbf{R}}_{I}=&-\left. \frac{\partial}{\partial\mathbf{R}_{I}}\mathcal{U}^{(1)}(\mathbf{R},n^{(0)}) \right|_{n^{(0)}},\\ \ddot{n}^{(0)}(\mathbf{r})=&-\omega^{2}\int K^{(0)}( \mathbf{r},\mathbf{r}^{\prime})\left(\rho_{\min}[n^{(0)}](\mathbf{r}^{\prime})- n^{(0)}(\mathbf{r}^{\prime})\right)d\mathbf{r}^{\prime}.\end{split} \tag{40}\]
Because \(n^{(0)}\equiv n^{(0)}(\mathbf{r},t)\) is a dynamical field variable in XL-BOMD, the partial derivatives in Eq. (40) with respect to the nuclear coordinates are evaluated under a constant electron density, \(n^{(0)}\). Thus, even if \(n^{(0)}\) is not the variationally optimized ground state density, we can still calculate the exact forces in the adiabatic equations of motion for XL-BOMD. However, this does not work for static calculations. It only works in the context of XL-BOMD, where the electronic degrees of freedom are propagated dynamically. The 1st-level updated shadow potential is thus useful only in this dynamical setting.
The equations of motion, Eqs. (40) and (41), are almost identical to the original equations of motion for XL-BOMD using the 0th-level Born-Oppenheimer potential [16; 17; 73]. The only difference is that we now have the 1st-level Born-Oppenheimer potential, \(\mathcal{U}^{(1)}(\mathbf{R},n^{(0)})\), instead of the original 0th-level \(\mathcal{U}^{(0)}(\mathbf{R},n^{(0)})\). The error terms neglected in the adiabatic limit, where \(\mu\propto\omega^{-4}\), indicate that the error in the interatomic force term should scale as \(\omega^{-4}\). This scaling will also be demonstrated below in Fig. 2.
It is important to note that even if we would only use some approximation of the kernel, \(K(\mathbf{r},\mathbf{r}^{\prime})\), the same equations of motion, in Eqs. (40) and (41), can be derived in an adiabatic limit. The only difference is that the adiabatic limit has to be modified such that \(\mu\omega^{m}\to\text{constant}\) for some value \(m\in[1,4]\), and with a modified assertion, where \(|\rho_{\min}[n^{(1)}]-n^{(1)}|\propto\omega^{-m}\) for some value of \(m\in[2,4]\). The scaling of the errors in the forces and the potential energy will then be different and less favorable. Of critical importance is only that we calculate the forces from the shadow potential, \(\mathcal{U}^{(1)}\), defined by the optimized ground state of a shadow energy functional that has been linearized around some updated \(n^{(0)}\)-dependent density, \(n^{(1)}(\mathbf{r})\equiv n^{(1)}[n^{(0)}](\mathbf{r})\), and where \(\left|\delta\mathcal{U}^{(1)}/\delta n^{(1)}\right|\propto|\rho_{\min}[n^{(1)}]-n^{(1)}|\). Replacing the Newton update of the electron density in Eq. (11) with an approximate quasi-Newton scheme or any other SCF-like iteration update should therefore also work under the same conditions. This observation may also help explain why some earlier versions of XL-BOMD [77; 78; 79; 11; 34; 64] often work quite well, even though a few SCF cycles often were required in each time step prior to the
force evaluations, while the extended electronic degrees of freedom were propagated dynamically. Our analysis here shows us why and when we can expect these initial versions of XL-BOMD to work or fail. This insight appears analogous to how, for example, solving a system of nonlinear equations with some simple _ad hoc_ mixed iterations (which often works) can be replaced by a more transparent and efficient conjugate gradient or Newton-based method. Once we understand the theoretically more rigorous alternative we also understand why and when the _ad hoc_ method works and how it can be improved.
The equations of motion in Eqs. (40) and (41), in combination with the definition of the 1st-level shadow energy functional, Eq. (14), and the Born-Oppenheimer potential, Eq. (17), are some of the key results of this article.
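A schematic Python sketch of one integration step of Eqs. (40) and (41) is given below. The callables `shadow_force`, `rho_min_of`, and `apply_kernel` are illustrative assumptions standing in for the force from the shadow potential at fixed \(n^{(0)}\), the single-step ground state optimization, and the action of the kernel \(K^{(0)}\) on the residual; for brevity, the electronic degrees of freedom are propagated here with a plain Verlet step, whereas a practical scheme uses the modified, weakly dissipative Verlet integration described below (Eq. (48)).

```python
def xlbomd_step(R, V, n0, n0_prev, M, dt, omega2,
                shadow_force, rho_min_of, apply_kernel):
    """One schematic integration step of Eqs. (40)-(41) (illustrative names).

    shadow_force(R, n0)   : forces -dU^(1)/dR at a constant dynamical density n^(0)
    rho_min_of(R, n0)     : single-step ground state rho_min[n^(0)] of the shadow functional
    apply_kernel(R, n0, f): action of the kernel K^(0) on the residual f
    """
    # Nuclear degrees of freedom, Eq. (40): velocity Verlet with forces at fixed n^(0).
    F = shadow_force(R, n0)
    V_half = V + 0.5 * dt * F / M[:, None]
    R_new = R + dt * V_half

    # Electronic degrees of freedom, Eq. (41): harmonic oscillator centered on
    # rho_min[n^(0)], here integrated with a plain Verlet step for brevity.
    f_res = rho_min_of(R_new, n0) - n0
    n0_acc = -omega2 * apply_kernel(R_new, n0, f_res)
    n0_new = 2.0 * n0 - n0_prev + dt**2 * n0_acc

    V_new = V_half + 0.5 * dt * shadow_force(R_new, n0_new) / M[:, None]
    return R_new, V_new, n0_new, n0     # n0 becomes n0_prev in the next step
```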
### Higher-Level Generalizations
Higher \(m\)th-level generalizations of the pairs of shadow energy functionals and potentials can also be designed, where the approximate higher-level density approximations to the exact ground state are updated with repeated Newton steps,
\[n^{(m)}(\mathbf{r})\equiv n^{(m)}[n^{(0)}](\mathbf{r}) \tag{42}\] \[\equiv n^{(m)}\left[n^{(m-1)}[\ldots[n^{(0)}]]\right](\mathbf{r})=n ^{(m-1)}(\mathbf{r})\] \[-\int K^{(m-1)}(\mathbf{r},\mathbf{r}^{\prime})\left(\rho_{\min} \left[n^{(m-1)}\right](\mathbf{r}^{\prime})-n^{(m-1)}(\mathbf{r}^{\prime}) \right)d\mathbf{r}^{\prime}.\]
The corresponding linearized \(m\)th-level shadow energy density functionals are then given by,
\[\mathcal{E}^{(m)}\left[\rho,n^{(m)}\right]=E\left[n^{(m)}\right] \tag{43}\] \[+\int\frac{\delta E\left[\rho\right]}{\delta\rho(\mathbf{r})} \Big{|}_{n^{(m)}}\left(\rho(\mathbf{r})-n^{(m)}(\mathbf{r})\right)d\mathbf{r}.\]
The constrained electronic ground state optimization then gives us the ground state density,
\[\rho_{\min}\left[n^{(m)}\right](\mathbf{r}) \tag{44}\] \[=\arg\min_{\rho}\left\{\mathcal{E}^{(m)}\left[\rho,n^{(m)}\right] \left|\int\rho(\mathbf{r})d\mathbf{r}=N_{e}\right.\right\},\]
which defines the \(m\)th-level shadow Born-Oppenheimer potentials,
\[\mathcal{U}^{(m)}(\mathbf{R},n^{(0)})=\mathcal{E}^{(m)}\left[\rho_{\min} \left[n^{(m)}\right],n^{(m)}\right]+V_{\mathrm{nn}}(\mathbf{R}). \tag{45}\]
The adiabatic equations of motion from an \(m\)th-level extended Lagrangian, \(\mathcal{L}^{(m)}\), follow in the same way as above, where
\[M_{I}\mathbf{\ddot{R}}_{I}= -\frac{\partial}{\partial\mathbf{R}_{I}}\mathcal{U}^{(m)}(\mathbf{R},n^{(0)})\Big{|}_{n^{(0)}}, \tag{46}\] \[\ddot{n}^{(0)}(\mathbf{r})= -\omega^{2}\int K^{(0)}(\mathbf{r},\mathbf{r}^{\prime})\left(\rho_{\min}[n^{(0)}](\mathbf{r}^{\prime})-n^{(0)}(\mathbf{r}^{\prime})\right)d\mathbf{r}^{\prime}. \tag{47}\]
As for the 1st-level approximation, we have used the nested dependencies of \(n^{(m)}\) on \(n^{(0)}\) and let the shadow potential be a functional of \(n^{(0)}\). While the above higher-order generalization is straightforward, we have found it of little value in practice, because the accuracy is, in general, already very high at the 0th-level and virtually exact at the 1st-level. For example, we tried to show numerically that the error in the shadow potential energy surface, which scales as \(\omega^{-4}\) for \(\mathcal{U}^{(0)}(\mathbf{R},n^{(0)})\) [16], scales as \(\omega^{-8}\) for \(\mathcal{U}^{(1)}(\mathbf{R},n^{(1)})\). However, in practical simulations this scaling was not possible to observe, because the error in the 1st-level shadow potential for any normal integration time steps was already at machine precision and no relevant scaling could be demonstrated. Instead, it has to be demonstrated indirectly from the scaling of \(|\rho_{\min}[n^{(1)}]-n^{(1)}|\), from which we get the scaling of \(|\mathcal{U}^{(1)}-U|\propto|\rho_{\min}[n^{(1)}]-n^{(1)}|^{2}\). In the following we will therefore ignore any higher-level generalizations beyond the 1st-level.
### Integrating the electronic equation of motion
To integrate the equations of motion for the nuclear degrees of freedom in Eq. (40) we can use a leapfrog velocity Verlet scheme, whereas the integration of the harmonic oscillator equation of motion in Eq. (41) for the extended electronic degrees of freedom requires some care. In principle, the same Verlet integration scheme could be used also for the electronic propagation. However, typically we need to include some weak form of dissipation that keeps \(n^{(0)}(\mathbf{r})\) synchronized with the trajectories of the atomic positions and the exact Born-Oppenheimer ground state [17; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 70; 82; 81; 83]. This modified Verlet integration scheme has the following form,
\[\mathbf{n}_{j+1}^{(0)}=2\mathbf{n}_{j}^{(0)}-\mathbf{n}_{j-1}^{(0)}+\delta t^{2}\ddot{\mathbf{n}}_{j}^{(0)}+\alpha\sum_{l=0}^{l_{\max}}c_{l}\mathbf{n}_{j-l}^{(0)}, \tag{48}\]
where we use a convenient vector notation, with \(\mathbf{n}_{j}^{(0)}\equiv\mathbf{n}^{(0)}(t_{0}+j\delta t)\in R^{N},\;\;j=0,1,2,\dots\). The first three terms on the right-hand side of Eq. (48) are the regular Verlet terms, whereas the last term is an additional weak dissipative _ad hoc_ damping force. An optimized set of coefficients of \(\alpha\), \(\{c_{l}\}\), and the dimensionless constant \(\kappa=\delta t^{2}\omega^{2}\), for various orders of \(l_{\max}\) can be found in Ref. [80]. As an alternative to such modified Verlet integration schemes, we may connect the electronic degrees of freedom to a thermostat, i.e. a stochastic Langevin-like dynamics or a chained Nose-Hoover thermostat, which
also keeps the electronic degrees of freedom synchronized with the ground state solution determined by the nuclear coordinates [84; 85].
In the initial time step we can set all densities \(\{\mathbf{n}_{j}\}\) equal to the optimized regular Born-Oppenheimer ground state density, \(\mathbf{\rho}_{\text{min}}\).
It is important to note that we always use a constant for the product \(\delta t^{2}\omega^{2}=\kappa\) in our simulations. This means that \(\delta t\propto\omega^{-1}\), as long as we use the same Verlet integration scheme with a constant size of the integration time step, \(\delta t\). This controls the way we can understand the scaling and the order of the accuracy, for example, of the forces (See Fig. 2), as a function of the chosen size of the integration time step, \(\delta t\), or the inverse frequency, \(\omega^{-1}\).
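A minimal Python sketch of the dissipative Verlet update in Eq. (48) is given below; the dissipation parameters \(\alpha\) and \(\{c_{l}\}\) are passed in as inputs, with the optimized values tabulated in Ref. [80], and all function and variable names are illustrative assumptions.

```python
import numpy as np
from collections import deque

def dissipative_verlet_update(n_hist, n_acc, dt, alpha, c):
    """Modified Verlet step of Eq. (48) for the extended electronic degrees of freedom.

    n_hist : deque [n_j, n_{j-1}, ..., n_{j-l_max}], newest first
    n_acc  : the acceleration n''_j, i.e. the right-hand side of Eq. (41) at step j
    alpha  : dissipation strength; c : coefficients c_0, ..., c_{l_max} (see Ref. [80])
    """
    damping = alpha * sum(c_l * n_l for c_l, n_l in zip(c, n_hist))
    n_next = 2.0 * n_hist[0] - n_hist[1] + dt**2 * n_acc + damping
    n_hist.appendleft(n_next)   # n_{j+1} becomes the newest history entry
    n_hist.pop()                # discard the oldest entry, n_{j-l_max}
    return n_next

# Illustrative initialization: l_max + 1 copies of the converged BO ground state density.
l_max, rho_min = 3, np.zeros(8)
n_hist = deque(rho_min.copy() for _ in range(l_max + 1))
```

As stated above, the history is initialized with copies of the optimized Born-Oppenheimer ground state density, and the frequency \(\omega\) enters only through the fixed dimensionless product \(\kappa=\delta t^{2}\omega^{2}\).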
### Approximating the kernel with preconditioned Krylov subspace
In addition to the modified Verlet integration, we also need to approximate the kernel, \(K(\mathbf{r},\mathbf{r}^{\prime})\), both in the integration of the electronic degrees of freedom in Eq. (41) and for the Newton update of the density, \(n^{(0)}\), to \(n^{(1)}\) in Eq. (11). The kernel is the same and it is acting on the same residual, apart from a trivial constant factor \(\omega^{2}\). The approximation of the kernel acting on the residual therefore only needs to be performed once every integration time step.
In the more convenient matrix-vector notation, Eq. (41) or Eq. (47) is given by
\[\mathbf{\ddot{n}}^{(0)}=-\omega^{2}\mathbf{K}\left(\mathbf{\rho}_{\text{min}}^{(0 )}[\mathbf{n}^{(0)}]-\mathbf{n}^{(0)}\right), \tag{49}\]
where \(\mathbf{K}\in R^{N\times N}\), \(\mathbf{K}=\mathbf{J}^{-1}\), \(\mathbf{\rho}_{\text{min}}^{(0)}[\mathbf{n}^{(0)}]\in R^{N}\), and \(\mathbf{n}^{(0)}\in R^{N}\). We can rewrite this equation of motion in an equivalent preconditioned form,
\[\mathbf{\ddot{n}}^{(0)}=-\omega^{2}\left(\mathbf{K}_{0}\mathbf{J}\right)^{-1 }\mathbf{K}_{0}\left(\mathbf{\rho}_{\text{min}}^{(0)}[\mathbf{n}^{(0)}]-\mathbf{ n}^{(0)}\right), \tag{50}\]
where we have introduced a preconditioner, \(\mathbf{K}_{0}\approx\mathbf{J}^{-1}\). \(\mathbf{J}\) is the Jacobian of the residual function,
\[\mathbf{f}(\mathbf{n}^{(0)})= \mathbf{\rho}_{\text{min}}^{(0)}[\mathbf{n}^{(0)}]-\mathbf{n}^{(0)}. \tag{51}\]
If we use the notation,
\[\mathbf{f}_{\mathbf{v}_{k}}(\mathbf{n}^{(0)})\equiv \mathbf{K}_{0}\frac{\partial\mathbf{f}(\mathbf{n}^{(0)}+\lambda \mathbf{v}_{k})}{\partial\lambda}\Big{|}_{\lambda=0}=\mathbf{K}_{0}\mathbf{J} \mathbf{v}_{k}, \tag{52}\]
it is possible to show that the preconditioned Jacobian, \(\mathbf{K}_{0}\mathbf{J}\), can be approximated by a low-rank (rank-\(m\)) approximation,
\[\mathbf{K}_{0}\mathbf{J}\approx\sum_{kl}^{m}\mathbf{f}_{\mathbf{v}_{k}}L_{kl} \mathbf{v}_{l}^{\text{T}}, \tag{53}\]
for some set of vectors \(\{\mathbf{v}_{k}\}\), and with \(\mathbf{L}=\mathbf{O}^{-1}\), where \(O_{ij}=\mathbf{v}_{i}^{\rm T}\mathbf{v}_{j}\) and \(m<N\) [73]. The directional derivatives of \(\mathbf{f}(\mathbf{n})\) in the direction of \(\mathbf{v}_{k}\) (or Gateaux derivatives) in Eq. (52) can be calculated using quantum perturbation theory [86; 73; 87].
The low-rank inverse of the preconditioned Jacobian, \(\mathbf{K}_{0}\mathbf{J}\), is then given by a pseudoinverse,
\[\left(\mathbf{K}_{0}\mathbf{J}\right)^{-1}\approx\sum_{kl}^{m}\mathbf{v}_{k}M_ {kl}\mathbf{f}_{\mathbf{v}_{l}}^{T}, \tag{54}\]
with \(\mathbf{M}=\mathbf{O}^{-1}\), where \(O_{ij}=\mathbf{f}_{\mathbf{v}_{i}}^{\text{T}}\mathbf{f}_{\mathbf{v}_{j}}\). By choosing the vectors, \(\{\mathbf{v}_{k}\}\), from an orthogonalized preconditioned Krylov subspace [73],
\[\{\mathbf{v}_{k}\}\in\text{span}^{\perp}\left\{\mathbf{K}_{0} \mathbf{f}(\mathbf{n}^{(0)}),(\mathbf{K}_{0}\mathbf{J})^{1}\mathbf{K}_{0} \mathbf{f}(\mathbf{n}^{(0)}),\right. \tag{55}\] \[\left.(\mathbf{K}_{0}\mathbf{J})^{2}\mathbf{K}_{0}\mathbf{f}( \mathbf{n}^{(0)}),(\mathbf{K}_{0}\mathbf{J})^{3}\mathbf{K}_{0}\mathbf{f}( \mathbf{n}^{(0)}),\ldots\right\}, \tag{56}\]
we can rapidly reach a well-converged approximation of the kernel, \(\mathbf{K}\), acting on the residual function. The advantage with the preconditioner, \(\mathbf{K}_{0}\), is that it typically reduces the number of Krylov subspace vectors (or low-rank updates) necessary to reach convergence. However, in principle the preconditioner is not needed and the low-rank Krylov subspace approximation works well also without preconditioning [75].
If we let \(\Delta\mathbf{n}^{(0)}\) denote the result of the kernel acting on the residual, i.e.
\[\Delta\mathbf{n}^{(0)}= \left(\mathbf{K}_{0}\mathbf{J}\right)^{-1}\mathbf{K}_{0}\left(\bm {\rho}_{\text{min}}^{(0)}[\mathbf{n}^{(0)}]-\mathbf{n}^{(0)}\right) \tag{57}\] \[\approx \left(\sum_{kl}\mathbf{v}_{k}M_{kl}\mathbf{f}_{\mathbf{v}_{l}}^{ \text{T}}\right)\mathbf{K}_{0}\mathbf{f}(\mathbf{n}^{(0)}), \tag{58}\]
we find that the electronic equation of motion in Eq. (41) and the Newton step in Eq. (11) are given by
\[\mathbf{\ddot{n}}^{(0)}=-\omega^{2}\Delta\mathbf{n}^{(0)}, \tag{59}\] \[\mathbf{n}^{(1)}=\mathbf{n}^{(0)}-\Delta\mathbf{n}^{(0)}. \tag{60}\]
This clearly shows how the approximation of \(\Delta\mathbf{n}^{(0)}\) only needs to be performed once every time step for the 1st-level generalized shadow XL-BOMD in Eqs. (40) and (41). This simplification is another of our key results.
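The low-rank approximation in Eqs. (52)-(58) can be sketched in Python as follows, with the directional derivatives \(\mathbf{f}_{\mathbf{v}_{k}}=\mathbf{K}_{0}\mathbf{J}\mathbf{v}_{k}\) approximated here by finite differences of a generic residual function rather than by quantum perturbation theory; the function names and the finite-difference step are illustrative assumptions.

```python
import numpy as np

def low_rank_kernel_times_residual(f, n0, K0, rank, eps=1.0e-6):
    """Approximate Delta n^(0) = (K0 J)^{-1} K0 f(n^(0)), Eqs. (57)-(58).

    f    : residual function f(n) = rho_min[n] - n, returning a 1D array
    K0   : preconditioner matrix, K0 ~ J^{-1}
    rank : number of Krylov subspace vectors (low-rank updates)
    """
    f0 = f(n0)
    v = K0 @ f0                                 # first Krylov direction, Eq. (55)
    V, FV = [], []
    for _ in range(rank):
        for u in V:                             # Gram-Schmidt orthogonalization
            v = v - (u @ v) * u
        v = v / np.linalg.norm(v)
        fv = K0 @ (f(n0 + eps * v) - f0) / eps  # f_v = K0 J v by finite differences, Eq. (52)
        V.append(v)
        FV.append(fv)
        v = fv.copy()                           # next direction, (K0 J) applied to v
    V, FV = np.array(V), np.array(FV)           # shapes (rank, N)
    M = np.linalg.inv(FV @ FV.T)                # M = O^{-1}, O_ij = f_vi . f_vj, Eq. (54)
    return V.T @ (M @ (FV @ (K0 @ f0)))         # Delta n^(0), Eq. (58)
```

The returned vector \(\Delta\mathbf{n}^{(0)}\) can then be used both in the electronic equation of motion, Eq. (59), and in the Newton update, Eq. (60).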
The cost of calculating a preconditioner, \(\mathbf{K}_{0}\), can be expensive, but in QMD simulations the preconditioner can be reused, often over thousands of integration time steps (or the whole simulation) before an updated preconditioner is needed. In practice the overhead is therefore quite small. In simulations of regular stable molecular systems a scaled delta function typically works perfectly well as a preconditioner. The main cost of the preconditioned subspace expansion required to approximate \(\Delta\mathbf{n}^{(0)}\) in Eq. (58) is therefore the construction of the residual response vectors, \(\{\mathbf{f}_{\mathbf{v}_{k}}\}\), from the directional perturbations in \(\{\mathbf{v}_{k}\}\). In Kohn-Sham DFT these response vectors can be calculated from quantum perturbation theory [86; 87]. If we assume that we have already performed a diagonalization of the unperturbed Kohn-Sham
Hamiltonian, \(\mathbf{H}[n^{(0)}]\), to find \(\rho_{\text{min}}[n^{(0)}]\) in Eq. (8), these response vectors are fairly easy to calculate as no additional diagonalizations are needed [75; 76]. Nevertheless, the calculation of the residual response vectors, \(\{\mathbf{f}_{\nu_{k}}\}\), is the main bottleneck of the Krylov subspace expansion. In Kohn-Sham DFT, using an atomic-orbital basis, the cost is typically dominated by the transformations back and forth between the non-orthogonal atomic-orbital basis and the molecular-orbital eigenbasis, in which the response summations are performed [73; 74].
### The Harris-Foulkes functional
The 0th and 1st-level pairs of shadow energy functionals and Born-Oppenheimer potentials presented in this article, e.g. as in Eqs. (6)-(9), are quite general and easy to apply in different applications, e.g. to orbital-based Kohn-Sham DFT, density matrix methods in Hartree-Fock theory, to orbital-free polarizable charge equilibration models, or in a slightly different form even to time-dependent models for superfluidity [17; 18]. These linearized shadow energy functionals and their constrained optimization may, of course, appear somewhat trivial. It is only in combination with XL-BOMD that the shadow energy functionals and Born-Oppenheimer potentials become useful and consequential. The proposed 0th or 1st-level shadow energy functionals and potentials are only meaningful within the dynamical simulation framework of XL-BOMD, where the electronic degrees of freedom appear as dynamical field variables in addition to the nuclear positions and velocities. Only then can we calculate the exact forces from the shadow Born-Oppenheimer potential. For a static, non-dynamical problem, the corresponding forces are only approximate, and would, in general, require a well-converged iterative SCF optimization procedure to achieve any reasonable accuracy.
For the static, non-dynamical problem, and for the particular case of Kohn-Sham DFT, the optimized ground-state shadow Born-Oppenheimer potential, although conceptually different, is interchangeable with the Harris-Foulkes energy density functional [57; 58; 88]. The Harris-Foulkes (HF) functional, \(E_{\text{HF}}[\rho_{0}]\), is an _approximate_ energy expression for the electronic ground-state energy in Kohn-Sham DFT that depends on some input density, \(\rho_{0}(\mathbf{r})\), where
\[E_{\text{HF}}[\rho_{0}]= \sum_{i}f_{i}\varepsilon_{i}-\frac{1}{2}\iint\frac{\rho_{0}( \mathbf{r})\rho_{0}(\mathbf{r}^{\prime})}{|\mathbf{r}-\mathbf{r}^{\prime}|}d \mathbf{r}d\mathbf{r}^{\prime} \tag{61}\] \[+E_{\text{xc}}[\rho_{0}]-\int V_{\text{xc}}[\rho_{0}](\mathbf{r} )\rho_{0}(\mathbf{r})d\mathbf{r}. \tag{62}\]
Here \(\{\varepsilon_{i}\}\) are the eigenvalues of the Kohn-Sham Hamiltonian, \(H_{\text{KS}}[\rho_{0}]\), calculated for the input density, \(\rho_{0}(\mathbf{r})\), \(\{f_{i}\}\) are the occupation numbers, \(E_{\text{xc}}[\rho_{0}]\) is the exchange-correlation energy functional with the corresponding exchange-correlation potential, \(V_{\text{xc}}[\rho_{0}](\mathbf{r})=\delta E_{\text{xc}}[\rho]\big{/}\delta \rho(\mathbf{r})\big{|}_{\rho_{0}}\).
Apart from the nuclear-nuclear repulsion term (and possibly an additional electronic entropy contribution), \(E_{\text{HF}}[\rho_{0}]\) has the same form as the 0th-level shadow potential, \(\mathcal{U}^{(0)}(\mathbf{R},n^{(0)})\), with \(\rho_{0}=n^{(0)}\). The difference is that \(\mathcal{U}^{(0)}(\mathbf{R},n^{(0)})\), as it appears in XL-BOMD, represents an _exact_ ground-state shadow Born-Oppenheimer potential, which is determined from a variationally optimized shadow energy functional, \(\mathcal{E}^{(0)}[\rho,n^{(0)}]\), with some external and electrostatic potentials that are given by the nuclear positions, \(\mathbf{R}(t)\), and a separate _dynamical variable_ density, \(n^{(0)}(\mathbf{r},t)\). Because \(n^{(0)}(\mathbf{r},t)\) is a dynamical field variable of the extended Lagrangian in Eq. (25), forces in the Euler-Lagrange equations of motion can easily be calculated from the partial derivatives of \(\mathcal{U}^{(m)}(\mathbf{R},n^{(0)})\) at constant density, \(n^{(0)}(\mathbf{r},t)\). This is in contrast to the Harris-Foulkes functional, which is an approximate energy density functional expression for the (static) Kohn-Sham ground state energy, where the density \(\rho_{0}(\mathbf{r})\) represents either overlapping \(\mathbf{R}\)-dependent atomic charge densities or some iteratively and partially SCF-updated (and thus \(\mathbf{R}\)-dependent) input density. The Harris-Foulkes energy functional is thus best used for estimating the electronic ground state energy for approximate densities. Accurate calculations of the interatomic forces would still require a regular iterative SCF optimization, or the additional calculation of the gradients of the electron density with respect to the atomic positions.
In XL-BOMD the pairs of shadow energy functionals and potential energy surfaces therefore play a different role, and allow for computationally simple and accurate calculations of conservative interatomic forces in molecular dynamics simulations, without relying on the Hellmann-Feynman theorem [89; 20]. However, the linearized shadow energy functionals and optimized Born-Oppenheimer potentials presented and derived here, as in Eqs. (6)-(9), provide an alternative and probably more transparent and straightforward approach to derive and understand the Harris-Foulkes functional in Kohn-Sham DFT. The procedure in Eqs. (6)-(9) is also easy to generalize and apply to a broad variety of other energy expressions besides the Kohn-Sham energy functional [17]. As an example, in the section below, we will use the approach in Eqs. (6)-(9) to design a shadow energy functional and potential for a coarse-grained flexible charge equilibration model.
### Coarse-grained flexible charge model
Flexible charge models can be derived from an atomic coarse-graining of DFT [17; 19; 48; 49; 50; 51; 52; 53; 54; 55; 56]. They often serve as simplified or conceptual versions of DFT and can also be used to illustrate our shadow energy functionals in Born-Oppenheimer molecular dynamics.
In the simplest form of flexible charge models the electronic energy functional in DFT is approximated by the
energy function
\[E(\mathbf{R},\mathbf{q})=\sum_{I}\chi_{I}q_{I}+\frac{1}{2}\sum_{I}U_{I}q_{I}^{2}+ \frac{1}{2}\sum_{IJ}^{I\neq J}q_{I}\gamma_{IJ}q_{J}, \tag{63}\]
where \(\mathbf{q}=\{q_{I}\}\) is the coarse-grained charge density, represented by net partial charges (or electron occupations) of each atom \(I\), \(\chi_{I}\) are the estimated atomic electronegativities, \(U_{I}\) the chemical hardness or Hubbard-U parameters, and \(\gamma_{IJ}\) describe the Coulomb interactions between penetrating spherical atom-centered charge densities centered at atom \(I\) and \(J\). At large interatomic distances these interactions decay as \(\gamma_{IJ}\rightarrow|\mathbf{R}_{I}-\mathbf{R}_{J}|^{-1}\) and at short-range distances the onsite limit, \(\gamma_{IJ}\to U_{I}\), is reached as \(|\mathbf{R}_{I}-\mathbf{R}_{J}|\to 0\).
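To make the notation concrete, the small Python sketch below evaluates the energy function of Eq. (63) for a toy configuration. The Klopman-Ohno-style interpolation used for \(\gamma_{IJ}\) is an assumed example form with the right qualitative limits (a \(1/|\mathbf{R}_{I}-\mathbf{R}_{J}|\) tail and a finite short-range value); the article does not prescribe a specific parametrization at this point, so all names and numbers are illustrative.

```python
import numpy as np

def gamma_matrix(R, U):
    """Coulomb interaction gamma_IJ between overlapping atom-centered charge densities.

    Assumed Klopman-Ohno-type interpolation (illustrative choice only):
    decays as 1/|R_I - R_J| at large separation and stays finite at short range.
    """
    N = len(U)
    gamma = np.zeros((N, N))
    for I in range(N):
        for J in range(N):
            if I == J:
                continue  # onsite term is carried by the hardness U_I in Eq. (63)
            d = np.linalg.norm(R[I] - R[J])
            a = 0.5 * (1.0 / U[I] + 1.0 / U[J])
            gamma[I, J] = 1.0 / np.sqrt(d**2 + a**2)
    return gamma

def energy(q, chi, U, gamma):
    """Coarse-grained flexible charge energy of Eq. (63)."""
    onsite = chi @ q + 0.5 * np.sum(U * q**2)
    coulomb = 0.5 * q @ gamma @ q   # gamma has a zero diagonal, so only I != J contributes
    return onsite + coulomb

# Example with toy numbers (arbitrary units): three atoms on a line.
R = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.0, 0.0]])
chi = np.array([0.2, -0.1, 0.3])
U = np.array([1.0, 0.8, 1.2])
q = np.array([0.1, -0.2, 0.1])
print(energy(q, chi, U, gamma_matrix(R, U)))
```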
The electronic ground state is given from the constrained minimization, where
\[\mathbf{q}_{\text{min}}=\min_{\mathbf{q}}\left\{E(\mathbf{R},\mathbf{q})\left| \sum_{I}q_{I}=0\right.\right\}. \tag{64}\]
This minimization requires the solution of a full system of linear equations, which is the main computational bottleneck. If an iterative solver is used, the optimized solutions need to be well-converged to provide accurate conservative forces in a molecular dynamics simulation. The optimized ground state charges then give us the Born-Oppenheimer potential,
\[U(\mathbf{R})=E(\mathbf{R},\mathbf{q}_{\text{min}})+V(\mathbf{R}). \tag{65}\]
The molecular trajectories can then be generated from the integration of Newton's equations of motion,
\[M_{I}\ddot{\mathbf{R}}_{I}=-\nabla_{I}U(\mathbf{R}). \tag{66}\]
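Equations (64)-(66) can be made concrete with a few lines of linear algebra. The sketch below (illustrative Python, variable names are assumptions) solves the charge-neutral minimization of Eq. (64) exactly as a bordered linear system with a single Lagrange multiplier; this is the "full system of linear equations" referred to above. The Born-Oppenheimer potential of Eq. (65) is then \(E(\mathbf{R},\mathbf{q}_{\text{min}})\) plus the charge-independent term \(V(\mathbf{R})\), and the forces of Eq. (66) could, in this toy setting, be obtained by finite differences.

```python
import numpy as np

def ground_state_charges(chi, U, gamma):
    """Exact solution of the constrained minimization in Eq. (64).

    Stationarity of E(R, q) - mu * sum_I q_I gives the bordered linear system
        (diag(U) + gamma) q - mu * 1 = -chi,   sum_I q_I = 0,
    which is solved here directly (the main computational bottleneck in general).
    """
    N = len(chi)
    A = np.diag(U) + gamma          # gamma has a zero diagonal
    M = np.zeros((N + 1, N + 1))
    M[:N, :N] = A
    M[:N, N] = -1.0                 # column multiplying the Lagrange multiplier mu
    M[N, :N] = 1.0                  # charge-neutrality constraint row
    rhs = np.concatenate([-chi, [0.0]])
    sol = np.linalg.solve(M, rhs)
    q_min, mu = sol[:N], sol[N]
    # Eq. (65): U(R) = E(R, q_min) + V(R), with V(R) the charge-independent part.
    return q_min, mu
```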
Following the approach in Eqs. (6)-(9), a 0th-level shadow energy function, \(\mathcal{E}^{(0)}(\mathbf{R},\mathbf{q},\mathbf{n}^{(0)})\approx E(\mathbf{R},\mathbf{q})\), can be constructed from a partial linearization of \(E(\mathbf{R},\mathbf{q})\) around some approximate ground state solution, \(\mathbf{n}^{(0)}\approx\mathbf{q}_{\text{min}}\), where
\[\mathcal{E}^{(0)}(\mathbf{R},\mathbf{q},\mathbf{n}^{(0)})=\sum_{I}\chi_{I}q_{I}+\frac{1}{2}\sum_{I}U_{I}q_{I}^{2} \tag{67}\] \[+\frac{1}{2}\sum_{I\neq J}(2q_{I}-n_{I}^{(0)})\gamma_{IJ}n_{J}^{(0)}. \tag{68}\]
The constrained minimization (the lowest stationary solution) of this shadow energy function gives us the \(\mathbf{n}^{(0)}\)-dependent ground state density,
\[\mathbf{q}_{\text{min}}[\mathbf{n}^{(0)}]=\arg\min_{\mathbf{q}}\left\{ \mathcal{E}^{(0)}(\mathbf{R},\mathbf{q},\mathbf{n}^{(0)})\left|\sum_{I}q_{I}= 0\right.\right\} \tag{69}\]
and the corresponding 0th-level shadow Born-Oppenheimer potential,
\[\mathcal{U}^{(0)}(\mathbf{R},\mathbf{n}^{(0)})=\mathcal{E}^{(0)}(\mathbf{R}, \mathbf{q}_{\text{min}}[\mathbf{n}^{(0)}],\mathbf{n}^{(0)})+V(\mathbf{R}). \tag{70}\]
The shadow energy function, \(\mathcal{E}^{(0)}(\mathbf{R},\mathbf{q},\mathbf{n}^{(0)})\), is constructed such that \(\mathbf{q}_{\text{min}}[\mathbf{n}^{(0)}]\) is determined by a quasi-diagonal system of linear equations that has a trivial analytical solution [17; 19].
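The "trivial analytical solution" can be written out explicitly: with \(\mathbf{n}^{(0)}\) fixed, the electrostatic sum in Eqs. (67)-(68) is linear in \(\mathbf{q}\), so the constrained stationarity condition is diagonal in the hardness parameters. The sketch below is a rearrangement of Eqs. (67)-(69) in Python (illustrative only; function and variable names are assumptions):

```python
import numpy as np

def shadow_ground_state(chi, U, gamma, n0):
    """Closed-form solution of the constrained minimization in Eq. (69).

    With the shadow energy of Eqs. (67)-(68), the electrostatic potential
    v_I = sum_{J != I} gamma_IJ n_J^(0) is fixed by n^(0), so stationarity under
    the constraint sum_I q_I = 0 reduces to the quasi-diagonal system
        U_I q_I = mu - chi_I - v_I,
    with the multiplier mu fixed by charge neutrality.
    """
    v = gamma @ n0                                  # gamma has a zero diagonal
    mu = np.sum((chi + v) / U) / np.sum(1.0 / U)    # enforces sum_I q_I = 0
    q_min = (mu - chi - v) / U
    return q_min
```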
To introduce the 1st-level update we can improve the ground-state estimate of \(\mathbf{n}^{(0)}\) with a Newton step,
\[\mathbf{n}^{(1)}\equiv\mathbf{n}^{(1)}[\mathbf{n}^{(0)}]=\mathbf{n}^{(0)}- \Delta\mathbf{n}^{(0)}, \tag{71}\]
where
\[\Delta\mathbf{n}^{(0)}=\left(\mathbf{K}_{0}\mathbf{J}\right)^{-1}\mathbf{K}_{0}\left(\mathbf{q}_{\text{min}}[\mathbf{n}^{(0)}]-\mathbf{n}^{(0)}\right), \tag{72}\]
which can be approximated, for example, by the preconditioned low-rank Newton step as in Eq. (58). Notice that this updated approximate charge vector is \(\mathbf{n}^{(0)}\)-dependent, i.e.
\[\mathbf{n}^{(1)}\equiv\mathbf{n}^{(1)}[\mathbf{n}^{(0)}]=\mathbf{n}^{(0)}- \Delta\mathbf{n}^{(0)}. \tag{73}\]
The updated 1st-level energy function is now given by
\[\mathcal{E}^{(1)}(\mathbf{R},\mathbf{q},\mathbf{n}^{(1)})=\sum_{I}\chi_{I}q_{I}+\frac{1}{2}\sum_{I}U_{I}q_{I}^{2} \tag{74}\] \[+\frac{1}{2}\sum_{I\neq J}(2q_{I}-n_{I}^{(1)})\gamma_{IJ}n_{J}^{(1)}. \tag{75}\]
The optimized ground state density is then given from the constrained minimization, where
\[\mathbf{q}_{\text{min}}[\mathbf{n}^{(1)}]=\arg\min_{\mathbf{q}} \left\{\mathcal{E}^{(1)}(\mathbf{R},\mathbf{q},\mathbf{n}^{(1)})\left|\sum_{I}q _{I}=0\right.\right\}. \tag{76}\]
The shadow energy function \(\mathcal{E}^{(1)}(\mathbf{R},\mathbf{q},\mathbf{n}^{(1)})\) is constructed in the same way as \(\mathcal{E}^{(0)}(\mathbf{R},\mathbf{q},\mathbf{n}^{(0)})\) such that \(\mathbf{q}_{\text{min}}[\mathbf{n}^{(1)}]\) also is determined by a quasi-diagonal system of linear equations that has a trivial analytical solution [17; 19]. This gives us the corresponding 1st-level shadow Born-Oppenheimer potential,
\[\mathcal{U}^{(1)}(\mathbf{R},\mathbf{n}^{(0)})=\mathcal{E}^{(1)}(\mathbf{R}, \mathbf{q}_{\text{min}}[\mathbf{n}^{(1)}],\mathbf{n}^{(1)})+V(\mathbf{R}), \tag{77}\]
where \(\mathbf{n}^{(1)}\equiv\mathbf{n}^{(1)}[\mathbf{n}^{(0)}]\). The shadow Born-Oppenheimer potential can then be used in an extended Lagrangian formulation [17; 19], which in an adiabatic limit gives us the equations of motion,
\[M_{I}\ddot{\mathbf{R}}_{I}=-\nabla_{I}\mathcal{U}^{(1)}(\mathbf{R},\mathbf{n}^{(0)})\big|_{\mathbf{n}^{(0)}}, \tag{78}\] \[\ddot{\mathbf{n}}^{(0)}=-\omega^{2}\Delta\mathbf{n}^{(0)}. \tag{79}\]
The nuclear coordinates and velocities can then be integrated using a standard velocity Verlet integration scheme and for the evolution of the atomic partial charges, \(\mathbf{n}(t)\), we can use the modified Verlet integration scheme including some additional weak dissipative damping forces as in Eq. (48).
This example with a coarse-grained flexible charge equilibration model demonstrates the general applicability of our shadow molecular dynamics approach and how it can be used to construct pairs of shadow energy functionals and potentials for XL-BOMD simulations at different levels of accuracy.
### Pseudocode
The easiest way to summarize the generalized 1st-level update of the shadow energy functional and Born-Oppenheimer potential in XL-BOMD is to describe the method in a step-by-step procedure using pseudocode. Algorithm 1 gives a schematic picture of what an XL-BOMD simulation using the 1st-level shadow potential, \(\mathcal{U}^{(1)}\), would look like for an orbital-dependent Kohn-Sham-like electronic structure theory. It is expressed in a matrix-vector notation that is well-suited, for example, for SCC-DFTB simulations. All 0th-level superscripts, \({}^{(0)}\), as in \(\mathbf{n}^{(0)}\), have been dropped to simplify the notation. Here \(\mathbf{S}\) is a basis-set overlap matrix and \(\mathbf{H}\) is the effective single-particle (Kohn-Sham) Hamiltonian. Of critical importance is the construction of \(\Delta\mathbf{n}\equiv\Delta\mathbf{n}^{(0)}\) with a low-rank preconditioned Krylov subspace approximation using quantum response calculations. In contrast to the most recent XL-BOMD schemes, we now need two diagonalizations per time step, instead of only one. Algorithm 1 provides a compact summary of the most important results of this article.
```
Atomic masses and positions, \(\mathbf{M}=\{M_{I}\},\ \mathbf{R}=\{\mathbf{R}_{I}\}\)
Get ground state, \(\mathbf{q}_{\min}\), with regular SCF
\(\mathbf{q}_{\min}\Rightarrow\) "exact" \(U(\mathbf{R})\) and forces, \(\mathbf{F}=\{\mathbf{F}_{I}\}\)
Initialize charges, \(\mathbf{n}_{j}=\mathbf{q}_{\min}\), \(j=1,2,\ldots,k\)
Initialize velocities, \(\mathbf{V}=\{\mathbf{V}_{I}\}\)
Estimate preconditioner, \(\mathbf{K}_{0}=\mathbf{J}^{-1}\)
Initial \(\Delta\mathbf{n}=(\mathbf{K}_{0}\mathbf{J})^{-1}\mathbf{K}_{0}(\mathbf{q}_{\min}[\mathbf{n}]-\mathbf{n})=\mathbf{0}\)
\(t=t_{0}\)
while \(t<t_{\max}\) do
  \(\mathbf{V}_{I}=\mathbf{V}_{I}+(\delta t/2)\mathbf{F}_{I}/M_{I}\)
  \(\mathbf{n}_{0}=2\mathbf{n}_{1}-\mathbf{n}_{2}-\delta t^{2}\omega^{2}\Delta\mathbf{n}+\alpha\sum_{l=0}^{k}c_{l}\mathbf{n}_{1-l}\)
  \(\mathbf{n}_{k}=\mathbf{n}_{k+1},\ \cdots,\ \mathbf{n}_{2}=\mathbf{n}_{1},\ \mathbf{n}_{1}=\mathbf{n}_{0},\ \mathbf{n}=\mathbf{n}_{0}\)
  \(\mathbf{R}_{I}=\mathbf{R}_{I}+\delta t\mathbf{V}_{I}\)
  \(\mathbf{H}[\mathbf{n}]=\mathbf{H}[\mathbf{R},\mathbf{n}],\ \mathbf{S}=\mathbf{S}[\mathbf{R}],\ \mathbf{Z}=\mathbf{S}^{-1/2}\)
  \(\mathbf{q}_{\min}[\mathbf{n}]\Leftarrow\) from diagonalized \(\mathbf{Z}^{T}\mathbf{H}[\mathbf{n}]\mathbf{Z}\)
  \(\Delta\mathbf{n}=(\mathbf{K}_{0}\mathbf{J})^{-1}\mathbf{K}_{0}(\mathbf{q}_{\min}[\mathbf{n}]-\mathbf{n})\) with rank-\(m\) approx.
  \(\mathbf{n}^{(1)}=\mathbf{n}-\Delta\mathbf{n}\), approximate Newton step
  \(\mathbf{q}_{\min}[\mathbf{n}^{(1)}]\Leftarrow\) from diagonalized \(\mathbf{Z}^{T}\mathbf{H}[\mathbf{n}^{(1)}]\mathbf{Z}\)
  \(\mathbf{q}_{\min}[\mathbf{n}^{(1)}]\Rightarrow\) shadow \(\mathcal{U}^{(1)}(\mathbf{R},\mathbf{n})\) and forces, \(\mathbf{F}\)
  \(\mathbf{V}_{I}=\mathbf{V}_{I}+(\delta t/2)\mathbf{F}_{I}/M_{I}\)
  \(t=t+\delta t\)
end while
```
**Algorithm 1** Pseudocode for the XL-BOMD scheme using the 1st-level updated shadow Born-Oppenheimer potential, \(\mathcal{U}^{(1)}(\mathbf{R},n^{(0)})\). Matrix-vector notation is used and the 0th-level (0)-superscripts, i.e. as in \(\mathbf{n}^{(0)}\) or \(\Delta\mathbf{n}^{(0)}\), have been dropped for brevity. One rank-\(m\) approximation of \(\Delta\mathbf{n}\) and two Hamiltonian diagonalizations are required in each time step.
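For readers who prefer an executable outline, the following Python skeleton mirrors Alg. 1. The `engine` object and its methods (`qmin`, `kernel_step`, `shadow_forces`) are placeholders for the electronic structure routines (Hamiltonian and overlap construction, diagonalization, rank-\(m\) kernel approximation, force evaluation), and the dissipation coefficients \(\alpha\), \(c_{l}\) are those of Eq. (48), which is not reproduced here; the history indexing in the dissipation sum is schematic.

```python
import numpy as np

def xlbomd_step(state, dt, w2, alpha, c, engine):
    """One schematic time step of Alg. 1 (1st-level shadow potential U^(1)).

    `state` holds R (N x 3), V (N x 3), F (N x 3), M (N,), the charge history
    n_hist = [n_1, ..., n_k] (most recent first, with len(n_hist) >= len(c)),
    and Delta_n from the previous step. `engine` is an assumed interface providing:
        qmin(R, n)              -> q_min[n] from a diagonalized Hamiltonian
        kernel_step(R, n, r)    -> (K_0 J)^{-1} K_0 r with a rank-m approximation
        shadow_forces(R, n, n1) -> (U1, F) from the 1st-level shadow potential
    """
    R, V, F, M = state["R"], state["V"], state["F"], state["M"]
    n_hist, dn = state["n_hist"], state["dn"]

    V = V + 0.5 * dt * F / M[:, None]                         # first half velocity step
    # Modified Verlet step for the extended electronic degrees of freedom,
    # including the weak dissipative damping term (coefficients from Eq. (48)).
    n0 = 2 * n_hist[0] - n_hist[1] - dt**2 * w2 * dn \
         + alpha * sum(c[l] * n_hist[l] for l in range(len(c)))
    n_hist = [n0] + n_hist[:-1]                               # shift the charge history
    R = R + dt * V

    q = engine.qmin(R, n0)                                    # first diagonalization
    dn = engine.kernel_step(R, n0, q - n0)                    # preconditioned low-rank Newton step
    n1 = n0 - dn
    U1, F = engine.shadow_forces(R, n0, n1)                   # second diagonalization inside

    V = V + 0.5 * dt * F / M[:, None]                         # second half velocity step
    return {"R": R, "V": V, "F": F, "M": M, "n_hist": n_hist, "dn": dn, "U1": U1}
```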
## V Examples
We will demonstrate the accuracy and performance of the shadow energy functionals and Born-Oppenheimer potentials in XL-BOMD simulations using SCC-DFTB theory [37; 38; 39; 58; 59; 60; 61; 62; 63; 64]. SCC-DFTB theory can be seen as a framework for different levels of approximations of density functional theory. Here we will use the scheme given by a second-order expansion in the charge density fluctuations around a reference density of overlapping neutral atomic charge distributions, where the atomic net Mulliken partial charges are used to describe the long-range electrostatic interactions. In this way the continuous charge density, \(\rho(\mathbf{r})\), of regular DFT becomes vectorized with one net partial charge per atom, \(\mathbf{q}=\{q_{I}\}\). The fluctuating partial charges are optimized self-consistently to account for interatomic charge transfer and the response to the long-range electrostatic interactions. In a general SCC-DFTB scheme this requires a repeated set of constructions of an approximate effective single-particle Kohn-Sham Hamiltonian, diagonalizations, charge calculations from the eigenfunctions, and Coulomb potential summations, until a self-consistent charge convergence is reached. SCC-DFTB theory therefore follows the same iterative SCF procedure as a regular first-principles Kohn-Sham DFT calculation. Here we will also use a thermal DFTB theory, where we assume fractional occupation numbers of the molecular orbitals determined by the Fermi function at some given electronic temperature, \(T_{e}\), including an electronic entropy term [17; 24; 27; 28; 30; 66; 67]. The fractional occupation numbers are important to better stabilize the electronic structure calculations when the electronic HOMO-LUMO energy gap is small or vanishing. This also affects how we perform the response calculations of \(\{\mathbf{f}_{\mathbf{v}_{i}}\}\) in the Krylov subspace approximation in Eq. (54) of the preconditioned kernel [16; 17; 75; 76; 87; 16].
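The fractional occupation numbers referred to above are Fermi-Dirac occupations at the electronic temperature \(T_{e}\), with the chemical potential chosen so that the occupations sum to the number of electrons. A minimal sketch is given below (Python, Hartree atomic units; the bisection search for the chemical potential is one simple choice and not necessarily how LATTE implements it):

```python
import numpy as np

def fermi_occupations(eps, n_el, T_e, k_B=3.166811563e-6, max_occ=2.0, tol=1e-12):
    """Fermi-Dirac occupations f_i at electronic temperature T_e (energies in Hartree).

    The chemical potential mu is located by bisection so that sum_i f_i = n_el.
    """
    beta = 1.0 / (k_B * T_e)

    def total_occ(mu):
        x = np.clip(beta * (eps - mu), -500, 500)   # avoid overflow in exp
        return np.sum(max_occ / (1.0 + np.exp(x)))

    lo, hi = eps.min() - 10.0, eps.max() + 10.0
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if total_occ(mu) < n_el:
            lo = mu
        else:
            hi = mu
    mu = 0.5 * (lo + hi)
    x = np.clip(beta * (eps - mu), -500, 500)
    return max_occ / (1.0 + np.exp(x)), mu
```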
For our implementation and XL-BOMD simulations we use a developer's version of the LATTE software package [90; 12; 91] that closely follows Alg. 1. As preconditioner we use an exact calculation of the kernel in the first time step, and we use a sufficient number of low-rank updates to achieve an approximate quadratic convergence in the Newton updates. The maximum number, \(m\), of Krylov subspace vectors, i.e. in the rank-\(m\) approximation, never exceeds 6.
First we will look at the asserted scalings expressed in Eqs. (31) and (32) that were assumed in the derivation of the equations of motion in Eq. (40) and Eq. (41). Thereafter, we will demonstrate the advantage of the 1st-level update of the shadow energy functional and Born-Oppenheimer potential, \(\mathcal{U}^{(1)}\), compared to the original 0th-level approximation, for XL-BOMD simulations of an unstable, charge-sensitive chemical system.
### Scaling
Figure 1 shows the approximate scaling of the root mean square errors (RMSE) given by the root mean square of the residuals, \(\mathbf{q}_{\min}[\mathbf{n}^{(0)}]-\mathbf{n}^{(0)}\) and \(\mathbf{q}_{\min}[\mathbf{n}^{(1)}]-\mathbf{n}^{(1)}\) for simulations of amorphous carbon. The results of the simulations confirm the assumed scaling orders,
where \(|{\bf q}_{\rm min}[{\bf n}^{(0)}]-{\bf n}^{(0)}|\propto\omega^{-2}\) and \(|{\bf q}_{\rm min}[{\bf n}^{(1)}]-{\bf n}^{(1)}|\propto\omega^{-4}\). These scalings were asserted _a priori_ in the derivation of the equations of motion in an adiabatic limit as \(\omega\to\infty\). The scaling of the RMSE extracted from the XL-BOMD simulations shown in Fig. 1 confirms these assumptions. Notice that \(\omega^{-1}\propto\delta t\), because our integration scheme, Eq. (48), has been chosen such that \(\delta t^{2}\omega^{2}\) is a dimensionless constant, \(\kappa=\delta t^{2}\omega^{2}\).
The error in the 0th-level shadow Born-Oppenheimer potential scales as \(|{\cal U}^{(0)}-U|\propto|{\bf q}_{\rm min}[{\bf n}^{(0)}]-{\bf n}^{(0)}|^{2}\). This means that the error in the sampling of the 0th-level shadow Born-Oppenheimer potential, \({\cal U}^{(0)}\), scales as \(\delta t^{4}\) with the integration time step, which has been confirmed previously, e.g. [16]. The new 1st-level updated shadow Born-Oppenheimer potential, \({\cal U}^{(1)}\), has the same form for the error, where \(|{\cal U}^{(1)}-U|\propto|{\bf q}_{\rm min}[{\bf n}^{(1)}]-{\bf n}^{(1)}|^{2}\). This means, from the scaling demonstrated in Fig. 1, that the error in the sampling of the shadow Born-Oppenheimer potential \({\cal U}^{(1)}\) scales as \(\delta t^{8}\). It is hard to demonstrate this scaling of the error in \({\cal U}^{(1)}\) directly, because the error converges too quickly and saturates at a level set by the available numerical precision. Here we therefore only show this \(\delta t^{8}\)-scaling indirectly, from the \(\delta t^{4}\) or \(\omega^{-4}\)-scaling of \(|{\bf q}_{\rm min}[{\bf n}^{(1)}]-{\bf n}^{(1)}|\) in Fig. 1.
From the derivation of the equations of motion with the 1st-level shadow potential in Eqs. (40) and (41), we made the estimate that the equations of motion for the atomic positions should have an error that scales as \(\propto\omega^{-4}\). In Fig. 2 we show the results of simulations of an amorphous carbon and a water system, where we find that the fractional error in the evaluated forces for the 1st-level \({\cal U}^{(1)}\) shadow potential scales as \(\propto\delta t^{4}\). This confirms the previously estimated scaling. This is in contrast to the original 0th-level shadow Hamiltonian formulation of XL-BOMD using \({\cal U}^{(0)}\), with an error in the forces that is only of second order, \(\propto\delta t^{2}\) [16].
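The quoted exponents can be read off from a log-log fit of the time-averaged residual or force-error norms against the integration time step. The snippet below shows the fit procedure, with placeholder arrays standing in for the simulation data behind Figs. 1 and 2 (the numbers are synthetic and purely illustrative):

```python
import numpy as np

# Placeholder data: time steps (fs) and time-averaged RMSE of q_min[n^(1)] - n^(1).
dt   = np.array([0.05, 0.1, 0.2, 0.4])
rmse = np.array([1.1e-9, 1.8e-8, 2.9e-7, 4.6e-6])   # synthetic values, roughly ~ dt^4

slope, intercept = np.polyfit(np.log(dt), np.log(rmse), 1)
print(f"fitted scaling exponent: {slope:.2f} (expected ~4 for the 1st-level residual)")
```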
The dramatic improvement in the scaling of the error as a function of the integration time step may seem impressive. Nevertheless, the improved behavior often has only a minor effect on the accuracy and stability of XL-BOMD simulations. It is only for highly unstable systems that the improved scaling and accuracy from the 1st-level update of the shadow energy functional and Born-Oppenheimer potential play a role. For such problems we find that stable molecular trajectories often can be achieved with a slightly longer integration time step than what otherwise would be possible with the original 0th-level shadow energy functional and Born-Oppenheimer potential.
Another important observation is that the higher degree of accuracy in the force evaluations may be useful if higher-order symplectic integration schemes are used. In previous studies, using earlier versions of XL-BOMD,
Figure 1: Scaling of the residual error terms as a function of time step, \(\delta t\), or harmonic oscillator frequency, \(\omega\), for a system of amorphous carbon with 55 atoms using periodic boundary conditions. The simulations are performed with a fixed dimensionless constant, \(\kappa=\delta t^{2}\omega^{2}\), which means that \(\delta t\propto\omega^{-1}\). XL-BOMD based on the enhanced 1st-level shadow Born-Oppenheimer potential, \({\cal U}^{(1)}({\bf R},n^{(0)})\), was used following Alg. 1. The root-mean-square errors (RMSE) are given by the root-mean-square of the residuals, \({\bf q}_{\rm min}[{\bf n}^{(0)}]-{\bf n}^{(0)}\) and \({\bf q}_{\rm min}[{\bf n}^{(1)}]-{\bf n}^{(1)}\), and are averaged over snapshots of 100 integration time steps. The dashed lines indicate the exact \(\delta t^{2}\sim\omega^{-2}\) and \(\delta t^{4}\sim\omega^{-4}\) scalings.
Figure 2: Scaling of the fractional error in the interatomic forces for water and amorphous carbon as a function of the integration time step \(\delta t\), or harmonic oscillator frequency, \(\omega\). XL-BOMD based on the enhanced 1st-level shadow potential, \({\cal U}^{(1)}({\bf R},n^{(0)})\), was used. The simulations are performed with a fixed dimensionless constant, \(\kappa=\delta t^{2}\omega^{2}\), which means that \(\delta t\propto\omega^{-1}\). The fractional error was estimated from an on-the-fly comparison with the “exact” fully converged Born-Oppenheimer forces, where the error was averaged over all the atoms and force components over a snapshot of 100 integration time steps. The dashed lines indicate the exact \(\sim\delta t^{4}\) or \(\sim\omega^{-4}\) scalings.
we found that we needed a fairly tight SCF convergence prior to the force evaluations for the higher-order symplectic integration schemes in order to take full advantage of their improved accuracy [82, 83]. The 1st-level shadow energy functional and Born-Oppenheimer potential should therefore be well-suited in combination with various 4th-order symplectic integrations schemes [82].
### Unstable mixture of nitromethane
To demonstrate the advantage of the 1st-level shadow energy functional and Born-Oppenheimer potential compared to the original 0th-level approach, we will look at a chemically unstable system, with a small or vanishing HOMO-LUMO energy gap. Such systems are often difficult to study, in particular with regular direct quantum-mechanical Born-Oppenheimer molecular dynamics methods. As an example we have chosen an artificial mixture of liquid nitromethane, (CH\({}_{3}\)NO\({}_{2}\))\({}_{7}\), where a handful of randomly chosen atoms have switched places. This artificial testbed system is highly unstable and exothermic reactions occur within a few hundred femtoseconds. This is illustrated in Fig. 3. We find a significantly improved stability in the simulation with the 1st-level updated shadow potential, \(\mathcal{U}^{(1)}\), (blue solid lines) compared to the original 0th-level shadow potential, \(\mathcal{U}^{(0)}\), (red dashed lines), as indicated by the fluctuations in the total energy shown in the middle panel b). Only by reducing the integration time step, \(\delta t\), or possibly by increasing the electronic temperature, is it possible to stabilize the XL-BOMD simulation using the original 0th-level shadow potential.
## VI Summary and discussion
In this article we have introduced a generalization of the shadow energy functionals and Born-Oppenheimer potentials used in XL-BOMD. The original 0th-level shadow energy functional generates a Born-Oppenheimer potential with an error of fourth order, \(\mathcal{O}(\delta t^{4})\), in the integration time step, \(\delta t\), and with an error in the interatomic forces that is of second order, \(\mathcal{O}(\delta t^{2})\). With the 1st-level update the error in the potential energy can be reduced to scale as \(\mathcal{O}(\delta t^{8})\), where the error in the calculated interatomic forces scales as \(\mathcal{O}(\delta t^{4})\). The main additional cost of using the 1st-level instead of the 0th-level shadow potential is the cost of an extra Hamiltonian diagonalization. We showed how this improved level of accuracy helps stabilize the integration of the molecular trajectories, which can be of particular importance for unstable, charge-sensitive, reactive systems with a small or vanishing electronic HOMO-LUMO energy gap. The improved scaling in the error of the potential and forces may also be of interest in the application of higher-order symplectic integration schemes [82, 83, 11]. These higher-order schemes are of no use unless they can be matched by force evaluations with a comparable or higher level of accuracy.
The ability to systematically improve the accuracy of the Born-Oppenheimer potential has many similarities with earlier versions of XL-BOMD [11, 34, 77], where often a few SCF steps were needed prior to each force evaluation. However, with the detailed analysis supported by the concept of a shadow dynamics or a backward error analysis, we now have a more transparent description of why and when this is the case and how we can optimize the efficiency of our XL-BOMD simulations. The key idea is the construction of pairs of shadow energy functionals and potentials, where the shadow potential is given from an exact, yet computationally cheap, ground-state optimization of a linearized shadow energy functional. In combination with XL-BOMD, where the electronic degrees of freedom are propagated dynamically, the shadow Born-Oppenheimer potentials can then be used to calculate conservative interatomic forces that generate accurate molecular trajectories with long-term energy stability.
The generalized shadow energy functionals and Born-Oppenheimer potentials were demonstrated using Kohn-Sham based SCC-DFTB theory. However, the underlying theory was derived in a general form that also applies to other electronic structure theories, including Hartree-Fock and orbital-free DFT. As an example we also
Figure 3: XL-BOMD simulations based on SCC-DFTB theory of an artificial highly reactive randomized mixture of liquid nitromethane (49 atoms with periodic boundary conditions). The upper panel a) shows the statistical temperature, the middle panel b) shows the fluctuations in the total energy per atom, and the lower panel c) shows the HOMO-LUMO electronic energy gap. An integration time step of \(\delta t=0.2\) fs was used in combination with fractional occupation numbers corresponding to an electronic temperature, \(T_{e}=1,500\) K. The 1st-level updated shadow potential, \(\mathcal{U}^{(1)}\), (blue lines) shows a more stable dynamics without the pronounced total energy fluctuations of the 0th-level shadow potential, \(\mathcal{U}^{(0)}\), (red lines).
discussed an extension to flexible charge equilibration models, which can be derived as coarse-grained versions of Hohenberg-Kohn DFT. The higher-level generalization of the shadow energy functionals and Born-Oppenheimer potentials presented here is therefore applicable to a broad variety of electronic structure methods and flexible charge models within the framework of XL-BOMD.
## VII Acknowledgements
This work is supported by the U.S. Department of Energy Office of Basic Energy Sciences (FWP LANLE8AN, "Next generation quantum-based molecular dynamics") and by the U.S. Department of Energy through the Los Alamos National Laboratory. Discussions with Joshua Finkelstein are gratefully acknowledged. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy under Contract No. 892333218NCA000001.
|
2310.09621
|
Prime Match: A Privacy-Preserving Inventory Matching System
|
Inventory matching is a standard mechanism/auction for trading financial
stocks by which buyers and sellers can be paired. In the financial world, banks
often undertake the task of finding such matches between their clients. The
related stocks can be traded without adversely impacting the market price for
either client. If matches between clients are found, the bank can offer the
trade at advantageous rates. If no match is found, the parties have to buy or
sell the stock in the public market, which introduces additional costs. A
problem with the process as it is presently conducted is that the involved
parties must share their order to buy or sell a particular stock, along with
the intended quantity (number of shares), to the bank. Clients worry that if
this information were to leak somehow, then other market participants would
become aware of their intentions and thus cause the price to move adversely
against them before their transaction finalizes. We provide a solution, Prime
Match, that enables clients to match their orders efficiently with reduced
market impact while maintaining privacy. In the case where there are no
matches, no information is revealed. Our main cryptographic innovation is a
two-round secure linear comparison protocol for computing the minimum between
two quantities without preprocessing and with malicious security, which can be
of independent interest. We report benchmarks of our Prime Match system, which
runs in production and is adopted by J.P. Morgan. The system is designed
utilizing a star topology network, which provides clients with a centralized
node (the bank) as an alternative to the idealized assumption of point-to-point
connections, which would be impractical and undesired for the clients to
implement in reality. Prime Match is the first secure multiparty computation
solution running live in the traditional financial world.
|
Antigoni Polychroniadou, Gilad Asharov, Benjamin Diamond, Tucker Balch, Hans Buehler, Richard Hua, Suwen Gu, Greg Gimler, Manuela Veloso
|
2023-10-14T17:03:44Z
|
http://arxiv.org/abs/2310.09621v1
|
# Prime Match: A Privacy-Preserving Inventory Matching System
###### Abstract
Inventory matching is a standard mechanism/auction for trading financial stocks by which buyers and sellers can be paired. In the financial world, banks often undertake the task of finding such matches between their clients. The related stocks can be traded without adversely impacting the market price for either client. If matches between clients are found, the bank can offer the trade at advantageous rates. If no match is found, the parties have to buy or sell the stock in the public market, which introduces additional costs.
A problem with the process as it is presently conducted is that the involved parties must share their order to buy or sell a particular stock, along with the intended quantity (number of shares), to the bank. Clients worry that if this information were to "leak" somehow, then other market participants would become aware of their intentions and thus cause the price to move adversely against them before their transaction finalizes.
We provide a solution, Prime Match, that enables clients to match their orders efficiently with reduced market impact while maintaining privacy. In the case where there are no matches, no information is revealed. Our main cryptographic innovation is a two-round secure _linear_ comparison protocol for computing the minimum between two quantities without preprocessing and with malicious security, which can be of independent interest. We report benchmarks of our Prime Match system, which runs in production and is adopted by a large bank in the US - J.P. Morgan. The system is designed utilizing a star topology network, which provides clients with a centralized node (the bank) as an alternative to the idealized assumption of point-to-point connections, which would be impractical and undesired for the clients to implement in reality.
Prime Match is the first secure multiparty computation solution running live in the traditional financial world.
## 1 Introduction
An axe is an interest in a particular stock that an investment firm wishes to buy or sell. Banks and brokerages provide their clients with a matching service, referred to as "axe matching". When a bank finds two clients interested in the same stock but with opposite directions (one is interested in buying and the other is interested in selling), the bank can offer these two clients the opportunity to trade internally without impacting the market price. Both clients, and the bank, benefit from this internalization. On the other hand, if the bank cannot find two matching clients, the bank has to perform the trade in the public market, which introduces some additional costs and might impact the price. Banks, therefore, put efforts into locating internalized matches.
One such effort is the following service. To incentivize clients to trade, banks publish a list of stocks that they are interested in trading, known as "axe list". The axe list that the bank publishes contains, among other things, aggregated information on previous transactions that were made by clients and facilitated by the bank. For instance, to facilitate clients' trades, the bank sometimes buys stocks that some clients wish to sell. The bank then looks to sell those stocks to other clients at advantageous rates before selling those stocks in the public market. Those stocks will appear in the bank's axe list.
The axe list consists of tuples \((\mathsf{op},\mathsf{symb},\mathsf{axe})\) where \(\mathsf{op}\in\{\mathsf{buy},\mathsf{sell}\}\), \(\mathsf{symb}\) is the symbol of the security to buy or sell, and axe is the number of the shares (quantity) of the security to buy or sell (we sometimes use the terminology of "long" for buy and "short" for sell). This axe list provides clients the ability to locate available (synthetic) trades at reduced financing rates.
Unfortunately, this method is unsatisfactory. Although the information in the axe list of the bank relates to transactions that were already executed, there is a correlation between previous transactions that a client performed and future transactions that it might wish to trade. Thus, clients feel uncomfortable with seeing their recent (potentially large) trade history (although anonymized and aggregated) in the axe list that the bank publishes, and sometimes ask the bank to remove their previous trades from the axe list. Clients, therefore, face the following dilemma: keeping their axes published reveals information about their future potential trades or investment strategy, while continuously asking to remove trades from the axe list limits the banks' ability to internalize trades and offer advantageous rates, to begin with.
The bank currently uses some ad-hoc methods to mitigate the leakage. For instance, it might aggregate several stocks together into "buckets" (e.g., reveal only the range of available stocks to trade in some sector), or trim the volumes of other stocks. This does not guarantee privacy, and also makes it harder to locate potential matches.
### Our Work
We provide a novel method for addressing the inventory matching problem (a simple double auction, which is periodic, with a single fixed price per symbol). Our main contribution is a suite of cryptographic protocols for several variants of the inventory matching problem. The system we report, called Prime Match, was implemented and runs in production in J.P. Morgan since September 2021. Prime Match has the potential to transform common procedures in the financial world. We design the following systems:
* _Bank-to-client inventory matching_: Prime Match supports a secure two-party (bank-to-client) inventory matching. The client can privately find stocks to trade in the bank's full axe list without the bank revealing its axe list, and without the client revealing the stocks and quantities it wishes to trade. The protocol is secure against a semi-honest bank and a malicious client and requires two rounds of communication (three rounds if both parties learn the output).
* _Client-to-client inventory matching_: We extend Prime Match to support a secure (client-to-client) inventory matching. This is a three-party protocol where the bank is an intermediate party that mainly delivers the messages between two clients and facilitates the trade if there is a match. This enables two clients to explore whether they can have potential matches against each other and not just against the axe list of the bank. This further increases potential matches. The protocol is secure in the presence of one malicious corruption and requires three rounds of interaction.
* _Multiparty inventory matching:_ We also extend the client-to-client inventory matching to multiple clients coming at once and looking to be matched.
We expand on each one of those scenarios below.
**Bank-to-client inventory matching:** We replace the current procedure in which the bank sends an axe list to a client, and the client replies with which stocks to trade based on the axe list, with a novel bank-to-client inventory matching. Prime Match allows the bank to locate potential matches without revealing its axe list, and without the client revealing its interests. Moreover, as the bank can freely use accurate axe information (as the axe list is hidden), clients have no longer an interest to remove themselves from the axe list. All parties enjoy better internalization and advantageous rates.
Importantly, the bank does not learn any information about what the client is interested in on any stock that is not matched, and likewise, the client does not learn any information on what is available unless she/he is interested in that as well. Only after matches are found, the bank and the client are notified and the joint interest is revealed. At a high level, for two orders \((\mathsf{buy},\mathsf{X},\mathsf{axe_{1}})\) and \((\mathsf{sell},\mathsf{X},\mathsf{axe_{2}})\) on the same symbol \(\mathsf{X}\), we provide a secure two-party protocol that computes as the matching quantity the min quantity between \(\mathsf{axe_{1}}\) and \(\mathsf{axe_{2}}\).1
Footnote 1: In our actual protocol, each party also provides a range of quantities it wishes to trade, i.e., a minimum amount and a maximum amount. If there is no match that satisfies at least its minimum quantity, then there is no trade. To keep the introduction simple, we omit this additional complexity for now.
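Stated in the clear, the per-symbol functionality is very simple; the point of Prime Match is to evaluate it without revealing unmatched interest. The following Python sketch spells out the plaintext bank-to-client matching rule that the secure protocol emulates (illustrative only; the production system additionally handles the min/max quantity ranges of Footnote 1):

```python
def match_orders(bank_order, client_order):
    """Plaintext bank-to-client matching rule (the secure protocol computes this privately).

    Orders are (op, symb, axe) with op in {"buy", "sell"}; a match requires the same
    symbol and opposite directions, and the matched quantity is min(axe_1, axe_2).
    """
    op_b, symb_b, axe_b = bank_order
    op_c, symb_c, axe_c = client_order
    if symb_b != symb_c or op_b == op_c:
        return None
    qty = min(axe_b, axe_c)
    return (symb_b, qty) if qty > 0 else None

# Example: the bank sells 500 shares of X, the client wants to buy 300 -> match of 300.
print(match_orders(("sell", "X", 500), ("buy", "X", 300)))
```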
**Client-to-client inventory matching:** The above approach only enables matching between the bank's inventory and each client separately but does not allow a direct matching among different clients. For illustration, consider the following scenario: Client A is interested in buying 100 shares of some security \(\mathsf{X}\), while client B is interested in selling 200 shares of the same security \(\mathsf{X}\). On the other hand, the bank does not provide \(\mathsf{X}\) in its inventory axe list. The bank either distributes its axe list to clients \(\mathsf{A}\) and \(\mathsf{B}\) in a non-private way (as was done prior to our work) or engages twice in the bank-to-client inventory matching described above, the first time against client \(\mathsf{A}\) and the second time against client \(\mathsf{B}\). The two clients do not find \(\mathsf{X}\) in the list, and both clients would have to trade on the public market at higher costs.
Prime Match allows the clients and the bank not to miss such opportunities. We provide a mechanism that acts as a transparent matching engine. Each client provides as input to the computation his/her encrypted axes, and the clients then interact and learn whether their axes match or not, see Figure 2. For this solution, we provide a three-party secure minimum protocol \(\Pi_{\mathsf{min}}\) among two clients and the bank as the intermediary party to facilitate and execute the trade if there is a match.
Figure 1: Client-to-bank topology. Client \(C\) sends an encrypted order \(\mathsf{order}_{\mathsf{C}}=(\mathsf{buy},\mathsf{X},\mathsf{axe_{1}})\) to the Bank (secure matching engine) which holds \(\mathsf{order}_{\mathsf{B}}=(\mathsf{sell},\mathsf{X},\mathsf{axe_{2}})\). The engine computes the minimum between \(\mathsf{axe_{1}}\) and \(\mathsf{axe_{2}}\).
**Multi-party protocol.** A potentially powerful mechanism would be to support multiple clients coming at the same time, where all clients talk to each other through the bank, which facilitates the trades when there are matches. This might increase the potential number of matches for each client. We implement such a mechanism based on our client-to-client matching protocol, where we invoke it \(\binom{n}{2}\) times, once for each possible pair of clients, where \(n\) is the total number of participating clients. Since the service of axe matching is relatively exclusive, i.e., it is offered only to selected clients, \(n\) is relatively small (around 10 per day), and thus this approach suffices for the current needs.
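In plaintext terms, this multiparty mechanism is just the pairwise matching rule applied once per pair of clients. The sketch below shows that outer loop with a simple greedy update of the remaining quantities (illustrative Python only; it ignores scheduling, pricing, and the leakage considerations discussed next):

```python
from itertools import combinations

def multiparty_matching(orders):
    """Greedy pairwise matching over all client pairs (plaintext illustration).

    `orders` maps client -> {(symb, op): remaining quantity}. The secure system runs
    the client-to-client protocol for each pair; here we only show the outer loop.
    """
    fills = []
    for a, b in combinations(orders, 2):
        for (symb, op), qty_a in orders[a].items():
            opp = "sell" if op == "buy" else "buy"
            qty_b = orders[b].get((symb, opp), 0)
            m = min(qty_a, qty_b)
            if m > 0:
                orders[a][(symb, op)] -= m
                orders[b][(symb, opp)] -= m
                fills.append((a, b, symb, m))
    return fills
```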
At this point, we only implement this relatively degenerated form of multiparty matching. We provide security for a semi-honest bank and malicious clients. In the multiparty setting, there are further challenges that have to be explored, such as what information is leaked by the functionality due to partial matches (i.e., client A can fulfill its order, say, selling 1000 shares by matching with client B and C, each wishing to buy 500 shares). Moreover, to achieve malicious security, the protocol also has to guarantee that the bank does not discriminate against clients, e.g., when two clients are both interested in buying some security, then it treats them fairly and does not prefer to match the "big client" over the "small client." In fact, it is impossible to support security against a malicious bank in this case already because of the star network - the clients communicate through the bank with no authentication (see [8]). Therefore, achieving malicious security would require some different setups and further techniques. We leave this for future research.
From a business perspective, the clients generally do trust the bank, and the bank is also highly regulated and will not risk its reputation by attempting to cheat. Therefore, semi-honest bank generally suffices.
**Secure minimum protocol.** At the heart of our Prime Match engine is a secure protocol for comparing two input values \(\mathsf{axe}_{1}\) and \(\mathsf{axe}_{2}\), each in \(\{0,\ldots,2^{n}-1\}\subset\mathbb{F}_{q}\). The protocol, given the bit-decompositions of \(\mathsf{axe}_{1}\) and \(\mathsf{axe}_{2}\), computes the minimum between the two. We have a two-party variant (bank to client) and a three-party variant when only two parties have inputs (client-to-client) and the third party (the server) helps in the computation. For the latter, an interesting property of our protocol is that the two clients only perform linear operations, and therefore can operate non-interactively on encrypted inputs (or secret-shared, or homomorphic commitments, etc.). The server facilitates the computation. For \(\ell\)-bit inputs, our protocol runs in three rounds of interaction and with \(O(\ell^{2})\) communication, where in the first round clients provide their input, and in the last (third) round the output is revealed. The protocols also offer malicious security.
**Implementation and evaluation.** All three scenarios were implemented, and we report running times in Section 5. On the bottom level, both bank-to-client and client-to-client protocols can process roughly 10 symbols per second with security against malicious clients under conventional machines with commodity hardware. Our system is running live, in production by J.P. Morgan. To the best of our knowledge, this is the first MPC solution running live in the financial world. Commercially, the main advantage of the system is the increased opportunities for clients to find matches.
As clients do not wish to spend resources to use such a service (installation of packages, maintenance cost, etc.), and cannot commit to providing tech resources before testing the product, Prime Match is implemented as a browser service. This raises several challenges in the implementation, see Section 5. Moreover, in the client-to-client matching a star topology network is required where clients communicate only with the bank. Clients do not wish to establish communication with other clients and reveal their identities to other clients.
**Our contributions.** To conclude, our contributions are:
* We identify a real-world problem in which cryptography significantly simplifies and improves the current inventory matching procedure.
* We provide two new protocols: bank-to-client inventory matching and client-to-client inventory matching. Those completely replace the current method which leaks information and misses potential matches. Our protocols are novel and are specifically tailored to the problem at hand. We do not just use generic, off-the-shelf, MPC protocols (see Section 1.3 for a discussion).
* At the heart of our matching engine is a novel two-round comparison protocol that minimizes interaction and requires only linear operations.
* J.P. Morgan.
### Related Work
**Prior works on volume (quantity) matching.** We now compare the prior privacy-preserving volume matching architectures [18, 5, 14] to Prime Match. The MPC-based volume matching constructions of [18, 14] derive their security by separating the system's _service operator/provider_ into several (e.g., 3) distinct servers, whose collusion would void the system's security guarantees. The clients submit their _encrypted_ orders to the servers by secret sharing, such that no single server can recover the encrypted orders. The clients have no
control over these servers and no clear way to prevent them from colluding.
Allowing clients to _themselves_ serve as contributing operators of the system would present its own challenges. For instance, it would impose a disproportionate computational burden on those clients who choose to serve as operators. Moreover, it is unclear how to incentivize clients to run heavy computations, and to play the role of the operators.2
Footnote 2: Part of the success of the Prime Match system is related to the fact that clients are offered a web service to participate in the system which requires minimal tech support by the clients.
The fully homomorphic approach of [7] imposes a computational burden on a single server in a star topology network in which clients communicate with the server. Moreover, the concrete efficiency of the proposed GPU-FHE scheme is slow. Furthermore, the scheme of [7] does not offer malicious security. FHE-based solutions for malicious security are much less efficient than the ones based on MPC.
**Prior works on privacy-preserving dark pools.** A recent line of research has attempted to protect the information contained in dark pools [5, 15] run by an operator. The systems described in these works allow users to submit orders in an encrypted form; the markets' operator then compares orders "through the encryptions", unveiling them only if matches occur. The functionality of privacy-preserving dark pools is a continuous double auction in which apart from the _direction_ (buy or sell) and a desired trading _volume_, a _price_ (indicating the "worst" price at which the participant would accept an exchange) is submitted. The operator "matches" compatible orders, which, by definition, have opposite directions, and for which the price of the buy order (the "bid") is at least the price of the sell order (the "ask"). [14, 15, 31] are based on MPC with multiple operators and the work of [5] is based on FHE.
Dark pools are different from our setting, as matches are also conditioned on an agreement on a price (requiring many more comparisons), leading to more complex functionality. In comparison, inventory matching is a simple double auction, which is periodic, with a single fixed price per symbol. Moreover, dark pools support high-frequency trading, which means that they have to process orders very fast. All prior works' performance on dark pools (including multi-server dark pools) does not suffice for high-frequency trading. In comparison, axe-list matching is a much slower process; with the current, insecure procedure of axe-matching, a few minutes might elapse between when the bank sends its axe list and the time the client submits its orders. Since ensuring privacy introduces some overhead, clients might not necessarily prefer a slower privacy-preserving dark pool over a fast ordinary dark pool. Furthermore, secure comparison is a necessary building block for dark pools. Any of the comparison protocols from prior works, [16, 19, 28, 29, 33, 36], including ours, can be used for dark pools, but all of them have some overhead. Unfortunately, none of these works leads to a fast dark pool (in a star topology network) whose running times are close to those of a dark pool operating on plaintexts. Achieving a comparison fast enough to be suitable for high-frequency trading is an interesting open problem.
The work of Massacci et al. [30] considers a distributed market exchange for futures assets whose functionality involves multiple steps, one of which includes the dark pool functionality. Their experiments show that their system can handle up to 10 traders. Moreover, orders are not concealed: in particular, an aggregated list of all waiting buy and sell orders is revealed, which is not the case in our solution or in the dark pool solutions. Note that there are works that propose dark pool constructions on the blockchain [32, 6, 24], which is not the focus of our work. Moreover, these solutions have different guarantees and security goals. None of the above solutions is in production.
**Prior works on secure 3-party Less Than comparison.** There are several works in the literature that propose secure comparison protocols of two values in the information-theoretic setting [2, 16, 19, 28, 33, 36]. See Table 1 for a detailed comparison of these works compared to ours. Our protocol does not require preprocessing and runs in 2 rounds of interaction. Our cost incurs an \(\ell^{2}\) overhead since we secret share \(\ell\) bit numbers in a field of size \(\ell\). Similar overhead also appears in prior works. The security parameter \(\lambda\) overhead is required due to the use of coin flipping and the additional use of commitments in the malicious protocol. The main reason for the higher overhead of prior secret sharing-based protocols in Table 1 is that they require interaction per secure multiplication leading to an increased round complexity (\(\approx\log\ell\)). Our protocol does not require any secure multiplications, which is a significant benefit in upgrading our passive protocol to one with malicious security.
The works of [20, 21, 23, 27], based on multiplicative/additive homomorphic encryption, provide 2 (or constant) round solutions, but they only offer passive security. The computational cost is capped at \(O(\lambda\cdot\ell)\) modular multiplications. Moreover, some works require a trusted setup assumption to generate the public parameters; for instance, the modulus generation of the homomorphic Paillier encryption-based solutions.
The most recent work of [2], based on functional secret sharing in the preprocessing model, is a three-round solution offering only passive security with the cost of \(O(\ell)\) PRG calls in its online phase.
### Why Specifically-Tailored Protocols?
A natural question is why we design a specifically tailored protocol for the system, instead of just using any generic, off-the-shelf secure computation protocols. Those solutions are based on securely emulating arithmetic or Boolean circuits, and require translating the problem at hand to such a circuit. Specifically, for our client-to-client matching algorithm, which is a three-party secure protocol with one corruption, it
looks promising to use some generic MPC protocols that are based on replicated secret sharing, such as [3, 17] or garbled circuits [26, 37].
There are two main requirements from the system (from a business perspective) that leads us to design a specifically-tailored protocol and not a generic MPC: (1) The need for a constant number of rounds; (2) Working with committed inputs. Furthermore, no offline preprocessing is possible since clients wish to participate only during the live matching phase. We provide a comparison with generic MPC techniques in Appendix I.
### Overview of our Techniques
We focus in this overview on the task of client-to-client matching (see Figure 2): A three-party computation between two clients that communicate through the bank. We present our solution while hiding only the clients' quantities axe. However, our detailed protocol additionally hides both the directions and the symbols. We present our protocol in the semi-honest setting and then explain how to achieve malicious corruption.
**Semi-honest clients and server:** The client provides secret shares (and commitments) for all possible symbols and for the two possible sides. If a client is not interested in buying (resp. selling) a particular stock, it provides \(0\) as its input for that symbol and side. It is assumed that the total number of symbols is around \(1000-5000\), and of course, the number of sides is \(2\). Thus, each party has to provide roughly \(2000-10000\) values. To see if there is a match between clients A and B on a particular stock, we securely compute the minimum between the values the parties provided with opposite sides (i.e., A sells and B buys, or B sells and A buys).
Each one of the clients first secret shares its secret value axe using an additive secret sharing scheme. The two clients then exchange shares3. Then, they decide on the matching quantity by computing two bits indicating whether the two quantities are equal or which one of the two is minimal.
Footnote 3: The communication model does not allow the two clients to talk directly, and each client talks only to the server. However, using encryption and authentication schemes, the two clients can establish a secure channel while the server just delivers messages for them.
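The first step, additive secret sharing of the quantity over a prime field, is elementary. A toy Python sketch follows (the modulus here is an illustrative choice, not the field used by the system):

```python
import secrets

Q = 2**61 - 1   # illustrative prime modulus

def share(x, q=Q):
    """Split x into two additive shares: x = x1 + x2 (mod q)."""
    x1 = secrets.randbelow(q)
    x2 = (x - x1) % q
    return x1, x2

def reconstruct(x1, x2, q=Q):
    return (x1 + x2) % q

a1, a2 = share(100)          # client C1 shares its quantity axe = 100
assert reconstruct(a1, a2) == 100
```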
We design a novel algorithm for computing the minimum. The algorithm consists of two phases. As depicted in Figure 3, the first phase works on the shares of the two secrets \((\mathsf{a},\mathsf{b})\), exchanged via the matching engine using symmetric key encryption, while performing only linear operations on them (\(\mathsf{min}\) protocol). Looking ahead, each one of the two clients would run this phase, without any interaction, on its respective shares. The result would be shares of some secret state \((\mathsf{d}_{0},\mathsf{d}_{1})\) in which some additional non-linear processing is needed after reconstruction to obtain the final result. However, the secret state can be simulated with just the result of the computation - i.e., the two bits indicating whether the two numbers are equal or which one is minimal. Therefore, at the end of the first phase, the two clients can send the shares to the server, who reconstructs the secret state and learns the result, again using just local (this time, non-linear) computations.
Our minimum protocol \(\mathsf{min}\) is described in Section 4.2, and we overview our techniques and contributions in the relevant section. Our semi-honest protocol is given in Section 4.3.
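The full \(\Pi_{\mathsf{min}}\) construction is given in Section 4.2 and is not reproduced in this overview. To make the overall shape concrete, the snippet below shows a classical DGK-style reduction on plaintext bits: the comparison \(a<b\) is equivalent to some entry of a derived vector being zero, which is exactly the kind of statement that the server-side one-out-of-many proof (Section 4.4) is designed to handle. This is a textbook construction used only for illustration, not the protocol of this paper.

```python
def bits(x, ell):
    return [(x >> i) & 1 for i in range(ell)]   # little-endian bit decomposition

def dgk_vector(a, b, ell):
    """DGK-style comparison vector: a < b iff some entry c_i equals zero.

    c_i = a_i - b_i + 1 + 3 * sum_{j > i} (a_j XOR b_j).
    """
    A, B = bits(a, ell), bits(b, ell)
    c = []
    for i in range(ell):
        higher = sum(A[j] ^ B[j] for j in range(i + 1, ell))
        c.append(A[i] - B[i] + 1 + 3 * higher)
    return c

a, b, ell = 37, 52, 8
assert (a < b) == any(ci == 0 for ci in dgk_vector(a, b, ell))
```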
**Malicious clients.** We now discuss how to change the protocol to protect against malicious clients.
Zooming out from computing minimum, the auction works in two phases: a "registration" stage, where clients submit their orders, and the matching stage, where the clients and the bank run the secure protocols to find matching orders. In the malicious case, the parties submit commitments to the quantities of their orders to the server. The list of participants is not known in advance, only the clients who submitted a commitment can participate in the current matching phase. Moreover, the list of participants (at each run) is not public and is only known to the server.
| Protocol | Offline comm. | Online comm. | Offline comp. | Online comp. | Rounds | Security | Corruption |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [33] | - | \(O\big((\ell\log\ell)\cdot(\ell+s)\big)\) | - | \(O\big((\ell\log\ell)\cdot(\ell+s)\big)\) | 31 | passive | HM |
| [16] | - | \(O\big(\ell\cdot(\ell+s)\big)\) | - | \(O\big(\ell\cdot(\ell+s)\big)\) | \(O(\log\ell)\) | passive | HM |
| [36] | - | \(O\big(\ell^{2}+\log\ell\big)\) | - | \(O(\ell^{2}+\log\ell)\) | \(O(\log\ell)\) | passive | DM |
| This work | - | \(O\big(\ell^{2}+\log\ell\big)\) | - | \(O(\ell^{2}+\log\ell)\) | 2 | passive | DM |
| [28] | \(O(\ell^{2})\) | \(O\big(\ell\cdot(\ell+s)\big)\) | \(O(\ell^{2})\) | \(O(\log\ell\cdot(\ell+s))\) | \(O(\log\ell)\) | active | HM |
| [19] | - | \(O\big((\ell\log\ell)\cdot(\ell+s)\big)\) | - | \(O\big((\ell\log\ell)\cdot(\ell+s)\big)\) | 44 | active | HM |
| [29] | \(O(\ell)\) | \(O\big((\ell\log\ell)\cdot(\ell+s)\big)\) | \(O(\ell)\) | \(O(\ell\cdot(\ell+s))\) | \(O(\log\ell)\) | active | DM |
| This work | - | \(O(\ell\cdot(\ell+\lambda))\) | - | \(O(\ell\cdot(\ell+\lambda))\) | 2 | active | HM |

Table 1: Cost of passive and active comparison protocols in terms of offline and online communication and computation complexity, number of rounds, security, and supported corruptions. HM stands for honest majority, while DM stands for dishonest majority. \(\ell\) denotes the bit length of the input, \(s\) is the statistical security parameter, and \(\lambda\) is the computational security parameter. The work of [29] achieves statistical security over arithmetic fields but perfect security over arithmetic rings.
Of course, clients have to be consistent, and cannot use different values in the matching phase and in the registration phase. In the matching phase, the clients secret shares their inputs (additive secret sharing), and prove using a Zero-Knowledge (ZK) proof that the shares define the committed value provided in the registration phase.
More specifically, client \(C_{1}\) commits to \(\mathsf{a}\) during registration, i.e., sends \(\mathsf{Com}(\mathsf{a})\) to the server, and commits to the shares \((\mathsf{a}_{1},\mathsf{a}_{2})\) used in the minimum protocol by sending \(\mathsf{Com}(\mathsf{a}_{1})\) and \(\mathsf{Com}(\mathsf{a}_{2})\) to the server. It also proves in ZK the statement that \(\mathsf{Com}(\mathsf{a})=\mathsf{Com}(\mathsf{a}_{1})+\mathsf{Com}(\mathsf{a}_{2})\), given that the commitment scheme is linearly homomorphic, allowing additions to be performed on committed values.
On top of the basic semi-honest protocol (as depicted in Figure 3) we also exchange the messages shown in Figure 4 where every party forwards a commitment to the other party for the share that it does not hold. Client \(C_{1}\) receives a commitment to \(\mathsf{b}_{2}\) and client \(C_{2}\) receives a commitment to \(\mathsf{a}_{1}\).
Next, recall that our minimum protocol requires only linear work from the clients, and thus it allows to work on any linearly-homomorphic cryptosystem, such as linear secret sharing scheme, linearly homomorphic commitments, linearly homomorphic encryption scheme, and so on. In the semi-honest setting, we used this property to work only on the secret shares. We run the linear algorithm three times in parallel, on different inputs:
1. First, each client simply runs the algorithm on additive shares, just as in the semi-honest solution. This is depicted in Figure 3. Running the algorithm on those shares would result in shares of some secret state that will be delivered to the server. The server reconstructs the state and computes the result from this state.
2. Second, the parties run the algorithm on the commitments of the other party's share. This is depicted in Figure 4. Since the commitment scheme is also linearly-homomorphic, it enables Alice to compute a commitment of what Bob is supposed to send to the server in the first invocation, and vice versa.
3. Third, the parties also compute (again, using only linear operations!) information that allows the server to learn the openings of the other party's commitment. This enables the server to check that all values it received in the first invocations are correct.
**Malicious server.** Our final system does not provide security against a malicious server, unless the two clients can authenticate themselves to each other, or can talk directly. We show that if the client can communicate to each other, then we can also support malicious server for the comparison protocol.
The server receives shares of some secret states, together with commitments of the secret states. It then reconstructs the secret state and checks for consistency. It then has to perform some non-linear operations on the secret state to learn the result. Applying generic ZK proofs for proving that the non-linear operation was done correctly would increase the overhead of our solution. Luckily, the non-linear operation that the server performs is ZK-friendly; specifically, it is enough to give a one-out-of-many proof in ZK (i.e., given a vector, proving that one of the elements in the vector is zero; see Theorem 4.4). For this particular language, there exist fast ZK solutions [25]. See Section 4.4 for a description and details.
**Organization.** The paper is organized as follows. In Section 2 we provide the preliminaries, while some are deferred to the appendices. In Section 3 we provide the main matching engine functionality. In Section 4 we provide our protocol for computing the minimum, including the semi-honest and the malicious versions. In Section 5 we report the system performance and in Appendix J we mention challenges pertaining to the deployment of our system.
## 2 Preliminaries
Some preliminaries are deferred to Appendix A.
**Notations.** We use PPT as an acronym for probabilistic polynomial time. We use \(\lambda\) to denote the security parameter, and \(\mathsf{negl}(\lambda)\) to denote a negligible function (a function that is smaller than any polynomial for sufficiently large \(\lambda\)).
**Commitment schemes.** A commitment scheme is a pair of probabilistic algorithms \((\mathsf{Gen},\mathsf{Com})\).
Figure 4: Client-to-client matching protocol for computing the minimum in the presence of a malicious adversary. In addition to values computed in Figure 3, the parties compute commitments of the value that the other participant is supposed to send to the server.
Figure 3: Client-to-client matching protocol for computing the minimum between the quantities \(\mathsf{a}\) from client \(C_{1}\) and \(\mathsf{b}\) from client \(C_{2}\) in the semi-honest setting. As described in Footnote 3, the communication between the two clients through the server is encrypted, and so the view of the server in this communication is just \(d_{1}\), \(d_{2}\).
Given public parameters \(\mathsf{params}\leftarrow\mathsf{Gen}(1^{\lambda})\) and a message \(m\), \(\mathsf{com}:=\mathsf{Com}(\mathsf{params},m;r)\) returns a "commitment" to the message \(m\). To reveal \(m\) as an opening of \(\mathsf{com}\), the committer simply sends \(m\) and \(r\) (this is sometimes called "decommitment"). For notational convenience, we often omit \(\mathsf{params}\). A commitment scheme is _homomorphic_ if, for each \(\mathsf{params}\), its message, randomness, and commitment spaces are abelian groups, and the corresponding commitment function is a group homomorphism. We always write message and randomness spaces additively and write commitment spaces multiplicatively. See Section A for more details.
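As a concrete illustration, the toy Python sketch below uses a Pedersen-style commitment, one standard example of a linearly homomorphic commitment; the parameters are tiny and insecure, chosen only to make the homomorphism visible, and this is not necessarily the scheme or parameter set deployed in Prime Match.

```python
# Toy Pedersen-style commitment: Com(m; r) = g^m * h^r (mod p), with messages and
# randomness in Z_q and commitments in the order-q subgroup of Z_p^*.
# Tiny, insecure parameters chosen only to show the linear homomorphism.
import secrets

q = 1019                 # prime order of the subgroup
p = 2 * q + 1            # = 2039, a safe prime
g = pow(2, 2, p)         # a generator of the order-q subgroup (a square mod p)
h = pow(3, 2, p)         # second generator; in a real scheme its dlog w.r.t. g must be unknown

def commit(m, r=None):
    """Return (commitment, randomness) for message m."""
    if r is None:
        r = secrets.randbelow(q)
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p, r

def open_ok(com, m, r):
    return com == (pow(g, m % q, p) * pow(h, r % q, p)) % p

# Linearity: the product of commitments commits to the sum of messages.
a = 130
a1 = 57
a2 = (a - a1) % q                          # additive shares of a
C_a,  r_a  = commit(a)
C_a1, r_a1 = commit(a1)
C_a2, r_a2 = commit(a2, (r_a - r_a1) % q)  # pick randomness so the shares' commitments combine to C_a
assert (C_a1 * C_a2) % p == C_a            # Com(a1) * Com(a2) = Com(a1 + a2)
assert open_ok(C_a, a, r_a)
```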
**Zero-knowledge proofs.** We use non-interactive zero-knowledge for three languages. See formal treatment in Appendix A.1:
* **Commitment equality proof:** Denoted as the relation \(\mathcal{R}_{\mathsf{ComEq}}\), the prover convinces the verifier that the two given commitments \(c_{0},c_{1}\) hide the same value.
* **Bit proof:** Denoted as the relation \(\mathcal{R}_{\mathsf{BitProof}}\), allows the prover to prove that a commitment \(c\) hides a bit, i.e., a value in \(\{0,1\}\).
* **One-out-of-many proof:** Denoted as the relation \(\mathcal{R}_{\mathsf{OneMany}}\), allows a prover to prove that one of the commitments \(V_{0},\ldots,V_{n}\) in the statement is a commitment of \(0\).
## 3 The Prime Match Main Functionalities
We now describe our Prime Match inventory matching functionalities. We describe the bank-to-client functionality (Section 3.1), the client-to-client functionality (Section 3.2), and the multi-client system (Section 3.3).
### Bank to Client Matching
This variant is a two-party computation between a bank and a client. The bank tries to find matching orders between its own inventory and each client separately. As mentioned in the introduction, this essentially replaces the axe-matching procedure as conducted today with a privacy-preserving mechanism. Today the bank sends its inventory list to the client, who then submits orders to the bank. Note, however, that if the bank runs twice with two different clients, the bank does not hold some security X, and the two clients are interested in X with opposite directions, then such a potential match will not be found.
The functionality proceeds as follows. The bank provides its axe list: the securities it is interested in and, for each security, whether it is interested in long (buy) or short (sell) exposure and the quantity. The client sends its own list in the same format. The functionality finds whether the bank and the client are interested in the same securities with opposite sides, and in that case it outputs the matching orders, where the matched quantity is the minimum of the two amounts.
**FUNCTIONALITY 3.1** (\(\mathcal{F}_{\mathrm{B2C}}\)-Bank-to-client functionality).: The functionality is parameterized by the set of all possible securities to trade, a set \(U\).
**Input:** The bank \(P^{*}\) inputs a list of orders \((\mathsf{symb}^{*}_{i},\mathsf{side}^{*}_{i},\mathsf{amount}^{*}_{i})\) where \(\mathsf{symb}^{*}_{i}\in U\) is the security, \(\mathsf{side}^{*}_{i}\in\{\mathsf{buy},\mathsf{sell}\}\) and \(\mathsf{amount}^{*}_{i}\) is an integer. The client sends its list of the same format, \((\mathsf{symb}^{C}_{i},\mathsf{side}^{C}_{i},\mathsf{amount}^{C}_{i})\).
**Output:** Initialize a list of matches \(M\). For each \(i,j\) such that \(\mathsf{symb}^{*}_{i}=\mathsf{symb}^{C}_{j}\) and \(\mathsf{side}^{*}_{i}\neq\mathsf{side}^{C}_{j}\), add \((\mathsf{symb}^{*}_{i},\mathsf{side}^{*}_{i},\mathsf{side}^{C}_{j},\min\{\mathsf{amount}^{*}_{i},\mathsf{amount}^{C}_{j}\})\) to \(M\). Output \(M\) to both parties.
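For reference, the plain-Python sketch below captures only the ideal input/output behaviour of Functionality 3.1, with no cryptography and with illustrative order values; it is not the secure protocol.

```python
# Ideal (non-private) reference of the bank-to-client matching functionality:
# every pair of orders on the same security with opposite sides is matched for
# the minimum of the two quantities. No cryptography is involved here.
def bank_to_client_match(bank_orders, client_orders):
    """Orders are tuples (symbol, side, amount) with side in {'buy', 'sell'}."""
    matches = []
    for symb_b, side_b, amt_b in bank_orders:
        for symb_c, side_c, amt_c in client_orders:
            if symb_b == symb_c and side_b != side_c:
                matches.append((symb_b, side_b, side_c, min(amt_b, amt_c)))
    return matches

bank = [("IBM", "sell", 700), ("AAPL", "sell", 200)]
client = [("IBM", "buy", 1000), ("AAPL", "sell", 300)]
# Only IBM matches; from the output alone the client cannot tell whether the
# bank has no AAPL interest or is interested in AAPL with the same side.
print(bank_to_client_match(bank, client))   # [('IBM', 'sell', 'buy', 700)]
```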
From a business perspective, it is important to note that the input of the client (and the bank) serves as a "commitment" - if a match is found then it is executed right away.
The functionality resembles a set intersection. In set intersection, if some element is in a party's input set but not in its output, that party can conclude that the element is not contained in the other party's set. Here, if a party does not find a particular symbol in its output although it did provide it as an input, then it is still uncertain whether the other party is not interested in that security, or whether it is interested but with the same side. We show how to implement the functionality in the presence of a malicious client or a semi-honest server in Appendix E.
**Bank to multiple clients.** In the actual system, the bank has to serve multiple clients. This is implemented by a simple (sequential) composition of the functionality. Specifically, the functionality is now reactive: clients first register that they are interested in participating. The bank then runs Functionality 3.1 with the clients - either on a first-come-first-served basis or in some random ordering. We omit the details and exact formalism as they are quite natural given a semi-honest bank.
**Range functionalities.** In Appendix G, we show a variant of the protocol where each party inputs a _range_ in which it is interested and not just one single value. I.e., if a matched order does not satisfy some minimal value, it will not be executed. Since the minimum value does not change throughout the execution, whenever the bank receives \(0\) as a result of the execution it cannot decide whether the client is not interested in that particular symbol, or whether it is interested - but the matched amount does not satisfy the minimum threshold.
### Client to Client Matching
In this variant, the bank has no input: it tries to find potential matches between two clients that wish to compare their inventories. This is a three-party computation where the bank just facilitates the interaction. It is important to notice that the clients do not know each other,
and do not know who they are being paired with. The bank selects the two clients and offers them to be paired.
**FUNCTIONALITY 3.2** (\(\mathcal{F}_{\text{C2C}}\)-Client-to-client functionality).: The functionality is parameterized by the set of all possible securities to trade, a set \(U\). This is a three-party functionality between two clients, \(P_{1}\) and \(P_{2}\), and the bank \(P^{*}\).
**Input:** The client \(P_{1}\) inputs a list of orders \((\mathsf{symb}^{1}_{i},\mathsf{side}^{1}_{i},\mathsf{amount}^{1}_{i})\). The client \(P_{2}\) inputs a list of orders \((\mathsf{symb}^{2}_{i},\mathsf{side}^{2}_{i},\mathsf{amount}^{2}_{i})\), and the bank has no input.
**Output:** Initialize a list of matches \(M\). For each \(i,j\) such that \(\mathsf{symb}^{1}_{i}=\mathsf{symb}^{2}_{j}\) and \(\mathsf{side}^{1}_{i}\neq\mathsf{side}^{2}_{j}\), add \((\mathsf{symb}^{1}_{i},\mathsf{side}^{1}_{i},\mathsf{side}^{2}_{j},\min\{\mathsf{amount}^{1}_{i},\mathsf{amount}^{2}_{j}\})\) to \(M\). Output \(M\) to all three parties.
In the next section, we show how to implement this functionality in the presence of a malicious client or a malicious server, assuming that the two clients can communicate directly, or have a public-key infrastructure. When the two clients can communicate only through the server and there is no public-key infrastructure (PKI) or any other setup, there is no authentication and the server can impersonate the other client. We therefore cannot hope to achieve malicious security. We achieve security in the presence of a semi-honest server. See a discussion in the next subsection.
### The Multi-Client System
We now proceed to the multiparty auction. Here we have parties that register with their intended lists, and the bank facilitates the orders by pairing the clients according to some random order. The functionality is now reactive; The parties first register, in which they announce that they are willing to participate in the next auction, and they also commit to their orders. In the second phase, the bank selects pairs of clients in a random order to perform client-to-client matching. Looking ahead, typically there are around 10 clients that participate in a given auction.
For simplicity of exposition and to ease readability, we write the functionality as the universe is just a single symbol. Moreover, instead of sending the side explicitly, the client sends two integers \(L\) and \(S\), representing its interest in long (buy) or short (sell) exposure, respectively. Rationally, each party would put one of the integers as 0 (as otherwise, it would just pay extra fees to the bank). Generalizing the functionality to deal with many symbols is performed in a natural manner, where the number of total symbols is 1000-5000 in practice. The main functionality can process all the different symbols in parallel.
**FUNCTIONALITY 3.3** (\(\mathcal{F}_{\text{MC}}\) - Multi-client matching).: This is an \(n+1\) party functionality between \(n\) clients \(P_{1},\dots,P_{n}\) and a bank \(P^{*}\).
Upon initialization, \(\mathcal{F}_{\text{MC}}\) initializes a list \(\mathcal{P}=\emptyset\) and two vectors \(\mathcal{L}\) and \(\mathcal{S}\) of size \(n\), where \(n\) bounds the total number of possible clients.
\(\mathcal{F}_{\text{MC}}.\mathbf{Register}(P_{i},L_{i},S_{i})\). Store \(\mathcal{L}[i]=L_{i}\) and \(\mathcal{S}[i]=S_{i}\) and add \(i\) to \(\mathcal{P}\). Send to the bank \(P^{*}\) the message \(\mathbf{registered}(P_{i})\).
\(\mathcal{F}_{\text{MC}}.\mathbf{Process}()\).
Choose a random ordering \(O\) over all pairs of \(\mathcal{P}\).
For the next pair \((i,j)\in O\) try to match between \(P_{i}\) and \(P_{j}\) (we can assume wlog that always \(i\leq j\)):
Compute \(M_{0}=\min(\mathcal{L}[i],\mathcal{S}[j])\), \(b_{0}^{0}=(\mathcal{L}[i]\leq\mathcal{S}[j])\), \(b_{1}^{0}=(\mathcal{S}[j]\leq\mathcal{L}[i])\).
Compute \(M_{1}=\min(\mathcal{S}[i],\mathcal{L}[j])\), \(b_{0}^{1}=(\mathcal{S}[i]\leq\mathcal{L}[j])\) and \(b_{1}^{1}=(\mathcal{L}[j]\leq\mathcal{S}[i])\).
Send \((i,j,M_{0},M_{1},b_{1}^{0},b_{1}^{1})\) to \(P_{i}\), and \((i,j,M_{0},M_{1})\) with \((b_{0}^{0},b_{1}^{0},b_{0}^{1},b_{1}^{1})\) to \(P^{*}\).
Update \(\mathcal{L}[i]=\mathcal{L}[i]-M_{0}\) and \(\mathcal{S}[j]=\mathcal{S}[j]-M_{0}\).
Update \(\mathcal{S}[i]=\mathcal{S}[i]-M_{1}\) and \(\mathcal{L}[j]=\mathcal{L}[j]-M_{1}\).
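The plaintext sketch below illustrates the Register/Process logic of Functionality 3.3 for a single symbol; values are seen in the clear here purely for illustration, whereas the actual system only ever operates on shares and commitments.

```python
import random
from itertools import combinations

# Plaintext sketch of the Register/Process logic for a single symbol.
# L[i] / S[i] are client i's long / short interests; pairs are processed in a
# random order and matched quantities are deducted as the auction proceeds.
def multi_client_match(L, S, seed=0):
    L, S = list(L), list(S)
    pairs = list(combinations(range(len(L)), 2))
    random.Random(seed).shuffle(pairs)       # the random ordering O over all pairs
    matches = []
    for i, j in pairs:
        m0 = min(L[i], S[j])                 # client i's long vs. client j's short
        m1 = min(S[i], L[j])                 # client i's short vs. client j's long
        if m0 or m1:
            matches.append((i, j, m0, m1))
        L[i] -= m0; S[j] -= m0
        S[i] -= m1; L[j] -= m1
    return matches

# Client 0 wants to buy 500; clients 1 and 2 want to sell 300 and 400.
print(multi_client_match(L=[500, 0, 0], S=[0, 300, 400]))
```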
**On malicious server.** Our final protocol (see Appendix F) for \(\mathcal{F}_{\text{MC}}\) is secure in the presence of a semi-honest \(P^{*}\) (and a malicious client). Inherently, clients communicate through a star network where the bank facilitates the communication. Moreover, we assume no PKI, clients do not know how many clients are registered in the system, and how many clients are participating in the current auction. This can be viewed as "secure computation without authentication", in which case the server can always "split" the communication and disconnect several parties from others (see [8] for a formal treatment).
We prove security in the presence of a semi-honest server. In fact, our protocol achieves a stronger guarantee than plain semi-honest security, since in particular it runs the underlying comparison protocol (a single invocation of client-to-client matching), which is secure against a malicious server.
Another relaxation that we make is that the ordering of pairs is random, and we do not have a mechanism to enforce it. Note also that the functionality leaks some information to the server; in particular, after finding a match, the bank executes it immediately. The bank can infer information about whether two values equal 0, and therefore whether a client is not interested in a particular symbol. In contrast, each client just learns whether its value is smaller than or equal to the value of the other party, and therefore when it inputs 0 it can never infer whether the other party is interested in that symbol or not.
## 4 Securely Computing the Minimum
A pivotal building block in Prime Match is a secure minimum protocol. In Section 4.1, we review our functionality for computing the minimum. We focus on the case of client-to-client matching with an aiding server. We show how to convert the protocol for two parties in Appendix E.
In Section 4.2 we present the underlying idea for computing the minimum. The algorithm computes the minimum while using only linear operations (looking ahead, those would be computed on shared values) while pushing the non-linear operations on reconstructed data. In Sections 4.3 and 4.4 we show a semi-honest and a malicious protocol for computing the minimum, respectively.
### The Minimum Functionality
After receiving a secret integer from each of the two parties, the functionality compares them and outputs two bits, which indicate which of the two inputs is smaller than the other, or whether they are equal. It also gives the result to the server.
**FUNCTIONALITY 4.1** (\(\mathcal{F}_{\text{comp}}\): Server-aided secure minimum functionality).:
Consider two players, \(P_{0}\) and \(P_{1}\), and a server \(P^{*}\).
* **Input:**\(P_{0}\) and \(P_{1}\) respectively send integers \(v_{0}\) and \(v_{1}\) in \(\{0,\ldots,2^{n}-1\}\) to \(\mathcal{F}_{\text{comp}}\).
* **Output:**\(\mathcal{F}_{\text{comp}}\) sends \(b_{0}:=(v_{0}\leq v_{1})\) to \(P_{0}\), \(b_{1}:=(v_{1}\leq v_{0})\) to \(P_{1}\), \((b_{0},b_{1})\) to \(P^{*}\).
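In the clear, this functionality is trivial; the following short sketch records its ideal behaviour for reference.

```python
# Ideal behaviour of the server-aided comparison functionality: each client
# learns only whether its own quantity is <= the other's; the server gets both bits.
def f_comp(v0, v1):
    b0 = v0 <= v1      # delivered to P_0
    b1 = v1 <= v0      # delivered to P_1
    return b0, b1      # the pair (b0, b1) is also delivered to P^*

print(f_comp(120, 300))   # (True, False): P_0 holds the minimum
```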
In the rest of this section, we will show how to implement this functionality in the presence of a semi-honest (Section 4.3) and malicious adversary (Section 4.4).4
Footnote 4: For ease of presentation, Functionality 4.1 is for the semi-honest version of the protocol; For the malicious case, we will use a slightly different functionality on committed inputs; See Appendix D.
### Affine-Linear Comparison Function
We first describe an abstract algorithm which compares two elements \(v_{0}\) and \(v_{1}\) of \(\{0,\ldots,2^{n}-1\}\subset\mathbb{F}_{q}\), given their bit-decompositions. We separate the algorithm into two parts: ComparisonInitial (Algorithm 1) and ComparisonFinal (Algorithm 2). Neither part uses any underlying cryptographic primitives.
In the first algorithm (ComparisonInitial), all operations on the bit-decompositions of the two inputs \(v_{0}\) and \(v_{1}\) are _linear_. Looking ahead, this will be extremely useful when converting the algorithm into a secure two-party protocol, where \(v_{0}\) and \(v_{1}\) are additively shared between the two parties (or also just committed, encrypted under additively homomorphic encryption scheme, etc.). In particular, this part of the protocol can be executed without any interaction, just as the algorithm itself when \(v_{0}\) and \(v_{1}\) are given in the clear. The second algorithm (ComparisonFinal) can be computed by a _different_ party, given all information in the clear. Looking ahead, this will be executed by the server \(P^{*}\) on the outputs of the first part. This part contains some non-linear operations, however, this part of the algorithm does not have to be translated into a secure protocol.
**Overview Algorithm 1** (ComparisonInitial).: Our approach is inspired by the algorithm of Wagh, Gupta, and Chandran [36, Alg. 3], which compares a _secret-shared_ integer with a _public_ integer. (Specifically, its inputs consist of an array of public bits and an array of secret-shared bits.) We extend the algorithm and allow the comparison of two private integers using only linear operations.
We achieve this \(\mathbb{F}_{q}\)-linearity in the following way. We fix integers \(v_{0}\) and \(v_{1}\) in \(\{0,\ldots,2^{n}-1\}\), with big-endian bit-decompositions given respectively by
\[v_{0}=\sum_{j=0}^{n-1}2^{n-1-j}\cdot v_{0,j},\quad\text{and}\quad v_{1}=\sum_ {j=0}^{n-1}2^{n-1-j}\cdot v_{1,j}\;.\]
We follow the paradigm of [36], whereby, for each \(j\in\{0,\ldots,n-1\}\), a quantity \(w_{j}\) is computed which equals \(0\) if and only if \(v_{0,j}=v_{1,j}\). Meanwhile, for each \(j\in\{0,\ldots,n-1\}\), we set \(c_{j}:=1+v_{0,j}-v_{1,j}+\sum_{k<j}w_{k}\) (we also set \(c_{n}:=\sum_{k=0}^{n-1}w_{k}\), as we discuss below). The crucial observation of [36] is that, for each \(j\in\{0,\ldots,n-1\}\), \(c_{j}=0\) so long as \(v_{0,j}<v_{1,j}\) as bits (that is, if \(1+v_{0,j}-v_{1,j}=0\)) _and_ the higher bits of \(v_{0}\) and \(v_{1}\) agree (inducing the equality \(\sum_{k<j}w_{k}=0\)). By consequence, _some_\(c_{j}\), for \(j\in\{0,\ldots,n-1\}\), must equal \(0\) whenever \(v_{0}<v_{1}\). Similarly, \(c_{n}=0\) whenever \(v_{0}=v_{1}\). In summary, \(v_{0}\leq v_{1}\) implies that some \(c_{j}=0\), for \(j\in\{0,\ldots,n\}\).
The main challenge presented by this technique is to ensure that the opposite implication holds; that is, we must prevent the sum \(c_{j}:=1+v_{0,j}-v_{1,j}+\sum_{k<j}w_{k}\) from equalling \(0\) (possibly by overflowing) modulo \(q\)--that is, even when \(w_{k}\neq 0\) for some \(k<j\)--and hence yielding a "false positive" \(c_{j}=0\), which would falsely assert the inequality \(v_{0}\leq v_{1}\). [36] prevents this phenomenon by ensuring that each \(w_{j}\in\{0,1\}\), and choosing \(2+n<q\) (they set \(n=64\) and \(q=67\)). In fact, [36] defines \(w_{j}:=(v_{0,j}-v_{1,j})^{2}\). Under this paradigm, \(c_{j}:=1+v_{0,j}-v_{1,j}+\sum_{k<j}w_{k}\) is necessarily _non-zero_ so long as either \(v_{0,j}\geq v_{1,j}\) as bits (so that \(1+v_{0,j}-v_{1,j}>0\)) or _any_ \(w_{k}\neq 0\), for \(k<j\).
This squaring operation is nonlinear in the bits \(v_{0,j}\) and \(v_{1,j}\), and so it is unsuitable for our setting. We adopt the following recourse instead, which yields \(\mathbb{F}_{q}\)-linearity at the cost of requiring that the number of bits \(n\in O(\log q)\) (a mild restriction in practice). The key technique is that we may eliminate the squaring--thereby allowing each \(w_{j}\) to remain in \(\{-1,0,1\}\)--provided that we multiply each \(w_{j}\) by a suitable public scalar. In fact, it suffices to multiply each (unsquared) difference \(w_{j}\) by \(2^{2+j}\). In Theorem 4.4 below, we argue that this approach is correct.
Our modifications to [36] also include our computation of the _non-strict_ inequality \(v_{0}\leq v_{1}\)--effected by the extra value \(c_{n}\)--as well as our computation of the opposite non-strict inequality, \(v_{1}\leq v_{0}\), in parallel. The latter computation proceeds identically, except uses \(-1+v_{0,j}-v_{1,j}\in\{-2,-1,0\}\) at each bit.
```
1: Assign \(w_{\text{accum}}:=0\)
2:for\(j\in\{0,\ldots,n-1\}\)do
3: Set \(c_{0,j}:=1+v_{0,j}-v_{1,j}+w_{\text{accum}}\).
4: Set \(c_{1,j}:=-1+v_{0,j}-v_{1,j}+w_{\text{accum}}\)
5: Set \(w_{j}:=(v_{0,j}-v_{1,j})\) and \(w_{\text{accum}}+=2^{2+j}\cdot w_{j}\)
6: Set \(c_{0,n}\) and \(c_{1,n}\) equal to \(w_{\text{accum}}\)
7: Sample a random permutation \(\pi\leftarrow\mathbf{S}_{n+1}\)
8:for\(j\in\{0,\ldots,n\}\)do
9: Sample random scalars \(s_{0,j}\), \(s_{1,j}\) from \(\mathbb{F}_{q}\setminus\{0\}\).
10: Assign \(d_{0,j}:=s_{0,j}\cdot c_{0,\pi(j)}\),
11: Assign \(d_{1,j}:=s_{1,j}\cdot c_{1,\pi(j)}\)
12:return\((d_{0,0},\ldots,d_{0,n})\) and \((d_{1,0},\ldots,d_{1,n})\)
```
**Algorithm 1**\(\mathsf{ComparisonInitial}\left((v_{0,0},\ldots,v_{0,n-1}),(v_{1,0},\ldots,v_{1,n-1})\right)\)
Of course, the intermediate value \(v_{0,j}-v_{1,j}\) need only be computed once per iteration of the first loop.
**Overview of Algorithm 2**\(\mathsf{(ComparisonFinal)}\).: Note that in Algorithm 1, \(w_{\text{accum}}=0\) as long as \(v_{0,j}=v_{1,j}\), and it attains a non-zero value at the first \(j\) for which \(v_{0,j}\neq v_{1,j}\). Up to that point, \((c_{0,j},c_{1,j})=(1,-1)\). At the first \(j\) for which \(v_{0,j}\neq v_{1,j}\):
* If \(v_{0}>v_{1}\) (i.e., \((v_{0,j},v_{1,j})=(1,0)\)), then we get that \((c_{0,j},c_{1,j})=(2,0)\).
* If \(v_{0}<v_{1}\) (i.e., \((v_{0,j},v_{1,j})=(0,1)\)), then we get that \((c_{0,j},c_{1,j})=(0,-2)\).
The algorithm then makes sure that no other value \(c_{b,j}\) equals \(0\), essentially by keeping \(w_{\text{accum}}\) non-zero. If \(v_{0}=v_{1}\) then \(c_{0,n}=c_{1,n}=0\). Finally, all the entries \((c_{0,0},\ldots,c_{0,n}),(c_{1,0},\ldots,c_{1,n})\) are permuted and re-randomized with some random scalars. Observe that if \(v_{0}>v_{1}\) then the values \(d_{0,0},\ldots,d_{0,n}\) are all non-zero, and one of \(d_{1,0},\ldots,d_{1,n}\) is zero. If \(v_{0}<v_{1}\) then exactly one of the values \(d_{0,0},\ldots,d_{0,n}\) is \(0\) and all \(d_{1,0},\ldots,d_{1,n}\) are non-zero; if \(v_{0}=v_{1}\) then each of the two vectors contains exactly one zero entry.
It is crucial that the vectors \((d_{0,0},\ldots,d_{0,n})\) and \((d_{1,0},\ldots,d_{1,n})\) do not contain any information on \(v_{0},v_{1}\) other than whether \(v_{0}\leq v_{1}\) or \(v_{1}\leq v_{0}\). Specifically, these values can easily be simulated given just the two bits \(v_{0}\leq v_{1}\) and \(v_{1}\leq v_{0}\). Therefore, it is safe to give both vectors to a third party, which will perform the non-linear part of the algorithm. For a vector of bits \((x_{0},\ldots,x_{n})\in\{0,1\}^{n+1}\), the operation \(\mathbf{any}_{j=0}^{n}x_{j}\) returns \(1\) iff there exists \(j\in\{0,\ldots,n\}\) such that \(x_{j}=1\). Algorithm 2 simply looks for the \(0\) coordinate in the two vectors. We have:
```
1: Assign \(b_{0}:=\mathbf{any}_{j=0}^{n}\left(d_{0,j}=0\right)\)
2: Assign \(b_{1}:=\mathbf{any}_{j=0}^{n}\left(d_{1,j}=0\right)\)
3:return\(b_{0}\) and \(b_{1}\)
```
**Algorithm 2**\(\mathsf{ComparisonFinal}\left((d_{0,0},\ldots,d_{0,n}),(d_{1,0},\ldots,d_{1,n})\right)\)
In the below theorem, we again consider bit-decomposed integers \(v_{0}=\sum_{j=0}^{n-1}2^{n-1-j}\cdot v_{0,j}\) and \(v_{1}=\sum_{j=0}^{n-1}2^{n-1-j}\cdot v_{1,j}\); we view the bits \(v_{i,j}\) as elements of \(\{0,1\}\subset\mathbb{F}_{q}\). The following theorem is proven in Appendix B:
**Theorem 4.4**.: _Suppose \(n\) is such that \(2+4\cdot(2^{n}-1)<q\). Then for every \(v_{0},v_{1}\in\mathbb{F}_{q}\)_
\[(v_{0}\leq v_{1},v_{1}\leq v_{0})=\] \[\mathsf{ComparisonFinal}\left(\mathsf{ComparisonInitial}\left( \vec{v_{0}},\vec{v_{1}}\right)\right)\,\]
_where \(\vec{v_{0}},\vec{v_{1}}\) are the bit-decomposition of \(v_{0},v_{1}\), respectively. Moreover, for every \(i\in\{0,1\}\):_
* _If_ \(v_{i}\leq v_{1-i}\) _then there exists exactly one_ \(j\in\{0,\ldots,n\}\) _such that_ \(d_{i,j}=0\)_. Moreover,_ \(j\) _is distributed uniformly in_ \(\{0,\ldots,n\}\)_, and each_ \(d_{i,k}\) _for_ \(k\neq j\) _is distributed uniformly in_ \(\mathbb{F}_{q}\setminus\{0\}\)_._
* _If_ \(v_{i}>v_{1-i}\) _then the vector_ \((d_{i,0},\ldots,d_{i,n})\) _is distributed uniformly in_ \(\mathbb{F}_{q}^{n+1}\setminus\{0\}^{n+1}\)_._
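The following Python transcription of Algorithms 1 and 2 runs entirely in the clear (no sharing or commitments) and can serve as a sanity check of Theorem 4.4; the parameters \(n=8\) and \(q=1031\) are illustrative choices satisfying \(2+4\cdot(2^{n}-1)<q\), not values taken from the deployed system.

```python
import random

# Cleartext transcription of Algorithms 1 and 2 over F_q, as a sanity check of
# Theorem 4.4. The bound 2 + 4*(2**n - 1) < q must hold; n = 8, q = 1031 is an
# illustrative choice, not a parameter set taken from the deployed system.
n, q = 8, 1031

def bits_be(v):
    """Big-endian bit decomposition of v on n bits."""
    return [(v >> (n - 1 - j)) & 1 for j in range(n)]

def comparison_initial(v0_bits, v1_bits, rng=random):
    c0, c1, w_accum = [], [], 0
    for j in range(n):
        diff = v0_bits[j] - v1_bits[j]
        c0.append((1 + diff + w_accum) % q)
        c1.append((-1 + diff + w_accum) % q)
        w_accum += (1 << (2 + j)) * diff          # w_j scaled by the public factor 2^(2+j)
    c0.append(w_accum % q)                        # c_{0,n} = c_{1,n} = sum of scaled w_k
    c1.append(w_accum % q)
    pi = list(range(n + 1))
    rng.shuffle(pi)                               # random permutation of the n+1 positions
    d0 = [rng.randrange(1, q) * c0[pi[j]] % q for j in range(n + 1)]
    d1 = [rng.randrange(1, q) * c1[pi[j]] % q for j in range(n + 1)]
    return d0, d1

def comparison_final(d0, d1):
    return any(x == 0 for x in d0), any(x == 0 for x in d1)

for v0 in range(0, 2 ** n, 37):                   # spot-check Theorem 4.4
    for v1 in range(0, 2 ** n, 41):
        b0, b1 = comparison_final(*comparison_initial(bits_be(v0), bits_be(v1)))
        assert (b0, b1) == (v0 <= v1, v1 <= v0)
print("Theorem 4.4 spot-check passed")
```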
We emphasize that Algorithm 1 uses only \(\mathbb{F}_{q}\)-linear operations throughout. A number of our below protocols conduct Algorithm 1 "homomorphically"; that is, they execute the algorithm on elements of an \(\mathbb{F}_{q}\)-module \(M\) which is unequal to \(\mathbb{F}_{q}\) itself. As a basic example, Algorithm 1 may be executed on bits \((v_{0,j})_{j=0}^{n-1}\) and \((v_{1,j})_{j=0}^{n-1}\) which are _committed_, provided that the commitment scheme is homomorphic (its message, randomness and commitment spaces should be \(\mathbb{F}_{q}\)-modules, and its commitment function an \(\mathbb{F}_{q}\)-module homomorphism). Furthermore, Algorithm 1 may be conducted on additive \(\mathbb{F}_{q}\)-shares of the bits \((v_{0,j})_{j=0}^{n-1}\) and \((v_{1,j})_{j=0}^{n-1}\).
In this latter setting, sense must be given to the affine additive constants \(\pm 1\). As in [36], we specify that these be shared in the obvious way; that is, we stipulate that \(P_{0}\) and \(P_{1}\) use the shares \(0\) and \(\pm 1\), respectively.
### The Semi-Honest Protocol
For simplicity, we first describe a protocol that securely computes this functionality in the setting of three-party computation with an honest majority and a _semi-honest_ adversary. We give a maliciously secure version in Protocol 4.3 and prove its security in Section C of the Appendix.
**Theorem 4.5**.: _If \(\Pi_{\mathsf{CT}}\) is a secure coin-tossing protocol, \(G\) is a pseudorandom generator, and the two clients communicate using symmetric authenticated encryption with pseudorandom
ciphertexts, then Protocol 4.2 securely computes Functionality 4.1 in the presence of a semi-honest adversary corrupting at most one party. Each party sends or receives \(O(n^{2}+\lambda)\) bits, where n is the length of the input and \(\lambda\) is the security parameter._
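To illustrate how the linearity of Algorithm 1 is exploited, the sketch below runs the initial phase on additive shares of the bit decompositions, with a shared seed standing in for the coin-tossing protocol and PRG; encrypted transport and all commitments and proofs of the malicious protocol are omitted, so this is only the semi-honest core and not Protocol 4.2 itself.

```python
import random

# Semi-honest core of the comparison on additive shares over F_q: each client
# runs the linear phase (Algorithm 1) on the shares it holds, using common
# randomness derived from a shared seed, and the server adds the two share
# vectors and applies the non-linear final step. Parameters are illustrative.
n, q = 8, 1031                     # must satisfy 2 + 4*(2**n - 1) < q

def bits_be(v):
    return [(v >> (n - 1 - j)) & 1 for j in range(n)]

def share(x):
    r = random.randrange(q)
    return r, (x - r) % q          # additive shares for (P_0, P_1)

def initial_on_shares(v0_sh, v1_sh, party, seed):
    """Algorithm 1 on this party's shares; the constants +/-1 are shared as (0, +/-1)."""
    one = 1 if party == 1 else 0
    rng = random.Random(seed)      # identical public randomness for both parties
    c0, c1, w_accum = [], [], 0
    for j in range(n):
        diff = (v0_sh[j] - v1_sh[j]) % q
        c0.append((one + diff + w_accum) % q)
        c1.append((-one + diff + w_accum) % q)
        w_accum = (w_accum + (1 << (2 + j)) * diff) % q
    c0.append(w_accum)
    c1.append(w_accum)
    pi = list(range(n + 1))
    rng.shuffle(pi)
    s0 = [rng.randrange(1, q) for _ in range(n + 1)]
    s1 = [rng.randrange(1, q) for _ in range(n + 1)]
    d0 = [s0[j] * c0[pi[j]] % q for j in range(n + 1)]
    d1 = [s1[j] * c1[pi[j]] % q for j in range(n + 1)]
    return d0, d1

def server_finalize(view0, view1):
    d0 = [(a + b) % q for a, b in zip(view0[0], view1[0])]
    d1 = [(a + b) % q for a, b in zip(view0[1], view1[1])]
    return any(x == 0 for x in d0), any(x == 0 for x in d1)

v0, v1, seed = 120, 87, 2024
v0_sh = [share(b) for b in bits_be(v0)]    # P_0 shares its bits with P_1
v1_sh = [share(b) for b in bits_be(v1)]    # P_1 shares its bits with P_0
view0 = initial_on_shares([s[0] for s in v0_sh], [s[0] for s in v1_sh], 0, seed)
view1 = initial_on_shares([s[1] for s in v0_sh], [s[1] for s in v1_sh], 1, seed)
print(server_finalize(view0, view1))       # (False, True): v1 <= v0
```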
### The Maliciously Secure Protocol
We now give our malicious protocol for Functionality 4.1 in Protocol 4.3. To ease notation, we denote \(N=\{0,\ldots,n-1\}\). We already gave an overview of the protocol as part of the introduction. The parties commit to their inputs, share their inputs, commit to the shares, and prove that all of those are consistent. Then, each party can operate on the shares it received (as in the semi-honest protocol), but also on the commitments that it holds of the other party's shares. When \(P^{*}\) receives the shares of \(d\) from party \(P_{i}\), it also receives a commitment to what \(P_{1-i}\) is supposed to send. The server can therefore check for consistency and that no party cheated. Moreover, each party \(P_{i}\) can also compute commitments to the two vectors \(\vec{d}_{0},\vec{d}_{1}\). When the server comes to prove to \(P_{i}\) that \(v_{i}\leq v_{1-i}\), it has to show that there is a 0 coordinate in the vector \(\vec{d}_{i}\). This is possible using a one-out-of-many proof. See Appendix D for the full security proof. We note that the functionality there is slightly different from Functionality 4.1.
### Bank-to-Client
We show in Appendix E a two-party (Bank-to-Client) version of the protocol, where the bank does not just facilitate the computation but also provides input. In a nutshell, the parties use linear homomorphic encryption - ElGamal encryption - instead of secret sharing.
## 5 Prime Match System Performance
We report benchmarks of Prime Match in two different environments, a Proof of Concept (POC) environment, and the production environment after refactoring the code to meet the requirements of the bank's systems. The former benchmarks can be used to assess the performance of the comparison protocol for other applications in different systems.
**Secure Minimum Protocol Performance**: For the purposes of practical convenience, adoption, and portability, our client module is entirely browser-based, written in JavaScript. Its cryptographically intensive components are written in the C language with side-channel resistance, compiled using Emscripten into WebAssembly (which also runs natively in the browser). Our server is written in Python, and also executes its cryptographically intensive code in C. Both components are multi-threaded--using WebWorkers on the client side and a thread pool on the server's--and can execute arbitrarily many concurrent instances of the protocol in parallel (i.e., constrained only by hardware). All players communicate by sending binary data on WebSockets (all commitments, proofs, and messages are serialized).
**PROTOCOL 4.3** (Maliciously secure comparison protocol \(\Pi_{\mathsf{comp}}\)).:
**Input:**\(P_{0}\) and \(P_{1}\) hold integers \(v_{0}\) and \(v_{1}\), respectively, in \(\{0,\ldots,2^{n}-1\}\). \(P^{*}\) has no input.
**Setup phase:** A coin-tossing protocol \(\Pi_{\mathsf{CT}}\), and a commitment scheme \((\mathsf{Gen},\mathsf{Com})\), are chosen (see Sect. 2,A).
**The protocol:**
1. Commit \(V_{i}\leftarrow\mathsf{Com}(v_{i};r_{i})\), and send \(V_{i}\) to \(P^{*}\). \(P^{*}\) delivers \(V_{i}\) to \(P_{1-i}\).
2. Engage with \(P_{1-i}\) in the coin-tossing procedure \(\Pi_{\mathsf{CT}}\), and obtain a \(\lambda\)-bit shared \(s\).
3. Compute the bit decomposition \(v_{i}=\sum_{j\in N}2^{n-1-j}\cdot v_{i,j}\), for bits \(v_{i,j}\in\{0,1\}\).
4. For each \(j\in N\): 1. Compute a random additive secret-sharing \(v_{i,j}=\langle v_{i,j}\rangle_{0}^{q}+\langle v_{i,j}\rangle_{1}^{q}\) in \(\mathbb{F}_{q}\). 2. Commit \(V_{i,j,k}\leftarrow\mathsf{Com}(\langle v_{i,j}\rangle_{k}^{q};\,r_{i,j,k})\) for each \(k\in\{0,1\}\). 3. Open \(V_{i,j,1-i}\) by sending \(\langle v_{i,j}\rangle_{1-i}^{q}\) and \(r_{i,j,1-i}\) to \(P_{1-i}\).
5. Send the full array \(\big{(}V_{i,j,k}\big{)}_{j,k=0}^{n-1,1}\) to \(P_{1-i}\).
6. Compute: \(\pi_{i}\leftarrow\mathsf{ComEq.Prove}\left(V_{i},\prod_{j=0}^{n-1}\left(V_{i,j,0}\cdot V_{i,j,1}\right)^{2^{n-1-j}}\right)\), \(\pi_{i,j}\leftarrow\mathsf{BitProof.Prove}\left(V_{i,j,0}\cdot V_{i,j,1}\right)\), for all \(j\in N\), \(P_{i}\) sends \(\pi_{i}\), and \((\pi_{i,j})_{j=0}^{n-1}\) to \(P_{1-i}\).
7. \(P_{i}\) receives \(\pi_{1-i}\) and \((\pi_{1-i,j})_{j=0}^{n-1}\) and verifies the following: 1. The openings \(\langle v_{1-i,j}\rangle_{i}^{q}\) and \(r_{1-i,j,i}\) indeed open \(V_{1-i,j,i}\), for \(j\in N\), 2. \(\mathsf{ComEq.Verify}\left(\pi_{1-i},V_{1-i},\prod_{j=0}^{n-1}\left(V_{1-i,j,0}\cdot V_{1-i,j,1}\right)^{2^{n-1-j}}\right)\), 3. \(\mathsf{BitProof.Verify}\left(\pi_{1-i,j},V_{1-i,j,0}\cdot V_{1-i,j,1}\right)\) for each \(j\in N\). If any of these checks fail, \(P_{i}\) aborts.
8. \(P_{i}\) runs Algorithm 1, in parallel, on the shares \(\langle v_{k,j}\rangle_{i}^{q}\), the randomnesses \(r_{k,j,i}\), and the commitments to the _other party_'s shares \(V_{k,j,1-i}\) (all for \(j\in N\) and \(k\in\{0,1\}\)). That is, \(P_{i}\) runs: \[\left(\left(\langle d_{0,j}\rangle_{i}^{q}\right)_{j=0}^{n},\left(\langle d_{1,j}\rangle_{i}^{q}\right)_{j=0}^{n}\right)\leftarrow\mathsf{ComparisonInitial}\left(\left(\langle v_{0,j}\rangle_{i}^{q}\right)_{j=0}^{n-1},\left(\langle v_{1,j}\rangle_{i}^{q}\right)_{j=0}^{n-1}\right),\] \[\left(\left(s_{0,j,i}\right)_{j=0}^{n},\left(s_{1,j,i}\right)_{j=0}^{n}\right)\leftarrow\mathsf{ComparisonInitial}\left(\left(r_{0,j,i}\right)_{j=0}^{n-1},\left(r_{1,j,i}\right)_{j=0}^{n-1}\right),\] \[\left(\left(D_{0,j,1-i}\right)_{j=0}^{n},\left(D_{1,j,1-i}\right)_{j=0}^{n}\right)\leftarrow\mathsf{ComparisonInitial}\left(\left(V_{0,j,1-i}\right)_{j=0}^{n-1},\left(V_{1,j,1-i}\right)_{j=0}^{n-1}\right),\] using the same shared randomness \(s\) for all internal coin flips.
9. \(P_{i}\) sends \(\left(\langle d_{0,j}\rangle_{i}^{q}\right)_{j=0}^{n},\left(\langle d_{1,j}\rangle_{i}^{q}\right)_{j=0}^{n}\) and the randomnesses \(\left(s_{0,j,i}\right)_{j=0}^{n},\left(s_{1,j,i}\right)_{j=0}^{n}\) to \(P^{*}\).
**Party \(P^{*}\) (Output reconstruction):** After receiving all shares, \(P^{*}\) proceeds as follows:
1. Reconstruct for each \(j\in N\): \[d_{0,j} :=\langle d_{0,j}\rangle_{0}^{q}+\langle d_{0,j}\rangle_{1}^{q}\, s_{0,j} :=s_{0,j,0}+s_{0,j,1}\] \[d_{1,j} :=\langle d_{1,j}\rangle_{0}^{q}+\langle d_{1,j}\rangle_{1}^{q}\, s_{1,j} :=s_{1,j,0}+s_{1,j,1}\]
2. Finally, \(P^{*}\) executes Algorithm 2, that is: \((b_{0},b_{1}):=\mathsf{ComparisonFinal}\left(\left(d_{0,j}\right)_{j=0}^{n}, \left(d_{1,j}\right)_{j=0}^{n}\right)\)
3. For each \(i\in\{0,1\}\), if \(b_{i}\) is true, then \(P^{*}\) re-commits \(D_{i,j}:=\mathsf{Com}(d_{i,j};s_{i,j})\) for each \(j\in N\), computes \(\pi_{i}^{\prime}\leftarrow\mathsf{OneMany.Prove}\left(\left(D_{i,j}\right)_{j=0 }^{n}\right)\), and finally sends \(\pi_{i}^{\prime}\) to \(P_{i}\). Otherwise, \(P^{*}\) sends \(\bot\) to \(P_{i}\). \(P^{*}\) outputs both \(b_{0}\) and \(b_{1}\).
**Each Party \(P_{i}\) (output reconstruction):**
1. If receives a proof \(\pi_{i}^{\prime}\), compute \(D_{i,j}:=\mathsf{Com}(\langle d_{i,j}\rangle_{i}^{q};s_{i,j,i})\cdot D_{i,j,1-i}\), for each \(j\in N\), and then verifies \(\mathsf{OneMany.Verify}\left(\pi_{i}^{\prime},\left(D_{i,j}\right)_{j=0}^{n}\right)\). If verification passes then output true, otherwise, output false.
We run our experiments on commodity hardware throughout since our implementation is targeted to a real-world application where clients hold conventional computers. In particular, one of the two clients runs on an Intel Core i7 processor, with 6 cores, each 2.6GHz, and another one runs on an Intel Core i5, with 4 cores, each 2.00 GHz. Both of them are Windows machines. Our server runs in a Linux AWS instance of type c5a.8xlarge, with 32 vCPUs. In the first scenario, we run the client-to-bank inventory matching protocol for two clients where we process each client one by one against the bank's inventory. In Table 2 we report the performance for a different number of registered orders per client (\(100,200,\ldots,10000\)). _Latency_ refers to the total time it takes to process all the orders from both clients (in seconds). _Throughput_ measures the number of transactions per second. The number of orders/symbols processed per second is approximately 10. We also report the message size (in MB) for the message sent from each client to the server during the registration phase and the matching phase. We further record the size of the messages received from the server to each client. The bandwidth is 300 Mbps. Note that the message sizes can be reduced considerably if message serialization is not required.
In the second scenario, we run the client-to-client inventory matching protocol (the comparison protocol) for two clients where we try to match the orders of the two clients via the bank. In this case we assume that the bank has no inventory. We report the performance in Table 2. The number of orders/symbols processed per second is approximately 10 in this case too.
**Prime Match Performance in Production:** Figure 6 shows a sketch of both the bank's network tiering and the application architecture of Prime Match. The right side of the diagram shows that the clients are able to access the Prime Match UI through the bank's Markets portal with the correct entitlement.
The application UI for Prime Match is hosted on the bank's internal cloud platform. After each client is authenticated by the bank's Markets portal, it can access the application in the browser, which establishes WebSocket connections to the server. The client's computation takes place locally within their browser, ensuring that all private data remains local. The server is hosted on the bank's trade management platform.
The bank's network tiering exists to designate the network topology in and between approved security gateways (firewalls). The network traffic from the client application is handled by tier 1 which helps the bank's Internet customers achieve low latency, secured, and accelerated content access to internally hosted applications. Then, the traffic will be subsequently forwarded to a web socket tunnel in tier 2 and gets further directed to the server in tier 3.
During the axe registration phase, the client logs in to Prime Match from their web browser and uploads a file of orders (symbol, direction, quantity), which are encrypted locally and securely in the browser. The client uploads a minimum (threshold) and a maximum (full) quantity per symbol. Prime Match first processes the encrypted threshold quantities of all clients and then it processes the full quantities. This process implements the range functionality of Appendix G. Note that for the partial matches, the full quantity is never revealed to the server. Moreover, if there is no match, no quantity is revealed to the server. Figure 7 shows a complete run of an auction on the client's browser after the matching phase, where the final matched quantities of some test symbols/securities with the corresponding fixed spread are revealed.
Our protocol runs in production every 30 minutes. There are two match runs each hour and matching starts at xx:10 and xx:40. Axe registration starts 8 minutes before matching (xx:02 and xx:32). The matching process finishes at various times (as shown in the experiments section) based on the number of symbols. Based on the application requirements (up to 5000 symbols in the US and 10-60 clients) it does not finish after xx:55.
| Setting | Number of Symbols | Latency (sec) | Throughput (transactions/sec) | Matching received msg size (MB) | Matching sent msg size (MB) |
|---|---|---|---|---|---|
| Bank-to-client | 100 | 9.903 (±0.174) | 10.09 | 0.215 | 0.521 |
| Bank-to-client | 200 | 19.533 (±0.530) | 10.23 | 0.430 | 1.040 |
| Bank-to-client | 500 | 46.223 (±0.787) | 10.81 | 1.076 | 2.597 |
| Bank-to-client | 1000 | 95.396 (±2.063) | 10.48 | 2.152 | 5.194 |
| Bank-to-client | 2000 | 183.186 (±2.512) | 10.91 | 4.304 | 10.382 |
| Bank-to-client | 4000 | 356.740 (±2.149) | 11.21 | 8.608 | 20.762 |
| Bank-to-client | 10000 | 941.813 (±18.465) | 10.61 | 21.520 | 51.902 |
| Client-to-client | 100 | 11.15 (±0.060) | 8.96 | 0.972 | 1.549 |
| Client-to-client | 200 | 20.636 (±0.525) | 9.69 | 1.945 | 3.096 |
| Client-to-client | 500 | 51.493 (±2.343) | 9.71 | 4.863 | 7.737 |
| Client-to-client | 1000 | 101.051 (±2.587) | 9.89 | 9.727 | 15.472 |
| Client-to-client | 2000 | 208.813 (±1.479) | 9.57 | 19.454 | 30.942 |
| Client-to-client | 4000 | 390.510 (±3.020) | 10.24 | 38.908 | 61.882 |
| Client-to-client | 10000 | 1064.443 (±70.439) | 9.39 | 97.270 | 154.702 |

Table 2: Performance of Bank-to-Client matching for two clients, and Client-to-Client matching.
For running the code in production in the bank's environment, the two clients run on the same type of Windows machines, with Intel Xeon E3-1585L CPUs (4 cores, each 3.00 GHz). The server runs on the bank's trade management platform with 32 vCPUs. In Table 3 of the Appendix, we report the performance. Moreover, we discuss challenges we faced during the implementation in Appendix J.
## 6 Conclusion
Inventory matching is a fundamental service in the traditional financial world. In this work, we introduce secure multiparty computation in financial services by presenting a solution for matching orders in a stock exchange while maintaining the privacy of the orders. Information is revealed only if there is a match. Our central tool is a new protocol for secure comparison with linear operations in the presence of a malicious adversary, which can be of independent interest. Our system is running live, in production, and is adopted by a large bank in the US - J.P. Morgan.
## 7 Acknowledgements
We would like to thank Mike Reich, Vaibhav Popat, Sitaraman Rajamani, Dan Stora, James Mcilveen, Oluwatoyin Aguiyi, Niall Campbell, Wanyi Jiang, Grant McKenzie, Steven Price, Vinay Gayakwad, Srikanth Veluvolu, Noel Peters for their great efforts and help to move Prime Match in production. Last but not least, we would like to thank our executive sponsor Jason Sippel. The paper is based on joint patents [4, 22]
This paper was prepared in part for information purposes by the Artificial Intelligence Research group and AlgoCRYPT CoE of JPMorgan Chase & Co and its affiliates ("JP Morgan"), and is not a product of the Research Department of JP Morgan. JP Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful. 2023 JP Morgan Chase & Co. All rights reserved.
|
2303.09719
|
Learning towards Selective Data Augmentation for Dialogue Generation
|
As it is cumbersome and expensive to acquire a huge amount of data for
training neural dialog models, data augmentation is proposed to effectively
utilize existing training samples. However, current data augmentation
techniques on the dialog generation task mostly augment all cases in the
training dataset without considering the intrinsic attributes between different
cases. We argue that not all cases are beneficial for augmentation task, and
the cases suitable for augmentation should obey the following two attributes:
(1) low-quality (the dialog model cannot generate a high-quality response for
the case), (2) representative (the case should represent the property of the
whole dataset). Herein, we explore this idea by proposing a Selective Data
Augmentation framework (SDA) for the response generation task. SDA employs a
dual adversarial network to select the lowest quality and most representative
data points for augmentation in one stage. Extensive experiments conducted on
two publicly available datasets, i.e., DailyDialog and OpenSubtitles, show that
our framework can improve the response generation performance with respect to
various metrics.
|
Xiuying Chen, Mingzhe Li, Jiayi Zhang, Xiaoqiang Xia, Chen Wei, Jianwei Cui, Xin Gao, Xiangliang Zhang, Rui Yan
|
2023-03-17T01:26:39Z
|
http://arxiv.org/abs/2303.09719v1
|
# Learning towards Selective Data Augmentation for Dialogue Generation
###### Abstract
As it is cumbersome and expensive to acquire a huge amount of data for training neural dialog models, data augmentation is proposed to effectively utilize existing training samples. However, current data augmentation techniques on the dialog generation task mostly augment all cases in the training dataset without considering the intrinsic attributes between different cases. We argue that not all cases are beneficial for augmentation task, and the cases suitable for augmentation should obey the following two attributes: (1) low-quality (the dialog model cannot generate a high-quality response for the case), (2) representative (the case should represent the property of the whole dataset). Herein, we explore this idea by proposing a _Selective Data Augmentation framework_ (SDA) for the response generation task. SDA employs a dual adversarial network to select the lowest quality and most representative data points for augmentation in one stage. Extensive experiments conducted on two publicly available datasets, _i.e._, DailyDialog and OpenSubtitles, show that our framework can improve the response generation performance with respect to various metrics.
## Introduction
Open-domain dialogue generation is becoming a research hotspot in the community of natural language processing due to its potential applications Li et al. (2019); Chen et al. (2021). Generally, in the paradigm of deep neural networks, a large quantity of training data is required for facilitating the convergence of these models. As such, a data augmentation framework that can generate reliable training cases is the crux of building a robust dialogue model.
As shown in Figure 1(a), existing data augmentation methods for the dialog generation task mainly investigate different ways to augment all data samples without considering their distinct attributes. For example, Hou et al. (2018) augmented each case by leveraging other cases with similar semantic meaning in the training dataset, and Li et al. (2019) generated diversified versions for each query and response in an adversarial style. However, we argue that in practice, the attributes of the training cases vary; thus, not all cases are necessary for augmentation. The augmentation of dull responses such as "I don't know" and noisy samples with unpaired queries and responses even brings harm to the model. Taking one step further, we assume that whether each case is beneficial for augmentation should be examined from two aspects. From the generation quality aspect, the generation model may perform relatively well in some cases, for example, the cases with safe answers. Correspondingly, it is redundant and sometimes harmful to augment these cases Csaky et al. (2019). Thus, we should focus only on the part of the data that the model fails to adapt to (_low-quality_). From the dataset attribute side, the quality of user-generated training data varies greatly, and noisy samples frequently appear Cai et al. (2020). Hence, we should augment representative cases that reflect the properties of a larger set of samples (_representative_), instead of noisy samples that do not represent the general attributes of the whole dataset. This is also inspired by a previous study Schroder and Niekler (2020), which shows that training on representative cases can increase the quality of the resulting model.
Based on this assumption, in this work, we propose a novel _Selective Data Augmentation_ framework, namely SDA, to accurately select the most informative data points from the training dataset by simultaneously considering the generation quality and representativeness. The overview is illustrated in Figure 1(b). The dialog selector is required to select the samples maximizing the distance between generated responses and original responses (low-quality) while minimizing the distance between selected samples and original samples (representative).
Concretely, we use a dual generative adversarial (DualGAN) framework to assist the dialog selector in the distance measurement between deep feature representations. From the generation quality side, a discriminator tries to discriminate between the generated response and the ground-truth response, while the dialog selector aims to trick the discriminator. If the generated responses cannot fool the discriminator, then the selected samples have low quality. From the representativeness side, we measure the distance by the reconstruction process.
If the selected samples successfully reconstruct the original data, then the selected cases have high representativeness. Concretely, the samples selected by the dialog selector are sent to a variational autoencoder (VAE), which embeds the features of selected samples into the same latent space and then reconstructs them. The reconstructed features are fed to the representativeness discriminator, which tries to discriminate between the original samples and the reconstructed samples. If the selected samples successfully fool the discriminator, then the selected samples have high representativeness. In this way, the dialog selector is encouraged to take both generation quality and representativeness into consideration during data selection.
Our main contributions can be summarized as follows: (1) We propose the selective data augmentation task, which aims to select suitable training cases for augmentation. (2) We propose a dual adversarial framework for _Selective Data Augmentation_ (SDA), which can simultaneously learn to select the lowest quality and most representative data points for augmentation. (3) Extensive experiments conducted on two public dialog datasets show that our approach can improve the dialog generation performance. We also show the universality of our framework for the story generation task.
## Related Work
**Dialog Generation.** Existing approaches to improve neural dialogue generation models mainly target building more powerful learning systems, using extra information such as conversation topics Zou et al. (2021), persona profile Chan et al. (2019), user emotions Song et al. (2019), out-sourcing knowledge Li et al. (2021), or pretrained models Tuan et al. (2021). Another popular framework for dialogue generation concentrates on using VAE Zhao et al. (2017), in which a latent variable is introduced to benefit the dialogue model with more diverse response generation. As the GAN framework facilitates training the generator, it has also been adopted for dialogue generation.
**Data Augmentation.** In the paradigm of deep learning, data augmentation is an effective way to boost the performance of neural models. To name a few, Kurata et al. (2016) proposed to generate labeled data with the decoder LSTM based on the perturbated encoded vector for the semantic slot filling task. Andreas (2020) designed a simple data augmentation protocol aimed at providing a compositional inductive bias in conditional and unconditional sequence models. Kobayashi (2018) and Wu et al. (2019) showed that contextual augmentation using label-conditional language models helps text classification tasks. In terms of dialog generation task, Li et al. (2019) proposed a generative model to augment existing data, where the CVAE was employed as the generator to output more training data with diversified expressions. Louvan and Magnini (2020) proposed Lightweight augmentation, a set of word-span and sentence-level augmentation methods for low-resource slot filling and intent classification. The most recent work was proposed by Cai et al. (2020), where they proposed to augment and reweight all learning samples.
## Methodology
Open-domain dialogue generation involves generating a response \(R^{i}=(r_{1}^{i},...,r_{j}^{i},...,r_{m}^{i})\) for a user-issued query \(Q^{i}=(q_{1}^{i},...,q_{k}^{i},...,q_{m^{\prime}}^{i})\), where \(r_{j}^{i}\) refers to the \(j\)-th word in the response of the \(i\)-th case, and \(q_{k}^{i}\) denotes the \(k\)-th word in the query of the \(i\)-th case. \(m\) and \(m^{\prime}\) are the word lengths of a response and a query, respectively. The entire dialogue system is trained under \(D\), _i.e._, maximizing \(P(R^{i}|Q^{i})\), where \(D=\{(Q^{i},R^{i})\}_{i=1}^{N}\) is the dataset and \(N\) refers to the number of training query-response pairs. For the data augmentation task, the original dataset \(D\) is increased to \(D^{\prime}=\{(Q^{i},R^{i})\}_{i=1}^{N^{\prime}}\), where \(N^{\prime}\) is the data size after augmentation. In our selective data augmentation task, we aim to select the cases suitable for augmentation and increase the data size from \(N\) to \(N^{\prime}\). Correspondingly, the response generation changes from argmax\(P(R|Q,D)\) to argmax\(P(R|Q,D^{\prime})\).
Overall, the _Dialog Selector_ assigns select weights to existing samples. To select the lowest quality and most representative cases, we propose two discriminators to assist this process, as shown in Figure 2. Firstly, a _Generation Quality Discriminator_ (GQD) discriminates between the ground-truth response and the generated response. The dialog selector will assign high weights to cases that cannot fool GQD. Secondly, to examine the representativeness of the selected samples, a reconstruction and a discrimination process are employed. The intuition is that if the selected cases can successfully reconstruct the original data, then the selected cases are representative. Concretely, a reconstructor first embeds the selected samples into the same latent space and then reconstructs them. A _Representativeness Discriminator_ (RD) is then required to classify whether the input belongs to the original samples or the selected samples. Dialog selector will assign high weights to cases that fool RD.
Figure 1: Our goal is to simultaneously select the lowest quality and the most representative cases in the training dataset for augmentation (best viewed in color).
### Dialog Selector
We first employ a bi-directional recurrent neural network (Bi-RNN) to model the temporal interactions between words in the query and response, denoted by \(Q=\{h_{1}^{q_{i}},...,h_{m^{\prime}}^{q_{i}}\}\) and \(R=\{h_{1}^{r_{i}},...,h_{m}^{r_{i}}\}\), respectively. \(i\) denotes the sample index. The final hidden state \(h_{m^{\prime}}^{q_{i}}\) and \(h_{m}^{r_{i}}\) denotes the overall representation for the query and response. The dialog selector adopts a simple architecture, consisting of a 5-layer multi-layer perceptron (MLP) with Xavier initialization, to map the input feature to a score:
\[s^{i}=\sigma\left(\text{MLP}_{a}([h_{m^{\prime}}^{q_{i}};h_{m}^{r_{i}}])\right), \tag{1}\]
where \([;]\) denotes the concatenation operation and \(\sigma\) denotes the sigmoid function.
In the next subsections, we will propose dual discriminators to assist the selection process. As preparation, the original representations \(Q\) and \(R\) are weighted using these scores; we take the response \(R\) to illustrate this process:
\[\hat{R}^{i}=(1-s^{i})R^{i},\tilde{R}^{i}=s^{i}R^{i}. \tag{2}\]
\(\hat{R}^{i}\) is employed for quality discrimination, and \(\tilde{R}^{i}\) is used for representativeness discrimination. Note that we use \(1-s^{i}\) and \(s^{i}\) as the weights for the quality and representativeness branches, respectively, to ensure the optimization of these two terms in the same direction. Notations for \(\hat{Q}^{i}\) and \(\tilde{Q}^{i}\) are similar.
To prevent the selector from assigning equal importance to all data points, we employ a length regularizer loss \(\mathcal{L}_{\mathrm{LR}}\) to limit the number of selected elements, and use a determinantal point process (DPP) loss \(\mathcal{L}_{\mathrm{dpp}}\)(Szegedy et al., 2015) to ensure the diversity of selected data points:
\[\mathcal{L}_{\mathrm{LR}}=\left\|\sigma-\frac{1}{N}\sum_{i=1}^{N}s^{i}\right\| _{2},\mathcal{L}_{\mathrm{dpp}}=-\log(P(\mathbf{s})). \tag{3}\]
For \(\mathcal{L}_{\mathrm{LR}}\), \(\sigma\) represents the percentage of cases for subset selection. For \(\mathcal{L}_{\mathrm{dpp}}\), \(P(\mathbf{s})\) is the probability that the DPP assigns to the selection \(\mathbf{s}\). We compute \(P(\mathbf{s};L)=\frac{\det(L(\mathbf{s}))}{\det(L+I)},\) where \(L\) is an \(N\times N\) similarity matrix between every pair of cases, \(I\) is an identity matrix, and \(L(\mathbf{s})\) is a smaller square matrix, cut down from \(L\) given \(\mathbf{s}\). For the \(i\)-th and \(j\)-th cases, the pairwise similarity values are defined as \(L_{i,j}=s^{i}s^{j}[h_{m^{\prime}}^{q_{i}};h_{m}^{r_{i}}][h_{m^{\prime}}^{q_{j}};h_{m}^{r_{j}}]\).
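A PyTorch sketch of the selector path (Eq. 1-3) is given below; the GRU encoders, hidden sizes, and the omission of the DPP term are our own illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DialogSelector(nn.Module):
    """Selector sketch: Bi-RNN encoders for query/response and a 5-layer MLP
    with Xavier initialization mapping [h_q ; h_r] to a score s in (0, 1)."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.q_rnn = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.r_rnn = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        layers, d = [], 4 * hid_dim                 # [h_q ; h_r], each 2*hid_dim
        for _ in range(4):                          # 4 hidden layers + output = 5 layers
            layers += [nn.Linear(d, hid_dim), nn.ReLU()]
            d = hid_dim
        layers.append(nn.Linear(d, 1))
        self.mlp = nn.Sequential(*layers)
        for m in self.mlp:
            if isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)

    def forward(self, query_ids, response_ids):
        _, h_q = self.q_rnn(self.emb(query_ids))    # h: (2, B, hid) final fwd/bwd states
        _, h_r = self.r_rnn(self.emb(response_ids))
        h_q = torch.cat([h_q[0], h_q[1]], dim=-1)
        h_r = torch.cat([h_r[0], h_r[1]], dim=-1)
        return torch.sigmoid(self.mlp(torch.cat([h_q, h_r], dim=-1))).squeeze(-1)

def length_regularizer(scores, sigma=0.3):
    """L_LR = || sigma - mean(s) ||_2: keep roughly a sigma fraction selected."""
    return torch.norm(sigma - scores.mean())

selector = DialogSelector(vocab_size=10000)
q_ids = torch.randint(0, 10000, (4, 12))            # toy batch: 4 queries of 12 tokens
r_ids = torch.randint(0, 10000, (4, 9))
s = selector(q_ids, r_ids)                           # Eq. (1): one score per case
q_hat_weight, q_tilde_weight = (1 - s), s            # Eq. (2): quality / representativeness weights
loss_lr = length_regularizer(s)                      # part of Eq. (3); the DPP term is omitted here
```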
### Generation Quality Discriminator
Generation quality discriminator (GQD) aims to evaluate whether the generated responses are feasible for a given query. We achieve this by measuring the matching degree between query-response pairs in an adversarial fashion. The weighted ground truth query-response pair is treated as a positive case, while the query with the generated response pair is the negative case. Concretely, for the positive pair, we concatenate the weighted ground-truth response \(\hat{R}^{i}\) with the weighted query \(\hat{Q}^{i}\). Then, a fully-connected neural network with a sigmoid activation function is utilized to flatten the feature matrix, resulting in the final matching score \(m_{g}^{i}\in(0,1)\). The matching score \(m_{f}^{i}\) between the negative instance \(\hat{R}^{\prime}{}^{i}\) and query \(\hat{Q}^{i}\) is also calculated as the above-mentioned procedure, except that we have a dimension transformation on the generated response to align it with \(\hat{Q}^{i}\). Note that our framework does not rely on specific response generation models, and in our case, we employ LSTM-based RNN as the generator.
In the paradigm of GAN, the training objective of GQD is to maximize the matching score of positive instances and minimize that of negative ones, while the dialog selector is optimized by aiming to maximize the matching score of the generated response:
\[\mathcal{L}_{D} =-\sum_{i=1}^{N}\left(\log(1-m_{f}^{i})+\log(m_{g}^{i})\right), \tag{4}\] \[\mathcal{L}_{G} =-\sum_{i=1}^{N}\left(\log(m_{f}^{i})\right). \tag{5}\]
The dialog selector will learn to assign high weights, _i.e._, \(1-s^{i}\), to samples that are difficult for GQD to identify, which leads to a low \(s^{i}\) score. In other words, the cases that obtain high \(s^{i}\) scores have low quality.
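The following PyTorch sketch shows one way to realize GQD and the losses in Eq. 4-5; the feature dimensions, the small matching network, and the assumption that generated-response features are already projected to the ground-truth dimension are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MatchingDiscriminator(nn.Module):
    """GQD sketch: scores how well a (weighted) response feature matches a query
    feature; the generated response is assumed to be already projected to the
    same dimension as the ground-truth features."""
    def __init__(self, dim=512):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, q_feat, r_feat):
        return torch.sigmoid(self.fc(torch.cat([q_feat, r_feat], dim=-1))).squeeze(-1)

def gqd_losses(disc, q_hat, r_hat_gold, r_hat_gen, eps=1e-8):
    """Eq. (4)-(5): the discriminator pushes m_g up and m_f down; the selector
    (generator side) pushes m_f up. eps only guards against log(0)."""
    m_g = disc(q_hat, r_hat_gold)                # positive: weighted ground truth
    m_f = disc(q_hat, r_hat_gen)                 # negative: generated response
    loss_d = -(torch.log(1 - m_f + eps) + torch.log(m_g + eps)).sum()
    loss_g = -torch.log(m_f + eps).sum()
    return loss_d, loss_g

B, dim = 4, 512
disc = MatchingDiscriminator(dim)
s = torch.rand(B)                                # selector scores from Eq. (1)
q_feat, r_gold, r_gen = (torch.randn(B, dim) for _ in range(3))
w = (1 - s)[:, None]                             # quality-branch weights from Eq. (2)
loss_d, loss_g = gqd_losses(disc, w * q_feat, w * r_gold, w * r_gen)
```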
### Reconstructor
In the next two subsections, we introduce our representativeness selection process, where the samples that can be used to reconstruct the original dataset are selected. This is inspired by Mahasseni et al. (2017), where they address the problem of finding representative images in a video. Their key idea is to find a subset of images that can be used to reconstruct the whole video.
Figure 2: Major components of our approach. The _dialog selector_ selects samples that will be examined by two discriminators. The _generation quality discriminator_ examines the generation quality of the selected cases, and the _representativeness discriminator_ examines the representativeness of the selected samples. The samples with low quality and high representativeness, _i.e._, high \(s\) score, will be selected for data augmentation.
be used to reconstruct the whole video. In this work, we extend this idea from the video level to the dataset level to find representative cases instead of images. Since dialog data is paired, we use the query to illustrate this process.
Our reconstructor takes the form of VAE, which is commonly used to effectively learn feature representations [14]. VAE defines a posterior distribution over the observed data, given an unobserved latent variable. Overall, VAE consists of an encoder and a decoder. The encoder maps the weighted query \(\tilde{Q}^{i}\) to a latent space \(e\), and the decoder reconstructs the query from \(e\).
Concretely, the encoder computes the posterior distribution \(q_{\theta}(e|\tilde{Q}^{i})\), from which the latent representation \(e\) is sampled. The reconstruction process can be formulated as \(p_{\theta}(\tilde{Q}^{i}|e)\), representing the probability of generating the input \(\tilde{Q}^{i}\) conditioned on \(e\). Herein, \(\theta\) represents the parameters of the above encoder and reconstruction decoder. Because of the intractable integral of the marginal likelihood \(p_{\theta}(\tilde{Q}^{i})\), the posterior \(q_{\theta}(e|\tilde{Q}^{i})\) is approximated by the variational distribution \(q_{\phi}(e|\tilde{Q}^{i})\), where \(\phi\) denotes the parameters of \(q\). When learning the VAE, the objective is to maximize the variational lower bound of \(\log p_{\theta}(\tilde{Q}^{i})\):
\[\mathcal{L}^{q}_{VAE}=\mathrm{KL}(q_{\phi}(e|\tilde{Q}^{i})\|p_{\theta}(e))- \mathbb{E}_{q_{\phi}(e|\tilde{Q}^{i})}[\mathrm{log}p_{\theta}(\tilde{Q}^{i}|e )],\]
where KL denotes the KL-divergence, a regularizer that encourages the approximated posterior \(q_{\phi}(e|\tilde{Q}^{i})\) to be close to the prior \(p_{\theta}(e)\), _i.e._, a standard Gaussian distribution. \(\mathbb{E}[\cdot]\) is the reconstruction loss conditioned on the approximated posterior \(q_{\phi}(e|\tilde{Q}^{i})\).
We denote the reconstructed query as \(\bar{Q}^{i}\) and the reconstructed response as \(\bar{R}^{i}\).
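A compact PyTorch sketch of the reconstructor follows; it assumes the weighted query is already a fixed-size feature vector and uses a Gaussian decoder with an MSE reconstruction term, which are our simplifications rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class QueryReconstructor(nn.Module):
    """VAE sketch over weighted query features (illustrative only)."""
    def __init__(self, dim, latent_dim):
        super().__init__()
        self.to_mu = nn.Linear(dim, latent_dim)
        self.to_logvar = nn.Linear(dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, dim)

    def forward(self, q_tilde):
        mu, logvar = self.to_mu(q_tilde), self.to_logvar(q_tilde)
        e = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        q_rec = self.decoder(e)
        # KL(q(e|x) || N(0, I)) in closed form.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        # MSE stands in for the negative log-likelihood of a Gaussian decoder.
        rec = ((q_rec - q_tilde) ** 2).sum(dim=-1)
        return q_rec, (kl + rec).mean()
```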
### Representativeness Discriminator
The discrimination processes for query and response are similar, and we use the query to illustrate this process. The representativeness discriminator (RD) takes the weighted query \(\tilde{Q}^{i}\) and the reconstructed query \(\bar{Q}^{i}\) as input and aims to classify them into two distinct classes (_i.e._, selected or original). RD adopts the same architecture as GQD except that it does not have the dimension transformation. We omit the details here due to limited space. RD aims to maximize the correct matching result, while the dialog selector aims to select cases that can fool RD. If RD cannot distinguish the selected cases from the original ones, the dialogs with high \(s^{i}\) scores are seen to have good representativeness of the dataset. Hence, the dialog selector will learn to assign high \(s^{i}\) scores to representative cases to fool RD.
## Experiment
### Experiment Setup
**Datasets.** Following Cai et al. (2020), we conduct experiments on two English conversation datasets: (1) _DailyDialog_[13], a collection of real-world dialogues widely used in open-domain dialogue generation. This is a multi-turn dataset, and we treat each turn as a training pair in this work. The overlapping pairs are removed from the dataset. (2) _OpenSubtitles_[15], a group of human-human conversations converted from movie transcripts. We split the DailyDialog dataset into 54,889/6,005/5,700 and OpenSubtitles into 64,000/8,000/8,000 for training/validation/test.
**Implementation Details.** (1) _Hyperparameter setting_: We implement our models in TensorFlow on an NVIDIA GTX 1080 Ti GPU. We truncate the input dialog to 20 words, the minimum decoding step is 10, and the maximum step is 30. The default \(\sigma\) in Equation 3 is set to 0.6 except in the augmentation percentage analysis. The batch size is set to 16, and we limit the vocabulary size to 50K. (2) _Optimization techniques:_ We employ a set of techniques to deal with the posterior collapse problem in VAE [16], including Bag-Of-Words (BOW) and KL annealing. We increase the KL loss coefficient by 0.5 every 10,000 batches. Readers can refer to the work by [14] for details. For the GANs in our framework, we train the discriminator for one step every five generator steps, since generation is harder than classification. The generators and discriminators are adversarially trained until GQD cannot discriminate between ground-truth and generated responses and RD is not able to distinguish between the summary and original datasets. The framework converges in less than an hour. (3) _Augmentation details:_ We select the 60% of cases with the highest scores for augmentation if not specified, based on results on the validation dataset. For the selected cases, we employ the back-translation technique [17] to augment them by ten times, following Li et al. (2019). We choose back-translation since it provides more diverse augmented text with different structures while preserving the meaning of the original text [18, 19]. Our evaluation metrics include distinctness [13], BLEU [15], and embedding metrics [20].
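For instance, the KL-annealing schedule described above can be written as a small helper; the cap at 1.0 is our assumption, since the text only states the increment and interval.

```python
def kl_coefficient(step, increment=0.5, every=10_000, max_coef=1.0):
    """KL-annealing sketch: raise the KL weight by `increment` every
    `every` batches; the cap `max_coef` is our assumption."""
    return min(max_coef, increment * (step // every))
```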
**Baselines.** We compare our model with the following classic generation architectures: (1) **SEQ2SEQ**[1]: a sequence-to-sequence model with attention mechanisms. (2) **CVAE**[14]: a latent variable model using a conditional variational auto-encoder, trained with KL-annealing and a BOW loss. (3) **Transformer**[13]: an encoder-decoder architecture relying solely on attention mechanisms. (4) **GPT-2**[17]: a large-scale pre-trained language model, which is fine-tuned on the full training dataset. We also compare our approach with naive augmentation, previous data augmentation, and instance weighting methods: (1) **Random**: we randomly select 60% of the data for augmentation, to compare with our selective augmentation method. Comparisons with different augmentation percentages can be found in the discussion section. (2) **Calibration**[14]: a calibration network measures the quality of data samples and enables weighted training for dialogue generation. (3) **CVAE-GAN**[13]: a model that combines CVAE and GAN for augmentation. (4) **Manipulation**[15]: it augments all the cases in the training process and reweights them.
### Main Results
**Automatic evaluation.** We instantiate our framework on a number of classic dialog generation models including SEQ2SEQ, CVAE, Transformer, and GPT-2. The automatic evaluation results are shown in Table 1. It can be seen that our model outperforms vanilla baselines on almost all automatic metrics. The improvements are consistent across both datasets, demonstrating the superiority and general applicability of our framework.
In addition, we compare our model with existing augmentation methods. We select SEQ2SEQ as the response generation model since all compared models are constructed on this classic baseline [15]. Not surprisingly, as shown in Table 3, our framework outperforms most of the baseline methods. Concretely, SDA outperforms the Random baseline in all metrics, demonstrating that selection is necessary to improve the performance of data augmentation. CVAE-GAN augments each case in the training dataset, and Manipulation augments every case in each training step, while our model only augments 60% of the data and achieves better performance. This demonstrates that selective data augmentation is more effective and efficient, outperforming data augmentation methods that generate more augmented cases. The statistical significance of observed differences between the performance of two runs is tested using a two-tailed paired t-test with \(\alpha=0.01\).
**Human Evaluation.** We also conduct a human evaluation on Amazon Mechanical Turk. For better annotation quality, we employ three annotators and require them to have at least a 99% approval rate with at least 1,000 approved HITs. These annotators are hired to evaluate the quality of generated responses on the DailyDialog dataset, where the evaluation is conducted in a double-blind fashion. In total, 200 randomly sampled responses generated by each model are rated by each annotator on two aspects, _i.e., readability_ and _informativeness_. Each criterion is scored from 1 to 3, _i.e.,_ bad, normal, and good. The results of the human evaluation are listed in Table 4. Our model significantly outperforms most of the baselines in terms of all the metrics. Particularly, our model increases informativeness by approximately 2.4% over Manipulation. The kappa statistic is 0.42 and 0.45 for readability and informativeness, respectively, which indicates moderate agreement between annotators. We also show representative cases in Table 5. It can be seen that our model generates a more diverse and interesting response that describes in detail how it feels to have a lover.
**Ablation Study.** The performance drops when GQD and RD are removed, respectively. This indicates that the jointly selected cases from the quality and representativeness aspects help generate more diverse and accurate responses.
**Analysis of Selected Samples.** In this subsection, we examine whether the model successfully selects the lowest-quality and most representative cases for augmentation. We calculate the BLEU scores of selected and unselected cases in the response generation task and the response reconstruction task. From Figure 3(a) and Figure 3(b), we can see that the selected cases have lower BLEU scores in terms of generation quality and higher scores in the reconstruction task. This demonstrates that the model needs to be polished to generate better responses for the selected cases. In the meantime, the selected data itself is not noisy and represents the overall data distribution. To further glean insights regarding which samples are favored by the augmentation model, we also list examples with different augmentation scores in Figure 3(c). We notice that samples frequently augmented by SDA have more reliable and meaningful context, where the response is closely related to the query and leads to a new topic. In contrast, the dialog pairs that are seldom augmented contain universal and safe content such as "I don't know" or "I'd forgotten about it".
**Visualization of Dual Training.** To visualize the selection process, we draw the loss curves of the generation quality discriminator (\(\mathcal{L}_{D}\) in Equation 4) and the response generator evaluator (\(\mathcal{L}_{G}\) in Equation 5) in Figure 4(a), and show the accuracy of GQD in Figure 4(b). When the training begins, the losses of GQD and RGE fluctuate from time to time, as does the accuracy curve, which verifies the adversarial training. After several steps, the training converges, and
\begin{table}
\begin{tabular}{c c c} \hline \hline Model & Readability & Informativeness \\ \hline Calibration & 1.63 & 1.68 \\ CVAE-GAN & 1.85 & 1.81 \\ Manipulation & 1.91 & 2.07 \\ SDA & **2.01** & **2.12** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Human evaluation on two aspects: Readability and informativeness.
\begin{table}
\begin{tabular}{c|l c|c c c c c|c c c} \hline \hline & Models & Dist-1 & Dist-2 & Dist-3 & BLEU-1 & BLEU-2 & BLEU-3 & BLEU-4 & Avg & Ext & Gre \\ \hline \multirow{4}{*}{(a)} & Random & 1.38 & 5.24 & 11.70 & 13.25 & 2.17 & 1.13 & 0.73 & 77.02 & 43.12 & 62.68 \\ & Calibration (Shang et al., 2018) & 1.53 & 5.96 & 11.77 & 13.09 & 2.28 & 1.02 & 0.75 & 77.15 & 42.94 & 62.77 \\ & CVAE-GAN (Li et al., 2019) & 1.54 & 5.63 & 13.50 & 14.00 & 2.59 & 1.24 & 0.98 & 77.21 & 43.19 & 62.96 \\ & Manipulation (Cai et al., 2020) & 1.58 & 6.42 & 14.52 & 14.26 & 2.87 & 1.16 & 0.95 & 77.53 & 43.32 & 63.12 \\ & SDA & **1.63** & **7.24** & **15.01** & **14.65** & **2.92** & **1.27** & 0.81 & **77.83** & **43.60** & **63.56** \\ \hline \hline \multirow{4}{*}{(b)} & Random & 1.40 & 2.46 & 5.76 & 10.05 & 1.22 & 1.03 & 0.93 & 61.97 & 45.51 & 50.72 \\ & Calibration (Shang et al., 2018) & 1.43 & 2.58 & 5.82 & 10.20 & 1.23 & 1.08 & 0.68 & 62.03 & 45.57 & 50.83 \\ & CVAE-GAN (Li et al., 2019) & 1.49 & 2.83 & 5.07 & 10.26 & 1.28 & 1.17 & 0.87 & 62.28 & 45.74 & 50.76 \\ & Manipulation (Cai et al., 2020) & 1.41 & 3.40 & 5.93 & 10.37 & 1.58 & 1.24 & 0.94 & 62.29 & 46.00 & 50.22 \\ & SDA & **1.56** & **3.94** & 5.83 & **10.78** & **2.00** & **1.29** & **0.97** & **62.36** & **46.24** & **51.14** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance (%) of our approach instantiated on the naive SEQ2SEQ and the baseline approaches on (a) DailyDialog and (b) OpenSubtitles. Numbers in bold mean that the improvement to the best baseline is statistically significant.
Figure 3: (a) Generation quality of selected and unselected cases. (b) Reconstruction performance of selected and unselected cases. (c) Histogram of different augmentation frequencies.
\begin{table}
\begin{tabular}{c|l} \hline \hline \multicolumn{2}{l}{I got a ticket yesterday.} \\ \hline ground-truth & Really? What did you get one for? \\ CVAE-GAN & Is that right?? Is that possible? \\ Manipulation & 88 yuan, please. \\ SDA & Really? How much is it? \\ \hline \hline - What do you mean? You have a lover? \\ \hline ground-truth & A fiance. \\ CVAE-GAN & You’ve had a lot of your own! lover! \\ Manipulation & No, I’m serious. \\ SDA & Yeah, she’s so different, she is the sun! \\ \hline \hline \end{tabular}
\end{table}
Table 5: Responses generated by baselines and our model. The top case is selected from DailyDialog, and the bottom case is from OpenSubtitles.
the accuracy of GQD stays around 50%, which means GQD cannot distinguish between the generated response and the ground-truth one. In other words, the model successfully assigns low \(s\) scores, _i.e.,_ high (1-\(s\)), to the cases with high-quality generated responses so that GQD cannot perform better than a random guess. The accuracy curve of RD is similar to that of GQD, which shows that our model assigns high \(s\) scores to the most representative cases so that RD cannot distinguish between the reconstructed and the original cases.
**Impact of Augmented Data Scale.** In the previous experiments, the percentage of cases selected for augmentation was set to 60%. In this subsection, we vary this percentage to study the influence of the augmentation scale and whether selective augmentation remains beneficial under different selection percentages. We also include the Random baseline for comparison, where the cases for augmentation are randomly sampled. The result on the DailyDialog test dataset is shown in Figure 4(c). For the Random baseline, performance generally improves with the augmentation percentage. This result shows that random augmentation benefits the dialog generation task, and the more cases are augmented, the better the performance. However, this is not true for selective augmentation. The embedding scores of SDA initially increase quickly with the selection percentage; after the percentage reaches 60%, the growth stops, and when the percentage increases from 80% to 100%, the performance even drops. Similar behavior is also observed on the OpenSubtitles dataset. This demonstrates that augmentation only benefits the model if we select the proper cases in the dataset; otherwise, augmenting certain cases harms the model.
**Universality of our framework.** In addition, we test the generalization ability of our framework on the story generation task. The RocStories dataset [16] consists of 98,163 high-quality hand-crafted stories, which capture causal and temporal commonsense relations of daily events. Each story paragraph contains 5 sentences with an average of 43 words. Following previous work [20], we split the dataset into 8:1:1 for training, validation, and test, and use BLEU as the evaluation metric. As can be seen from Table 6, equipped with augmented data, our method outperforms GPT-2 by 2.4%, 3.4%, and 1.4% on RocStories in terms of BLEU-1, BLEU-2, and Greedy, respectively, which proves the superiority of our model. This experiment also demonstrates that our framework does not rely on a specific task and can be extended to various text generation scenarios.
## Conclusion and Broader Impacts
In this paper, we propose a selective data augmentation framework to improve the performance of dialogue models. We propose a dual adversarial network to select data for augmentation from the quality and representativeness aspects. One is to examine whether the case is of low generation quality, and the other one is whether the case is representative of the dataset. Experiments conducted on three public datasets demonstrate the effectiveness of our framework. In the future, we would like to explore the effectiveness of selective data augmentation on more generation tasks.
\begin{table}
\begin{tabular}{l|l|c c c c|c c c} \hline \hline & Models & BLEU-1 & BLEU-2 & BLEU-3 & BLEU-4 & Extrema & Average & Greedy \\ \hline \multirow{3}{*}{RocStories} & CVAE & 25.81 & 9.69 & 3.60 & 1.48 & 51.35 & 56.32 & 60.11 \\ & Seq2Seq & 23.24 & 8.96 & 3.40 & 1.50 & 51.33 & 56.49 & 60.07 \\ \cline{1-1} & Transformer & 25.52 & 5.96 & 3.54 & 1.45 & 51.29 & 56.37 & 60.06 \\ \cline{1-1} & GPT-2 & 30.21 & 11.08 & 3.64 & 1.53 & 51.72 & 58.49 & 60.35 \\ \cline{1-1} \cline{2-10} & GPT-2(\(\bigstar\)) & **30.96** & **11.46** & **3.84** & 1.52 & **52.27** & **58.95** & **61.20** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Automatic evaluation results on RocStories for storytelling. Numbers in bold mean that the improvement to the best baseline is statistically significant (t-test with p-value \(<\)0.01).
Figure 4: (a) Loss curve of the quality discriminator and generator evaluator. (b) Accuracy curve of the quality discriminator. (c) Relationship between the selective percentage for augmentation and the embedding scores. Blue denotes our model, and orange denotes the Random model.
## Acknowledgments
We would like to thank the anonymous reviewers for their constructive comments. This work was supported by the SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence (SDAIA-KAUST AI). This publication is based upon work supported by the King Abdullah University of Science and Technology (KAUST) Office of Research Administration (ORA) under Award No FCC/1/1976-44-01, FCC/1/1976-45-01, URF/1/4663-01-01, RGC/3/4816-01-01, and BAS/1/1635-01-01. This work was also supported by NSFC Grant No. 62122089 and CCF-Tencent Rhino-Bird Open Research Fund.
|
2308.03620
|
Exploring Visual Pre-training for Robot Manipulation: Datasets, Models
and Methods
|
Visual pre-training with large-scale real-world data has made great progress
in recent years, showing great potential in robot learning with pixel
observations. However, the recipes of visual pre-training for robot
manipulation tasks are yet to be built. In this paper, we thoroughly
investigate the effects of visual pre-training strategies on robot manipulation
tasks from three fundamental perspectives: pre-training datasets, model
architectures and training methods. Several significant experimental findings
are provided that are beneficial for robot learning. Further, we propose a
visual pre-training scheme for robot manipulation termed Vi-PRoM, which
combines self-supervised learning and supervised learning. Concretely, the
former employs contrastive learning to acquire underlying patterns from
large-scale unlabeled data, while the latter aims to learn visual semantics and
temporal dynamics. Extensive experiments on robot manipulations in various
simulation environments and the real robot demonstrate the superiority of the
proposed scheme. Videos and more details can be found on
\url{https://explore-pretrain-robot.github.io}.
|
Ya Jing, Xuelin Zhu, Xingbin Liu, Qie Sima, Taozheng Yang, Yunhai Feng, Tao Kong
|
2023-08-07T14:24:52Z
|
http://arxiv.org/abs/2308.03620v1
|
# Exploring Visual Pre-training for Robot Manipulation: Datasets, Models and Methods
###### Abstract
Visual pre-training with large-scale real-world data has made great progress in recent years, showing great potential in robot learning with pixel observations. However, the recipes of visual pre-training for robot manipulation tasks are yet to be built. In this paper, we thoroughly investigate the effects of visual pre-training strategies on robot manipulation tasks from three fundamental perspectives: pre-training datasets, model architectures and training methods. Several significant experimental findings are provided that are beneficial for robot learning. Further, we propose a visual pre-training scheme for robot manipulation termed Vi-PRoM, which combines self-supervised learning and supervised learning. Concretely, the former employs contrastive learning to acquire underlying patterns from large-scale unlabeled data, while the latter aims to learn visual semantics and temporal dynamics. Extensive experiments on robot manipulations in various simulation environments and the real robot demonstrate the superiority of the proposed scheme. Videos and more details can be found on [https://explore-pretrain-robot.github.io](https://explore-pretrain-robot.github.io).
## I Introduction
The past years have witnessed substantial progress in visual representation learning based on deep neural networks. After pre-training on large-scale visual data, the neural network is subsequently employed as a general-purpose encoder to extract visual representations for many tasks, e.g., image segmentation [1], object detection [2] and autonomous driving [3], showing its strong generalization ability, while also highlighting its potential in robot manipulation.
Learning from visual observations for robot manipulation is known as a challenging task that requires a thorough understanding of both visual semantics and sequential patterns of observations. A common method is to train the visual encoder and model-based policy from scratch in an end-to-end manner with in-domain data [4, 5]. Despite its effectiveness to some degree, such a method requires training on a large number of observation-action samples, which may limit its wide applications. Therefore, pre-training the visual encoder with large-scale off-the-shelf data from the real world can serve as an alternative. Benefiting from its strong generalization ability, the pre-trained visual encoder is expected to generalize across a range of robot manipulation tasks and enable data-efficient learning.
Recently, visual pre-training on large-scale real-world data for robot learning has attracted increasing interest. Prominent performance gains reported on prior works [6, 7] show its great potential in learning robot control from pixels. Despite the claimed advantage, these works differ in pre-training data, methods and models. So it remains an open question about which types of data, pre-training methods and models can better assist robot manipulation. A system-level benchmark on the profits of visual pre-training is in demand.
In this paper, as shown in Figure 1, we first conduct extensive studies on visual pre-training from three fundamental aspects: datasets, models and methods that may influence the performance of robot learning. Hopefully, these can facilitate future research in the community. Based on empirical findings, we propose a visual pre-training scheme oriented for robot manipulations, which sequentially trains a visual encoder using self-supervised learning and supervised fine-tuning. Concretely, the visual encoder is first pre-trained based on contrastive learning [8], allowing the trained model to acquire sequential patterns implicitly for the input data. Then, supervised learning is applied by constructing pseudo-labels and temporal labels to encourage the visual encoder further to perceive visual semantics and temporal dynamics. In addition, we propose a new dataset named EgoNet, which is created based on Ego4d [9] and contains a large-scale egocentric video clips rich in human-object interactions. EgoNet has the potential to serve as a benchmark to pre-train visual models for robot manipulations.
Fig. 1: General path of visual pre-training for robot manipulation.
Our main contributions are summarized as: (1) We create the EgoNet dataset, a new benchmark enriched with diverse scenarios and human-object interactions for robotic visual pre-training. (2) We fully explore the visual pre-training in terms of datasets, methods and models, and provide several key suggestions for robot manipulation tasks. (3) We propose a novel cascade visual pre-training scheme that enables the visual encoder to learn sequential patterns, visual semantics and temporal dynamics from the large-scale real-world data, and achieves remarkable performance improvement on robot manipulation tasks.
## II Related Work
### _Vision-Based Robot Learning_
The robotic community has long focused on vision-based learning methods for various robot tasks in the past decade. Currently, the most prevailing paradigm of vision-based robot learning is the end-to-end method [5]. With the surge of deep learning in the last decade, many CNN-based models have been proposed to enable the visual modality of robots in manipulation tasks [10, 11]. Furthermore, CNN-RNN methods [12, 13] are widely adopted to solve the task of human instruction in natural language. Recently, many methods [6, 7, 14] based on pre-trained models have been proposed for robot learning. Several previous methods investigated the self-supervised pre-training in robot manipulation, e.g., R3M [6], MVP [7], and MaskViT [14]. These works focus on one side of visual pre-training, thus calling for a systematic study.
### _Representation Learning_
Self-supervised visual pre-training has been an active research topic recently and can learn universal visual representations. Visual pre-training aims to learn visual representations via masked image modeling [15, 16] and contrastive learning [8, 17], while vision-language pre-training aims to learn the semantic correspondence between different modalities [18, 19]. Pre-training datasets are significant for representation learning. To learn reusable representations that generalize well to robotic manipulation tasks, the interaction between humans and objects needs to be captured. Recently, a diverse and large-scale dataset, Ego4D [9], has been proposed, which contains daily-life activity videos spanning hundreds of scenarios.
### _Robot Manipulation Benchmarks_
With the recent progress in exploiting pre-trained models in robotic tasks, a number of robotic manipulation benchmarks have been introduced to evaluate the performance of pre-trained models. Off-the-shelf robotic manipulation benchmarks can be categorized into two main kinds by simulator: RL (Reinforcement Learning) benchmarks and embodied benchmarks. The RL benchmarks focus on the training and evaluation of reinforcement learning agents, where a simulated environment with several robot models and scenarios in a limited space is usually provided. Recent RL benchmarks explore the training and evaluation of robotic manipulation methods in terms of multi-task training [20], more realistic scenarios with clutter [21], tasks of higher complexity [22], more kinds of manipulation forms [23], and manipulations with linguistic instructions [24, 25]. Meanwhile, pre-trained models are widely introduced as solutions to robot manipulation tasks.
## III Benchmarking
In this section, we explore key components that affect the pre-training behaviors and the robot manipulation performance, i.e., pre-training datasets, optimization methods, and model architectures. The study pipeline is shown in Figure 2. We first pre-train the visual encoder on the pre-training dataset. Then we adopt typical imitation learning methods on robot manipulation tasks to verify the effectiveness of visual representations, where the encoder parameters are frozen during training. In this way, we could give system-level studies of each component.
### _Benchmarking Setup_
To evaluate the effectiveness of the pre-trained visual encoder, we adopt two robot control simulation environments, i.e., Franka Kitchen [26] and MetaWorld [20], for robot learning. As shown in the right part of Figure 3, we choose the same tasks as [6]. Please refer to Section V-A for the pre-training details and evaluation metrics.
#### Iii-A1 Pre-training Dataset
ImageNet [27] has recently been widely used in self-supervised pre-training for various downstream tasks. However, ImageNet lacks dynamic interactions between objects, making it potentially unsuitable to serve as pre-training data for robot manipulation tasks.
We propose a new benchmark, called EgoNet, to pre-train visual encoders for robot manipulation. It comprises nearly 500,000 video clips covering hundreds of scenarios and is rich in human-object interactions. EgoNet is constructed based on Ego4D [9]. We empirically extract a short clip with a duration of 1 s for each narration. With this strategy, a total of 0.503 million video clips rich in human-object interactions are collected. Note that the videos in Ego4D have a frame rate of 30 fps. After a 10-fold uniform downsampling, we obtain EgoNet, which contains about 1.5 million video frames in total, making the number of training samples comparable with ImageNet.
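A sketch of this clip-extraction procedure is given below; whether the 1 s window is centred on the narration timestamp is our assumption, and the function and variable names are illustrative rather than the released pipeline.

```python
def build_egonet_clips(narrations, fps=30, clip_seconds=1.0, downsample=10):
    """Sketch of EgoNet construction (illustrative only).

    narrations : list of (video_id, timestamp_sec) narration anchors from Ego4D.
    Returns, per narration, the frame indices of a 1 s clip around the timestamp,
    uniformly downsampled 10-fold (30 fps -> 3 frames per clip).
    """
    clips = []
    for video_id, t in narrations:
        start = int(max(0.0, t - clip_seconds / 2) * fps)
        frame_ids = list(range(start, start + int(clip_seconds * fps)))
        clips.append((video_id, frame_ids[::downsample]))
    return clips
```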
#### Iii-A2 Model Architecture
The architecture of visual encoder is also an important element in determining the performance of robot manipulation tasks. To explore its effect, we choose
Fig. 2: The study pipeline of visual pre-training for robot manipulation.
three typical models, namely convolution-based ResNet-34 [28], ResNet-50 [28], and ResNet-101 [28], which have been the defacto standard for visual representation extraction. In this way, we could provide insight into which architectures are more beneficial for robot manipulation tasks.
#### Iii-A3 Pre-training Method
The learning objective directly determines the type of representations that the model can learn from a dataset. Contrastive learning and masked image modeling, the two most prevalent pre-training methods in self-supervised learning, are naturally the main exploration goals in this work. Contrastive learning aims to encourage the feature similarity between two different augmented views of the same image but suppress the similarity between different images. Masked image modeling resorts to reconstructing the randomly masked patches of the input image. In this work, we choose MoCo-v3 [8] and MAE (Masked AutoEncoder) [15] for contrastive learning and masked image modeling, respectively.
### _Main Observations_
#### Iii-B1 Pre-training Dataset
**EgoNet is more powerful than ImageNet.** We pre-train the visual encoder (i.e., ResNet-50) on different datasets, i.e., ImageNet and EgoNet, using the contrastive learning method (MoCo-v3), and observe the performance on robot manipulation tasks. From Table I, we can see that the model pre-trained on EgoNet achieves better performance on robot manipulation tasks. Evidently, the robot benefits from the interaction-related knowledge and temporal relationships contained in the videos for manipulation tasks. In addition, the egocentric natural images in EgoNet contain much more global context about the world, which means richer visual features can be learned.
#### Iii-B2 Model Architecture
**ResNet-50 performs better.** From Table II, we can observe that ResNet-50 and ResNet-101 perform better than ResNet-34 on the robot manipulation tasks in both simulation environments pre-trained on EgoNet. In addition, there is no performance improvement as the model increases from ResNet-50 to ResNet-101. Furthermore, recent work suggests that pre-training ViT [29] models with larger pre-trained datasets can achieve better results.
#### Iii-B3 Pre-training Method
**Contrastive learning is preferred.**
As shown in Table III, MoCo-v3 outperforms MAE on both ImageNet and EgoNet datasets, demonstrating the effectiveness of contrastive learning compared to masked image modeling for manipulation. This result also suggests that the visual semantics acquired by contrastive learning are more important for robot manipulation than the structural information learned by masked image modeling.
### _Summary_
Through the aforementioned explorations on various pre-training datasets, model architectures and pre-training methods, three key conclusions could be drawn:
* Visual pre-training with human-object interaction data is of great importance for robot manipulation.
* Convolution-based ResNet-50 is preferred in retaining visual knowledge for robot manipulation.
* The sequential pattern and semantic information learned by contrastive learning are more effective.
## IV Proposed Approach
Based on the above explorations, we propose the Visual Pre-training scheme for Robot Manipulation (Vi-PRoM), which pre-trains ResNet-50 on the EgoNet dataset to extract comprehensive visual representations for robot manipulation. Specifically, we first employ contrastive learning to acquire human-object interaction patterns from the EgoNet dataset in a self-supervised manner. Then two additional learning objectives, i.e., visual semantics prediction and temporal dynamics prediction, are proposed to further enrich the encoder. Figure 3 shows the basic pipeline of the proposed Vi-PRoM. Note that we do not need to manually annotate labels to learn either visual semantics or temporal dynamics.
### _Contrastive Self-supervised Learning_
We hypothesize a good visual representation should have the ability to distinguish different scenes. Therefore, we use contrastive learning as our self-supervised paradigm to let the model learn rich and general visual representations. The contrastive objective function pulls features generated by similar images together and pushes the features generated by different images away. Specifically, we sample a minibatch of images and minimize the InfoNCE loss [30].
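The InfoNCE objective used here can be sketched as follows; the temperature value is our placeholder rather than the setting used in the paper.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.2):
    """InfoNCE sketch for contrastive pre-training.

    z1, z2 : (B, d) features of two augmented views of the same batch of images.
    Matching rows are positives; every other row in the batch is a negative.
    """
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                    # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)
```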
### _Supervised Learning_
With the learned representation from contrastive learning, it is imperative to learn visual semantics and temporal dynamics to generalize well for robot manipulation.
#### Iv-B1 Learning Visual Semantics
We introduce a pseudo-label predicting task to fine-tune the learned backbone, encouraging the model to learn better visual semantic representations. Specifically, we employ a ResNet-101 model supervised on ImageNet to generate pseudo labels for EgoNet. Then, the pseudo label is used to fine-tune our self-supervised learned backbone with the cross-entropy loss:
\[\mathcal{L}_{\text{VS}}=-\mathbb{E}_{D}\sum_{i=1}^{N}\mathcal{T}(x_{i})\log(h_{ 1}(f(x_{i}))), \tag{1}\]
where \(D\) is the EgoNet dataset, \(\mathcal{T}\) is the ResNet-101 network to generate pseudo labels for each sample \(x_{i}\), \(f\) is the backbone, and \(h_{1}\) is a classification head.
#### Iv-B2 Predicting Temporal Dynamics
Robot manipulation tasks require predicting the next actions based on current and historical observations. Thus they are sensitive to temporal dynamics. We design a frame order prediction task to enable the model to learn temporal dynamics for each clip of EgoNet. Given the image set \(\mathcal{I}=\{x_{0},...,x_{k},...,x_{N-1}\}\) sampled sequentially from a video clip, we scramble these images and then predict the original order for the image \(x_{k}\). This task is formulated as a classification problem of \(N\) classes, which is commonly solved by minimizing the cross-entropy loss:
\[\mathcal{L}_{\text{TD}}=-\mathbb{E}_{D}\sum_{k=0}^{N-1}\mathbf{y}_{k}\log(h_{ 2}(f(x_{k}))), \tag{2}\]
where \(h_{2}\) is a classification head. \(\mathbf{y}_{k}\) denotes the order of the image \(x_{k}\) in original image set \(\mathcal{I}\).
#### Iv-B3 Joint Training
We combine the visual semantics and temporal dynamics loss for jointly training:
\[\mathcal{L}_{\text{total}}=\mathcal{L}_{\text{VS}}+\lambda\mathcal{L}_{\text{ TD}}, \tag{3}\]
where \(\lambda\) is the balance coefficient set as 0.33 in practice. In principle, visual semantics and temporal dynamics predicting together guide the learning, enabling the model to learn semantic and temporal visual representations.
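The two supervised objectives and their combination in Equation 3 can be sketched as below; the interfaces of `backbone`, `cls_head`, and `order_head` are assumptions, and shuffling a single clip per call is a simplification of the actual training loop.

```python
import torch
import torch.nn.functional as F

def supervised_losses(backbone, cls_head, order_head, frames, pseudo_labels, lam=0.33):
    """Sketch of L_VS + lambda * L_TD for one clip (Equations 1-3).

    frames        : (N, C, H, W) frames of one EgoNet clip, in original order.
    pseudo_labels : (N,) labels produced by the ImageNet-supervised teacher.
    """
    n = frames.size(0)
    perm = torch.randperm(n)            # scrambled presentation order
    feats = backbone(frames[perm])      # (N, d) features of the shuffled frames

    # L_VS: predict the teacher's pseudo-label for each frame.
    l_vs = F.cross_entropy(cls_head(feats), pseudo_labels[perm])

    # L_TD: predict each shuffled frame's original position (N-way classification).
    l_td = F.cross_entropy(order_head(feats), perm)

    return l_vs + lam * l_td
```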
### _Robot Imitation Learning_
Given the well-trained visual encoder \(f\), the robot utilizes it to encode visual features of pixel observations for policy learning. In this work, we employ the typical behavior cloning (BC) [31] method to imitate expert demonstrations, where the policy network is parameterized as a two-layer perceptron.
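A minimal behavior-cloning sketch with the frozen encoder might look like the following; the hidden width and the MSE imitation loss are our assumptions.

```python
import torch
import torch.nn as nn

class BCPolicy(nn.Module):
    """Two-layer MLP policy head on top of the frozen visual encoder."""
    def __init__(self, feat_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, feats):
        return self.net(feats)

def bc_step(encoder, policy, optimizer, obs, expert_actions):
    """One behavior-cloning update: regress expert actions from frozen features."""
    with torch.no_grad():               # encoder parameters stay frozen
        feats = encoder(obs)
    loss = ((policy(feats) - expert_actions) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```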
## V Experiments
### _Experimental Setup_
To evaluate the pre-trained visual encoder on robot manipulation tasks, we take it as a frozen module for policy learning. We train the policy network for 20,000 steps using a batch size of 32 and an Adam optimizer with a learning rate of 0.001. Unless otherwise specified, the demonstration dataset size used for imitation learning is set as 5. In the PPO experiments, we train for 20 iterations with 10 epochs per iteration. The reward function we use is similar to [32]. The average of the best success rates on all manipulation tasks with three different seeds (100, 125, 150) is reported to measure the performance of the visual encoder.
In the real environment, our robot hardware mainly consists of a differential-drive mobile base equipped with a 2D LiDAR and an IMU, and a 6-DoF arm. A 2-finger parallel gripper is equipped for contact-rich interactions. Between the end-effector and the arm, a force-torque sensor is installed to measure the forces and torques experienced by the robot, which is utilized to stop the robot if any large forces or torques appear. The robot's wrist is equipped with an RGBD camera as its perception unit. An Intel Core i7 CPU is chosen as the computing unit.
### _Main Results_
#### V-B1 Simulation Environments
To demonstrate the effectiveness of our Vi-PRoM, we compare it with the state-of-the-art visual pre-training methods for robot manipulation. For fair comparisons, except for the scratch method, whose visual encoder parameters are randomly initialized, all other models are pre-trained on our EgoNet dataset and evaluated
Fig. 3: The pipeline of our Vi-PRoM. The EgoNet dataset is first constructed to serve as pre-training data. We first pre-train the ResNet-50 with contrastive learning, enabling the model to learn universal visual representations. Then, frame-order and pseudo-label predicting tasks are jointly applied to encourage the model to capture temporal and semantic visual representations. Note that the pseudo-labels are automatically generated by the ResNet-101 model pre-trained on ImageNet without manual labeling. Finally, the pre-trained model is utilized to extract visual representations for robot manipulation tasks.
with the behavior cloning method. Note that the visual encoder for each method is ResNet-50. Experimental results are reported in Table IV. It can be seen that our model achieves the best performance in both simulation environments. In addition, the performance gains of our Vi-PRoM over MoCo-v3, reaching 3.3% and 2.3% in success rate on Franka Kitchen and MetaWorld, respectively, indicate the value of explicitly learning visual semantics and temporal dynamics.
To learn the temporal dynamics and visual semantics, R3M resorts to the time contrastive learning and video-language alignment. Compared with R3M, our Vi-PRoM shows considerable performance gains, especially in the Franka Kitchen environment. Notably, in terms of the capacity to learn visual semantics and temporal dynamics, our pseudo-label predicting and frame order modeling outperform the time contrastive learning and video-language alignment.
To further verify the effectiveness of our Vi-PRoM, we choose the proximal policy optimization (PPO) algorithm [33] as an alternative to behavior cloning. Experimental results are provided in Table IV. Our Vi-PRoM consistently outperforms all competitors on both learning algorithms.
#### V-B2 Real Robot
We deploy our model on a real robot to demonstrate its performance in the real environment. In practice, we test our pre-trained representations on four tasks, i.e., opening the door, closing the door, opening the drawer and closing the drawer. We collect 30 demonstrations for each task. Figure 4 shows two successful cases of our model in the real robot environment. Overall, benefiting from the powerful representational capability of Vi-PRoM, the robot is competent for various manipulation tasks in the real kitchen environment by learning from demonstrations.
### _Ablation Study_
#### V-C1 Pre-training Components
Table V exhibits the experimental results. When visual semantic learning is absent, the success rate decreases by 3.1% and 0.9% on Franka Kitchen and MetaWorld, respectively. Analogously, a drop in success rate of 0.6% and 1.5% on Franka Kitchen and MetaWorld can be observed in the absence of temporal dynamics learning. These two experimental results demonstrate the importance of visual semantics learning and temporal dynamics learning for robot manipulation. Moreover, when both learning objectives are absent, the success rate of Vi-PRoM suffers from considerable performance degradation. Therefore, the effectiveness of the collaboration between visual semantic learning and temporal dynamics learning is proved.
#### V-C2 Model Scalability
We also investigate the scalability of Vi-PRoM. As shown in Figure 5, in both the Franka Kitchen and MetaWorld simulation environments, the success rate of Vi-PRoM improves steadily as the size of the demonstration data increases. After training on the larger expert demonstration dataset, our proposed Vi-PRoM model shows its scalability on robot manipulation tasks.
#### V-C3 Other Models
We also report experimental results of directly taking the popular pre-trained models as visual encoders for robot manipulation, as shown in Table VI. ImageNet Supervised [27] is the ResNet-50 pre-trained for ImageNet classification task. MDETR [34] is the ResNet-101 pre-trained on large-scale image-text pairs. CLIP [35] is the ResNet-50 trained to align the image representation with the paired text. MAE is the ViT-Base trained on ImageNet. MVP is the ViT-large trained on Ego4D. It can be seen that all these models largely lag behind our Vi-PRoM model.
## VI Discussion and Limitation
In this paper, we have explored three crucial components that affect the pre-trained model on robot manipulation tasks. Key conclusions are drawn that robot manipulation prefers human-object interaction dataset, convolution-based ResNet-50 network, as well as temporal and semantic information. We further propose the Vi-PRoM for robot manipulation. Extensive experiments on simulators and the real environment demonstrate its superiority.
Although our pipeline is effective, there are still many issues to be further explored. First, training visual encoders
Fig. 4: The real robot is able to successfully open the drawer and the door with the help of our Vi-PRoM model in a kitchen environment.
Fig. 5: Effects of demonstration size on robot manipulation tasks.
directly on video clips has the potential to learn better temporal dynamics. Then using larger pre-training datasets is also worth exploring in the future. Finally, currently visual encoders are pre-trained on real-world data but evaluated in simulation environments. The significant gap can lead to some unexpected results, and also inspires us to consider establishing an evaluation benchmark from the real environment to facilitate research.
|
2310.19432
|
Explaining the Decisions of Deep Policy Networks for Robotic
Manipulations
|
Deep policy networks enable robots to learn behaviors to solve various
real-world complex tasks in an end-to-end fashion. However, they lack
transparency to provide the reasons of actions. Thus, such a black-box model
often results in low reliability and disruptive actions during the deployment
of the robot in practice. To enhance its transparency, it is important to
explain robot behaviors by considering the extent to which each input feature
contributes to determining a given action. In this paper, we present an
explicit analysis of deep policy models through input attribution methods to
explain how and to what extent each input feature affects the decisions of the
robot policy models. To this end, we present two methods for applying input
attribution methods to robot policy networks: (1) we measure the importance
factor of each joint torque to reflect the influence of the motor torque on the
end-effector movement, and (2) we modify a relevance propagation method to
handle negative inputs and outputs in deep policy networks properly. To the
best of our knowledge, this is the first report to identify the dynamic changes
of input attributions of multi-modal sensor inputs in deep policy networks
online for robotic manipulation.
|
Seongun Kim, Jaesik Choi
|
2023-10-30T10:44:12Z
|
http://arxiv.org/abs/2310.19432v1
|
# Explaining the Decisions of Deep Policy Networks for Robotic Manipulations
###### Abstract
Deep policy networks enable robots to learn behaviors to solve various real-world complex tasks in an end-to-end fashion. However, they lack transparency to provide the reasons of actions. Thus, such a black-box model often results in low reliability and disruptive actions during the deployment of the robot in practice. To enhance its transparency, it is important to explain robot behaviors by considering the extent to which each input feature contributes to determining a given action. In this paper, we present an explicit analysis of deep policy models through input attribution methods to explain how and to what extent each input feature affects the decisions of the robot policy models. To this end, we present two methods for applying input attribution methods to robot policy networks: (1) we measure the importance factor of each joint torque to reflect the influence of the motor torque on the end-effector movement, and (2) we modify a relevance propagation method to handle negative inputs and outputs in deep policy networks properly. To the best of our knowledge, this is the first report to identify the dynamic changes of input attributions of multi-modal sensor inputs in deep policy networks online for robotic manipulation.
## I Introduction
There has recently been substantial progress in representing robot policies using deep neural networks (DNNs). DNNs enable a robot policy to take high-dimensional sensor data as input, which makes the policy sensorimotor. Combined with reinforcement learning methods, deep policy networks have shown that they can outperform humans in tasks such as playing board games [1, 2] or video games [3, 4]. However, due to the high-dimensional action space, learning a close-to-optimal policy that solves complex manipulation tasks with real-world robots is still challenging.
Over the last few years, it has been proven that a deep visuomotor policy can solve complex tasks in real-world robots. These policies have succeeded in assembling a toy airplane [5], placing a coat hanger on a rack [6], hitting a puck into a goal with a stick (hockey) [7], and opening a door [5]. In spite of these successes, the internal mechanisms of deep visuomotor policies have not been clearly analyzed due to the lack of transparency. As the demand for reliable intelligent robot systems increases, it is essential to understand how policy networks work and which role each input feature is responsible for. It has been implicitly shown that convolutional layers are responsible for perception and that fully-connected layers are responsible for control [8]. However, it has not been shown to what extent each feature contributes to the end-effector movement and how each input feature affects the decision making of the robot policy models quantitatively.
Recently, much work has focused on implementing DNN models with explainability. Methods to discover internal nodes that are highly related to human-interpretable semantic concepts were introduced for convolutional neural network (CNN) models [9] and generative adversarial network (GAN) models [10, 11]. Several input attribution methods [12, 13, 14, 15, 16, 17, 18] provide intuitive explanations in classification problems. They back-propagate outputs to input features to quantitatively measure the attribution of each input feature.
However, input attribution methods proposed for classification networks are not directly applicable to robot policy networks for two reasons. First, an end-effector does not move in direct proportion to each joint torque; the number of links between the joint and the end-effector affects the end-effector movement. Second, unlike most image classification problems, negative inputs and outputs exist in robotic manipulation problems. The existence of negative values leads to relevance degradation and breaks some assumptions that should be met when deriving the above input attribution methods. To address these problems, we propose a method that measures the importance factor for each joint torque and modifies the relevance propagation rules for the proper handling of negative inputs and outputs.
In this paper, we present several new empirical analyses of deep policy networks for robotic manipulation tasks. We utilize existing input attribution methods with the proposed modification methods for robot policy networks. With these methods, we look inside the model and measure the extent to which each input attributes to the motor torque and the end-effector movement. To verify the correctness of our experimental results, we apply three different input attribution methods: Deep Taylor Decomposition [12], Relative Attributing Propagation [17] and Guided Backpropagation [19].
We qualitatively analyze deep policy networks by visualizing a heatmap and identifying whether the model actually focuses on the input features as expected and how it affects the performance. We quantitatively analyze the behavior of the policy model and identify how the input features affect the decisions of the model. To the best of our knowledge, this is the first report to capture the dynamic changes of sensors' contributions in deep policy networks for robotic manipulations.
## II Background
### _Input Attribution Methods_
An _input attribution method_ is a method for measuring the contributions of input features to the decisions of DNNs. There are several ways to measure these contributions, which we call the _relevance_. One way is to decompose the DNN into a set of linear functions by approximating it with linear functions [12, 13, 14]. Another line of recent research uses heuristic rules that generate a clear distinction between the target object and the background [17] or addresses two problems: the saturation problem and the thresholding problem [18]. It has been also shown that the gradient of the output with respect to the input can be viewed as the attributions [15, 16, 19, 20]. All methods listed above propagate the relevance in a backward manner from the outputs to the inputs. However, in spite of a solid theoretical foundation of decomposing methods [12], there is no ground truth value for the relevance [21, 22]. Therefore, to validate the reliability of the results of this paper, we utilize three different methods: one that decomposes the DNN, one that uses the heuristic propagation rules, and one that uses the gradients. Detailed explanations of each are given below.
### _Deep Taylor Decomposition_
Deep Taylor Decomposition (DTD) [12] is an input attribution method which utilizes Taylor decomposition layer by layer in a backward manner in DNNs. The complex nonlinear model \(f(x)\) represented by the DNN is a function which maps high dimensional inputs to outputs. It can be approximated with the first order Taylor expansion given a data point \(\tilde{x}\). If \(\tilde{x}\) is some well-chosen root point which makes the function value 0, the approximated nonlinear function \(f(x)\) can be viewed as a linear function centered at zero. If \(\tilde{x}\) lies in a high dimensional space, the approximated function can be represented as a set of linear functions, with each linear function mapping each dimension's input to the output. Therefore, each approximated function value is considered as the contribution of the corresponding input to the output.
However, for complex nonlinear models such as DNNs, it is problematic to approximate them with the first order Taylor expansion since it is difficult to find the root point and higher-order terms are ignored. Therefore, DTD approximates a layer-to-layer mapping with first order Taylor expansion, which enables one to decompose layer-wise mapping and compute the contributions of the layer-wise inputs which come from the outputs of previous layers.
Depending on the choice of the root points of layer-to-layer mappings, multiple propagation rules can be derived. In this paper, we use \(z^{+}\)-rule which is one of the representative propagation rules. The \(z^{+}\)-rule is obtained by choosing the nearest root on the segment \((\{x_{i}\mathbb{1}_{w_{ij}<0}\},\{x_{i}\})\), where \(\{x_{i}\}\) is the given data point and \(w_{ij}\) is the weight. \(z^{+}_{ij}\) is defined as \(x_{i}w^{+}_{ij}\), where \(w^{+}_{ij}\) is defined to be \(\max(w_{ij},0)\).
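For a single fully-connected layer, the \(z^{+}\)-rule can be sketched as follows; the stabilizing epsilon in the denominator is our addition.

```python
import torch

def zplus_backward(x, W, R_out, eps=1e-9):
    """z+-rule sketch for one linear layer (Deep Taylor Decomposition).

    x     : (d_in,) non-negative layer input.
    W     : (d_in, d_out) layer weights.
    R_out : (d_out,) relevance of the layer output.
    """
    z = x.unsqueeze(1) * W.clamp(min=0)          # z+_{ij} = x_i * max(w_ij, 0)
    denom = z.sum(dim=0, keepdim=True) + eps     # sum_i z+_{ij}
    return (z / denom * R_out.unsqueeze(0)).sum(dim=1)  # relevance of each input
```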
### _Relative Attributing Propagation_
Relative Attributing Propagation (RAP) [17] is an input attribution method with rule-based relevance propagation. RAP provides a way of handling negative attributions by separating attributions into two types, relevant and irrelevant, according to the relative influence between the layers. Instead of ignoring the negative propagated relevance, it measures the absolute influence by taking the absolute value of all propagated relevance. It then normalizes these absolute relevance scores and propagates them again to the previous layer. By considering highly relevant and highly irrelevant influences at the same time, it provides a much clearer and more intuitive heatmap than other methods in image classification problems.
### _Guided Backpropagation_
Guided Backpropagation (GBP) [19] is a pure gradient-based attribution method. It produces a clear visualization
Fig. 1: Deep policy network architecture. The forward pass directly outputs the motor torques from the visual input. Feature points which are derived from spatial softmax layer and represent locational information in the image space, are concatenated with the joint configuration for the input of a fully-connected layers. The backward pass redistributes the relevance from each motor torque to the visual input and the configuration input with a layer-wise relevance propagation method. The redistributed relevance is added in proportional to the degree to which each motor torque contributes to the joint or end-effector movement. The heatmap of each input represents the extent to which each input attributes to the motor torques.
result (i.e., a heatmap) by utilizing a guidance signal with its gradient. Computing the gradient of the output with respect to the input produces a heatmap conditioned on the input image. However, simply computing the gradient does not provide clear visualization, as the negative gradient can be backpropagated.
This problem is resolved by utilizing a guidance signal when following a backward pass. The guidance signal is achieved by combining backpropagation and deconvnet [23]. The gradient signal is zeroed out for negative activations by backpropagation when ReLU-like activation functions are applied. Similarly, the gradient is zeroed out for negative gradients by deconvnet. Combining these two, Guided Backpropagation retains only positive gradient signals: \(R_{i}=\sum_{j}\mathbb{1}_{x_{i}>0}\cdot\mathbb{1}_{\frac{\partial R_{j}}{\partial x_{i}}>0}\cdot\frac{\partial R_{j}}{\partial x_{i}}\), where \(R_{i}\) and \(R_{j}\) are the relevance of the \(i\)-th node at layer \(\{i\}\) and the \(j\)-th node at layer \(\{j\}\), respectively, and \(\mathbb{1}_{\{\cdot\}}\) is the indicator function.
## III Explaining Deep Policy Networks with Multiple Sensors in Robotic Manipulation Tasks
We explain how the deep policy networks make the decision in the form of the motor torque by analyzing the contributions of the input sensors. Specifically, the deep policy networks represented by the neural network shown in Fig. 1 is employed, which is trained with the Guided Policy Search (GPS) algorithm [6]. First, we assign an importance factor \(\alpha_{j}\) to a motor torque \(\tau_{j}\) in proportion to the degree to which each motor torque contributes to the end-effector movement. The importance factor \(\alpha_{j}\) measures how far the end-effector moves by applying the joint torque \(\tau_{j}\) to each joint \(j\).
We compute the importance factor \(\alpha_{j}\) of each joint \(j\) by measuring the end-effector movement when applying each joint torque \(\tau_{j}\) separately using the dynamics and forward kinematics, as shown in Fig. 2. In Fig. 2, \(\alpha_{j}\) is defined to be proportional to \(\Delta\mathbf{p}_{t,j}/\tau_{j}\), where \(\Delta\mathbf{p}_{t,j}\) is the end-effector movement at time step \(t\) when only applying joint torque \(\tau_{j}\). If the dynamics and forward kinematics are unavailable, \(\alpha_{j}\) is set to \(1\) under the assumption that the same amount of torque moves the same angle of the joint. The amortized input attribution is the \(\alpha_{j}\) weighted sum of the input attributions computed from the individual outputs of each joint.
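The computation of the importance factors and of the amortized attribution can be sketched as follows. This is an illustrative reconstruction under our own assumptions: the displacements \(\Delta\mathbf{p}_{t,j}\) are assumed to be precomputed from the dynamics and forward kinematics, and the normalization of \(\alpha_{j}\) is our choice.

```python
import numpy as np

def importance_factors(delta_p, tau, fallback=False):
    """alpha_j proportional to ||Delta p_{t,j}|| / |tau_j|: the end-effector
    displacement obtained by applying joint torque tau_j alone.  If the dynamics
    and forward kinematics are unavailable, fall back to alpha_j = 1."""
    if fallback:
        alpha = np.ones(len(tau))
    else:
        alpha = np.linalg.norm(delta_p, axis=1) / (np.abs(tau) + 1e-9)
    return alpha / alpha.sum()                 # normalize the weights

def amortized_attribution(per_joint_relevance, alpha):
    """alpha_j-weighted sum of the input attributions R^(j) computed per joint output."""
    return alpha @ per_joint_relevance

# toy example: 3 joints, 5 input features
delta_p = np.array([[0.02, 0.00, 0.01],
                    [0.10, 0.05, 0.00],
                    [0.01, 0.01, 0.01]])
tau = np.array([0.5, 1.0, 0.2])
R_per_joint = np.abs(np.random.default_rng(1).normal(size=(3, 5)))
print(amortized_attribution(R_per_joint, importance_factors(delta_p, tau)))
```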
To validate the reliability of the results, we utilize three different input attribution methods (DTD, RAP, and GBP). However, DTD and RAP implicitly compute the gradient \(\times\) data point to include a global effect in the explanation, whereas GBP computes only the gradient, measuring a local effect of the input features on the decision of the model. Therefore, we multiply the relevance obtained from GBP by the input data point for a fair comparison of the three methods.
Directly applying attribution methods to robot policy models is not straightforward due to negative values in the inputs and outputs. Existing input attribution methods assume that the inputs and outputs of DNNs are positive. For example, in image recognition tasks, it is possible to make all input pixel values and output (classification) values positive. However, in robotic manipulation tasks, it is difficult to make all inputs and outputs positive because robot configurations and motor torques are zero-centered values.
The coexistence of positive outputs and negative outputs causes the relevance degradation problem. As the ReLU activation function is applied to all layers except for the last layer, if the motor torque is negative, the relevance is also negative as the gradient is negative while the input is positive. When adding all positive relevance values and negative relevance values in the last hidden layer, the relevance degradation problem occurs. Moreover, the root point is not guaranteed to be found when applying the \(z^{+}\)-rule to the negative output.
In this paper, we solve this problem by taking the absolute value of the negative output before redistributing the relevance score. To this end, the signs of the weights connected to the negative output are flipped. This only changes the direction of the relevance and preserves its absolute amount. We can interpret the relevance propagated from the absolute output as the absolute amount of the contribution of the last hidden layer to the output layer. Thus, the relevance at the last layer is propagated according to Equation (1),
\[R_{i}=\sum_{j}\Big{(}\mathbb{1}_{\{\sum_{i^{\prime}}z_{i^{\prime}j}>0\}}\frac{ z_{ij}^{+}}{\sum_{i^{\prime}}z_{i^{\prime}j}^{+}}-\mathbb{1}_{\{\sum_{i^{\prime}}z_{ i^{\prime}j}<0\}}\frac{z_{ij}^{-}}{\sum_{i^{\prime}}z_{i^{\prime}j}^{-}} \Big{)}R_{j}, \tag{1}\]
where \(z_{ij}^{+}\) and \(z_{ij}^{-}\) are defined as \(x_{i}w_{ij}^{+}\) and \(x_{i}w_{ij}^{-}\) respectively. Following the definition from DTD, \(w_{ij}^{+}\) is defined to be \(\max(w_{ij},0)\) and \(w_{ij}^{-}\) is defined to be \(\min(w_{ij},0)\). By adding the second term, the relevance for the negative torque output is measured.
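A minimal NumPy sketch of the output-layer rule in Equation (1) is given below. It assumes non-negative (post-ReLU) hidden activations `x` and the signed torque relevance `R_out`; the variable names and the stabilizer `eps` are ours, not the authors'.

```python
import numpy as np

def relevance_output_layer(x, W, R_out, eps=1e-9):
    """Propagate relevance from the (possibly negative) torque outputs back to the
    last hidden layer, following Eq. (1).  The branch is selected by the sign of
    the pre-activation sum_i' z_{i'j}; negative outputs use the z- term."""
    Wp, Wm = np.maximum(W, 0.0), np.minimum(W, 0.0)
    zp = x[:, None] * Wp                  # z+_ij = x_i w+_ij  (>= 0)
    zm = x[:, None] * Wm                  # z-_ij = x_i w-_ij  (<= 0)
    pos = (x @ W) > 0                     # indicator 1{sum_i' z_{i'j} > 0}
    frac = np.where(pos, zp / (zp.sum(axis=0) + eps),
                         -zm / (zm.sum(axis=0) - eps))
    return frac @ R_out                   # R_i = sum_j frac_ij R_j

# toy usage: 4 hidden units, 2 torque outputs (one typically negative)
rng = np.random.default_rng(0)
x = np.abs(rng.normal(size=4))
W = rng.normal(size=(4, 2))
tau = x @ W
print(relevance_output_layer(x, W, tau))
```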
Fig. 2: An illustration of relevance propagation at the output layer. The motor torque outputs are weighted in proportion to the degree to which each motor torque contributes to the end-effector movement.

The existence of negative values in the inputs, such as a negative joint position or a negative joint velocity, is also problematic when applying the \(z^{+}\)-rule, because the rule is derived under the constraint of positive inputs. If a negative value exists in the inputs, the root point is not guaranteed to be found. To address this problem, the sign of the negative input and the signs of its connected weights are flipped. This guarantees that the root point can be found, while also preserving the function value. Thus, the relevance is propagated to a layer that includes negative inputs according to Equation (2),
\[R_{i}=\sum_{j}\frac{\mathbb{1}_{\{x_{i}>0\}}z_{ij}^{+}+\mathbb{1}_{\{x_{i}<0\}}z_{ij}^{-}}{\sum_{i^{\prime}}\left(\mathbb{1}_{\{x_{i^{\prime}}>0\}}z_{i^{\prime}j}^{+}+\mathbb{1}_{\{x_{i^{\prime}}<0\}}z_{i^{\prime}j}^{-}\right)}R_{j}. \tag{2}\]
The second term propagates the relevance \(R_{j}\) to negative input features by flipping the signs of the negative inputs and their connected weights. Finally, we obtain the relevance of each input feature by summing the relevance values over all joint torques, weighted by the importance factors \(\alpha\).
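Equation (2) can be sketched analogously; the sign-dependent selection of \(z^{+}\) or \(z^{-}\) keeps all contributions non-negative. Again, this is an illustrative sketch with our own variable names, not the authors' implementation.

```python
import numpy as np

def relevance_signed_inputs(x, W, R_out, eps=1e-9):
    """z+-rule extended to layers with signed inputs (Eq. (2)): for x_i > 0 keep
    z+_ij = x_i w+_ij, for x_i < 0 use z-_ij = x_i w-_ij, which is again
    non-negative, so a root point is guaranteed to exist."""
    Wp, Wm = np.maximum(W, 0.0), np.minimum(W, 0.0)
    z = np.where(x[:, None] > 0, x[:, None] * Wp, x[:, None] * Wm)   # non-negative
    return z @ (R_out / (z.sum(axis=0) + eps))

# toy usage: zero-centered configuration input (e.g., joint positions/velocities)
rng = np.random.default_rng(2)
x = rng.normal(size=6)                     # may contain negative entries
W = rng.normal(size=(6, 3))
R_out = np.abs(rng.normal(size=3))
print(relevance_signed_inputs(x, W, R_out))
```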
## IV Experimental Results
### _Experimental Settings_
In our experiments, we analyze the decisions made from deep visuomotor policy models trained on simulated systems as well as real-world systems. We measure relevance of each input feature to the total end-effector movement. MuJoCo arms are used for the simulated systems [24], and the Baxter research robot is used for the real-world systems. In the simulations, three different tasks are assigned to examine how the relevance of each input changes depending on the task. The first of these is a reaching task with a 2-DoF MuJoCo arm. The goal is to reach the target object by observing a raw image, where four different target points are given. For this task, the positions and velocities of both joints and the end-effector are given as a configuration input to the policy network. The second task is a peg insertion task with a 7-DoF arm. Initially, the arm is located at four different positions with the peg held in its gripper. The goal is to insert the peg into a hole when the robot observes a raw image of its arm and the hole. As the configuration input, joint positions and joint velocities are given. The final task is a door opening task. The goal of this task for the robot is to open a door with a hook attached to its end-effector. Similar to the previous task, the simulated MuJoCo agent has a 7-DoF arm. It is also trained to accomplish the task in four different conditions, where the initial location of the agent is different.
In real-world systems, the reaching task is trained with the Baxter research robot. Similar to the reaching task in the simulation setting, the goal is to reach the target, but in this case, the target location to reach is the position of the end-effector of the left arm. The agent is considered to accomplish the task if the block held by the right end-effector reaches the left end-effector. No configuration information about the left arm, such as the joint position or end-effector position, is provided. A raw observed image and joint configurations, which consist of joint positions and joint velocities of the right arm at each time step, are provided as the input of the policy model.
### _Qualitative Analysis of Deep Policy Networks_
We qualitatively analyze the policy models by visualizing a heatmap using DTD. For the multiple different experimental settings, we aim to answer the following questions: Does the inside of the deep policy network work as expected? Does it really refer to the image feature that is related to the task? Does observing the relevant image feature affect the performance and/or the task completion?
The visualization results from DTD are presented in Fig. 3. Feature points extracted from the vision layer are plotted on the observed raw image in the image on the left in Fig. 3. The heatmaps shown on the right side of each robot image present the relevance score of the observed image per pixel and the relevance score of the joint configuration.
Fig. 3: Visualization results of the qualitative analysis. The visual input with feature points is shown on the left of each figure with its corresponding heatmap, and the configuration input is on the right with its corresponding heatmap. Brighter colors represent higher relevance scores, meaning that the corresponding input feature contributes more to the decisions of the robot policy.

In the case of the reaching task shown in Fig. 3(a), some feature points extracted from the vision layer [6] are located at unexpected positions. For example, the feature points located in the top middle and at the bottom right appear irrelevant to accomplishing the task from a human's perspective, although most of the feature points are on the target object. In fact, the model is affected by these two task-irrelevant points, as shown in the heatmap, where the relevance score is high at these points. In the case of the peg insertion task shown in Fig. 3(b), more unexpected feature points exist than in the previous task. Although the relevance at the feature points on the robot arm is higher than at almost all other feature points, the relevance at the feature point located on top of the desk is also high, which was unexpected. A heatmap of the door opening task is presented in Fig. 3(c). In this task, there are even more task-irrelevant feature points. Even worse, the relevance at these points is higher than at the others. However, surprisingly, the agent succeeds in accomplishing all tasks. In the case of the real-world robot system shown in Fig. 3(d), the policy network essentially refers to the location of the block and the robot arm. However, it is also strongly affected by features at the top right and at the bottom of the image; these appear irrelevant to how a human would solve the task.
The experimental results from the qualitative analysis show that the robot policy models are affected more by task-irrelevant image features as the complexity of the task increases. However, this does not always result in poor task completion. This implies that referring to task-relevant image features is not a necessary condition for solving the task. We expand the interpretation from the results of qualitative analysis further in the following sections with the results of a quantitative analysis.
### _Quantitative Analysis of Deep Policy Networks_
This section provides a quantitative analysis of which input features attribute more to the manipulation of joints in robotic arms. For multiple different experimental settings, we aim to answer the following questions: To what extent does the deep policy model rely on the two different inputs, the visual input and the configuration input? How does the relevance ratio change during the execution of the policy? What role is each input feature responsible for?
#### Iv-C1 Static Analysis of Robotic Manipulations
We analyze which input feature contributes more to the motor torque in order to understand which features we have to consider when designing the model. To verify the correctness of the relevance, we apply three methods: DTD, RAP, and GBP. Fig. 4 shows the average relevance ratio of each input feature along the trajectory of the policy. The relevance ratios from RAP and GBP in the MuJoCo reaching task, the peg insertion task, and the Baxter reaching task are not presented due to space constraints.
In Fig. 4, the results show a consistent tendency of the average relevance ratio of each input feature for each task across all methods. The slight differences in the results come from the different propagation rules of the methods. For example, the DTD \(z^{+}\)-rule masks negative weights \(w_{ij}^{-}\) to zero and propagates the relevance, while GBP propagates the relevance and zeroes out negative gradients.
In the MuJoCo reaching task, the position information is considered to be the most important feature, while the image information is the next. Generally, three features, the image, joint position, and end-effector position, contribute to the decisions of the motor torque by a rate exceeding 90\(\%\). In the case of the MuJoCo peg insertion task, the relevance ratios from each input attribution method differ slightly from each other. Nevertheless, both the image and position input features are considered to be important for solving the task. Similarly, in the case of the door opening task and the Baxter reaching task, the joint position is said to be the most important feature by all methods. The second most important feature is the image feature. The policy model refers to the velocity feature the least.
In general, the position information, including the joint position and end-effector position, is considered to be the most valuable input, while the velocity information is considered to be the least valuable input in all tasks. In summary, the ratios of the relevance of the image input and the relevance of the configuration input are approximately 20:80, 60:40, 30:70, and 35:65 in the MuJoCo reaching task, the peg insertion task, the door opening task, and the Baxter reaching task, respectively.
#### Iv-C2 Dynamic Analysis of Robotic Manipulations
This section analyzes how the relevance ratio changes as the policy executes. Fig. 5 represents the relevance ratio change during the execution of the policy. The trends of the relevance ratio change results from the three different methods show consistency in each task. For example, when the relevance of the configuration input increases at time step 1 in the case of the peg insertion task from DTD, the relevance of the configuration input from RAP and GBP at that time step also increases.
Fig. 4: Bar chart of the relevance ratio of the MuJoCo door opening task (top), and bar chart of the relevance ratio of the Baxter reaching task, the MuJoCo peg insertion task, and the reaching task from DTD (bottom). Each bar represents the mean of the relevance ratio of each input feature, with the standard deviation, along the trajectory of the policy.

Surprisingly, the tendencies of the relevance ratio changes are similar across all tasks, despite the fact that an individual policy model is trained for each task. The relevance of the image in the initial step is significantly high, and it declines considerably. On the other hand, position information in the initial step is not considered an important feature. However, the relevance of the joint position (and the end-effector position, if it exists) increases steeply. Similar to the image input, the relevance of the joint velocity (and end-effector velocity, if it exists) falls considerably as the time step increases.
Such behavior is necessary to determine where the target is. As we describe in Section IV-A, in all tasks, the agent is trained to accomplish the given task under several different conditions. Different conditions could be a different initial position of the agent or a different position of the target object. Note that both have the same effect, as the base coordinate is defined on the torso of the robot. In this regard, the position input does not provide any clue as to where the target is at the initial time step. Therefore, at the beginning, the visuomotor policy model refers to the image information to determine where the target is.
In particular, for the GPS algorithm that we used for training, the policy model is trained to follow the mean of a Gaussian trajectory, which is determined by solving the trajectory optimization problem [25]. In this procedure, a Gaussian trajectory is generated only from the joint configuration. Therefore, the variance in early time steps is high, given that the robot arm has a similar joint configuration in these time steps for each different condition. However, the variance decreases as the arm approaches the target. For this reason, when the variance of the trajectory is high in the initial step, the trajectory is selected by considering the image input. However, as the variance decreases, the position information becomes more valuable, as the joint position alone is sufficient to determine which trajectory the current policy follows. Therefore, the model considers the position information more as the policy runs.
#### Iv-B3 Dynamic Analysis of Robot Behavior when the Target Changes
This section analyzes how the inside of the model reacts when a perturbation occurs. To this end, the dynamic change of the relevance ratio is observed while the target is changed in the middle of the policy execution. The changed target position is randomly located in between the trained target positions. The relevance ratio results in this section are computed from the DTD \(z^{+}\)-rule, as this rule has a solid theoretical background.
In Fig. 6, we show the relevance ratio change results when the target is changed. The red vertical dotted line represents the time step when the target is changed. In the previous section, we find that the relevance ratio of the position feature increases constantly, whereas the relevance ratio of the velocity feature decreases. However, when the target is changed in the middle of the policy execution, the relevance of the position feature suddenly drops. It then bounces back after several time steps from the target change. On the other hand, the policy model refers more to the vision information and the velocity information immediately after the target change occurs. After several time steps from the target change, the relevance ratio of the velocity decreases again.
These results imply that position information plays an important role in local manipulation because the relevance ratio of the position feature increases as the robot arm moves closer to the target object. In contrast, the relevance ratio of the velocity feature peaks in the early stage of policy execution when the robot arm is far from the target object, but then drops as it becomes closer to the target object. These
Fig. 5: Relevance ratio change of the MuJoCo door opening task (top), and the Baxter reaching task, the MuJoCo peg insertion task, and the reaching task with DTD (\(z^{+}\)-rule) (bottom). Red, blue, and purple solid lines represent the ratio of the total relevance, image relevance, and configuration relevance, respectively. Black, yellow, green, and pink dotted lines represent the joint position, joint velocity, end-effector position, and end-effector velocity, respectively.
Fig. 6: Relevance ratio change when the target is changed. Red, blue, and purple solid lines represent the ratio of the total relevance, image relevance, and configuration relevance, respectively. Black, yellow, green, and pink dotted lines represent the joint position, joint velocity, end-effector position, and end-effector velocity, respectively. The time step when the target is changed is represented by the vertical dotted line.
results imply that velocity information is in charge of global manipulation. Meanwhile, visual information provides a clue in the early stage about which trajectory it should follow.
## V Conclusion
In this paper, we explain the decisions of deep visuomotor policy models for robotic manipulation. In order to analyze the reasons behind these decisions, three different input attribution methods, Deep Taylor Decomposition, Relative Attributing Propagation, and Guided Backpropagation, are utilized. To handle negative inputs and outputs properly, we propose a modified relevance propagation method. In addition, we extend input attribution methods, which were originally proposed for image classifiers, to explain robot policy networks by computing the contribution of each torque to the end-effector movement. With these methods, we explicitly measure the extent to which each input contributes to the decisions of the policy models and visualize a heatmap. From the results of a qualitative analysis, we identify that referring to task-relevant image features is not a necessary condition for task completion. In addition, the dynamic relevance ratio change results imply that position information is responsible for local manipulation, while velocity information is responsible for global manipulation. Meanwhile, the policy model chooses the trajectory to follow by referring to visual information in the initial stage of policy execution. To the best of our knowledge, this is the first report to identify the dynamic changes of the input attributions of sensor inputs in deep policy networks for robotic manipulation.
## Acknowledgment
This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Ministry of Science and ICT (MSIT) (No. 2017-0-01779, XAI and No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)) and Industrial Strategic Technology Development Program funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea) (No. 10077533, Development of robotic manipulation algorithm for grasping/assembling with the machine learning using visual and tactile sensing information).
|
2303.16639
|
On local likelihood asymptotics for Gaussian mixed-effects model with
system noise
|
The Gaussian mixed-effects model driven by a stationary integrated
Ornstein-Uhlenbeck process has been used for analyzing longitudinal data having
an explicit and simple serial-correlation structure in each individual.
However, the theoretical aspect of its asymptotic inference is yet to be
elucidated. We prove the local asymptotics for the associated log-likelihood
function, which in particular guarantees the asymptotic optimality of the
suitably chosen maximum-likelihood estimator. We illustrate the obtained
asymptotic normality result through some simulations for both balanced and
unbalanced datasets.
|
Takumi Imamura, Hiroki Masuda, Hayato Tajima
|
2023-03-29T12:54:43Z
|
http://arxiv.org/abs/2303.16639v2
|
# On local likelihood asymptotics for Gaussian mixed-effects model with system noise
###### Abstract.
The Gaussian mixed-effects model driven by a stationary integrated Ornstein-Uhlenbeck process has been used for analyzing longitudinal data having an explicit and simple serial-correlation structure in each individual. However, the theoretical aspect of its asymptotic inference is yet to be elucidated. We prove the local asymptotics for the associated log-likelihood function, which in particular guarantees the asymptotic optimality of the suitably chosen maximum-likelihood estimator. We illustrate the obtained asymptotic normality result through some simulations for both balanced and unbalanced datasets.
## 1. Introduction
### Setup and objective
We consider the local likelihood asymptotics for the Gaussian linear mixed-effects integrated Ornstein-Uhlenbeck (IOU) model originally introduced in [12], in which the dynamics of the \(i\)th individual is described by
\[Y_{i}(t)=X_{i}(t)\beta+Z_{i}(t)b_{i}+W_{i}(t)+\epsilon_{i}(t) \tag{1.1}\]
for \(i=1,\cdots,N\) and a given fixed time horizon \(t\in[0,T]\), where the ingredients are given as follows.
* We observe \(\{(t_{ij},X_{i}(t_{ij}),Y_{i}(t_{ij}),Z_{i}(t_{ij}))\}_{j=1}^{n_{i}}\) for each \(i=1,\cdots,N\), with \[\sup_{N}\max_{i\leq N}n_{i}<\infty,\] where \(0\equiv t_{i0}<t_{i1}<\cdots<t_{in_{i}}\leq T\) for each \(i\), and where \(X_{i}(t)\in\mathbb{R}^{p_{\beta}}\) and \(Z_{i}(t)\in\mathbb{R}^{p_{b}}\) are non-random explanatory variables (processes) satisfying that \[\max_{i\leq N}\left(|X_{i}|\vee|Z_{i}|\right)=O(1).\] Here and in what follows, the order and asymptotic symbols are used for \(N\to\infty\) unless otherwise mentioned.
* \(\beta\in\mathbb{R}^{p_{\beta}}\) is the unknown fixed-effect parameter, and the random effects \(b_{1},b_{2},\cdots\in\mathbb{R}^{p_{b}}\) are i.i.d. \(N_{p_{b}}(0,G(\gamma))\) for some function \(G(\gamma):\,\mathbb{R}^{p_{\gamma}}\to\mathbb{R}^{p_{b}}\otimes\mathbb{R}^{p_{b}}\).
* \(W_{i}(t)\) is a system noise described by the i.i.d. (centered) integrated Gaussian Ornstein-Uhlenbeck process \[W_{i}(t)=\int_{0}^{t}\zeta_{i}(s)ds=\frac{\zeta_{i}(0)}{\alpha}(1-e^{-\alpha t })+\frac{\tau}{\alpha}\int_{0}^{t}(1-e^{-\alpha(t-v)})dw_{i}(v)\] (1.2) for \(\zeta_{1},\zeta_{2},\dots\) being i.i.d. stationary Gaussian OU process of the form \[\zeta_{i}(t)=\int_{-\infty}^{t}e^{-\alpha(t-s)}\tau dw_{i}(s)\sim N\left(0,\, \frac{\tau^{2}}{2\alpha}\right),\] (1.3) which equals a solution process to the stochastic differential equation \[d\zeta_{i}(t)=-\alpha\zeta_{i}(t)dt+\tau dw_{i}(t),\] with \(w_{1},w_{2},\dots\) denoting i.i.d. standard Wiener processes and both \(\alpha>0\) and \(\tau>0\) being unknown parameters.
* \(\epsilon_{i}(t)\) denotes a measurement error at time \(t\) for the individual \(i\). We assume that \(\epsilon_{1},\epsilon_{2},\dots\) are independent centered Gaussian white noises such that for each \(i\), \(\text{cov}_{\theta}[\epsilon_{i}(t_{ij}),\epsilon_{i}(t_{ik})]=\sigma^{2} \delta_{jk}\) (\(\delta_{jk}\) denotes the Kronecker delta).
* \(\{b_{i}\}\), \(\{W_{i}\}\), and \(\{\epsilon_{i}\}\) are stochastically independent.
The model allows us to handle irregularly spaced observations, different numbers of observations across individuals, and missing values for some of the variables; the use of a Gaussian process for such simple and transparent correlation-structure modeling goes back to [2]. The time-integrated character is suitable for many applications where the sample paths of the system noise, hence those of the objective time series, are smoother than those of non-differentiable diffusion-type models; as we will briefly mention in Remark 2.5, our asymptotic framework could handle several other processes for \(W_{i}\). The covariance structure of the system noise is expressed only through the two parameters \(\alpha\) and \(\tau\) in a unified way. From (2.2) below, we see that, for \(j\neq k\) and with \(c:=\tau/\alpha>0\) fixed, the process \(W_{i}\) approximates:
* A scaled Wiener process where \(H_{i;jk}(\alpha,\tau)\approx c^{2}\min(t_{ij},t_{ik})\) for \(\alpha\to\infty\);
* A Gaussian white noise process where \(H_{i;jk}(\alpha,\tau)\approx 0\) for \(\alpha\to 0\).
In the context of the random effect models, the parameter \(\alpha>0\) is referred to as "the degree of derivative tracking" (a degree of maintaining the same trajectory over time): the process \(W_{i}\) becomes degenerate (to the process identically zero) for \(\alpha\to 0\) with fixed \(c>0\) (then \(\tau\to 0\)), as can be seen from the expression (1.3).
Our objective is to study the local asymptotic property of the associated likelihood function for estimating the finite-dimensional parameter
\[\theta:=\big{(}\beta,\gamma,\alpha,\tau,\sigma^{2}\big{)}\in\mathbb{R}^{p_{ \beta}}\times\mathbb{R}^{p_{\gamma}}\times(0,\infty)^{3}\subset\mathbb{R}^{p},\]
where \(p:=p_{\beta}+p_{\gamma}+3\geq 5\) denotes the dimension of \(\theta\). The statement is given in Section 2. We will present some numerical experiments in Section 3.
### Some literature review in brief
After [12] introduced the model (1.1), several works studied its application to specific areas. Then, [10] extended the model to the case of bivariate responses and applied it to analyzing AIDS data, in particular predictions of future observations and the cause-and-effect relationships therein. Later on, [13] introduced a semiparametric extension by adding a nonparametric mean function of time. More recently, [5] developed an optimization algorithm and [4] developed the xtmixediou command using Stata's matrix programming language. As a further application example, we refer to [3], where the authors studied Bayesian regularization of a related model and applied it to analyzing CD4 yeast cell-cycle genomic data: Bayesian ridge, lasso, and elastic net approaches were considered together with computational aspects of the posterior.
Nevertheless, a related theoretical study from an asymptotic viewpoint seems to be missing from the literature. The primary scope of this paper is to derive the local asymptotics for the maximum-likelihood estimator (MLE), providing us with the fundamental notion of an asymptotically efficient estimator. See also Remarks 2.3, 2.4, and 2.5 for some related details and issues.
## 2. Local asymptotics
For notational convenience, let us write
\[\xi_{ij}=\xi_{i}(t_{ij})\]
for \(\xi=X\), \(Y\), \(Z\), \(W\), and \(\epsilon\). Further, let \(X_{i}=(X_{ij})\in\mathbb{R}^{n_{i}}\otimes\mathbb{R}^{p_{\beta}}\), \(Y_{i}=(Y_{ij})\in\mathbb{R}^{n_{i}}\), \(Z_{i}=(Z_{ij})\in\mathbb{R}^{n_{i}}\otimes\mathbb{R}^{p_{b}}\), \(W_{i}=(W_{ij})\in\mathbb{R}^{n_{i}}\), and \(\epsilon_{i}=(\epsilon_{ij})\in\mathbb{R}^{n_{i}}\). With these shorthands and (1.1), we have the expression
\[Y_{i}=X_{i}\beta+Z_{i}b_{i}+W_{i}+\epsilon_{i}\]
for the sample from the \(i\)th individual. We denote by
\[v=(\gamma,\alpha,\tau,\sigma^{2})\]
the parameters contained in the covariance matrix of \(Y_{i}\); \(\gamma_{k}\) and \(v_{l}\) denote the \(k\)th and \(l\)th components of \(\gamma\) and \(v\), respectively. Let \(P_{\theta}\) denote the distribution of \((\{b_{i}\},\{W_{i}\},\{\epsilon_{i}\})\), and write \(E_{\theta}\) and \(\mathrm{cov}_{\theta}\) for the associated expectation and covariance, respectively.
The process \(W_{i}\) is centered in the sense that \(E_{\theta}[W_{i}(t)]=0\) for each \(t\). By the expression (1.2), we obtain the following specific covariance structure \(H_{i}(\alpha,\tau)=:(H_{i;jk}(\alpha,\tau))_{j,k}\):
\[H_{i;jk}(\alpha,\tau) :=\mathrm{cov}_{\theta}\,[W_{ij},W_{ik}]\] \[=\frac{1}{\alpha^{2}}(1-e^{-\alpha t_{ij}})(1-e^{-\alpha t_{ik}}) E_{\theta}[\zeta_{i}(0)^{2}]\] \[\qquad+\frac{\tau^{2}}{\alpha^{2}}\int_{0}^{t_{ij}\wedge t_{ik}}( 1-e^{-\alpha(t_{ij}-s)})(1-e^{-\alpha(t_{ik}-s)})ds \tag{2.1}\]
\[=\frac{\tau^{2}}{2\alpha^{3}}\left(2\alpha\min(t_{ij},t_{ik})+e^{-\alpha t_{ij}}+e^{ -\alpha t_{ik}}-1-e^{-\alpha|t_{ij}-t_{ik}|}\right). \tag{2.2}\]
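For concreteness, the matrix \(H_{i}(\alpha,\tau)\) in (2.2) can be assembled as in the following sketch (in Python/NumPy rather than the R software used later in the paper); the function name is our own.

```python
import numpy as np

def iou_cov(t, alpha, tau):
    """Covariance matrix H_i(alpha, tau) of the integrated OU system noise at the
    observation times t = (t_1, ..., t_{n_i}), following (2.2)."""
    t = np.asarray(t, dtype=float)
    tj, tk = np.meshgrid(t, t, indexing="ij")
    return tau**2 / (2 * alpha**3) * (
        2 * alpha * np.minimum(tj, tk)
        + np.exp(-alpha * tj) + np.exp(-alpha * tk)
        - 1.0 - np.exp(-alpha * np.abs(tj - tk))
    )

# example: equally spaced times, as in the balanced simulation setting of Section 3
H = iou_cov(np.arange(1, 6), alpha=1.3, tau=0.4)
print(np.round(H, 4))
```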
We have (under \(P_{\theta}\))
\[Y_{i}\stackrel{{ P_{\theta}}}{{\sim}}N_{n_{i}}\left(X_{i}\beta,\, Q_{i}(v)\right)\]
for \(i=1,\ldots,N\), where
\[Q_{i}(v):=Z_{i}G(\gamma)Z_{i}^{\top}+H_{i}(\alpha,\tau)+\sigma^{2}I_{n_{i}}, \tag{2.3}\]
with \(I_{p}\) denoting the \(p\)-dimensional identity matrix. Here and in what follows, we set the parameter space to be
\[\Theta=\Theta_{\beta}\times\Theta_{v}=\Theta_{\beta}\times\Theta_{\gamma} \times\Theta_{(\alpha,\tau,\sigma^{2})}\subset\mathbb{R}^{p_{\beta}}\times \mathbb{R}^{p_{\gamma}}\times(0,\infty)^{3},\]
a domain in \(\mathbb{R}^{p}\), for which the covariances \(Q_{i}(v)\) are uniformly non-degenerate:
\[\forall v\in\Theta_{v},\quad\inf_{i\geq 1}\lambda_{\min}\left(Q_{i}(v)\right)>0. \tag{2.4}\]
Then, the log-likelihood function is well-defined for \(\theta\in\Theta\) and is given by
\[\ell_{N}(\theta) =\sum_{i=1}^{N}\log\phi_{n_{i}}\left(Y_{i};\,X_{i}\beta,\,Q_{i}( v)\right)\] \[=-\frac{\log(2\pi)}{2}\sum_{i=1}^{N}n_{i}-\frac{1}{2}\sum_{i=1}^{ N}\left\{\log|Q_{i}\left(v\right)|+\left(Y_{i}-X_{i}\beta\right)^{\top}Q_{i} \left(v\right)^{-1}\left(Y_{i}-X_{i}\beta\right)\right\}. \tag{2.5}\]
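A direct transcription of (2.5) reads as follows. This is a hedged sketch: the container `data`, the callable `build_G`, and the packing of \(\theta\) are our own conventions, and `iou_cov` refers to the sketch given after (2.2).

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_likelihood(theta, data, build_G):
    """Gaussian log-likelihood (2.5).  `data` is a list of per-subject tuples
    (t_i, X_i, Z_i, Y_i); `build_G(gamma)` returns the random-effect covariance
    G(gamma).  The parametrization of theta is illustrative."""
    beta, gamma, alpha, tau, sigma2 = theta
    ll = 0.0
    for t_i, X_i, Z_i, Y_i in data:
        Q_i = (Z_i @ build_G(gamma) @ Z_i.T          # Z_i G(gamma) Z_i^T
               + iou_cov(t_i, alpha, tau)            # H_i(alpha, tau), see (2.2)
               + sigma2 * np.eye(len(t_i)))          # sigma^2 I_{n_i}
        ll += multivariate_normal.logpdf(Y_i, mean=X_i @ beta, cov=Q_i)
    return ll
```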
We write
\[\Delta_{N}(\theta)=\frac{1}{\sqrt{N}}\partial_{\theta}\ell_{N}(\theta)\]
for the normalized score function, where \(\partial_{\theta}\) denotes the partial-differentiation operator with respect to \(\theta\). For a multilinear form \(M=\{M_{i_{1}i_{2}\ldots i_{m}}\}\), we will write \(M[u_{i_{1}},\ldots,u_{i_{m}}]=\sum_{i_{1},\ldots,i_{m}}M_{i_{1}i_{2}\ldots i_ {m}}u_{i_{1}}\ldots u_{i_{m}}\).
**Theorem 2.1**.: _Fix any \(\theta_{0}=(\beta_{0},v_{0})=(\beta_{0},\gamma_{0},\alpha_{0},\tau_{0},\sigma _{0}^{2})\in\Theta\) as a true value of \(\theta\). Suppose the following conditions:_
* _The function_ \(G(\gamma):\,\Theta_{\gamma}\to\mathbb{R}^{p_{b}}\otimes\mathbb{R}^{p_{b}}\) _is of class_ \(\mathcal{C}^{3}(\overline{\Theta_{\gamma}})\)_._
* _There exist symmetric-matrix-valued_ \(\mathcal{C}^{1}(\overline{\Theta_{v}})\)_-functions_ \(A(v)\) _and_ \(U(v)=(U_{jk}(v))_{j,k}\) _satisfying that for each_ \(v\in\Theta_{v}\)_,_ \[\frac{1}{N}\sum_{i=1}^{N}X_{i}^{\top}Q_{i}(v)^{-1}X_{i}\to A(v),\] (2.6) \[\frac{1}{N}\sum_{i=1}^{N}\frac{1}{2}\operatorname{trace}\left\{Q_{i}(v)^{- 1}\left(\partial_{v_{k}}Q_{i}(v)\right)Q_{i}(v)^{-1}\left(\partial_{v_{j}}Q_{ i}(v)\right)\right\}\to U_{jk}(v),\] (2.7) _and that both_ \(A(v)\) _and_ \(U(v)\) _are positive-definite uniformly in_ \(v\) _for each compact_ \(K_{v}\subset\Theta_{v}\)_._
_Then, the following statements hold under \(P_{\theta_{0}}\)._
1. _For any bounded sequence_ \((u_{N})_{N\geq 1}\subset\mathbb{R}^{p}\)_,_ \[\ell_{N}\left(\theta_{0}+\frac{1}{\sqrt{N}}u_{N}\right)-\ell_{N}\left(\theta_{0} \right)=\Delta_{N}(\theta_{0})[u_{N}]-\frac{1}{2}\mathcal{I}(v_{0})[u_{N}^{ \otimes 2}]+o_{p}(1),\] _where_ \(\Delta_{N}(\theta_{0})\xrightarrow{\mathcal{L}}N_{p}(0,\mathcal{I}(v_{0}))\) _and_ \[\mathcal{I}(v):=\operatorname{diag}\left(A(v),\ U(v)\right).\]
2. _There exists a local maximum point_ \(\hat{\theta}_{N}\) _of_ \(\ell_{N}(\theta)\) _with_ \(P_{\theta_{0}}\)_-probability tending to_ \(1\)_, for which_ \[\sqrt{N}(\hat{\theta}_{N}-\theta_{0})=\mathcal{I}(v_{0})^{-1}\Delta_{N}(\theta_{0})+o_{p}(1)\xrightarrow{\mathcal{L}}N_{p}\left(0,\mathcal{I}(v_{0})^{-1}\right).\]
Before the proof, we make a couple of remarks.
**Remark 2.2**.: Since the terminal time \(T>0\) is fixed throughout, it is not essential that \(\alpha>0\) for obtaining the local asymptotic results in Theorem 2.1; even when \(\alpha\leq 0\), the covariance matrix \(H_{i}(\alpha,\tau)\) is well-defined by (2.1), and then (2.4) remains valid. However, it should be noted that the mean-reverting feature of the process \(\zeta_{i}(t)\) only holds for \(\alpha>0\) and the expression (2.2) is based on (1.3).
**Remark 2.3** (Asymptotic efficiency).: There are several important implications and consequences of Theorem 2.1 worth mentioning. Theorem 2.1(1) shows the local asymptotic normality (LAN) of the family \(\{P_{\theta}\}_{\theta\in\Theta}\), based on which the classical asymptotic theory enables us to define the asymptotic efficiency of regular estimators: any estimators \(\hat{\theta}_{N}^{*}=(\hat{\beta}_{N}^{*},\hat{v}_{N}^{*})\) satisfying that
\[\sqrt{N}(\hat{\theta}_{N}^{*}-\theta_{0})=\mathcal{I}(v_{0})^{-1}\Delta_{N}( \theta_{0})+o_{p}(1) \tag{2.8}\]
are regular, and in particular, we have the so-called Hajek-Le Cam asymptotic lower bound for a class of loss functions including the quadratic one:
\[\liminf_{n\to\infty}E_{\theta_{0}}\left[\left|\sqrt{N}(\hat{\theta}_{N}^{*}- \theta_{0})\right|^{2}\right]\geq\int\left|\mathcal{I}(v_{0})^{-1/2}z\right|^ {2}\phi(z)dz=\operatorname{trace}\left(\mathcal{I}(v_{0})^{-1}\right),\]
where \(\phi(z)\) denotes the density of \(N_{p}(0,I_{p})\). In the present context, we necessarily have the asymptotic normality \(\sqrt{N}(\hat{\theta}_{N}^{*}-\theta_{0})\xrightarrow{\mathcal{L}}N_{p}\left( 0,\,\mathcal{I}(v_{0})^{-1}\right)\), followed by the Studentized version
\[\operatorname{diag}\left(\hat{A}_{N},\;\hat{U}_{N}\right)^{1/2}\sqrt{N}(\hat{ \theta}_{N}^{*}-\theta_{0})\xrightarrow{\mathcal{L}}N_{p}(0,I_{p}),\]
with
\[\hat{A}_{N} :=\frac{1}{N}\sum_{i=1}^{N}X_{i}^{\top}Q_{i}(\hat{v}_{N}^{*})^{-1} X_{i},\] \[\hat{U}_{N} :=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{2}\operatorname{trace}\left\{ Q_{i}(\hat{v}_{N}^{*})^{-1}\left(\partial_{v}Q_{i}(\hat{v}_{N}^{*})\right)Q_{i}( \hat{v}_{N}^{*})^{-1}\left(\partial_{v}Q_{i}(\hat{v}_{N}^{*})\right)\right\},\]
where the elements of \(\hat{U}_{N}\) are specified further by (recall (2.3) and \(v=(\gamma,\alpha,\tau,\sigma^{2})\)): \(\partial_{\gamma}Q_{i}(v)=Z_{i}\partial_{\gamma}G(\gamma)Z_{i}^{\top}\), \(\partial_{(\alpha,\tau)}Q_{i}(v)=\partial_{(\alpha,\tau)}H_{i}(\alpha,\tau)\), and \(\partial_{\sigma^{2}}Q_{i}(v)=I_{n_{i}}\). From a theoretical point of view, we may then call any such regular estimator \(\hat{\theta}_{N}^{*}\) asymptotically efficient, as soon as the sequence \(\{|\sqrt{N}(\hat{\theta}_{N}-\theta_{0})|^{2}\}_{N}\) is uniformly integrable; the terminology "efficient" stems from the minimality of the asymptotic covariance, and also from the asymptotically maximal concentration. We refer to [1] and [6] for general details.
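A plug-in computation of \(\hat{U}_{N}\) could look like the sketch below. Here `build_Q(v, subject)` is a hypothetical callable returning \(Q_{i}(v)\), and the derivatives \(\partial_{v_{l}}Q_{i}(v)\) are approximated by central finite differences for brevity; the analytic expressions above can be substituted directly. The matrix \(\hat{A}_{N}\) is simply the average of \(X_{i}^{\top}Q_{i}(\hat{v}_{N}^{*})^{-1}X_{i}\) and is omitted.

```python
import numpy as np

def plug_in_U(v_hat, subjects, build_Q, h=1e-5):
    """Plug-in estimate of U(v) from Remark 2.3; `v_hat` is the array of
    covariance parameters and `subjects` is an iterable of per-subject data."""
    p_v = len(v_hat)
    U = np.zeros((p_v, p_v))
    for s in subjects:
        Q = build_Q(v_hat, s)
        Qinv = np.linalg.inv(Q)
        dQ = []
        for l in range(p_v):
            e = np.zeros(p_v); e[l] = h
            dQ.append((build_Q(v_hat + e, s) - build_Q(v_hat - e, s)) / (2 * h))
        for j in range(p_v):
            for k in range(p_v):
                U[j, k] += 0.5 * np.trace(Qinv @ dQ[j] @ Qinv @ dQ[k])
    return U / len(subjects)
```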
**Remark 2.4** (Theoretical refinements).: A good root \(\hat{\theta}_{N}\) of the likelihood equation \(\partial_{\theta}\ell_{N}(\theta)=0\) in Theorem 2.1 is not unique and also may not be the best choice from a computational point of view. Our interest here is in the first-order asymptotic inference, and we did not consider the conventional REML (restricted maximum-likelihood) estimator. Likewise, a popular way of constructing such an estimator \(\hat{\theta}_{N}^{*}\) is the stepwise one: usually, one first uses the (un-weighted) least squares for \(\beta\), and then proceeds with a variance-component estimation; note that we then obtain a globally consistent estimator, as was studied in [11] for joint estimation of all the components of \(\theta\) under a series of regularity conditions.
We could derive the asymptotic distribution of the above-mentioned stepwise estimator even when the sources of randomness in the model are non-Gaussian, in particular even when the driving process in \(W_{i}(t)\) is a non-Gaussian Levy process; this point is attractive, as it ensures that the inference procedure is robust against misspecification of the Gaussian assumptions. Moreover, it would be possible to deduce the uniform tail-probability estimate of the associated Gaussian quasi-maximum likelihood estimator, enabling us to conclude the asymptotic efficiency in the sense mentioned in Remark 2.3, and further study the model selection issue by constructing appropriate information criteria. We will report the related details elsewhere.
**Remark 2.5** (Other system-noise processes).: Although we are focusing on the IOU process for \(W_{i}\), the same likelihood analysis based on the low-frequency sampling for each individual could formally go through other system-noise processes parametrized by a finite-dimensional parameter as long as \(\operatorname{cov}_{\theta}[W_{i}(s),W_{i}(t)]\) exist and is explicitly given. For example, \(W_{1},\ldots,W_{N}\) could be i.i.d. copies of a (drift-free) scaled fractional Brownian motion: \(W_{i}\) is a centered Gaussian process with stationary increments such that \(E_{\theta}[W_{i}(t)]=0\), \(\operatorname{var}_{\theta}[W_{i}(t)-W_{i}(s)]=\tau^{2}|t-s|^{2\mathsf{H}}\), and
\[\operatorname{cov}_{\theta}[W_{i}(t),W_{i}(s)]=\frac{\tau^{2}}{2}\left(t^{2 \mathsf{H}}+s^{2\mathsf{H}}-|t-s|^{2\mathsf{H}}\right),\qquad t,s\geq 0,\]
for the scale parameter \(\tau>0\) and the Hurst parameter \(\mathsf{H}\in(0,1)\); then, the covariance-matrix parameter is \(v=(\gamma,\mathsf{H},\tau,\sigma^{2})\) and (2.2) becomes \(H_{i}^{\prime}(\mathsf{H},\tau)=:(H_{i;jk}^{\prime}(\mathsf{H},\tau))_{j,k}\) with
\[H_{i;jk}^{\prime}(\mathsf{H},\tau):=\frac{\tau^{2}}{2}\left(t_{ij}^{2\mathsf{H }}+t_{ik}^{2\mathsf{H}}-|t_{ij}-t_{ik}|^{2\mathsf{H}}\right).\]
Correspondingly, we could deduce a variant of Theorem 2.1 without any essential change: we replace \(Q_{i}(v)\) by \(Q_{i}^{\prime}(v):=Z_{i}G(\gamma)Z_{i}^{\top}+H_{i}^{\prime}(\mathsf{H},\tau)+ \sigma^{2}I_{n_{i}}\), and impose similar assumptions to (2.6) and (2.7); for the latter, the partial derivative \(\partial_{(\mathsf{H},\tau)}Q_{i}^{\prime}(v)\) are given through
\[\partial_{\mathsf{H}}H_{i}^{\prime}(\mathsf{H},\tau) =\tau^{2}\left(t_{ij}^{2\mathsf{H}}\log(t_{ij})+t_{ik}^{2\mathsf{ H}}\log(t_{ik})-|t_{ij}-t_{ik}|^{2\mathsf{H}}\log|t_{ij}-t_{ik}|\right),\] \[\partial_{\tau}H_{i}^{\prime}(\mathsf{H},\tau) =\tau\left(t_{ij}^{2\mathsf{H}}+t_{ik}^{2\mathsf{H}}-|t_{ij}-t_{ ik}|^{2\mathsf{H}}\right).\]
As a specific application to longitudinal biomedical data, this model was used in [8] for empirical analysis of CD4 counts in HIV-positive patients. Compared with the IOU model, however, the fractional Brownian motion cannot quantitatively capture the degree of derivative tracking.
Proof of Theorem 2.1.: We introduce the normalized observed information matrix:
\[\mathcal{I}_{N}(\theta):=-\frac{1}{N}\partial_{\theta}^{2}\ell_{N}(\theta).\]
We are going to complete the proof by verifying the following two conditions for \(N\to\infty\): for any \(\epsilon>0\), \(c>0\), and compact \(K\subset\Theta\),
\[S_{1,N}(\epsilon,K) :=\sup_{\theta\in K}P_{\theta}\left[|\mathcal{I}_{N}(\theta)-\mathcal{I}(v)|>\epsilon\right]\to 0, \tag{2.9}\] \[S_{2,N}(\epsilon,c,K) :=\sup_{\theta\in K}P_{\theta}\left[\frac{1}{\sqrt{N}}\sup_{\theta^{\prime}\in\Theta:|\theta^{\prime}-\theta|\leq cN^{-1/2}}|\partial_{\theta}\mathcal{I}_{N}(\theta^{\prime})|>\epsilon\right]\to 0. \tag{2.10}\]
Using the criterion in [9] (see Theorems 1 and 2 therein), these conditions ensure both claims in Theorem 2.1.
To prove the law of large numbers (2.9), we recall the expression (2.5) of the log-likelihood function \(\ell_{N}(\theta)\). To proceed, we need to compute the partial derivatives of \(\ell_{N}(\theta)\). Let \(H_{i}^{(\alpha)}:=\partial_{\alpha}H_{i}\) and \(H_{i}^{(\tau)}:=\partial_{\tau}H_{i}\). By (2.2), the \((j,k)\)th entries of these matrices are given as follows:
\[H_{i}^{(\alpha)}(\alpha,\tau)_{j,k} =\frac{\tau^{2}}{2\alpha^{4}}\Big{(}-4\alpha\min\left(t_{ij},t_{ik}\right)-\left(3+\alpha t_{ij}\right)e^{-\alpha t_{ij}}-\left(3+\alpha t_{ik}\right)e^{-\alpha t_{ik}}+3+\left(3+\alpha|t_{ij}-t_{ik}|\right)e^{-\alpha|t_{ij}-t_{ik}|}\Big{)},\] \[H_{i}^{(\tau)}(\alpha,\tau)_{j,k} =\frac{\tau}{\alpha^{3}}\left(2\alpha\min\left(t_{ij},t_{ik}\right)+e^{-\alpha t_{ij}}+e^{-\alpha t_{ik}}-1-e^{-\alpha|t_{ij}-t_{ik}|}\right).\]
Then, we have the expressions for the first-order derivatives:
\[\partial_{\beta}\ell_{N}(\theta) =\sum_{i=1}^{N}\left\{X_{i}^{\top}Q_{i}\left(v\right)^{-1}Y_{i}-X _{i}^{\top}Q_{i}\left(v\right)^{-1}X_{i}\beta\right\},\] \[\partial_{\gamma_{l}}\ell_{N}(\theta) =\frac{1}{2}\sum_{i=1}^{N}\Big{\{}\left(Y_{i}-X_{i}\beta\right)^ {\top}Q_{i}(v)^{-1}Z_{i}\left(\partial_{\gamma_{l}}G(\gamma)\right)Z_{i}^{\top }Q_{i}(v)^{-1}\left(Y_{i}-X_{i}\beta\right)\] \[\qquad-\operatorname{trace}\left(Q_{i}(v)^{-1}Z_{i}\left( \partial_{\gamma_{l}}G(\gamma)\right)Z_{i}^{\top}\right)\Big{\}},\] \[\partial_{\alpha}\ell_{N}(\theta) =\frac{1}{2}\sum_{i=1}^{N}\Big{\{}(Y_{i}-X_{i}\beta)^{\top}Q_{i}(v )^{-1}H_{i}^{(\alpha)}(\alpha,\tau)Q_{i}(v)^{-1}\left(Y_{i}-X_{i}\beta\right)- \operatorname{trace}\left(Q_{i}(v)^{-1}H_{i}^{(\alpha)}(\alpha,\tau)\right) \Big{\}},\] \[\partial_{\tau}\ell_{N}(\theta) =\frac{1}{2}\sum_{i=1}^{N}\Big{\{}(Y_{i}-X_{i}\beta)^{\top}Q_{i}(v )^{-1}H_{i}^{(\tau)}(\alpha,\tau)Q_{i}(v)^{-1}\left(Y_{i}-X_{i}\beta\right)- \operatorname{trace}\left(Q_{i}(v)^{-1}H_{i}^{(\tau)}(\alpha,\tau)\right)\Big{\}},\] \[\partial_{\sigma^{2}}\ell_{N}(\theta) =\frac{1}{2}\sum_{i=1}^{N}\Big{\{}(Y_{i}-X_{i}\beta)^{\top}\left(Q _{i}(v)^{-1}\right)^{2}\left(Y_{i}-X_{i}\beta\right)-\operatorname{trace} \left(Q_{i}(v)^{-1}\right)\Big{\}}\,.\]
Then, we obtain the expressions for the second-order derivatives:
\[\partial_{\beta}^{2}\ell_{N}(\theta) =-\sum_{i=1}^{N}X_{i}^{\top}Q_{i}(v)^{-1}X_{i},\] \[\partial_{\beta}\partial_{v_{k}}\ell_{N}(\theta) =\sum_{i=1}^{N}X_{i}^{\top}Q_{i}(v)^{-1}\left(\partial_{v_{k}}Q_ {i}(v)\right)Q_{i}(v)^{-1}\left(Y_{i}-X_{i}\beta\right),\]
\[\partial_{v_{j}}\partial_{v_{k}}\ell_{N}(\theta) =\frac{1}{2}\sum_{i=1}^{N}\Big{\{}\left(Y_{i}-X_{i}\beta\right)^{ \top}\partial_{v_{j}}\left\{Q_{i}(v)^{-1}\left(\partial_{v_{k}}Q_{i}(v)\right)Q _{i}(v)^{-1}\right\}\left(Y_{i}-X_{i}\beta\right)\] \[\qquad-\partial_{v_{j}}\left\{\text{trace}\left(Q_{i}(v)^{-1} \left(\partial_{v_{k}}Q_{i}(v)\right)\right)\right\}\Big{\}}.\]
First, by (2.6) we have (deterministic convergence)
\[-\frac{1}{N}\partial_{\beta}^{2}\ell_{N}\left(\theta\right)=\frac{1}{N}\sum_{ i=1}^{N}X_{i}^{\top}Q_{i}(v)^{-1}X_{i}\to A(v)\]
for each \(\theta\); under (2.4), this is valid uniformly in \(\theta\in K\) since the derivative \(\partial_{v}\{Q_{i}(v)^{-1}\}\) is bounded over \(K\). Next, since the summands of \(\partial_{\beta}\partial_{v_{k}}\ell_{N}(\theta)\) have \(E_{\theta}\)-expectation zero for each \(\theta\) and since \(Y_{i}-X_{i}\beta\overset{P_{\theta}}{\sim}N_{n_{i}}\left(0,\,Q_{i}(v)\right)\), we have
\[\sup_{\theta\in K}E_{\theta}\left[\left|-\frac{1}{N}\partial_{ \beta}\partial_{v_{k}}\ell_{N}(\theta)\right|^{2}\right]=\frac{1}{N}\sup_{ \theta\in K}E_{\theta}\left[\left|-\frac{1}{\sqrt{N}}\partial_{\beta}\partial _{v_{k}}\ell_{N}(\theta)\right|^{2}\right]\] \[\lesssim\frac{1}{N}\frac{1}{N}\sum_{i=1}^{N}\sup_{\theta\in K}E_ {\theta}[|Y_{i}-X_{i}\beta|^{2}]\lesssim\frac{1}{N}\sup_{\theta\in K}\sup_{i \geq 1}\text{trace}(Q_{i}(v))\to 0. \tag{2.11}\]
It follows that
\[\sup_{\theta\in K}P_{\theta}\left[\left|-\frac{1}{N}\partial_{\beta}\partial _{v_{k}}\ell_{N}\left(\theta\right)\right|>\epsilon\right]\to 0.\]
To manage the remaining \(\partial_{v_{j}}\partial_{v_{k}}\ell_{N}(\theta)\), we note that
\[E_{\theta}\left[\left(Y_{i}-X_{i}\beta\right)^{\top}\partial_{ v_{j}}\left\{Q_{i}(v)^{-1}\left(\partial_{v_{k}}Q_{i}(v)\right)Q_{i}(v)^{-1} \right\}\left(Y_{i}-X_{i}\beta\right)\right]\] \[=E_{\theta}\left[\text{trace}\left\{\partial_{v_{j}}\left\{Q_{i}(v )^{-1}\left(\partial_{v_{k}}Q_{i}(v)\right)Q_{i}(v)^{-1}\right\}\left(Y_{i}-X _{i}\beta\right)\left(Y_{i}-X_{i}\beta\right)^{\top}\right\}\right]\] \[=\text{trace}\left\{\partial_{v_{j}}\left\{Q_{i}(v)^{-1}\left( \partial_{v_{k}}Q_{i}(v)\right)Q_{i}(v)^{-1}\right\}Q_{i}(v)\right\}.\]
Noting the identities
\[\partial_{v_{j}}\left\{Q_{i}(v)^{-1}\left(\partial_{v_{k}}Q_{i}( v)\right)Q_{i}(v)^{-1}\right\}Q_{i}(v)\] \[\qquad=-Q_{i}(v)^{-1}\left(\partial_{v_{j}}Q_{i}(v)\right)Q_{i}(v )^{-1}\left(\partial_{v_{k}}Q_{i}(v)\right)\] \[\qquad\qquad+Q_{i}(v)^{-1}(\partial_{v_{j}}\partial_{v_{k}}Q_{i} (v))-Q_{i}(v)^{-1}\left(\partial_{v_{k}}Q_{i}(v)\right)Q_{i}(v)^{-1}\left( \partial_{v_{j}}Q_{i}(v)\right),\] \[\partial_{v_{j}}\left\{\text{trace}\left(Q_{i}(v)^{-1}\left( \partial_{v_{k}}Q_{i}(v)\right)\right)\right\}\] \[\qquad=\text{trace}\left\{-Q_{i}(v)^{-1}\left(\partial_{v_{j}}Q_{ i}(v)\right)Q_{i}(v)^{-1}\left(\partial_{v_{k}}Q_{i}(v)\right)+Q_{i}(v)^{-1}\left( \partial_{v_{j}}\partial_{v_{k}}Q_{i}(v)\right)\right\},\]
we obtain
\[E_{\theta}\left[\partial_{v_{j}}\partial_{v_{k}}\ell_{N}(\theta)\right] =\frac{1}{2}\sum_{i=1}^{N}E_{\theta}\Big{[}\left(Y_{i}-X_{i}\beta \right)^{\top}\partial_{v_{j}}\left\{Q_{i}(v)^{-1}\left(\partial_{v_{k}}Q_{i}( v)\right)Q_{i}(v)^{-1}\right\}\left(Y_{i}-X_{i}\beta\right)\] \[\qquad-\partial_{v_{j}}\left\{\text{trace}\left(Q_{i}(v)^{-1} \left(\partial_{v_{k}}Q_{i}(v)\right)\right)\right\}\Big{]}\] \[=-\frac{1}{2}\sum_{i=1}^{N}\text{trace}\left\{Q_{i}(v)^{-1}\left( \partial_{v_{k}}Q_{i}(v)\right)Q_{i}(v)^{-1}\left(\partial_{v_{j}}Q_{i}(v) \right)\right\}\]
for each \(\theta\). This together with (2.7) and a similar estimate to (2.11) concludes that
\[\sup_{\theta\in K}P_{\theta}\left[\left|-\frac{1}{N}\partial_{v_{j }}\partial_{v_{k}}\ell_{N}\left(\theta\right)-U_{jk}(v)\right|>\epsilon\right] \leq\sup_{\theta\in K}P_{\theta}\left[\left|-\frac{1}{N} \partial_{v_{j}}\partial_{v_{k}}\ell_{N}\left(\theta\right)+\frac{1}{N}E_{ \theta}\left[\partial_{v_{j}}\partial_{v_{k}}\ell_{N}(\theta)\right]\right|>\epsilon\right]\] \[\qquad+\sup_{\theta\in K}P_{\theta}\left[\left|-\frac{1}{N}E_{ \theta}\left[\partial_{v_{j}}\partial_{v_{k}}\ell_{N}(\theta)\right]-U_{jk}(v) \right|>\epsilon\right]\] \[\to 0.\]
The proof of (2.9) is complete.
Turning to the asymptotic negligibility (2.10), we note the following expressions for the third-order derivatives:
\[\partial_{\beta}^{3}\ell_{N}(\theta)=0,\]
\[\partial_{\beta}^{2}\partial_{v_{k}}\ell_{N}(\theta) =\sum_{i=1}^{N}X_{i}^{\top}Q_{i}(v)^{-1}\left(\partial_{v_{k}}Q_{i}(v)\right)Q_{i}(v)^{-1}X_{i},\] \[\partial_{\beta}\partial_{v_{k}}\partial_{v_{j}}\ell_{N}(\theta) =-\sum_{i=1}^{N}X_{i}^{\top}\partial_{v_{k}}\left\{Q_{i}(v)^{-1}\left(\partial_{v_{j}}Q_{i}(v)\right)Q_{i}(v)^{-1}\right\}\left(Y_{i}-X_{i}\beta\right),\] \[\partial_{v_{l}}\partial_{v_{j}}\partial_{v_{k}}\ell_{N}(\theta) =\frac{1}{2}\sum_{i=1}^{N}\Big{\{}(Y_{i}-X_{i}\beta)^{\top}\partial_{v_{l}}\partial_{v_{j}}\left\{Q_{i}(v)^{-1}\left(\partial_{v_{k}}Q_{i}(v)\right)Q_{i}(v)^{-1}\right\}\left(Y_{i}-X_{i}\beta\right)-\partial_{v_{l}}\partial_{v_{j}}\left\{\operatorname{trace}(Q_{i}(v)^{-1}\partial_{v_{k}}Q_{i}(v))\right\}\Big{\}}.\]
For each \(\theta\), we may and do focus on the set \(\overline{B_{\delta}(\theta)}\subset\Theta\), the closed ball at the center \(\theta\) with radius \(\delta>0\) being small enough. By the above expressions for \(\partial_{\theta}\mathcal{I}_{N}(\theta)\), we have
\[\frac{1}{\sqrt{N}}\sup_{\theta^{\prime}\in\Theta:\;|\theta^{\prime}-\theta|\leq cN^{-1/2}}|\partial_{\theta}\mathcal{I}_{N}(\theta^{\prime})|\] \[\lesssim\frac{1}{\sqrt{N}}\sup_{\theta^{\prime}\in\Theta:\;|\theta^{\prime}-\theta|\leq cN^{-1/2}}\frac{1}{N}\sum_{i=1}^{N}\left(|Y_{i}-X_{i}\beta^{\prime}|^{2}+1\right)\lesssim\frac{1}{\sqrt{N}}\frac{1}{N}\sum_{i=1}^{N}\left(|Y_{i}-X_{i}\beta|^{2}+1\right).\]
The \(E_{\theta}\)-expectation of the leftmost side can be bounded by a constant multiple of \(N^{-1/2}\) uniformly in \(\theta\in K\), concluding that \(S_{2,N}(\epsilon,c,K)\to 0\).
## 3. Numerical experiments
To evaluate the bias and the asymptotic normality of the MLEs for the Gaussian linear mixed-effects IOU model, we conducted simulation studies under two dataset structures following [5]: balanced and unbalanced longitudinal data. In the balanced dataset, the number of time points and the measurement time points are the same across all subjects. In the unbalanced dataset, the number of time points per subject and the time intervals between consecutive time points need not be equal between subjects or within a subject.
For each Monte Carlo simulation, we generated longitudinal data \(\{Y_{i}(t_{ij})\}_{j=1}^{n_{i}}\) for \(i=1,\ldots,N\) from the IOU model (1.1):
\[Y_{i}(t_{ij})=X_{i}(t_{ij})^{\top}\beta_{0}+Z_{i}(t_{ij})^{\top}b_{i}+W_{i}(t_ {ij})+\epsilon_{i}(t_{ij}),\]
where the ingredients are given as follows.
* The explanatory variables \(X_{i}(t_{ij})=(x_{1}(t_{ij}),x_{2}(t_{ij}))\in\mathbb{R}^{2}\) and \(Z_{i}(t_{ij})=(z_{1}(t_{ij}),z_{2}(t_{ij}))\in\mathbb{R}^{2}\) were generated as \(x_{1}(t_{ij})=t_{ij}\), \(x_{2}(t_{ij})=0\,\mathrm{or}\,1\) according to a Bernoulli distribution with parameter \(0.5\) (generated before starting the Monte Carlo simulation), and \((z_{1}(t_{ij}),z_{2}(t_{ij}))=(1,t_{ij})\).
* The random effect vector \(b_{i}\sim N_{2}\left(0,\begin{pmatrix}\gamma_{0,1}^{2}&\gamma_{0,2}\\ \gamma_{0,2}&\gamma_{0,3}^{2}\end{pmatrix}\right)\).
* The system noise vector \((W_{i}(t_{ij}))_{j=1}^{n_{i}}\sim N_{n_{i}}(0,H_{i}(\alpha_{0},\tau_{0}))\).
* The measurement error vector \((\epsilon_{i}(t_{ij}))_{j=1}^{n_{i}}\sim N_{n_{i}}(0,\sigma_{0}^{2}I_{n_{i}})\).
The true parameter was given as
\[(\beta_{0},v_{0})=(\beta_{0,1},\beta_{0,2},\gamma_{0,1},\gamma_{0,2},\gamma_{0, 3},\alpha_{0},\tau_{0},\sigma_{0})=(-0.25,0.50,1.25,1.00,1.50,1.30,0.40,1.25).\]
The number of time points \(n_{i}\) and measurement time points \(\{t_{ij}\}_{j=1}^{n_{i}}\) for \(i=1,\ldots,N\) were set differently for balanced and unbalanced longitudinal data simulation:
* For the balanced data simulation, we set the number of time points as \(n_{i}=20\) and time points \(t_{ij}=j\) for all \(i=1,\ldots,N\), that is, the time intervals between consecutive time points are equal between subjects and within a subject;
* For the unbalanced data simulation, we generated the data under the setting that the number of time points \(n_{i}\) was obtained as the integer part of a \(\text{Uniform}(15,20)\) random number and the measurement time points \(t_{i1},\ldots,t_{in_{i}}\) were randomly selected from \(\{1,2,\ldots,20\}\) before starting the simulation.
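For illustration, one subject's response under the balanced setting can be simulated as in the sketch below. This is a Python analogue of the generating mechanism described above, not the authors' R code: `iou_cov` is the covariance sketch given after (2.2), the group indicator \(x_{2}\) is drawn once per subject for simplicity, and the random seed is arbitrary.

```python
import numpy as np

def simulate_subject(t, X, Z, beta, G, alpha, tau, sigma2, rng):
    """Draw one subject's responses Y_i(t_{i1}), ..., Y_i(t_{in_i}) from model (1.1):
    fixed effects + random effects + IOU system noise + measurement error."""
    b = rng.multivariate_normal(np.zeros(G.shape[0]), G)
    W = rng.multivariate_normal(np.zeros(len(t)), iou_cov(t, alpha, tau))
    eps = rng.normal(scale=np.sqrt(sigma2), size=len(t))
    return X @ beta + Z @ b + W + eps

# balanced setting: t_ij = j, j = 1, ..., 20, with the true parameters above
rng = np.random.default_rng(123)
t = np.arange(1, 21, dtype=float)
x2 = rng.integers(0, 2)                              # Bernoulli(0.5) indicator
X = np.column_stack([t, np.full(20, x2)])
Z = np.column_stack([np.ones(20), t])
G0 = np.array([[1.25**2, 1.00], [1.00, 1.50**2]])
Y = simulate_subject(t, X, Z, np.array([-0.25, 0.50]), G0, 1.30, 0.40, 1.25**2, rng)
```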
We generated 1000 datasets for all the Monte Carlo simulations, and we set the sample size to \(N=250\) or \(500\) for both the balanced and unbalanced longitudinal datasets. To optimize the log-likelihood function (2.5), we used the built-in optim function in the R software. In all optimizations, we used an 8-dimensional vector of ones as the initial value, and we used the Nelder-Mead method as the optimization algorithm because the first and second derivatives of the log-likelihood function are complicated.
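The optimization step can be mimicked in Python with `scipy.optimize.minimize` and the Nelder-Mead method, analogous to the `optim` call described above. The parameter packing is our own convention, `log_likelihood` is the sketch given after (2.5), and `subject_data` is a hypothetical list of per-subject tuples; in practice one may wish to reparametrize so that \(\alpha\), \(\tau\), and \(\sigma^{2}\) stay positive.

```python
import numpy as np
from scipy.optimize import minimize

def neg_ll(par, data):
    # par = (beta1, beta2, gamma1, gamma2, gamma3, alpha, tau, sigma); illustrative packing
    beta, gamma = par[:2], par[2:5]
    alpha, tau, sigma = par[5], par[6], par[7]
    build_G = lambda g: np.array([[g[0]**2, g[1]], [g[1], g[2]**2]])
    return -log_likelihood((beta, gamma, alpha, tau, sigma**2), data, build_G)

# res = minimize(neg_ll, x0=np.ones(8), args=(subject_data,), method="Nelder-Mead")
# theta_hat = res.x
```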
Tables 1 and 2 show the bias and the standard error for each parameter, together with the true parameter values, calculated by the Monte Carlo method. To quantify the inaccuracy of the Monte Carlo samples, we introduce the Monte Carlo standard error (MCSE, e.g. [7]) defined by
\[\mathrm{MCSE}=\sqrt{\frac{1}{M(M-1)}\sum_{m=1}^{M}(\hat{\theta}_{m}-\bar{ \theta})^{2}},\]
where \(M\) denotes the number of iterations of the simulation, \(\hat{\theta}_{m}\) is the estimate of \(\theta\) for the \(m\)th repetition, and \(\bar{\theta}\) is the sample mean of \(\hat{\theta}_{m}\) across repetitions.
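The MCSE is straightforward to compute; a minimal sketch:

```python
import numpy as np

def mcse(estimates):
    """Monte Carlo standard error of a scalar parameter across M repetitions;
    equivalent to the sample standard deviation (ddof=1) divided by sqrt(M)."""
    estimates = np.asarray(estimates, dtype=float)
    M = len(estimates)
    return np.sqrt(np.sum((estimates - estimates.mean())**2) / (M * (M - 1)))
```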
As shown in Tables 1 and 2, there was little difference between the biases of the parameters across the two dataset structures and the two sample sizes (\(N=250,500\)). The estimates of the fixed-effect parameters and the variance parameter of the measurement error were unbiased in all simulations. The biases of the variance parameters for the random effects were not large enough to matter. In contrast, the biases of the variance parameters for the system noise were not negligibly small. One possible reason is that, as can be seen from Figure 1, the optimizations were not successful because of the small curvature of the log-likelihood function around the true value of \((\alpha,\tau)\). The previous study [5] recommends reparametrizing the Gaussian mixed-effects IOU model in terms of \(\alpha\) and \(\omega\) (\(\omega:=\tau^{2}/\alpha^{2}\)); however, in our simulation studies, the calculated \(\omega\) had a large bias.
Figures 2 and 3 show histograms of the Studentized MLEs and normal quantile-quantile plots (Q-Q plots) under the unbalanced longitudinal data setting (\(N=500\)), respectively. From these figures, the standard normal approximation seemed to hold for all the MLEs except \(\hat{\sigma}_{N}\). The magnitude of the variance parameter of the measurement error was very small.
The main problem we faced in our numerical experiments was the computational cost of obtaining the MLEs. For example, the average time was about 7 minutes per iteration in the balanced data simulation with \(N=500\). One possible solution to this computation-time problem is to change the optimization method. The previous study [5] recommends the Newton-Raphson (NR) algorithm for both convergence and the time taken to reach convergence. Considering practical applications of this model, it would be better to use the NR method with its lower computational cost. In the present study, we do not go into details in this direction.
For practical use of the model, we also need further theoretical developments, including robustification against distributional misspecification and model selection criteria.
**Acknowledgement.** This work was partially supported by JST CREST Grant Number JPMJCR2115, Japan, and by JSPS KAKENHI Grant Number 22H01139.
|
2310.16701
|
Odd-Sunflowers
|
Extending the notion of sunflowers, we call a family of at least two sets an
odd-sunflower if every element of the underlying set is contained in an odd
number of sets or in none of them. It follows from the Erdős–Szemerédi
conjecture, recently proved by Naslund and Sawin, that there is a constant
$\mu<2$ such that every family of subsets of an $n$-element set that contains
no odd-sunflower consists of at most $\mu^n$ sets. We construct such families
of size at least $1.5021^n$. We also characterize minimal odd-sunflowers of
triples.
|
Peter Frankl, János Pach, Dömötör Pálvölgyi
|
2023-10-25T15:18:36Z
|
http://arxiv.org/abs/2310.16701v2
|
# Odd-sunflowers
###### Abstract.
Extending the notion of sunflowers, we call a family of at least two sets an _odd-sunflower_ if every element of the underlying set is contained in an odd number of sets or in none of them. It follows from the Erdos-Szemeredi conjecture, recently proved by Naslund and Sawin, that there is a constant \(\mu<2\) such that every family of subsets of an \(n\)-element set that contains no odd-sunflower consists of at most \(\mu^{n}\) sets. We construct such families of size at least \(1.5021^{n}\). We also characterize minimal odd-sunflowers of triples.
## 1. Introduction
A family of at least three sets is a _sunflower_ (or a _\(\Delta\)-system_) if every element is contained either in all of the sets, or in at most one. If a family of sets contains no sets that form a sunflower, it is called _sunflower-free_. This notion was introduced by Erdos and Rado [10] in 1960, and it has become one of the standard tools in extremal combinatorics [14]. Erdos and Rado conjectured that the maximum size of any sunflower-free family of \(k\)-element sets is at most \(c^{k}\), for a suitable constant \(c>0\). This conjecture is still open; for recent progress, see [4].
Erdos and Szemeredi [11] studied the maximum possible size of a sunflower-free family of subsets of \(\{1,\ldots,n\}\). Denote this quantity by \(f(n)\) and let \(\mu=\lim f(n)^{1/n}\). Erdos and Szemeredi conjectured that \(\mu<2\), and this was proved by Naslund and Sawin [18], using the methods of Croot, Lev, P. Pach [6], Ellenberg and Gijswijt [8], and Tao [19]. They showed that \(\mu<1.89\), while the best currently known lower bound, \(\mu>1.551\), follows from a construction of Deuber _et al._[7].
Erdos, Milner and Rado [9] called a family of at least three sets a _weak sunflower_ if the intersection of any pair of them has the same size. For a survey, see Kostochka [16]. In the literature, we can also find pseudo-sunflowers [13] and near-sunflowers [3]. By restricting the parities of the sets, other interesting questions can be asked, some of which can be answered by the so-called linear algebra method (even-town, odd-town theorems; see [5]).
We introduce the following new variants of sunflowers.
**Definition 1**.: _A nonempty family of nonempty sets forms an even-degree sunflower or, simply, an even-sunflower, if every element of the underlying set is contained in an even number of sets (or in none)._
_Analogously, a family of at least two nonempty sets forms an odd-degree sunflower or, simply, an odd-sunflower, if every element of the underlying set is contained in an odd number of sets, or in none._
Note that any family of pairwise disjoint sets is an odd-sunflower, but not an even-sunflower. A (classical) sunflower is an odd-sunflower if and only if it consists of an odd number of sets. In particular, an odd-sunflower-free family is also sunflower-free, as any sunflower contains a sunflower that consists of three sets. On the other hand, there exist many odd-sunflowers that contain no sunflower of size three. For example, \(\{\{1,2\},\{1,3\},\{2,3\},\{1,2,3\}\}\) is a minimal odd-sunflower. This example can be generalized as follows.
Let \(\mathcal{C}_{n}\) denote the \((n-1)\)-uniform family consisting of all \((n-1)\)-element subsets of \(\{1,\ldots,n\}\). (In some papers this family is denoted by \(\binom{[n]}{n-1}\).) Let \(\mathcal{C}_{n}^{+}\) denote the same family completed with the set \(\{1,\ldots,n\}\). Obviously, \(\mathcal{C}_{n}\) is an odd-sunflower if and only if \(n\) is even, and it is an even-sunflower if and only if \(n\) is odd. The family \(\mathcal{C}_{n}^{+}\) is an odd-sunflower if and only if \(n\) is odd,
and it is an even-sunflower if and only if \(n\) is even. Notice that in any subfamily of these families, the nonzero degrees of the vertices differ by at most one. Therefore, in every subfamily of \(\mathcal{C}_{n}\) and \(\mathcal{C}_{n}^{+}\) which is an odd- or even-sunflower, all nonzero degrees need to be the same, showing that \(\mathcal{C}_{n}\) and \(\mathcal{C}_{n}^{+}\) are minimal odd- or even-sunflowers. There are many other examples; e.g., all graphs in which every degree is odd/even are 2-uniform odd/even-sunflowers. In fact, every cycle is a minimal 2-uniform even-sunflower. In general, it is not hard to show that it is \(\mathbf{NP}\)-complete to decide whether an input family is odd-sunflower-free or not (see Appendix A), so there is no hope of a characterization of minimal odd-sunflowers either. This is in contrast with (classic) sunflowers, where the problem is trivially in P. Nevertheless, for any fixed \(k\), there is a constant number of minimal \(k\)-uniform odd-sunflowers; we study these in Section 5.
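To make the definition concrete, here is a small brute-force Python sketch (our own, for illustration only; it is exponential in the family size and therefore usable only for tiny families) that checks whether a family is odd-sunflower-free and verifies the examples mentioned above.

```python
from itertools import combinations

def is_odd_sunflower(sets):
    """At least two sets; every covered element lies in an odd number of them."""
    if len(sets) < 2:
        return False
    universe = set().union(*sets)
    return all(sum(e in s for s in sets) % 2 == 1 for e in universe)

def odd_sunflower_free(family):
    """Brute force over all subfamilies of size >= 2 (exponential!)."""
    return not any(is_odd_sunflower(sub)
                   for k in range(2, len(family) + 1)
                   for sub in combinations(family, k))

# The minimal odd-sunflower {12, 13, 23, 123} from the text.
F = [frozenset(s) for s in ({1, 2}, {1, 3}, {2, 3}, {1, 2, 3})]
print(is_odd_sunflower(F))                            # True
# C_4 (all 3-element subsets of {1,2,3,4}) is an odd-sunflower since n = 4 is even.
C4 = [frozenset(c) for c in combinations(range(1, 5), 3)]
print(is_odd_sunflower(C4), odd_sunflower_free(C4))   # True False
```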
The main question studied in this paper is the following: What is the maximum size of a family \(\mathcal{F}\) of subsets of \(\{1,\ldots,n\}\) that contains no even-sunflower (or no odd-sunflower, respectively)? We denote these maximums by \(f_{even}(n)\) and by \(f_{odd}(n)\), respectively. As in the case of the even-town and odd-town theorems, the answers to these questions are quite different.
**Theorem 2**.: \(f_{even}(n)=n\)_, i.e., for any even-sunflower-free family \(\mathcal{F}\subset 2^{\{1,\ldots,n\}}\) we have \(|\mathcal{F}|\leq n\)._
**Theorem 3**.: \(f_{odd}(n)>1.502148^{n}\) _if \(n>n_{0}\), i.e., there are odd-sunflower-free families \(\mathcal{F}\subset 2^{\{1,\ldots,n\}}\), for any large enough \(n\) with \(|\mathcal{F}|>1.502148^{n}\)._
Let \(\mu_{odd}=\lim f_{odd}(n)^{1/n}\). (The existence of the limit easily follows from our Lemma 5 and Fekete's lemma, just like for ordinary sunflowers; see [1].) Using the fact that any odd-sunflower-free family \(\mathcal{F}\) is also sunflower-free, the result of Naslund and Sawin [18] mentioned above implies that \(f_{odd}(n)\leq 1.89^{n}\). Thus, we have
\[1.502148<\mu_{odd}\leq\mu<1.89.\]
It would be interesting to decide whether \(\mu_{odd}\) is strictly smaller than \(\mu\), and to find a direct proof for \(\mu_{odd}<2\). Is the new slice rank method required?
The starting point of our approach is a 50-year-old idea of Abbott, Hanson, and Sauer [2] concerning ordinary sunflowers: one can use "direct sums" to recursively produce larger constructions from smaller ones; see Lemmas 5 and 6 and the discussion on MathOverflow [17].
The rest of this paper is organized as follows. In Section 2, we prove Theorem 2. In Section 3, we show that if \(n\) is large enough, then the largest odd-sunflower-free families on the underlying set \(\{1,\ldots,n\}\) cannot be obtained by using only direct sums in the way (to be) described in Lemma 5. Building on this, in Section 4, we establish Theorem 3. In Section 5, we study minimal \(k\)-uniform odd-sunflower-free families and characterize them for \(k\leq 3\). The final section contains some remarks and open problems.
## 2. Proof of Theorem 2
The lower bound \(f_{even}(n)\geq n\) follows from taking \(n\) singleton sets. For the upper bound \(f_{even}(n)\leq n\), we sketch the argument in two different forms: using linear algebra (as in the usual proof of the odd-town theorem) and by a parity argument (which does not work there).
_First proof._ Represent each set by its characteristic vector over \(\mathbb{F}_{2}^{n}\). If \(|\mathcal{F}|>n\), these vectors have a nontrivial linear combination that gives zero. The sets whose coefficients are one in this combination yield an even-sunflower. \(\Box\)
_Second proof._ There are \(2^{|\mathcal{F}|}-1\) nonempty subfamilies of \(\mathcal{F}\). If \(|\mathcal{F}|>n\), by the pigeonhole principle, there are two different subfamilies that contain precisely the same elements of \(\{1,\ldots,n\}\) an odd number of times. But then their symmetric difference is an even-sunflower.
## 3. Direct Sum Constructions
Before we prove Theorem 3, we make some definitions and state some simple lemmas.
In a _multifamily_ of sets, every set \(F\) can occur a positive integer number of times. This number is called the _multiplicity_ of \(F\). A multifamily of at least two nonempty sets is an _odd-sunflower_ if the degree of every element of the underlying set is odd or zero. Note that, similarly to sunflowers, restricting an odd-sunflower multifamily to a smaller underlying set also gives an odd-sunflower multifamily, unless fewer than two nonempty sets remain.
A family \(\mathcal{F}\) is called an _antichain_, or _Sperner_, if it is containment-free, i.e., \(F,G\in\mathcal{F}\) and \(F\subset G\) imply that \(F=G\). Let \(f_{oa}(n)\) denote the maximum size of an antichain \(\mathcal{F}\) on the underlying set \(\{1,\ldots,n\}\) that contains no odd-sunflower. Note that any _slice_ of \(\mathcal{F}\), i.e., any subfamily of \(\mathcal{F}\) whose sets are of the same size, forms an antichain. Obviously, we have \(f_{odd}(n)/n\leq f_{oa}(n)\leq f_{odd}(n)\) and, therefore,
\[\lim f_{oa}(n)^{1/n}=\mu_{odd}.\]
Given two families, \(\mathcal{F}\) and \(\mathcal{G}\), on different base sets, their _direct sum_ is defined as \(\mathcal{F}+\mathcal{G}=\{F\cup G\mid F\in\mathcal{F},G\in\mathcal{G}\}.\) We can repeatedly apply this operation to obtain \(\mathcal{F}+\mathcal{F}+\cdots+\mathcal{F}\). In such _direct sum constructions_, we call \(\mathcal{F}\) the "building block."
We start with the following simple construction.
**Construction 1:** Let \(k=\lfloor n/3\rfloor\). Make \(k\) disjoint groups of size \(3\) from \(\{1,\ldots,n\}\). Define \(\mathcal{F}\) as the family of all sets that intersect each group in exactly \(2\) elements. Then we have \(|\mathcal{F}|=3^{k}\), i.e., \(\sqrt[3]{3}^{n}\), whenever \(n\) is divisible by \(3\). This shows that
\[\mu_{odd}\geq\sqrt[3]{3}>1.44. \tag{1}\]
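As a quick sanity check, the smallest instance of Construction 1 (\(n=6\), \(k=2\)) can be verified by brute force; the Python sketch below (illustration only, with the same exponential-time caveat as before) confirms that its \(3^{2}=9\) sets contain no odd-sunflower.

```python
from itertools import combinations, product

def is_odd_sunflower(sets):
    universe = set().union(*sets)
    return len(sets) >= 2 and all(sum(e in s for s in sets) % 2 == 1 for e in universe)

# Construction 1 for n = 6: groups {1,2,3} and {4,5,6}; each set meets each group in 2 elements.
groups = [(1, 2, 3), (4, 5, 6)]
family = [frozenset(a + b)
          for a, b in product(*[list(combinations(g, 2)) for g in groups])]

print(len(family))   # 9 = 3^2
print(any(is_odd_sunflower(sub)
          for k in range(2, len(family) + 1)
          for sub in combinations(family, k)))   # False: the family is odd-sunflower-free
```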
We prove that this construction is odd-sunflower-free using a series of lemmas, which we will also use later.
**Lemma 4**.: _If \(\mathcal{F}\) is an odd-sunflower-free family, and \(\mathcal{H}\) is a multifamily of size at least two composed of members of \(\mathcal{F}\), then \(\mathcal{H}\) is an odd-sunflower multifamily if and only if it consists of an odd number of copies of a single member \(F\in\mathcal{F}\), and an even number of copies of some subsets of \(F\)._
_In particular, if \(|\mathcal{H}|\) is even, it cannot be an odd-sunflower._
_Remark._ If \(\mathcal{F}\) is an _antichain_, then the multifamily \(\mathcal{H}\) is an odd-sunflower if and only if it consists of an odd number of copies of the same set \(F\in\mathcal{F}\).
Proof.: The "if" part of the statement is obvious.
Assume that \(\mathcal{H}\) is an odd-sunflower. Reduce the multifamily \(\mathcal{H}\) to a family \(\mathcal{H}^{\prime}\) by deleting all sets of even multiplicity and keeping only one copy of each set whose multiplicity is odd. This does not change the parity of the degree of any vertex.
Suppose that \(\mathcal{H}^{\prime}\subseteq\mathcal{F}\) consists of at least two sets. Since \(\mathcal{H}^{\prime}\subseteq\mathcal{F}\) is odd-sunflower-free, there is an element which is contained in a nonzero even number of sets of \(\mathcal{H}^{\prime}\) and, therefore, in a nonzero even number of sets in the multifamily \(\mathcal{H}\). This contradicts our assumption that \(\mathcal{H}\) was an odd-sunflower.
If \(\mathcal{H}^{\prime}\) is empty, then any element covered by \(\mathcal{H}\) is contained in an even number of sets from \(\mathcal{H}^{\prime}\), thus \(\mathcal{H}\) again cannot be an odd-sunflower.
Finally, consider the case when the reduced family \(\mathcal{H}^{\prime}\) consists of a single set \(F\in\mathcal{F}\). If all sets in the multifamily \(\mathcal{H}\) are copies of \(F\), we are done. Otherwise, there are some other sets \(F^{\prime}\neq F\) participating in \(\mathcal{H}\) with even multiplicity. If any such \(F^{\prime}\) has an element that does not belong to \(F\), then this element is covered by a nonzero even number of sets of the multifamily \(\mathcal{H}\), contradicting the assumption that \(\mathcal{H}\) is an odd-sunflower. Therefore, all such \(F^{\prime}\) are subsets of \(F\), as claimed.
**Lemma 5**.: _If \(\mathcal{F}\) and \(\mathcal{G}\) are odd-sunflower-free families, and at least one of them is an antichain, then \(\mathcal{F}+\mathcal{G}\) is also odd-sunflower-free. Moreover, if both \(\mathcal{F}\) and \(\mathcal{G}\) are antichains, then so is \(\mathcal{F}+\mathcal{G}\)._
_Remark_.: If none of \(\mathcal{F}\) and \(\mathcal{G}\) are antichains, then it can happen that \(\mathcal{F}+\mathcal{G}\) contains an odd-sunflower. For example, if \(\mathcal{F}=\{\{1\},\{1,2\}\}\) and \(\mathcal{G}=\{\{3\},\{3,4\}\}\), then \(\{\{1,3\},\{1,2,3\},\{1,3,4\}\}\) is an odd-sunflower.
Proof.: The "moreover" part of the statement, according to which \(\mathcal{F}+\mathcal{G}\) is an antichain, is trivial.
Suppose for contradiction that \(\mathcal{F}+\mathcal{G}\) has a subfamily \(\mathcal{H}\) consisting of at least two sets that form an odd-sunflower. Without loss of generality, \(\mathcal{G}\) is an antichain.
Assume first that the parts of the sets of \(\mathcal{H}\) that come from \(\mathcal{G}\) are not all the same. These parts are the restriction of \(\mathcal{H}\) to the underlying set of \(\mathcal{G}\), so they form a multifamily which is an odd-sunflower. Applying Lemma 4 to this multifamily, it follows that the parts of the sets in \(\mathcal{H}\) that come from \(\mathcal{G}\) all coincide, contradicting our assumption.
Otherwise, the parts of the sets of \(\mathcal{H}\) that come from \(\mathcal{G}\) are all the same, in which case the parts that come from \(\mathcal{F}\) are all different. But then these parts would form an odd-sunflower in \(\mathcal{F}\), contradicting the assumption that \(\mathcal{F}\) is odd-sunflower-free.
**Corollary 6**.: _For any integers \(n,m,t>0\), we have \(f_{oa}(n)+f_{oa}(m)\geq f_{oa}(n+m),\) and thus \(f_{oa}(tn)\geq tf_{oa}(n),\mu_{odd}\geq f_{oa}(n)^{1/n}.\)_
This follows by repeated application of Lemma 5 to the direct sum construction with building block \(\mathcal{F}\), i.e., to \(\mathcal{F}+\mathcal{F}+\cdots+\mathcal{F}\). When \(\mathcal{F}=\mathcal{C}_{3}\) consists of the two-element subsets of \(\{1,2,3\}\), we recover Construction 1. This proves (1).
## 4. Wreath Product Constructions
In this section, we describe another construction that uses the _wreath product_ of two families. This is a common notion in group theory [15], but less common in set theory. It was introduced in the PhD thesis of the first author [12]; see also [17].
Let \(n,m\) be positive integers, \(\mathcal{F}\subseteq 2^{\{1,\ldots,n\}}\), \(\mathcal{G}\subseteq 2^{\{1,\ldots,m\}}\) families of subsets of \(N=\{1,\ldots,n\}\) and \(M=\{1,\ldots,m\}\), respectively. Take \(n\) isomorphic copies \(\mathcal{G}_{1},\ldots,\mathcal{G}_{n}\) of \(\mathcal{G}\) with pairwise disjoint underlying sets \(M_{1},\ldots,M_{n}\). Define the _wreath product_ of \(\mathcal{F}\) and \(\mathcal{G}\), denoted by \(\mathcal{F}\wr\mathcal{G}\), on the underlying set \(\cup_{i=1}^{n}M_{i}\), as follows.
\[\mathcal{F}\wr\mathcal{G}=\{\bigcup_{i\in F}G_{i}\mid F\in\mathcal{F},G_{i} \in\mathcal{G}_{i}\}.\]
That is, for each \(F\in\mathcal{F}\), for every \(i\in F\), for every \(G_{i}\in\mathcal{G}_{i}\), we take the set \(\cup_{i\in F}G_{i}\). We obviously have \(|\mathcal{F}\wr\mathcal{G}|=\sum_{F\in\mathcal{F}}|\mathcal{G}|^{|F|}\). Thus, \(|\mathcal{F}\wr\mathcal{G}|=|\mathcal{F}||\mathcal{G}|^{k}\) holds, provided that \(\mathcal{F}\) is _\(k\)-uniform_, i.e., \(|F|=k\) for every \(F\in\mathcal{F}\).
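The size formula can be checked directly on a tiny instance; the Python sketch below (the encoding and the toy example \(\mathcal{C}_{3}\wr\mathcal{C}_{3}\) are our own choices for illustration) builds a wreath product explicitly, encoding element \(j\) of the \(i\)-th copy \(M_{i}\) as the pair \((i,j)\).

```python
from itertools import combinations, product

def C(n):
    """All (n-1)-element subsets of {1, ..., n}."""
    return [frozenset(c) for c in combinations(range(1, n + 1), n - 1)]

def wreath(F, G):
    """F wr G: for each F-set, pick one G-set independently in each copy M_i, i in F."""
    result = set()
    for f in (sorted(s) for s in F):
        for choice in product(G, repeat=len(f)):
            result.add(frozenset((i, j) for i, g in zip(f, choice) for j in g))
    return result

F, G = C(3), C(3)
W = wreath(F, G)
print(len(W), sum(len(G) ** len(f) for f in F))   # 27 27, i.e. |F wr G| = sum_F |G|^|F|
```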
**Lemma 7**.: _If \(\mathcal{F}\) and \(\mathcal{G}\) are odd-sunflower-free families and \(\mathcal{G}\) is an antichain, then \(\mathcal{F}\wr\mathcal{G}\) is also odd-sunflower-free. Moreover, if \(\mathcal{F}\) is also an antichain, then so is \(\mathcal{F}\wr\mathcal{G}\)._
_Remark_.: If \(\mathcal{G}\) is not an antichain, then it may happen that \(\mathcal{F}\wr\mathcal{G}\) contains an odd-sunflower, even if \(\mathcal{F}\) was an antichain. For example, if \(\mathcal{F}=\{\{1,2\}\}\) and \(\mathcal{G}=\{\{3\},\{3,4\}\}\), then the three sets \(\{3_{1},3_{2}\}\),\(\{3_{1},3_{2},4_{1}\}\),\(\{3_{1},3_{2},4_{2}\}\) form an odd-sunflower.
Proof.: The "moreover" part of the statement, according to which \(\mathcal{F}\wr\mathcal{G}\) is an antichain, is trivial.
We need to show that in any family \(\mathcal{H}\) of at least two sets from \(\mathcal{F}\wr\mathcal{G}\), there is an element contained in a nonzero even number of sets from \(\mathcal{H}\). Consider the multifamily \(\mathcal{H}^{\prime}\) of sets from \(\mathcal{F}\), in which the multiplicity of a set \(F\) equals the number of sets of the form \(\cup_{i\in F}G_{i}\) that belong to \(\mathcal{H}\).
Since \(\mathcal{F}\) is odd-sunflower-free, there are two possibilities.
_Case A_: Some set in the multifamily \(\mathcal{H}^{\prime}\) has multiplicity greater than one.
In this case there exists an element \(i\in F\) such that the multifamily of sets from \(\mathcal{G}_{i}\), consisting of the intersections of the sets from \(\mathcal{H}\) with \(M_{i}\), has at least two _distinct_ sets. Otherwise, the sets of \(\mathcal{H}\) that correspond to the repeated set of \(\mathcal{H}^{\prime}\) would coincide, and \(\mathcal{H}\) has no repeated sets. Applying Lemma 4 to the multifamily of sets from \(\mathcal{G}_{i}\) for such an \(i\), we find an element of \(M_{i}\) contained in a nonzero even number of sets from \(\mathcal{H}\), as required.
_Case B_: The multifamily \(\mathcal{H}^{\prime}\) is not an odd-sunflower. That is, there exists an element \(i\in\{1,\ldots,n\}\) which is covered by an even number of sets in \(\mathcal{H}^{\prime}\).
This means that \(\mathcal{H}\) has a nonzero even number of sets with nonempty intersections with \(M_{i}\). Thus, applying Lemma 4 to the multifamily of sets from \(\mathcal{G}_{i}\) formed by these nonempty intersections, again we find an element of \(M_{i}\) contained in a nonzero even number of sets from \(\mathcal{H}\).
This completes the proof. \(\Box\)
**Corollary 8**.: _Let \(\mathcal{F}\) be a \(k\)-uniform odd-sunflower-free antichain on \(n\) elements. Then we have_
\[f_{oa}(nm)\geq|\mathcal{F}|(f_{oa}(m))^{k}.\]
_In particular, \(f_{oa}(nm)\geq n(f_{oa}(m))^{n-1},\) for odd \(n\)._
The second part of the corollary follows by choosing \(\mathcal{F}=\mathcal{C}_{n}\), the family of all \((n-1)\)-element subsets of \(\{1,\ldots,n\}\). These families have high uniformity, so they are natural candidates to increase the size of the family fast, because the uniformity \(k\) appears in the exponent in Corollary 8.
As a simple, concrete application, consider the following.
**Construction 2:** The family \(\mathcal{C}_{9}\wr\mathcal{C}_{3}\) consists of \(|\mathcal{C}_{9}||\mathcal{C}_{3}|^{8}=9\cdot 3^{8}=3^{10}\) subsets of a \(9\cdot 3=27\)-element set. Thus, we have
\[\mu_{odd}\geq|\mathcal{C}_{9}\wr\mathcal{C}_{3}|^{1/27}=3^{10/27}>1.502144. \tag{2}\]
Lemma 7 implies that \(\mathcal{C}_{9}\wr\mathcal{C}_{3}\) contains no odd-sunflower. Thus, \(f_{oa}(27)\geq 3^{10}\), and by Corollary 6, \(\mu_{odd}\geq f_{oa}(27)^{1/27}\).
By Corollaries 6 and 8, we get \(\mu_{odd}\geq f_{oa}(mn)^{1/mn}\geq(n|\mathcal{G}|^{n-1})^{1/mn}\). Here, to get the best bound, we need to choose \(n\) so as to maximize the last expression. Letting \(n=x|\mathcal{G}|\), we obtain
\[\mu_{odd}\geq(n|\mathcal{G}|^{n-1})^{1/mn}=(x|\mathcal{G}|^{n})^{1/mn}=| \mathcal{G}|^{1/m}x^{1/xm|\mathcal{G}|}.\]
Since \(|\mathcal{G}|\) and \(m\) are independent of \(n\), this is equivalent to maximizing \(x^{1/x}\). A simple derivation shows that the optimal choice is \(x=e\), so we need \(n\) to be the largest odd integer smaller than \(e|\mathcal{G}|\), or the smallest odd integer greater than \(e|\mathcal{G}|\). In the case of Construction 2, where \(|\mathcal{G}|=3\) and \(e|\mathcal{G}|\approx 8.15\), the better of the two candidates \(7\) and \(9\) turns out to be \(n=9\).
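A short numerical check (our own, in Python) of this optimization with the building block \(\mathcal{G}=\mathcal{C}_{3}\) (\(|\mathcal{G}|=3\), \(m=3\)): among odd \(n\), the bound \((n|\mathcal{G}|^{n-1})^{1/mn}\) is indeed maximized at \(n=9\), reproducing the bound of Construction 2.

```python
import math

def bound(n, g=3, m=3):
    """Lower bound mu_odd >= (n * g**(n-1))**(1/(m*n)), with G = C_3 as the building block."""
    return math.exp((math.log(n) + (n - 1) * math.log(g)) / (m * n))

for n in range(3, 16, 2):                 # odd n only
    print(n, f"{bound(n):.6f}")
# n = 9 gives the largest value, (9 * 3**8)**(1/27) = 3**(10/27) > 1.502144;
# the continuous optimum of x**(1/x) corresponds to n close to e*|G| ~ 8.15.
```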
The above reasoning also shows that any lower bound \(|\mathcal{G}|^{1/m}\leq\mu_{odd}\) that comes from the direct sum construction using \(\mathcal{G}\) as the building block can be slightly improved by taking \(\mathcal{C}_{n}\wr\mathcal{G}\) for some odd \(n\) close to \(e|\mathcal{G}|\). For example, if \(\mathcal{G}=\mathcal{C}_{9}\wr\mathcal{C}_{3}\) is the \(16\)-uniform family of \(3^{10}\) sets on \(27\) elements obtained in Construction 2, then we can choose \(n\) to be \(160511\approx e\cdot 3^{10}\).
**Construction 3:** The family \(\mathcal{C}_{160511}\wr(\mathcal{C}_{9}\wr\mathcal{C}_{3})\) consists of \(|\mathcal{C}_{160511}||\mathcal{C}_{9}\wr\mathcal{C}_{3}|^{160510}=160511\cdot 3 ^{1605100}\) subsets of a \(160511\cdot 27=4333797\)-element set. Thus, we have
\[\mu_{odd}\geq(160511\cdot 3^{1605100})^{1/4333797}>1.502148. \tag{3}\]
Of course, the improvement on the lower bound for \(\mu_{odd}\) is extremely small as the families grow.
## 5. Minimal odd-sunflowers (MOS-s)
An odd-sunflower is called _minimal_, or a _MOS_, if it has no proper subfamily which is an odd-sunflower. A \(k\)-uniform MOS is a \(k\)_-MOS_. We start with the following simple observation that will help us characterize all MOS's.
**Lemma 9**.: _If the underlying set of a MOS \(\mathcal{F}\) has \(n\) elements, then \(|\mathcal{F}|\leq n\)._
_Moreover, if \(\mathcal{F}\) is a \(k\)-MOS for an even \(k\), then \(|\mathcal{F}|\leq n-1\)._
Proof.: Assume that \(\mathcal{F}\) is an odd-sunflower. If \(|\mathcal{F}|>n\), then by Theorem 2, \(\mathcal{F}\) has a subfamily \(\mathcal{F}^{\prime}\) which is an even-sunflower. But then \(\mathcal{F}\setminus\mathcal{F}^{\prime}\) is an odd-sunflower contained in \(\mathcal{F}\), contradicting the minimality of \(\mathcal{F}\).
If \(\mathcal{F}\) is a \(k\)-MOS on \(n\) elements where \(k\) is even, then the sum of the degrees in every subfamily \(\mathcal{F}^{\prime}\subset\mathcal{F}\) is even. So, there are only \(2^{n-1}\) options for the degree sequences of the elements modulo \(2\), over all subfamilies \(\mathcal{F}^{\prime}\). If \(|\mathcal{F}|=n\), then there are two distinct subfamilies \(\mathcal{F}_{1},\mathcal{F}_{2}\subset\mathcal{F}\) that give the same degree sequence. In this case, however, \(\mathcal{F}\setminus(\mathcal{F}_{1}\Delta\mathcal{F}_{2})\) would be a smaller odd-sunflower contained in \(\mathcal{F}\).
Up to isomorphism, there is only one 1-MOS: the family consisting of two singletons, \(\{\{1\},\{2\}\}\).
We have two different 2-MOS-s: \(\{\{1,2\},\{3,4\}\}\), and \(\{\{1,2\},\{1,3\},\{1,4\}\}\). Indeed, if a 2-uniform family does not have the second configuration, then it corresponds to a graph where the maximum degree is two. In an odd-sunflower every degree is odd. Hence, every degree must be one, in which case we have a collection of disjoint edges: the first configuration.
Note that the above examples either consist of two disjoint sets or form a (classic) sunflower. For 3-MOS-s, this is not true. Notice that if we only considered minimal odd-sunflowers consisting of _at least three sets_, then in the 1-uniform case the only minimal example would be \(\{\{1\},\{2\},\{3\}\}\), while in the 2-uniform case the examples would be \(\{\{1,2\},\{3,4\},\{5,6\}\}\) and \(\{\{1,2\},\{1,3\},\{1,4\}\}\). All of these examples are sunflowers with three petals.
Next, we characterize all 3-MOS-s. Of course, two disjoint 3-element sets form a 3-MOS. To simplify the notation, in the sequel, we omit the inner set symbols, so this example will be denoted as \(\{123,456\}\).
If in a 3-MOS, there is a vertex contained in all sets, then deleting these vertices gives a 1- or 2-uniform family, which we have characterized already. Because of the odd-degree condition for the vertices contained in all sets, we can use only those examples from above that consist of three sets. These give the following 3-MOS-s: \(\{123,124,125\}\) or \(\{123,145,167\}\).
If there is no vertex contained in all sets of a 3-MOS \(\mathcal{F}\), we define for any element \(x\) of the underlying set, a graph \(G_{x}\) as follows. Let
\[\mathcal{F}_{x}=\{F\in\mathcal{F}:x\in F\}\;\;\text{and}\;\;\mathcal{F}_{ \overline{x}}=\{F\in\mathcal{F}:x\notin F\}.\]
Let the vertices of \(G_{x}\) be the elements of the underlying set of \(\mathcal{F}_{x}\), apart from \(x\), and let the edge set of \(G_{x}\) be \(E_{x}=E(G_{x})=\{F\setminus\{x\}:F\in\mathcal{F}_{x}\}\). From our earlier observations, we can conclude that \(G_{x}\) has maximum degree two and it has no three disjoint edges. This implies that \(|E_{x}|\leq 6\).
However, we can prove a better bound. First of all, since every degree is odd, \(deg(x)=|E_{x}|\) is also odd, thus \(|E_{x}|\leq 5\). If \(|E_{x}|=5\), then, using the fact that \(G_{x}\) has maximum degree two and it has no three disjoint edges, it must be either a cycle of length five or the disjoint union of a triangle and a path of length two. We will show that in fact none of these cases is feasible.
If \(G_{x}\) is a cycle on five vertices, then each set in \(\mathcal{F}_{\overline{x}}\) must intersect all edges of \(G_{x}\), because \(\mathcal{F}\) is an intersecting family. This implies that the underlying set of \(\mathcal{F}\) has only six elements. Then, by Lemma 9, \(\mathcal{F}_{\overline{x}}\) consists of only one set, and it cannot turn the degrees of all five even-degree vertices of \(\mathcal{F}_{x}\) odd.
If \(G_{x}\) is the disjoint union of a triangle \(\{12,23,31\}\) and a path \(\{45,56\}\), then let
\[\mathcal{F}_{x}=\{x12,x23,x31,x45,x56\}.\]
As \(\mathcal{F}\) is intersecting, all sets from \(\mathcal{F}_{\overline{x}}\) must consist of the element \(5\), and two of the elements \(1,2,3\). Hence, the degree of one of \(1,2,3\) will be even, irrespective of the cardinality of \(\mathcal{F}_{\overline{x}}\), which is impossible.
We obtained
**Lemma 10**.: _In a 3-MOS, in which no vertex is contained in all sets, the degree of every vertex \(x\) contained in more than one set is \(|E_{x}|=3\)._
From here, we can conclude that \(|\mathcal{F}|\leq 7\) as follows.
Pick any set \(123\in\mathcal{F}\). Every set in \(\mathcal{F}\) must intersect \(123\), so each of them must contribute to at least one of \(E_{1}\), \(E_{2}\), and \(E_{3}\), where \(123\) contributes thrice. Therefore, the number of sets in \(\mathcal{F}\) is at most \(1+3\cdot 2=7\). Moreover, unless \(\mathcal{F}\) is 3-regular, we even have \(|\mathcal{F}|\leq 5\), by repeating the above argument by picking 1 to be an element included in only one set. From here, a case analysis (which can be found in Appendix B) gives the following.
**Proposition 11**.: _Up to isomorphisms, we have the following \(7\) different 3-MOS-s:_
_Case (1): Two disjoint triples: \(\{123,456\}\)._
_Case (2): Sunflower of 3 triples with one common element: \(\{123,145,167\}\)._
_Case (3): Sunflower of 3 triples with two common elements: \(\{123,124,125\}\)._
_Case (4): \(\mathcal{C}_{4}\): \(\{123,124,134,234\}\)._
_Case (5): Complement of a \(5\)-cycle: \(\{123,124,135,245,345\}\)._
_Case (6): \(\mathcal{C}_{4}\) with one element split into three: \(\{123,124,135,236\}\)._
_Case (7): \(\{123,124,156,256,345,346\}\)._
Note that each 3-MOS satisfies \(|\mathcal{F}|\leq 6\).
_Remark_.: More generally, with the above reasoning we can bound the number of \(k\)-tuples in a \(k\)-MOS as \(1+(k-1)g(k)\) where \(g(k)\) is the size of the largest sunflower-free family. If the Erdos-Rado conjecture is true, this gives an upper bound of \(c^{k}\). Maybe it is possible to prove such an exponential bound without invoking the conjecture as well. For example, using Lemma 9 another upper bound is \(1+(k-1)g_{v}(k)\) where \(g_{v}(k)\) is the size of the base set of the largest sunflower-free family. We could not find any papers studying the quantity \(g_{v}(k)\), and the base set of most sunflower-free constructions grows only linearly in \(k\). However, it is not hard to see that we have an exponential lower bound even in the case when the family is odd-sunflower-free: \(g_{v}(k)\geq 2^{k}-1\). This is achieved by the \(k\)-uniform family whose sets are the root-to-leaf paths in a rooted binary tree of depth \(k\). Note that this construction is not optimal, in fact, not even maximal; we can add, say, a set that contains the two children of the root, and \(k-2\) new vertices.
## 6. Concluding remarks
In this note, we studied the Erdos-Szemeredi-type sunflower problem for odd-sunflowers. We want to remark that our structural result is also true for (ordinary) sunflowers, using essentially the same proof.
**Proposition 12**.: _If \(\mathcal{F}\) is any sunflower-free \(k\)-uniform family on \(n\) elements, denoting the direct sum construction with building block \(\mathcal{F}\) by \(\mathcal{F}^{(t)}=\mathcal{F}+\cdots+\mathcal{F}\), then \(\lim_{t\to\infty}|\mathcal{F}^{(t)}|^{1/tn}<\mu\)._
In other words, direct sum constructions will never reach the optimal value \(\mu\). As far as we know, this result is new. The best currently known examples of Deuber _et al._[7] use a combination of a direct sum construction and some other _ad hoc_ tricks that do not work for odd-sunflowers.
What about the Erdos-Rado-type sunflower problem, i.e., what is the maximum possible size of an odd-sunflower-free \(k\)-uniform set system? We pose the following weakening of Erdos and Rado's conjecture.
**Conjecture 13**.: _The maximum size of any odd-sunflower-free \(k\)-uniform family is at most \(c^{k}\), for a suitable constant \(c>0\)._
Note that the respective problem does not make sense for even-sunflowers, as any number of disjoint sets is even-sunflower-free.
We would like to pose another weakening, already mentioned at the end of Section 5.
**Conjecture 14**.: _The maximum number of base elements, each of which is contained in at least one set of a sunflower-free \(k\)-uniform family, is at most \(c^{k}\), for a suitable constant \(c>0\)._
_Acknowledgment_.: We are grateful to Balazs Keszegh, and to the members of the Miklos Schweitzer Competition committee of 2022 for valuable discussions.
|
2303.15330
|
Fingerprint of vortex-like flux closure in isotropic Nd-Fe-B bulk magnet
|
Taking advantage of recent progress in neutron instrumentation and in the
understanding of magnetic-field-dependent small-angle neutron scattering, here,
we study the three-dimensional magnetization distribution within an isotropic
Nd-Fe-B bulk magnet. The magnetic neutron scattering cross section of this
system features the so-called spike anisotropy, which points towards the
presence of a strong magnetodipolar interaction. This experimental result
combined with a damped oscillatory behavior of the corresponding correlation
function and recent micromagnetic simulation results on spherical nanoparticles
suggest an interpretation of the neutron data in terms of vortex-like
flux-closure patterns. The field-dependent correlation length Lc is well
reproduced by a phenomenological power-law model. While the experimental
neutron data for Lc are described by an exponent close to unity (p = 0.86), the
simulation results yield p = 1.70, posing a challenge to theory to include
vortex-vortex interaction effects.
|
Mathias Bersweiler, Yojiro Oba, Evelyn Pratami Sinaga, Inma Peral, Ivan Titov, Michael P. Adams, Konstantin L. Metlov, Andreas Michels
|
2023-03-27T15:31:09Z
|
http://arxiv.org/abs/2303.15330v2
|
# Fingerprint of vortex-like flux closure in isotropic Nd-Fe-B bulk magnet
###### Abstract
Taking advantage of recent progress in neutron instrumentation and in the understanding of magnetic-field-dependent small-angle neutron scattering, here, we study the three-dimensional magnetization distribution within an isotropic Nd-Fe-B bulk magnet. The magnetic neutron scattering cross section of this system features the so-called spike anisotropy, which points towards the presence of a strong magnetodipolar interaction. This experimental result combined with a damped oscillatory behavior of the corresponding correlation function and recent micromagnetic simulation results on spherical nanoparticles suggest an interpretation of the neutron data in terms of vortex-like flux closure patterns. The field-dependent correlation length is very well reproduced by a power-law model used to describe the London penetration depth in the vortex state of type-II superconductors and suggests the "pairing" (interaction) of magnetic vortices.
## I. Introduction
Permanent magnets are defined by their high remanent magnetization and high coercivity. Ideally, their magnetization in the remanent state should be as uniform as possible within the bulk. Yet, because of the high coercivity (related to the large magnetic anisotropy), saturating them is not an option for quantifying how far their remanent state is away from the uniform one and what kind of magnetization nonuniformities develop at remanence.
These nonuniformities are usually micrometer in size and their direct observation within the bulk of the magnet is only possible with tomographic techniques. The current state of the art in x-ray magnetic nanotomography [1, 2, 3], which utilizes the magnetic circular dichroism effect, is strongly tied to the details of the absorption edge of a particular element (e.g., Gd) and is not applicable to an arbitrary magnetic material without significant adjustments. By contrast, magnetic small-angle neutron scattering (SANS) is universally applicable to any kind of magnetic material, and can disclose the magnetic microstructure in the bulk and on the relevant mesoscopic length scale of \(\sim\)1-1000 nm [4, 5, 6].
Sintered Nd-Fe-B is nowadays one of the most used permanent magnets that finds application in many key industry sectors [7]. Commercial-grade magnets consist of highly magnetic (and with high crystalline magnetic anisotropy) Nd\({}_{2}\)Fe\({}_{14}\)B grains, sintered and magnetized in such a way that their average magnetization points (mostly) towards the same direction. However, due to the magnetostatic interaction the grains tend to develop various flux-closure magnetization textures, which, depending on the grain size and other technological parameters, result in a multitude of multidomain structures (observable e.g. by surface microscopy and/or Bitter pattern techniques [8, 9]), lowering the energy product of the magnet. This problem becomes less pronounced for smaller grains (especially the ones that are embedded in the bulk of the magnet), whose magnetization becomes almost uniform, but even then the magnetostatic energy favors a subtle magnetization curling [10]. Also, the surface flux-closure domains are almost always very different from the magnetic texture in the bulk, which for Nd-Fe-B still remains unobserved, but (for macroscopic sample sizes) makes the major contribution towards its remanent magnetization.
As shown by Vivas _et al_. [11] using micromagnetic simulations, the flux-closure structures in a set of spherical magnetic nanoparticles can be analyzed using the correlation function analysis of the corresponding magnetic SANS cross section. In this study, we apply this specific analysis technique to real experimental scattering data and find strong evidence for the existence of vortex-like flux-closure textures within the grains of a Nd-Fe-B magnet.
## II Experimental Details
For the neutron experiments, a circular-shaped disk of a commercially available sintered isotropic (i.e., untextured) Nd-Fe-B permanent magnet (grade: N52) with a diameter of 22.0 mm and a thickness of 420 \(\upmu\)m was prepared. A summary of the microstructural and magnetic characterization results of the Nd-Fe-B specimen can be found in Ref. [12]. The neutron experiments were conducted at the instrument SANS-J at the JRR-3 research reactor in Tokai, Japan [13]. Figure 1(a) sketches the scattering geometry used in this work. The neutron experiments were done at room temperature using an unpolarized neutron beam with a mean wavelength of \(\lambda=6.5\) A and a wavelength broadening of 14 % (FWHM). By employing a focusing neutron-lens setup, the accessible magnitude of the momentum-transfer vector \(q\) ranged between about 0.003 nm\({}^{\text{-1}}\leq q\leq 0.3\) nm\({}^{\text{-1}}\), so that real-space structures on a scale of a few nm up to a few \(\upmu\)m could be probed. It is this particular feature of the neutron instrumentation at SANS-J that allows us to access large-scale magnetization fluctuations, well beyond the capabilities of conventional SANS instruments (see Ref. [13] for further details). A magnetic field \(\mathbf{H}_{0}\) was applied perpendicular to the incident neutron beam (\(\mathbf{H}_{0}\parallel\mathbf{e}_{\text{z}}\perp\mathbf{k}_{0}\)). Neutron data were recorded by reducing the magnetic field from 10 T (maximum field available) down to 0 T. The neutron-data reduction (corrections for background scattering and sample transmission) was performed using an in-house program written in Igor Pro software (WaveMetrics). For the neutron-data analysis, the experimental purely magnetic SANS cross sections \(d\Sigma_{\text{mag}}/d\Omega\) were determined by subtracting the total (nuclear + magnetic) SANS cross section \(d\Sigma/d\Omega\) measured at 10 T (approach-to-saturation regime) from the ones measured at lower fields. This subtraction procedure eliminates the (field-independent) nuclear scattering contribution, and has been successfully used in several other studies, e.g., to investigate the magnetization profile within nanoparticles [14], or to disclose the magnetic microstructure in off-stoichiometric Heusler alloys [15].
## III. Magnetic correlation function and correlation length
The quantity of interest in our experiments is the magnetic correlation function \(C(r)\), which provides information on the real-space correlations of the three-dimensional magnetization vector field on the mesoscopic length scale [6]. We have numerically computed the \(C(r)\) from the experimental purely magnetic SANS cross section \(d\Sigma_{\rm mag}/d\Omega\) via an indirect Fourier transform (IFT) technique, based on the following Fourier transform:
\[C(r)=\frac{1}{r}\int_{0}^{\infty}\frac{d\Sigma_{\rm mag}}{d\Omega}(q)\sin{(qr) }qdq\quad. \tag{1}\]
The numerical inversion method was introduced in the 1970s by Glatter [16]; for the particular case of magnetic SANS, it represents a fast and robust means to obtain model-free magnetic information that reflects the real-space magnetization of magnetic materials (e.g., nanoparticles [17, 18] or bulk ferromagnets [15, 19]); see Ref. [20] for technical information and discussion about the IFT approach. We have also computed the magnetic correlation length \(L_{\rm c}\), which characterizes the average distance over which fluctuations of the magnetization vector field are correlated. There are several methods discussed in the literature to define and quantify \(L_{\rm c}\); e.g., \(L_{\rm c}\) can be determined from the logarithmic derivative of the magnetic correlation function \(C(r)\) in the limit \(r\to 0\) (Ref. [21]), or it can be defined as the value of \(r\) for which \(C(r)=C(0){\rm exp}(-1)\) (Ref. [22]). These two methods for estimating \(L_{\rm c}\) focus on the behavior of \(C(r)\) at small distances \(r\). Here, in order to obtain the full information on the vortex structures (over the entire \(r\) range), we determined \(L_{\rm c}\) at a particular external field \(H_{0}\) according to
\[L_{\rm c}(H_{0})=\frac{\int_{0}^{\infty}r\;C(r,H_{0}){\rm d}r}{\int_{0}^{ \infty}C(r,H_{0}){\rm d}r}\quad. \tag{2}\]
We emphasize that for exponentially-decaying correlations all three definitions yield the same correlation length. In our analysis, we also display results for the so-called distance distribution function \(P(r)\), which is related to the correlation function via \(P(r)=r^{2}C(r)\). Due to the \(r^{2}\) factor, features at medium and large distances \(r\) are more pronounced in \(P(r)\) than in \(C(r)\).
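As a concrete illustration of Eqs. (1) and (2) (a self-contained Python sketch, not the IFT analysis applied to the measured data), consider the benchmark case of a uniformly magnetized sphere of radius \(R\), for which \(C(r)=1-3r/(4R)+r^{3}/(16R^{3})\) on \(0\leq r\leq 2R\). Equation (2) then yields the correlation length \(L_{\rm c}=\frac{8}{15}R\) quoted as the theoretical limit in Fig. 5, and transforming the sphere's scattering curve with Eq. (1) recovers this \(C(r)\) up to a constant factor.

```python
import numpy as np
from scipy.integrate import trapezoid

R = 280.0                                        # sphere radius in nm (arbitrary choice)
r = np.linspace(0.0, 2 * R, 20001)
C = 1 - 3 * r / (4 * R) + r**3 / (16 * R**3)     # C(r) of a uniformly magnetized sphere
P = r**2 * C                                     # distance distribution P(r) = r^2 C(r)

L_c = trapezoid(r * C, r) / trapezoid(C, r)      # Eq. (2)
print(L_c, 8 * R / 15)                           # both ~149.3 nm

# Consistency check of Eq. (1): transform the sphere scattering curve back to C(r).
q = np.linspace(1e-4, 1.0, 200001)               # nm^-1
qR = q * R
I = (3 * (np.sin(qR) - qR * np.cos(qR)) / qR**3) ** 2
r_test = np.array([50.0, 150.0, 300.0, 450.0])
C_num = np.array([trapezoid(I * q * np.sin(q * rv), q) / rv for rv in r_test])
C_ana = 1 - 3 * r_test / (4 * R) + r_test**3 / (16 * R**3)
print(C_num / C_num[0] * C_ana[0])               # approximately equal to C_ana
print(C_ana)
```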
## IV. Results and Discussion
Figures 1(b) and (c) display typical examples of the experimental two-dimensional (2D) total SANS cross section \(d\Sigma/d\Omega\) of isotropic Nd-Fe-B at the selected fields of 10 T (near saturation) and 0 T (remanence), respectively. As can be seen, near saturation the pattern is slightly elongated perpendicular to the magnetic-field direction. This feature in \(d\Sigma/d\Omega\) is the signature of the so-called "\(\sin^{2}(\theta)\)-type" angular anisotropy due to predominantly longitudinal magnetization fluctuations. By contrast, at remanence, one can clearly observe the emergence of maxima in \(d\Sigma/d\Omega\) along the magnetic-field direction. This observation indicates the build-up of a more complex magnetization texture. Figure 1(d) shows the corresponding 2D purely magnetic SANS cross section \(d\Sigma_{\text{mag}}/d\Omega\) obtained from the
Figure 1: (a) Sketch of the scattering geometry used for the magnetic SANS experiments. The momentum-transfer vector \(\mathbf{q}\) corresponds to the difference between the wavevectors of the incident (\(\mathbf{k_{0}}\)) and the scattered (\(\mathbf{k_{1}}\)) neutrons, i.e., \(\mathbf{q=k_{0}-k_{1}}\). The magnetic field \(\mathbf{H_{0}}\) is applied perpendicular to the incident neutron beam, i.e., \(\mathbf{H_{0}}\parallel\mathbf{e_{z}\perp k_{0}}\). For small-angle scattering (i.e., \(\Psi\ll 1\)), the component \(q_{x}\) of \(\mathbf{q}\) is smaller than the other two components \(q_{y}\) and \(q_{z}\), so that only correlations in the plane perpendicular to the incident neutron beam are probed. (b) and (c) Experimental two-dimensional total (nuclear \(+\) magnetic) unpolarized SANS cross section \(d\Sigma/d\Omega\) of isotropic Nd-Fe-B permanent magnet at the selected fields of 10 T [(b), near saturation] and 0 T [(c), remanence]. (d) Corresponding purely magnetic SANS cross section \(d\Sigma_{\text{mag}}/d\Omega\) obtained by subtracting (b) from (c). Note that in (b) to (d) the SANS data are plotted in polar coordinates with \(q\) in nm-1, \(\theta\) in degrees, and the intensity in cm-1. White dashed line: guide for the eyes to emphasize the spike-type angular anisotropy due to the magnetodipolar interaction.
subtraction of (b) from (c). In this way, the sharp maxima along the field direction become more clearly visible, thereby revealing the so-called "spike-type" angular anisotropy. As detailed by Perigo _et al._[23], this particular feature in \(d\Sigma_{\rm mag}/d\Omega\) is a consequence of the magnetostatic pole-avoidance principle; it is due to a nonzero magnetic volume charge density of the magnetization that shows up as a characteristic angular dependence of the Fourier modes of the magnetostatic field, which in turn determine the Fourier components of the magnetization and, therefore, of \(d\Sigma_{\rm mag}/d\Omega\).
Figure 2: Magnetic-field dependence of the (over \(2\pi\)) azimuthally-averaged purely magnetic SANS cross section \(d\Sigma_{\rm mag}/d\Omega\) of isotropic Nd-Fe-B permanent magnet (log-log scales). Red dashed lines: Extrapolation of \(d\Sigma_{\rm mag}/d\Omega\propto q^{-n\pm 0.05}\). These Porod fits were restricted to \(0.034\leq q\leq 0.14\) nm\({}^{-1}\). Colored solid lines: Reconstructed \(d\Sigma_{\rm mag}/d\Omega\) based on the indirect Fourier transform (IFT) of the numerically computed \(P(r)\) shown in Fig. 3.
Figure 2 presents the (over \(2\pi\)) azimuthally-averaged purely magnetic SANS cross sections \(d\Sigma_{\rm mag}/d\Omega\) of isotropic Nd-Fe-B permanent magnet. In the low-\(q\) region, the \(d\Sigma_{\rm mag}/d\Omega\) curves reveal a Guinier-type behavior around \(q\)\(\sim\)0.006 nm\({}^{\text{-1}}\), indicating that the probed magnetic microstructure is at least 1 \(\upmu\)m in size. In the high-\(q\) region, the \(d\Sigma_{\rm mag}/d\Omega\) exhibit a \(q^{-n}\)-type decay, where the asymptotic power-law exponent \(n\) is found to be consistently larger than the value of \(n=4\), corresponding to scattering from particles with sharp interfaces or from exponentially-correlated magnetization fluctuations. The finding of values \(n>4\) supports the notion of dominant spin-misalignment scattering in isotropic Nd-Fe-B permanent magnet, and is consistent with theoretical predictions and experimental results [6; 12; 24]. This observation is a consequence of the fact that magnetic SANS has its origin in smoothly-varying continuous magnetization profiles, rather than in sharp discontinuous magnetic-moment variations.
Figure 3: (a) Field dependence of the magnetic correlation function \(C(r)\) [Eq. (1)], which was numerically computed via an indirect Fourier transform (IFT) of the experimental \(d\Sigma_{\rm mag}/d\Omega\) data shown in Fig. 2. Blue dashed lines: (a) \(C(r)=1-3r/(4R)+r^{3}/(16R^{3})\) and (b) \(P(r)=r^{2}C(r)\) for a uniformly magnetized sphere with a radius of \(R=560\) nm. Note that the \(C(r)\) at each field have been normalized by their respective maximum values. (b) Corresponding (normalized) distance distribution functions \(P(r)\).
Figure 3 shows the magnetic-field dependence of the magnetic correlation function \(C(r)\) and of the corresponding distance distribution function \(P(r)\) computed via IFT. In the following, we focus the discussion on the behavior of the \(P(r)\) [rather than on the \(C(r)\)], since due to the \(r^{2}\) factor features at medium and large distances are more pronounced in \(P(r)\) than in \(C(r)\). Depending on the magnetic field strength, two distinct behaviors are observed for the \(P(r)\). At the highest field, \(P(r)\) exhibits a deformed bell-like shape, which is reminiscent of an approximately spherical correlation volume. Compared to the case of a uniformly magnetized sphere (blue dashed lines in Fig. 3), it is reasonable to assume that this \(P(r)\) corresponds to a magnetic structure with a size of about 1 \(\upmu\)m and with an internal spin configuration that deviates only slightly from the perfect alignment along the field direction. By reducing the magnetic field strength, the \(P(r)\) disclose a damped oscillatory behavior with negative values, and a zero-crossing shifting to smaller \(r\). As previously discussed by Vivas _et al._[11], the combination of these two features can be used as a strong indication for the presence of an inhomogeneous magnetization texture; more specifically, numerical micromagnetic computations revealed that these features in \(P(r)\) are related to the emergence of a vortex-like flux closure in nanoparticles.
Figure 4: Signature of a vortex-like magnetization texture in the magnetic SANS observables. (a) Snapshot of a vortex-type magnetization distribution (at remanence) in a 40-nm-sized spherical nanoparticle, obtained using the open-source software code MuMax3 [25]. For the micromagnetic simulations of the real-space magnetization distribution, the sphere volume was discretized into cubic cells with a size of \(2\times 2\times 2\) nm\({}^{3}\), and the magnetic material parameters of Fe were used [26, 27] (b) Magnetic-field dependence of the (over \(2\pi\)) azimuthally-averaged magnetic SANS cross section \(d\Sigma_{\rm mag}/d\Omega\), numerically
computed from the simulated real-space magnetization distribution at selected fields \(H_{0}\) [ranging from 0.5 T (near saturation) to remanence with an increment of 0.05 T]. (c) Corresponding magnetic correlation functions \(\mathcal{C}(r)\) [Eq. (1)]. (d) Magnetic distance distribution functions \(P(r)=r^{2}\mathcal{C}(r)\).
Figure 4 displays micromagnetic simulation results for the specific case of a vortex-like magnetization texture in a nanoparticle; details on the micromagnetic simulations using MuMax3 can be found in Ref. [28]. As discussed by Vivas _et al._[11], the most characteristic signature of a vortex-type magnetization distribution [as shown in Fig. 4(a)] in the neutron-scattering data is a damped oscillatory behavior of the corresponding correlation functions \(\mathcal{C}(r)\) and \(P(r)\) with a shift of the zero crossing to smaller \(r\) [compare Fig. 4(c) and (d)]. The oscillatory behavior can be readily explained as follows: a vortex structure is characterized by relatively large spin variations, circulating about the vortex axis, so that the autocorrelation of a vortex with its displaced "ghost" gets dominated (at some particular value of displacement \(r\)) by "anticorrelations" with negative values in both \(\mathcal{C}(r)\) and \(P(r)\). We would like to emphasize that the emergence of a vortex-type spin structure and the concomitant oscillatory feature in the \(P(r)\) is a direct consequence of the dipolar interaction [11, 28], which is also of decisive importance for the appearance of the spike-type pattern in the two-dimensional experimental \(d\Sigma_{\text{mag}}/d\Omega\) [see Fig. 1(d)]. In fact, as demonstrated in Ref. [23], without the dipolar interaction the spike feature is absent in \(d\Sigma_{\text{mag}}/d\Omega\). Moreover, while the simulation results at low fields reveal the absence of a Guinier behavior in \(d\Sigma_{\text{mag}}/d\Omega\) at the smallest momentum transfers [compare Fig. 4(b)], this is not seen in our experimental neutron data [Fig. 2], which at all fields studied feature a Guinier-type behavior followed by a plateau region. In this regard we emphasize that the presence of a vortex-type texture in the sample does not imply a peak-type dependence of the one-dimensional \(d\Sigma_{\text{mag}}/d\Omega\); even a plateau at small \(q\) is compatible with the existence of vortex structures [compare Fig. 4(b) and 4(d)].
Figure 5 presents the field dependence of the magnetic correlation length \(L_{\rm c}(H_{0})\) obtained from the experimental neutron scattering data of isotropic Nd-Fe-B and from the numerical micromagnetic simulations of a single nanoparticle exhibiting a vortex-type magnetization distribution. In the case of the isotropic Nd-Fe-B sample (micromagnetic simulations on a single nanoparticle) \(L_{\rm c}\) increases from \(\sim\) 127 (4.7) nm at remanence to \(\sim\)175 (10.3) nm at the highest field of 4 T (0.5 T). So far there exists no theoretical model that is able to describe the behavior of \(L_{\rm c}(H_{0})\) for the case of a vortex-like flux closure texture. An excellent description of the \(L_{\rm c}(H_{0})\) data is obtained using the following power-law expression (solid lines in Fig. 5):
\[L_{\rm c}(H_{0})=L_{\rm c}(H_{0}=0)+\beta{H_{0}}^{p}\quad, \tag{3}\]
where \(L_{\rm c}(0)=127.0\pm 0.3\) nm, \(\beta=14.5\pm 0.5\) nm/T\({}^{p}\), and \(p=0.86\pm 0.02\) for the case of the isotropic Nd-Fe-B sample, and \(L_{\rm c}(0)=4.8\pm 0.1\) nm, \(\beta=19.0\pm 1.0\) nm/T\({}^{p}\), and \(p=1.70\pm 0.09\) for the simulation data [\(H_{0}\) in Eq. (3) is in Tesla]. A model similar to Eq. (3) has already been used by Sonier _et al._[29] to explain the field dependence of the magnetic penetration depth in the vortex state
Figure 5: (\(\circ\)) Field dependence of the magnetic correlation length \(L_{\rm c}\) obtained from the computed \(C(r)\) data shown in Fig. 3(a) and using Eq. (2). (\(\bullet\)) \(L_{\rm c}(H_{0})\) obtained from the micromagnetic simulation data shown in Fig. 4(c) (note the different scales). Solid lines: fits to Eq. (3). The dashed line represents the theoretical limit \(L_{\rm c}^{H\to\infty}=\frac{8}{15}R\) for a uniformly magnetized 40-nm-sized sphere using \(C(r)=1-3r/(4R)+r^{3}/(16R^{3})\) and Eq. (2).
of a type-II superconductor. As discussed in Ref. [29], a linear increase (\(p=1\)) is expected in the case of "unconventional" vortex pairing, whereas a quadratic field dependence (\(p=2\)) is expected if there is no pairing between vortex states. Even though the analogy to the superconductors seems to be rather far fetched, we do see the change of \(p\) from \(p\longrightarrow 1\) in the case of isotropic Nd-Fe-B to \(p\to 2\) for the data from our micromagnetic simulations. It can, probably, be attributed to the fact that we simulated only a single vortex state (i.e., a system with no vortex-vortex interactions). A better comprehension of the magnetic field dependence of the magnetic correlation length \(L_{\rm c}\) requires the further extension of the micromagnetic SANS theory to include interacting vortex-like structures [30, 31]. Finding the exponent that describes the field evolution of \(L_{\rm c}\) is an important open question in this regard.
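For completeness, the power law of Eq. (3) is straightforward to fit with standard tools; the Python sketch below uses synthetic data generated from the parameters reported above for the Nd-Fe-B sample (not the measured \(L_{\rm c}\) values) purely to illustrate the procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def L_c_model(H, L0, beta, p):
    return L0 + beta * H**p                       # Eq. (3), H in Tesla

rng = np.random.default_rng(1)
H = np.linspace(0.5, 4.0, 8)
# Synthetic "data" from the reported parameters L0 = 127 nm, beta = 14.5, p = 0.86.
L_synth = L_c_model(H, 127.0, 14.5, 0.86) + rng.normal(0.0, 1.0, H.size)

popt, pcov = curve_fit(L_c_model, H, L_synth, p0=[120.0, 10.0, 1.0])
perr = np.sqrt(np.diag(pcov))
print(popt)   # close to the generating values (127, 14.5, 0.86)
print(perr)   # one-standard-deviation uncertainties of the fit
```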
## V Conclusion
We have investigated the spin microstructure of an isotropic Nd-Fe-B bulk magnet using magnetic field-dependent small-angle neutron scattering (SANS) combined with micromagnetic simulations. Thanks to a focusing neutron-lens setup, we could access the long-range magnetic correlations in the magnetic SANS cross section \(d\Sigma_{\rm mag}/d\Omega\). The two-dimensional \(d\Sigma_{\rm mag}/d\Omega\) features a spike-type angular anisotropy, which is a consequence of the magnetodipolar interaction and the ensuing pole avoidance principle. Analysis of the magnetic correlation and distance distribution functions, numerically computed from the experimental \(d\Sigma_{\rm mag}/d\Omega\) via an indirect Fourier transform method, suggest the emergence of an internal vortex-like flux-closure magnetization distribution by reducing the magnetic field strength. The field dependence of the corresponding magnetic correlation length can be well described by a power-law expression, and is attributed to the "pairing" of vortices, i.e., to vortex-vortex interactions. The investigation of vortex-like flux-closure patterns in isotropic Nd-Fe-B might be extended to textured commercial-grade Nd-Fe-B, where their appearance might reduce the corresponding remanent magnetization and, hence, limit the performance of the magnet. Finally, our study underlines that magnetic SANS combined with micromagnetic simulations is a promising approach towards the resolution of three-dimensional mesoscale spin structures in bulk materials.
## Acknowledgements
Financial support by the National Research Fund of Luxembourg (AFR Grant No. 15639149 and PRIDE MASSENA Grant) and by KAKENHI (Grant No. 19K05102) is gratefully acknowledged. We thank the Japan Atomic Energy Agency for the provision of neutron beam time at the SANS-J instrument. Konstantin L. Metlov acknowledges the support of the Russian Science Foundation under Project No. RSF 21-11-00325.
|
2305.09497
|
Curious Rhythms: Temporal Regularities of Wikipedia Consumption
|
Wikipedia, in its role as the world's largest encyclopedia, serves a broad
range of information needs. Although previous studies have noted that Wikipedia
users' information needs vary throughout the day, there is to date no
large-scale, quantitative study of the underlying dynamics. The present paper
fills this gap by investigating temporal regularities in daily consumption
patterns in a large-scale analysis of billions of timezone-corrected page
requests mined from English Wikipedia's server logs, with the goal of
investigating how context and time relate to the kind of information consumed.
First, we show that even after removing the global pattern of day-night
alternation, the consumption habits of individual articles maintain strong
diurnal regularities. Then, we characterize the prototypical shapes of
consumption patterns, finding a particularly strong distinction between
articles preferred during the evening/night and articles preferred during
working hours. Finally, we investigate topical and contextual correlates of
Wikipedia articles' access rhythms, finding that article topic, reader country,
and access device (mobile vs. desktop) are all important predictors of daily
attention patterns. These findings shed new light on how humans seek
information on the Web by focusing on Wikipedia as one of the largest open
platforms for knowledge and learning, emphasizing Wikipedia's role as a rich
knowledge base that fulfills information needs spread throughout the day, with
implications for understanding information seeking across the globe and for
designing appropriate information systems.
|
Tiziano Piccardi, Martin Gerlach, Robert West
|
2023-05-16T14:48:08Z
|
http://arxiv.org/abs/2305.09497v3
|
# Curious Rhythms: Temporal Regularities of Wikipedia Consumption
###### Abstract
Wikipedia, in its role as the world's largest encyclopedia, serves a broad range of information needs. Although previous studies have noted that Wikipedia users' information needs vary throughout the day, there is to date no large-scale, quantitative study of the underlying dynamics. The present paper fills this gap by investigating temporal regularities in daily consumption patterns in a large-scale analysis of billions of timezone-corrected page requests mined from English Wikipedia's server logs, with the goal of investigating how context and time relate to the kind of information consumed. First, we show that even after removing the global pattern of day-night alternation, the consumption habits of individual articles maintain strong diurnal regularities. Then, we characterize the prototypical shapes of consumption patterns, finding a particularly strong distinction between articles preferred during the evening/night and articles preferred during working hours. Finally, we investigate topical and contextual correlates of Wikipedia articles' access rhythms, finding that article topic, reader country, and access device (mobile vs. desktop) are all important predictors of daily attention patterns. These findings shed new light on how humans seek information on the Web by focusing on Wikipedia as one of the largest open platforms for knowledge and learning, emphasizing Wikipedia's role as a rich knowledge base that fulfills information needs spread throughout the day, with implications for understanding information seeking across the globe and for designing appropriate information systems.
1Stanford University
2Wikimedia Foundation
3EPFL
[email protected], [email protected], [email protected]
## Introduction
Human life is driven by strong temporal regularities at multiple scales, with events and activities recurring in daily, weekly, monthly, yearly, or even longer periods. The cyclical nature of human life is also reflected in human behavior on digital platforms. Understanding and modeling temporal regularities in digital-platform usage is important from a practical as well as a scientific perspective: on the practical side, understanding user behaviors and needs is critical for building effective, user-friendly platforms; on the scientific side, since online behavior reflects user needs, studying temporal regularities of platform usage can shed light on the structure of human life itself and is thus consequential for sociology, psychology, anthropology, medicine, and other disciplines. For instance, in information science, time has long been recognized as a crucial contextual factor that drives human information seeking [13]. The study of traces recorded on digital platforms has yielded novel insights about circadian rhythms [1, 14, 15, 16, 17] and periodic fluctuations in alertness [18], mood [19, 20], focus [21], musical taste [22, 23], purchase habits [10], and ad engagement [17].
With regard to both aspects--practical and scientific--Wikipedia constitutes a particularly important online platform to study: On the practical side, Wikipedia is one of the most frequently visited sites on the Web, such that a better understanding of user behavior can potentially lead to site improvements with consequences for billions of users. On the scientific side, Wikipedia is not just yet another website; it is the world's largest encyclopedia, where each page is about a precise concept. Temporal regularities of Wikipedia usage thus have the potential to reveal regularities of human necessities, telling us what humans care about, and what they are curious about, at what times. Some temporal regularities of Wikipedia usage are already known: e.g., Wikipedia's overall usage frequency, as well as the length of sessions, varies by time of day [24], as do users' reasons for reading Wikipedia [25, 26]. These studies have, however, glanced at temporal regularities merely superficially and in passing while focusing primarily on other aspects of Wikipedia usage. The most related to our work is a study that showed that Wikipedia _editing_ follows daily and weekly regularities [26], but the study was limited to editing behavior. One reason why previous studies have not been able to analyze reading, rather than editing, behavior to date is that, as opposed to edit logs (where editors' geo-locations can be approximated via logged IP addresses), geo-located reading logs are not publicly avail
able. Rather, Wikipedia's public hourly pageview logs1 report only Coordinated Universal Time (UTC) without specifying the reader's local time. Especially in large language editions such as English, which are accessed from many countries in different timezones, this constitutes a major limitation.
Footnote 1: [https://dumps.wikimedia.org/other/pageviews/readme.html](https://dumps.wikimedia.org/other/pageviews/readme.html)
We overcome this limitation by working with a non-public dataset of English Wikipedia's full webrequest logs, enriched with timezone information inferred from user IP addresses, which allows us to timezone-correct all timestamps and thus to faithfully study, for the first time, temporal regularities of Wikipedia reading at daily granularity. We characterize how information consumption on the platform varies by time of day and how time interacts with other contextual properties, such as article topics and reader country. Adding to previous studies on the consumption and popularity of Wikipedia content, we provide insights into the daily temporal rhythms that drive Wikipedia usage. Specifically, we address the following research questions:
**RQ1 Strength of rhythms:**: How strongly is Wikipedia consumption driven by periodic rhythms?
**RQ2 Shapes of rhythms:**: What are the typical shapes of Wikipedia consumption rhythms?
**RQ3 Correlates of rhythms:**: How do topical and contextual factors determine Wikipedia consumption rhythms?
Regarding RQ1, we find that fluctuations in Wikipedia's total access volume are largely explained by a diurnal baseline rhythm corresponding to the human circadian wake-sleep cycle. Crucially, however, individual articles deviate systematically from the baseline rhythm, with deviations themselves following periodic diurnal patterns.
Regarding RQ2, principal component analysis reveals that individual articles' access rhythms are heavily driven by a small number of prototypical temporal signatures, but that article-specific rhythms do not cluster into distinct groups, instead varying smoothly along a continuum.
Regarding RQ3, regression analysis shows that temporal regularities in article access volume vary systematically by article topic, access method (mobile vs. desktop), and reader country. User country is the strongest determinant of access rhythm, whereby different countries' interests in English Wikipedia fluctuate in idiosyncratic periodic patterns over the course of the day. In terms of topic, to a first approximation, stem and history & society articles tend to be more popular early in the day, and media articles later in the day, with culture articles being less concentrated in time.
Taken together, these findings further our understanding of how human information needs vary in space and time, and can serve as a stepping stone toward the informed design of improved information systems, on Wikipedia and beyond.
## Related Work
**Temporal rhythms in digital traces.** Temporal patterns using digital traces have been explored in a large variety of topics. Consumption logs can bring invaluable insight into our understanding of how time impacts human activities. For example, mobile phone traces can help characterize daily rhythms [10] and expose insights about our circadian rhythm [20], which may be difficult to study otherwise. Analyzing the temporal dynamics of digital traces can also uncover biological and cognitive rhythms during the day, such as sense of alertness [1], chronotypes [13], and focus [14]. Individual temporal rhythms can also serve as predictors of medical conditions that need attention [15]. Social media data, like Twitter [16, 17], and messaging apps, like WhatsApp [16], can be used as large-scale sensors revealing insights about sleeping patterns [15], and happiness fluctuations [12], with variations across cultures [18]. Time of day also impacts our choices in terms of musical taste [14, 15], purchase habits [17], and ad engagement [10].
Interactions with Wikipedia, too, are affected by time. Previous studies describe how consumption patterns reveal information about seasonal fluctuations in mood [13] at the population scale and how editors' temporal patterns exhibit culture-specific differences [16]. It has also been noted that modeling Wikipedia temporal readers' needs has implications for technical improvements and could be exploited to develop scalable infrastructure adapted to reading patterns [15].
**Wikipedia reader behavior.** Given Wikipedia's central role in the Web ecosystem, there is increasing attention to characterizing reader behavior. Previous work investigated the reasons that lead readers to consume Wikipedia content [20], finding differences by time and country [14]. Complementary studies focus on consumption dynamics [13, 14, 15] and interaction with various article elements, such as citations [16, 17], external links [18], and images [19]. Significant attention is also dedicated to navigating from page to page to understand the mechanism that allows users to move across content in a natural setup [18, 19, 20, 21, 22]. Our work complements these findings by focusing on the shape of temporal regularities, which have not been analyzed in detail before.
**Information needs and curiosity.** Our work relates broadly to more theoretical work on information needs. The formulation of information needs was initially popularised by Wilson [17] and the resulting information models re
fined over the years [14, 15]. Information needs have also been linked to concepts in biology by comparing the need for information to the need for food [12]. This idea is formalized in information foraging theory [13], which has developed behavioral models describing humans in the information space as predators relying on mechanisms such as information scent [15] to find what they need [14, 15, 16]. More recent work [13] investigates the mechanism of online consumption by focusing on the role of curiosity, using Wikipedia as the reference platform and finding substantial differences in how humans explore information networks.
## Data
Our study relies on the access logs of English Wikipedia, collected over four weeks (1-28 March 2021) on the servers of the Wikimedia Foundation. These server logs, stored for a limited time, describe the requests received by the server when readers access the site, capturing the title of the requested article, the time the request was made, the user's IP address and geo-location (approximately inferred from the IP address), and more.
To ensure anonymity and prepare the data for this study, we preprocessed it as follows. First, we consider only requests for articles (namespace 0), ignoring requests for pages in other namespaces (e.g., talk pages), and we consider only requests originating from external websites, ignoring requests made from other Wikipedia pages. The restriction to incoming traffic from external websites better captures the (exogenous) information needs that cause users to visit Wikipedia, rather than (endogenous) information needs that are caused by visiting Wikipedia. Additionally, we refine the logs by removing sequential loads of the same page from the same client because they could be an artifact of the browser [10], not representing a real intention to visit Wikipedia.
We anonymize the data by removing all sensitive information such as IP addresses, user-agent strings, fine-grained geo-coordinates, and all requests from logged-in users (3% of the pageloads). Finally--and crucially for our purposes--we align all requests by converting timestamps to the user's local time using timezone information available in the logs.
After these steps, we retain 3.45B pageloads associated with 6.3M articles. We represent the number of pageloads of article \(a\) in hour \(h\) of the week (averaged over the four weeks) by \(n_{a}(h)\), i.e., each article is represented by a 168-dimensional time series, with one entry for each of the \(168=24\times 7\) hours per week.
**Article properties.** ORES [11], Wikipedia's official article scoring system, offers a classifier for labeling articles with topics. The labels are organized in a two-level taxonomy manually derived from WikiProjects2 (groups of editors who self-organize to work on specific topical areas). We applied the classifier to all articles in our dataset and, for each article, obtained a probability for each of 38 topics, grouped into five top-level topics: stem, culture, history & society, media (originally included in culture, but promoted to a top-level topic for our study), geographical. In ORES, geographical contains fine-grained regional topics such as east asia or central America, which we removed, maintaining only the top-level topic geographical. Note that each article is independently scored for each topic, so topic probabilities need not sum to 1 for a given article.
Footnote 2: [https://en.wikipedia.org/wiki/Wikipedia:WikiProject](https://en.wikipedia.org/wiki/Wikipedia:WikiProject)
**Access properties.** For each pageload, we retain two contextual properties of the request: access method and user country. The access method indicator specifies if the user loaded the article from a desktop or a mobile device, such as a smartphone or tablet. The user country is estimated by geo-locating the IP address associated with the request.
**Baseline rhythm.** Fig. 1 (top) shows the total number of pageloads per hour of the week (averaged over the four weeks considered). The consumption rhythm follows a diurnal pattern, with the lowest access volume between 4:00 and 5:00 from Monday to Saturday and between 5:00 and 6:00 on Sunday, and with a sharp increase in traffic from 18:00 every day of the week, consistently reaching its peak at 21:00. When broken down by access method (Fig. 1, bottom), different device types are associated with different patterns, with mobile devices driving the increase in evening traffic. Similarly, access from desktop devices, likely more dependent on working rhythms, shows a small reduction around 12:00 and 18:00 and reduced activity on weekend mornings.
Denoting the number of pageloads in hour \(h\) by \(N(h)=\sum_{a}n_{a}(h)\) and normalizing \(N(h)\) to be a distribution over the 168 hours of a week, we obtain what we term Wikipedia's _baseline rhythm_ \(\Pr(h):=N(h)/\sum_{h^{\prime}=0}^{167}N(h^{\prime})\).
**Divergence from the baseline rhythm.** Having observed that overall Wikipedia access volume varies widely throughout the week (Fig. 1, top), we move on to studying temporal access volume regularities for individual articles \(a\), by analyzing article-specific distributions over hours, \(\Pr(h|a):=n_{a}(h)/\sum_{h^{\prime}=0}^{167}n_{a}(h^{\prime})\). Individual articles' rhythms \(\Pr(h|a)\) are heavily driven by the overall baseline rhythm \(\Pr(h)\) dictated by human wake-sleep patterns. Therefore, to study article-
Figure 1: _Top:_ Total number of pageloads from external origin by hour of the week, averaged over four weeks. _Bottom:_ Idem, divided by desktop and mobile.
specific rhythms in isolation from the baseline rhythm, we remove the latter by computing the hourly _divergence_\(D_{a}(h)\) of \(a\)'s rhythm \(\Pr(h|a)\) from the baseline rhythm \(\Pr(h)\) via pointwise division:
\[D_{a}(h)=\frac{\Pr(h|a)}{\Pr(h)}=\frac{n_{a}(h)/\sum_{h^{\prime}=0}^{167}n_{a}(h ^{\prime})}{N(h)/\sum_{h^{\prime}=0}^{167}N(h^{\prime})}. \tag{1}\]
A divergence greater [less] than 1 indicates that in hour \(h\), article \(a\) receives more [less] attention than expected from the baseline rhythm, whereas a divergence of 1 implies that a global weekly rhythm fully determines \(a\)'s consumption.
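Computing the baseline rhythm and the article-specific divergences amounts to a few array operations. The sketch below is an illustration (not the authors' code) and assumes a hypothetical NumPy array `counts` of shape (articles, 168) holding the hourly pageload counts \(n_a(h)\).

```python
import numpy as np

def divergence_from_baseline(counts):
    """Return Pr(h|a), the baseline rhythm Pr(h), and the divergence D_a(h) of Eq. (1)."""
    counts = np.asarray(counts, dtype=float)
    p_h_given_a = counts / counts.sum(axis=1, keepdims=True)  # Pr(h|a), each row sums to 1
    totals = counts.sum(axis=0)                               # N(h)
    p_h = totals / totals.sum()                               # Pr(h), baseline rhythm
    return p_h_given_a, p_h, p_h_given_a / p_h                # D_a(h) by pointwise division
```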
Given the skewed distribution of article popularity, to avoid data sparsity issues, we keep only articles with at least 1,000 pageloads over the four weeks considered, or 35 pageloads per day on average. After applying this filter, we retain 439K articles, accounting for 83.8% (2.89B pageloads) of English Wikipedia's traffic.
## RQ1: Strength of Rhythms
We start by quantifying the strength of temporal regularities in Wikipedia consumption patterns. Taking a signal processing approach, we decompose a time series \(x(h)\) of interest (e.g., \(x(h)=\Pr(h)\) or \(D_{a}(h)\)) into its frequency components via the Fourier transform, which represents \(x(h)\) as a weighted sum of sinusoids of all possible frequencies. We denote the weight (Fourier coefficient) of frequency \(f\in\{0,\ldots,167\}\) by \(\hat{x}(f)\). We then measure the contribution of each frequency \(f\) to time series \(x(h)\) (or, equivalently, \(\hat{x}(f)\)) via the so-called _energy spectral density_\(E(f)\), which captures the fraction of \(x(h)\)'s total variance (or energy) explained by each frequency \(f\):
\[E(f)=\frac{|\hat{x}(f)|^{2}}{\sum_{f^{\prime}=0}^{167}|\hat{x}(f^{\prime})|^{2 }}. \tag{2}\]
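As a rough illustration of Eq. (2), the energy spectral density of an hourly series can be computed from its discrete Fourier transform. The sketch below is a plain NumPy version under the assumption that the series is mean-centered to discard the constant \(f=0\) term, as in the paper; for a real-valued series the two-sided spectrum is symmetric, so the contributions of \(f\) and \(168-f\) can be summed for a one-sided reading.

```python
import numpy as np

def energy_spectral_density(x):
    """Fraction of total variance explained by each frequency component (Eq. 2)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                   # discard the constant offset (f = 0)
    power = np.abs(np.fft.fft(x))**2   # |x_hat(f)|^2
    return power / power.sum()         # E(f)
```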
**Baseline rhythm.** Using \(x(h)=\Pr(h)\) lets us study temporal regularities in the weekly baseline rhythm. Fig. 2 (top) plots the variance \(E(f)\) explained by each frequency component \(f\) of the baseline rhythm \(\Pr(h)\) (mean-centered across \(h\) in order to discard the constant offset term corresponding to \(f=0\)). As previously anticipated by Fig. 1, the dominant frequency corresponds to a 24-hour cycle (\(f=7\)). The daily and half-daily cycles (\(f=7\) and 14; wavelengths 24 and 12 hours) together explain 96.2% of the variance (74.8% and 21.4%, respectively). When split by access method, the pattern on mobile is more predictable, with daily and half-daily cycles explaining 94% of the total variance for mobile, vs. 87% for desktop.
**Article-specific rhythms.** To measure the strength of article-specific daily rhythms that do not depend on the overall baseline rhythm \(\Pr(h)\), we next repeat the above analysis, but now with \(x(h)=D_{a}(h)\) (again mean-centered across \(h\)). We obtain the energy spectral density \(E(f)\) for each article \(a\) and average it over \(a\) in order to measure the average strength of frequency \(f\) in divergences \(D_{a}(h)\) from the baseline rhythm, plotted in Fig. 2 (bottom). We see that, even after removing the overall baseline rhythm, still 22.3% of the variance is explained by a combination of the cycles of wavelengths of 24 (16.5%) and 12 (5.8%) hours. This means that the individual articles' access volume rhythms are not just driven by the change in the overall number of pageloads throughout the day/week, but are also strongly determined by article-specific factors.
To summarize, investigating RQ1, we found that overall Wikipedia consumption follows a strong baseline day-night rhythm, but also that individual articles considerably deviate from this baseline rhythm. The deviations themselves also largely follow diurnal patterns.
## RQ2: Shapes of Rhythms
Answering RQ1, we established that the consumption pattern of articles exhibits daily regularities even after removing the weekly baseline rhythm. Next, we investigate the shape of these patterns. Since the daily rhythm (wavelength 24 hours) was found to be the strongest by far, we henceforth focus on the daily instead of the weekly cycle; i.e., we now have \(h\in\{0,\ldots,23\}\) instead of \(\{0,\ldots,167\}\), with averages taken over 28 days instead of over four weeks.
Fig. 3 shows examples of the resulting daily time series for two articles \(a\) associated with different topics (stem and
Figure 3: Daily access volume of two articles with different topics: stem (dashed blue) and media (dotted red). (a) Normalized time series \(\Pr(h|a)\). (b) Divergence \(D_{a}(h)\) of \(\Pr(h|a)\) from the overall baseline rhythm \(\Pr(h)\) (cf. Eq. 1).
Figure 2: _Top/blue:_ Contribution of each frequency to baseline rhythm \(\Pr(h)\) of Wikipedia access volume (measured as fraction of total variance explained). _Bottom/red:_ Contribution of each frequency to article-specific divergence \(D_{a}(h)\) from baseline rhythm (Eq. 1) (computed per article \(a\), then averaged over articles).
media). Fig. 3a shows the shape of the normalized daily access volume time series \(\Pr(h|a)\), and Fig. 3b shows their divergence \(D_{a}(h)\) from the daily baseline rhythm \(\Pr(h)\).
### Principal Components
To investigate prototypical article consumption behaviors, we extract the principal components describing the daily divergence time series \(D_{a}(h)\) of articles \(a\). For this purpose, we stack the individual divergence time series \(D_{a}(h)\) in a \(439\text{K}\times 24\) matrix \(D\) whose rows correspond to \(a\) and whose columns correspond to \(h\). We mean-center \(D\) column-wise and compute its singular value decomposition (SVD) \(D=U\Sigma V^{T}\). The (orthogonal) columns of \(V\), then, are the principal components of \(D\), capturing prototypical divergence time series. Individual articles' divergence time series \(D_{a}(h)\) can be approximated as linear combinations of the top principal components. \(\Sigma\) is a diagonal matrix whose \(i\)-th entry represents the standard deviation of the data points (rows of \(D\)) when projected on the \(i\)-th principal component. Using the elbow method, we find that the first four principal components provide a good representation of consumption behavior, accounting, respectively, for 39.2%, 19.7%, 10.2%, and 4.5% of the total variance, jointly capturing 73.6% of the total variance.
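A minimal sketch of this step is given below; `D` stands in for the \(439\text{K}\times 24\) divergence matrix (a random placeholder here), and the variable names are illustrative rather than taken from the paper.

```python
import numpy as np

D = np.random.rand(10_000, 24)             # placeholder for the (articles x 24) divergence matrix
D_centered = D - D.mean(axis=0)            # mean-center column-wise
U, S, Vt = np.linalg.svd(D_centered, full_matrices=False)

principal_components = Vt[:4]              # prototypical daily divergence shapes
explained_variance = S**2 / np.sum(S**2)   # fraction of variance per component
low_dim = U[:, :4]                         # per-article coordinates (rows of U_{1:4})
```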
Fig. 4 shows the first four principal components of \(D\). Since each component can contribute positively or negatively to the reconstruction of the rows of \(D\) (since principal components are sign-invariant), we depict each component in its positive and negative variant (dark and light). The first and strongest component (PC1) captures the distinction between articles that receive more attention during the day versus evening/night (or vice versa), with two turning points around 7:00 and 21:00. The second component (PC2) describes patterns with a strong positive or negative contribution in the morning, with a peak around 5:00, which is reversed in the late hours of the day. The third principal component (PC3) captures a consumption pattern that peaks positively or negatively in the early morning and evening. Finally, the fourth principal component (PC4) models a similar behavior with the strongest contribution around 4:00 and in the early evening.
### Clustering
Next, we aim to identify groups of articles with similar temporal access patterns via clustering. As clustering tends to work better in low dimensions, we start by obtaining low-dimensional time series representations by truncating \(U\) (obtained via SVD) to the first four columns \(U_{1:4}\) (corresponding to projections on the first four principal components) and approximating article \(a\)'s divergence time series \(D_{a}(h)\) via the \(a\)-th row of \(U_{1:4}\). We then cluster the rows of \(U_{1:4}\) using \(k\)-means and search for the optimal number \(k\) of clusters using the average silhouette width and sum-of-squares criteria. Both criteria indicate \(k=2\) as the optimal number of clusters, suggesting that the articles cannot easily be separated into distinct groups. This intuition is supported by Fig. 5a, which shows the clusters obtained for \(k=2,3,4\). The upper plots show a UMAP [16] reduction in two dimensions of the four principal components used to cluster the articles. The visualization is based on a sample of 10K random articles, color represents cluster assignments, and centroids are marked as black dots. The lower plots show the centroid of each cluster, reconstructed from the first four principal components. These plots support the intuition that consumption behaviors do not separate into different groups but distribute along a continuum.3
Footnote 3: Silhouette scores obtained by grid-searching the number of principal components and clusters do not improve the separation. Density-based clustering yields the same conclusions.
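The cluster-number search can be sketched as follows, reusing the low-dimensional article representations from the SVD sketch above (here a random placeholder); the exact settings used by the authors are not specified beyond the silhouette and sum-of-squares criteria.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

low_dim = np.random.rand(10_000, 4)        # placeholder for the rows of U_{1:4}

silhouettes = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(low_dim)
    silhouettes[k] = silhouette_score(low_dim, labels)

best_k = max(silhouettes, key=silhouettes.get)   # the paper reports an optimum at k = 2
```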
On the right side of each of the three scatter plots, we see articles popular during daytime. In contrast, on the left, we see articles popular during the evening/night. This partition becomes more evident when increasing the number of clusters. With three clusters, the data on the left separate into an additional group (green) containing articles popular during the night. Similarly, adding the fourth cluster isolates a set of articles (red) popular in the early morning.
To summarize, investigating RQ2, we found that Wikipedia articles' access rhythms are driven by only few prototypical basis patterns (e.g., day vs. night). Individual articles' access patterns can be captured as weighted combinations of those basis patterns, yet they do not cluster into distinct, well-separated groups, but vary along a smooth continuum.
## RQ3: Correlates of Rhythms
In this section, we investigate what factors correlate with the shape of daily consumption. We focus on three factors: article topics, access method, and the user country.
Fig. 5b offers initial evidence that article topics are associated with daily access patterns. The plot represents the subset of articles from Fig. 5a with topic media or stem. By coloring the data by topic (red for media, blue for stem), we can notice a natural separation akin to the clustering with \(k=2\) clusters (Fig. 5a, left). Note that the topic was not considered during clustering; the separation happens only based on the shape of access patterns. Articles about stem are on the right side, indicating a pattern aligned with more attention during daytime, whereas articles about media cover the left side, associated with evening and night consumption.
Going deeper, we analyze prototypical shapes of access volume time series via regression analysis, quantifying the relationship of each factor (topics, access method, country) with time, and measuring the influence of each factor in an ablation study.
Figure 4: Four principal components of the daily access volume time series, capturing 73.6% of total variance.
### Principal Components and Topics
In an initial analysis, we leverage the principal components introduced earlier for RQ2 and investigate the relationship between topics and access patterns using four linear regressions, one per top principal component. Given an article \(a\), we predict the projection of its divergence time series onto the \(i\)-th principal component (i.e., \(U_{ai}\)) using \(a\)'s topics as binary predictors.
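A sketch of these per-component regressions is given below; `topic_indicators` and `low_dim` are hypothetical placeholders for the binary ORES topic matrix and the projections onto the first four principal components, not data from the paper.

```python
import numpy as np
import statsmodels.api as sm

n_articles, n_topics = 10_000, 38
topic_indicators = np.random.binomial(1, 0.1, size=(n_articles, n_topics))  # placeholder
low_dim = np.random.rand(n_articles, 4)                                     # placeholder

for i in range(4):
    X = sm.add_constant(topic_indicators)        # intercept + binary topic predictors
    fit = sm.OLS(low_dim[:, i], X).fit()
    print(f"PC{i + 1} topic coefficients:", fit.params[1:].round(3))
```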
Fig. 6 shows a summary of the coefficients obtained by fitting the four regressions. The first principal component, capturing the day-night rhythm, separates articles about STEM from those about entertainment (also cf. Fig. 4(b)). Articles associated with topics such as comics & anime, films, and television show a negative association with this principal component, suggesting nighttime popularity, whereas the temporal patterns of articles about chemistry, mathematics, and physics show a positive association, suggesting daytime popularity. The second component, associated with higher activity in the early morning and a drop during the evening, shows that the topics associated most positively with this pattern are libraries & information, radio, and philosophy & religion. On the other side of the spectrum, films, television, and food & drinks are most negatively associated with this pattern (i.e., low activity in the early morning, spike during the evening). Finally, the third and fourth principal components, showing peaks in the early morning and during the evening, are most positively associated with radio and food & drink, respectively.
### Temporal Variation of Correlates
Next, we are interested in the prototypical access volume time series associated with each level of each factor (topics, access method, user country). A naive way of accomplishing this would be to first restrict the data to the respective factor level (e.g., requests for biology articles only, or requests from Canada only) and to then compute access volume time series for the restricted data only. This approach is, however, compromised by the fact that the various factors are themselves correlated among each other. Thus, in order to disentangle the factors, we employ linear regression.
**Model.** We prepare the data as follows. First, we decompose each article \(a\)'s divergence time series \(D_{a}(h)\) by country and access method. To avoid data sparsity issues, we limit ourselves to the 20 countries with the most pageloads. This step generates, for each article, 40 daily time series (20 countries, two access methods) describing the divergence of the attention to \(a\) from a specific country and access method vs. the global baseline rhythm \(\Pr(h)\). We explode each time series into 24 samples, such that each article is represented by 960 data points describing its divergence from the baseline rhythm for each combination of 20 countries, two access methods, and 24 hours. We then model divergence (transformed via the natural logarithm, to make the model multiplicative) as a linear combination of country, access method, topics, and hour of the day. Additionally, since we are interested in modeling the relationship between country, access method, and topics _with time,_ we include interaction terms between each of these three factors with the hour-of-the-day indicator. We represent all binary predictors via deviation coding, which lets us interpret the coefficients associated with each level as differences from the grand mean captured by the intercept (since differences are taken in log
Figure 5: (a) _Top:_ UMAP projection of access volume time series for 10K random articles, with colors representing clusters obtained with \(k\)-means for different values of \(k=2,3,4\). _Bottom:_ Centroids reconstructed from their first four principal components. (b) A subset of the same 10K articles, with colors representing topics (stem and media).
Figure 6: Linear regression coefficients of the topics most associated with each of the four main principal components of article access volume time series.
space, they correspond to ratios with the grand mean in linear space). With this setup, we keep the combinations where, during the four-week period of study, Wikipedia received at least 100 pageloads, and we fit a linear regression using a random sample of 30K articles. The obtained model fits the data with \(R^{2}=0.181\).
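A simplified version of such a model can be specified with a formula interface. The sketch below collapses the multi-label topics into a single categorical column for brevity, uses sum (deviation) coding, and runs on a hypothetical dataframe `df`; it illustrates the structure of the model rather than reproducing the authors' exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({                               # placeholder data
    "divergence": rng.lognormal(size=2000),
    "country": rng.choice(["US", "IN", "GB", "DE"], 2000),
    "access_method": rng.choice(["mobile", "desktop"], 2000),
    "topic": rng.choice(["stem", "media", "culture"], 2000),
    "hour": rng.integers(0, 24, 2000),
})

formula = ("np.log(divergence) ~ C(country, Sum) * C(hour, Sum)"
           " + C(access_method, Sum) * C(hour, Sum)"
           " + C(topic, Sum) * C(hour, Sum)")
model = smf.ols(formula, data=df).fit()
print(round(model.rsquared, 3))
```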
**Coefficient analysis.** Sorting the 24 interaction coefficients of each factor level with hour of the day allows us to characterize the typical temporal shape of each factor level (in terms of its deviation from Wikipedia's baseline rhythm) _when controlling for all other factors_.
Fig. 7 shows the temporal shapes of the topics organized in the five top-level topical groups. The plot shows how articles about stem topics, such as chemistry, physics, and mathematics, tend to receive more attention than average during the daytime and a visible reduction outside the typical working hours. On the other hand, articles about films, television, and biography have an inverted shape, with less consumption than average during the day and a substantial increase during the evening. Interestingly, the shapes of the temporal patterns suggest that content about video games, comics & anime, internet culture, military, and society are consumed by night-active readers, with relative consumption peaking during the night.
On the contrary, articles about radio, libraries & information, and philosophy, are more preferred in the early hours of the day. Some of the shapes, especially the ones associated with stem articles, show a reduction of attention around noon, suggesting that they might be affected by the lunch break when people's attention moves to other content types. This is corroborated by the fact that attention on articles about food sees an increase during common meal times.
Next, Fig. 8 shows the interaction coefficients of country by time. Some countries share similar daily patterns. For example, readers from the United States, Germany, Netherlands, and Nigeria tend to consume Wikipedia more than average in the early morning. This behavior is inverted for readers from India, Ireland, Italy, and Spain, who during the same hours consume less content than average. Meanwhile, other countries, such as Malaysia, Singapore, Brazil, Russia, and Pakistan, show higher consumption during the night. Furthermore, some countries, like the Philippines, Italy, France, and Spain, also reveal shared habits, such as a reduction of information consumption around noon, possibly associated with lunchtime.
Finally, the rightmost plot of Fig. 8 shows the coefficients of the interaction of access method with time, in particular, the shape of the consumption from desktop devices. As already observed by breaking down the baseline rhythm by device in Fig. 1, access from desktop devices is above the global average in the central hours of the day, between 9:00 and 17:00.
### Typical Access Times of Topics
We now aim to summarize each topic-wise time series of Fig. 8 concisely in a single number describing at what time of day articles from the respective topic are particularly popular. In order to obtain point estimates of "average time", the simple arithmetic mean is unsuitable, given the circular nature of time (e.g., the average between 23:00 and 1:00 should be 0:00, not the arithmetic mean of 12:00), so we use the angular mean instead (essentially an arithmetic mean in the 2D plane described by a clock's hands). Fig. 9 shows the topic-wise time series of Fig. 8 in a circular fashion as light-colored curves, and average times as bold crosses, with one panel per top-level topic (where a cross's distance from the
Figure 8: Linear regression coefficients of the interaction between country and hour (left, gray background, sorted by the total access volume), and between device and hour (rightmost plot). Yellow area in rightmost plot represents typical working hours in Western countries. Red bands represent 95% confidence intervals.
Figure 7: Linear regression coefficients of the interaction between topics and hour. Background colors represent top-level topics: stem (green), media (yellow), culture (blue), history & society (red) and geographical (gray). For each top-level topic, specific topics are sorted alphabetically. Red bands represent 95% confidence intervals.
origin captures the variance of the respective time series). We observe that articles about stem (except for space) tend to garner particularly much attention during working hours between 8:00 and 14:00. On the other hand, media is, on average, consumed more during the evening and night. radio is an exception in this group, peaking in the early morning, as visible in Fig. 7, bringing its daily average to around 10:00. Articles about books are at the limit of the typical working hours, with average access in the late afternoon around 17:00. Differently from the previous two groups, articles about culture tend to have high variance and have average times spread over the day. Even in this case, entertainment topics such as fashion, internet culture, and comics & anime are concentrated during the night, with average access after midnight. Finally, content about history & society shows an average consumption split into two groups, night vs. working hours, with society, military, and transportation concentrated during the late evening and night, and business & economics, history, and education during the day.
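The angular mean can be computed by mapping hours onto the unit circle; the helper below is a minimal sketch (the names are ours, not the paper's), and the optional weights could be, for example, the per-hour regression coefficients.

```python
import numpy as np

def circular_mean_hour(hours, weights=None):
    """Angular mean of times of day in [0, 24), so that 23:00 and 1:00 average to 0:00."""
    hours = np.asarray(hours, dtype=float)
    w = np.ones_like(hours) if weights is None else np.asarray(weights, dtype=float)
    angles = 2 * np.pi * hours / 24.0
    x, y = np.sum(w * np.cos(angles)), np.sum(w * np.sin(angles))
    mean_hour = 24.0 * (np.arctan2(y, x) % (2 * np.pi)) / (2 * np.pi)
    concentration = np.hypot(x, y) / w.sum()   # close to 1 = times tightly concentrated
    return mean_hour, concentration

print(circular_mean_hour([23, 1]))             # -> (approximately 0.0, 0.97)
```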
### Strength of Correlates: Ablation Study
Finally, we investigate each factor's influence on temporal access rhythms by estimating the strength of the interaction between each factor and time in an ablation study. Using the fitted model described above, we proceed as follows. We use a held-out dataset of 10K random articles not used in training to assess the \(R^{2}\) fit after permuting all the interaction terms of the factor being investigated. This approach, typically used to estimate feature importance in machine learning [16, 15], has the advantage of keeping the model fixed--thus, no \(R^{2}\) adjustment is required--and measuring the impact of removing the correlation between the selected feature and the dependent variable.
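A sketch of this permutation procedure is shown below, assuming a hypothetical fitted `model` with a `predict` method, a NumPy design matrix for the held-out articles, and `factor_columns` holding the column indices of the interaction terms to be permuted jointly across samples.

```python
import numpy as np
from sklearn.metrics import r2_score

def permuted_r2(model, X_holdout, y_holdout, factor_columns, seed=0):
    """Held-out R^2 before and after permuting one factor's interaction terms."""
    rng = np.random.default_rng(seed)
    baseline = r2_score(y_holdout, model.predict(X_holdout))
    X_perm = X_holdout.copy()
    X_perm[:, factor_columns] = rng.permutation(X_perm[:, factor_columns], axis=0)
    return baseline, r2_score(y_holdout, model.predict(X_perm))
```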
The original model shows a \(R^{2}\) fit of 0.181. When permuting the interaction terms of "time by topic", the \(R^{2}\) drops to 0.055 (reduction by 69%); when permuting "time by access method", the \(R^{2}\) is 0.040 (reduction by 76%); and finally, when permuting "time by country", the \(R^{2}\) score is \(-0.298\)4 (reduction of 116%). A permutation of all three factors reduces the \(R^{2}\) to \(-0.429\). These observations suggest that all factors are essential for the prediction and indicate that the interaction of time with the readers' country plays the most important role.
Footnote 4: A negative \(R^{2}\) is possible on a held-out set, indicating that a flat line approximates the data better than the model.
To summarize, in answering RQ3, we found that temporal regularities in the popularity of Wikipedia articles vary in important and systematic ways by topic, access method, and user country, and that, out of these, user country has the strongest influence.
## Discussion
**Summary of findings.** In this study, we conducted the first large-scale study on temporal rhythms of information consumption on Wikipedia, based on a new timezone-corrected dataset of hourly-aggregated pageviews where the reader's local time was inferred from Wikipedia's webrequest logs.
First, we showed that the overall information consumption exhibits daily rhythmic patterns, with the strongest components having periods of 24 and 12 hours, respectively. We further showed that the consumption of individual articles follows specific rhythms that cannot be explained by Wikipedia's overall wake-sleep baseline rhythm. Rather, each article reveals a specific consumption fingerprint throughout the day.
Second, we provided a systematic description of the principal shapes of the different rhythms of the individual articles. We find that the main shape distinguishes articles that are read disproportionally more during the day than during the night (and vice versa). We do not, however, find distinct clusters of shapes but, instead, observe a continuum of different rhythms of information consumption.
Third, we show systematic differences in consumption patterns based on the reader's context (country or device) or the article's topics. We showed that articles of specific topics are associated with specific rhythmic patterns. For example, articles on stem and media naturally separate into two clusters related to rhythms with disproportional attention during the day and night, respectively. More generally, this leads to markedly different "average" times when articles from different topics are accessed. We also showed that pageloads from mobile vs. desktop devices are driven by substantially different 24-hour rhythms, with mobile pageloads showing an almost twofold increase in the evening hours compared to the day, which is absent for desktop devices. Lastly, we also find substantial variation in the access patterns across different countries.
Figure 9: Typical access times of topics. Lines represent the coefficient time series of Fig. 8, and bold crosses the average times per topic. The dotted line marks the global baseline rhythm \(\Pr(h)\). Yellow area shows typical working hours in Western countries. geographical is included in history & society to compress the visualization.
**Implications.** Our work shows that context is an important element to consider when trying to understand how information on Wikipedia is consumed. This has several implications.
_Diversity of information needs._ Wikipedia as a platform fulfills multiple information needs. These needs vary not only with geographic location, such as country [15], but also with time of day when a reader accesses the platform. In order to serve these needs, we need to consider the heterogeneity of Wikipedia's audience in _space and time_. Additionally, given the extension of topics that Wikipedia covers and its large adoption, our study offers insights into what content people consume online during the day, at a scale generally accessible only to search engines and Internet providers, with implications for design beyond Wikipedia.
_Cultural diversity._ Given the diversity of Wikipedia access rhythms across countries (Fig. 8), future work could revisit our results from an anthropological perspective, e.g., in order to pinpoint cultural differences in the daily rhythm of information needs--or curiosity--across the globe. For instance, we are fascinated by the fact that the rhythm of (English) Wikipedia's popularity in the U.S. is nearly exactly opposed to that in Spain, and that India and Pakistan--two countries that once were one--have entirely different rhythms of English Wikipedia consumption. Wikipedia logs offer a window into where in the world people care about what, when, and we anticipate that studying access time series while accounting for the interaction of country by topic (which we omitted here) has enormous potential for understanding global patterns of needs for knowledge.
_Customization._ These results should be taken into account when building tools to support users in finding and accessing relevant information (e.g. Wikipedia's search or recommendations such as RelatedArticles5) or to customize their online experience. For example, on average offering information about a movie in the morning has less value than in the evening. Similarly, understanding the content that draws more attention at a different time of the day has implications for designing systems to engage potential editors when they are more inclined to contribute to specific content.
Footnote 5: [https://www.mediawiki.org/wiki/Extension:RelatedArticles](https://www.mediawiki.org/wiki/Extension:RelatedArticles)
_Metrics for information needs._ The popularity of an article (i.e., pageloads) is often used as an indicator for its relevance [14] or a covariate used for stratification in the analysis of articles [10]. In our context, this corresponds to only looking at the volume of the consumption pattern and ignoring its shape throughout the day. In contrast, our approach focuses on the shape of the consumption pattern revealing substantial differences between articles. Thus, using the total number of pageviews (volume) as a single relevance metric will likely miss nuances about how these articles are useful to the reader. Complementary metrics capturing the usage of articles, such as the shape, describe important properties that might act as confounding factors, and other studies should consider them in stratified analyses. In addition, they could also be directly useful for editors providing additional information about the audience of articles, potentially helping to bridge the gap reported in the misalignment between supply and demand of articles [12].
_Infrastructure optimization._ As observed in previous work [11], these findings are valuable for optimizing the Wikimedia infrastructure. Given the scale of Wikipedia, optimizing data caching and load-balancing to reflect the actual needs of the consumers across space and time can offer significant benefits for the platform's performance.
**Limitations and future work.** Our analysis may have some data limitations, although we argue that it is unlikely they would alter our main findings and conclusions. For example, despite the best efforts of the Wikimedia infrastructure team in developing heuristics to detect bots, the data may still contain some automated traffic. Similarly, country and timezone are identified using the IP address of the request, which may be sensitive to the use of VPNs. Other aspects that may influence our findings are unobserved external factors such as biases introduced by search engines. Since the large majority of Wikipedia traffic originates from search engines [10], search engine results varying with the time of the day would also impact visits to Wikipedia.
One concrete limitation of our study that we aim to address in future work is focusing only on the English edition of Wikipedia. Readers from non-English-speaking countries consuming content from this edition may be a biased population. Future work should extend our study to multiple languages to compare information behaviors worldwide. Also, future work should study how individual-level (as opposed to population-level) information needs change during the day, possibly combined with demographic data, such as age or education [13]. In our case study, this was not possible due to privacy constraints.
Moreover, future work should explore the shape and strength of temporal rhythms on other platforms beyond Wikipedia, especially on search engines, which tend to be the first resource we turn to when we need information online. In this sense, studying search engines can offer a complementary view to our findings.
Ultimately, we hope that our study can help pave the way to better serve the needs of Wikipedia readers and Web users in general.
**Ethical considerations.** Server logs may contain sensitive information with implications for the privacy of users. In this work, we take special care to ensure that researchers access only anonymized records, excluding activities of logged-in users and editors that may be linked through public data. Our findings describe aggregated behaviors that represent a minimal risk of privacy violations; we believe the benefit of presenting them outweighs the potential risks.
|
2308.06980
|
Digital Twin of the Radio Environment: A Novel Approach for Anomaly
Detection in Wireless Networks
|
The increasing relevance of resilience in wireless connectivity for Industry
4.0 stems from the growing complexity and interconnectivity of industrial
systems, where a single point of failure can disrupt the entire network,
leading to significant downtime and productivity losses. It is thus essential
to constantly monitor the network and identify any anomaly such as a jammer.
Hereby, technologies envisioned to be integrated in 6G, in particular joint
communications and sensing (JCAS) and accurate indoor positioning of
transmitters, open up the possibility to build a digital twin (DT) of the radio
environment. This paper proposes a new approach for anomaly detection in
wireless networks enabled by such a DT, which allows integrating contextual
information on the network in the anomaly detection procedure. The basic
approach is thereby to compare expected received signal strengths (RSSs) from
the DT with measurements done by distributed sensing units (SUs). Employing
simulations, different algorithms are compared regarding their ability to infer
from the comparison on the presence or absence of an anomaly, particular a
jammer. Overall, the feasibility of anomaly detection using the proposed
approach is demonstrated which integrates in the ongoing research on employing
DTs for comprehensive monitoring of wireless networks.
|
Anton Krause, Mohd Danish Khursheed, Philipp Schulz, Friedrich Burmeister, Gerhard Fettweis
|
2023-08-14T07:32:40Z
|
http://arxiv.org/abs/2308.06980v2
|
# Digital Twin of the Radio Environment: A Novel Approach for Anomaly Detection in Wireless Networks
###### Abstract
The increasing relevance of resilience in wireless connectivity for Industry 4.0 stems from the growing complexity and interconnectivity of industrial systems, where a single point of failure can disrupt the entire network, leading to significant downtime and productivity losses. It is thus essential to constantly monitor the network and identify any anomaly such as a jammer. Hereby, technologies envisioned to be integrated in 6G, in particular joint communications and sensing (JCAS) and accurate indoor positioning of transmitters, open up the possibility to build a digital twin (DT) of the radio environment. This paper proposes a new approach for anomaly detection in wireless networks enabled by such a DT, which allows integrating contextual information on the network in the anomaly detection procedure. The basic approach is thereby to compare expected received signal strengths (RSSs) from the DT with measurements done by distributed sensing units (SUs). Employing simulations, different algorithms are compared regarding their ability to infer from the comparison the presence or absence of an anomaly, in particular a jammer. Overall, the feasibility of anomaly detection using the proposed approach is demonstrated, which integrates into the ongoing research on employing DTs for comprehensive monitoring of wireless networks.
Anomaly detection, digital twin (DT), machine learning (ML), resilience
## I Introduction
Sixth generation (6G) mobile networks will follow the trend of fifth generation (5G) networks to connect more and more things from the real world, such as machines, sensors, vehicles, etc. [1]. As the number of wirelessly connected devices increases, so does the potential risk posed by interference from external sources. Imagine a factory that is jammed, no matter whether it happens intentionally or unintentionally (e.g., out-of-band radiation from a device that does not meet the regulations). The disruption of wireless communications could lead to immense costs by production downtime or even to life-threatening situations in scenarios where humans and robots cooperate. Thus, resilience, i.e., the ability to maintain functionality under adverse conditions, is envisioned as one of the main assets of future 6G mobile networks [2].
To enable a network to take countermeasures against an abnormal status, the first step is to detect the anomaly, a topic that has received strong attention from the research community recently. Many recent works that consider jamming on the physical layer, process spectrograms using machine learning (ML) to detect whether there is an anomaly. Thereby, both supervised and unsupervised learning approaches are described in the literature. For the supervised learning approach, both normal and abnormal spectrograms are provided to the model in order to learn the classification of them [3, 4]. Yet, supervised learning has the shortcoming that types of anomalies not seen in the training phase are probably also not recognized in the operational phase. Thus, unsupervised learning has been examined, either based on autoencoders applied to the spectrogram [5, 6], whereby improperly reconstructed parts are considered as an anomaly, or prediction of signals [7], where deviations from the prediction are regarded as an anomaly. Another approach for anomaly detection is to monitor parameters such as the bit error rate and packet error rate, as described for example in [8] where also supervised learning is used.
However, none of the presented approaches integrates contextual information on the network, e.g., the number and accurate location of active regular transmitters, information which are envisioned to be available in future 6G networks. Based on the increasing need for resilience in future wireless networks, we present a novel approach for anomaly detection in wireless networks which incorporates such information by building a digital twin (DT) of the radio environment. While already widely employed in manufacturing, DTs of telecommunication systems still have a huge potential to unfold and are therefore in the focus of research from both industry and academia [9, 10, 11]. DTs are virtual representations of physical systems which are based on accurate digital models as well as interconnections between the DT and its physical world counterpart, the so called physical twin (PT) [12]. First, the system architecture of the DT of the radio environment is introduced. The radio environment as considered in this work is based on the concept of radio environment maps (REMs), which already have been widely described in the literature, for example in [13]. It includes the transmitters as well as the physical environment (obstacles and their materials, etc.) and propagation characteristics. Based on simulations we then show how anomalies in the radio environment - particularly jammers - can be detected using the proposed system together with anomaly detection algorithms. Those algorithms require no prior knowledge on the characteristics of potential jammers but do the inference only from normal operational data. This work provides an initial proof of concept rather than a compre
hensive performance evaluation. Therefore, we conclude with an outlook on future research directions.
## II System Architecture
An overview of the system intended for anomaly detection is provided in Fig. 1. The system consists of several sensing units (SUs) which are distributed over the area to be monitored. Each SU measures the received signal strength (RSS) at its fixed and exactly known location and provides it to a central unit (CU). The vector of RSS measurements is denoted as \(\mathbf{P}_{\text{rx}}\) in which each SU is represented by one value (see Section III for details). The system is intended for use in a licensed band where the regular transmitters are known. Additionally, the location of each regular transmitter is known by the CU. The positioning itself is considered to be provided by the 6G system and will not be further discussed here apart from the positioning accuracy in Section IV-A.
The CU furthermore has an accurate database of the physical environment available, which can originate for example from the joint communications and sensing (JCAS) abilities of the 6G system or from other types of sensors, such as LIDAR or camera. For now, we assume isotropic transmitters, meaning that transmitter orientation has no effect on the modeled radio environment. Employing the available information about the transmitters, the physical environment database, and a database that contains the electromagnetic properties of different materials, propagation modeling (e.g., ray tracing) can be applied in a regular manner to estimate the expected RSS at the locations of the SUs. The vector of estimated RSS values is denoted as \(\hat{\mathbf{P}}_{\text{rx}}\). All the information processed in the CU together with the propagation modeling is denoted as the DT of the radio environment within the scope of this work.
Subsequently, the CU compares the expected RSS from the DT with the RSS values that are actually measured by the SUs. The differences between the measured and the expected RSS values for all deployed SU serve then as input for the anomaly detection which is explained in detail in Section IV.
Due to its computational complexity, the proposed approach is intended for monitoring areas with limited spatial dimensions and increased resilience requirements (e.g., factories) rather than large-scale and non-critical networks.
## III System Model
Since resilience is of prominent significance for wireless networks in production environments, we orient our setting towards indoor campus networks, also referred to as non-public networks (NPNs). The detailed system model used for simulations and data generation is described in this section.
The considered area has a size of \(40\,\mathrm{m}\times 40\,\mathrm{m}\). Obstacles are not modeled to keep the scenario general. A random number \(N_{\text{reg}}\) of regular transmitters is active at random locations \(\mathbf{u}_{i}\) (\(i\in\{1,\dots,N_{\text{reg}}\}\)). All transmitters are assumed to use isotropic antennas and transmit with a power \(P_{\text{tx,reg}}=20\,\mathrm{dBm}\) at a carrier frequency \(f_{c}=3.7\,\mathrm{GHz}\), whereby the values are oriented on the regulations for NPN in Germany [14]. In the current state of our work, we assume the transmit power to be known exactly by the system and also to be constant, i.e., there is no power control active. Furthermore, there are \(N_{\text{SU}}\) SUs which measure the RSS at fixed locations \(\mathbf{v}_{j}\) (\(j\in\{1,\dots,N_{\text{SU}}\}\)) (see Section II). For the scope of this work the SUs are arranged in a grid, but other constellations would be conceivable as well. Unless otherwise stated, a grid size of 10 m is used, leading to \(N_{\text{SU}}=25\).
The path loss \(L\) is modeled using the log-distance path loss model with log-normal shadowing [15]
\[L_{i,j}[\mathrm{dB}]=10\,\alpha\log_{10}\left(\frac{d_{i,j}}{\mathrm{m}} \right)+20\log_{10}\left(\frac{f_{c}}{\mathrm{Hz}}\right)+L_{0}+X_{\sigma}, \tag{1}\]
where \(d_{i,j}\) denotes the distance between transmitter \(i\) and SU \(j\), \(L_{0}\) the path loss offset and \(X_{\sigma}\) the shadowing, which is zero-mean Gaussian distributed with a standard deviation \(\sigma\) (in dB). The shadowing is correlated with the covariance matrix \(\mathbf{C}\), whereby the entries are given by
\[[\mathbf{C}]_{k,l}=\sigma^{2}\exp\left(-\frac{d(\mathbf{x}_{k},\mathbf{x}_{l} )}{d_{\text{cor}}}\right) \tag{2}\]
for two points \(\mathbf{x}_{k}\) and \(\mathbf{x}_{l}\), separated by the distance \(d(\mathbf{x}_{k},\mathbf{x}_{l})\). Measurements have shown that the correlation in indoor environments is typically limited to small areas [16], thus we use a correlation distance \(d_{\text{cor}}=1\,\mathrm{m}\) in this work.
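A sketch of this channel model for a single transmitter is given below, correlating the shadowing across the SU positions as one plausible reading of Eq. (2). The path loss exponent \(\alpha\), offset \(L_0\), and shadowing standard deviation \(\sigma\) are placeholder values (the paper's Table I values are not repeated here), while \(f_c\) and \(d_{\text{cor}}\) follow the text.

```python
import numpy as np

def simulate_path_loss_db(tx_pos, su_pos, fc=3.7e9, alpha=2.0, L0=-147.55,
                          sigma=4.0, d_cor=1.0, seed=None):
    """Log-distance path loss (Eq. 1) with spatially correlated shadowing (Eq. 2)."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(su_pos - tx_pos, axis=1)                     # distances d_{i,j} in m
    mean_loss = 10 * alpha * np.log10(d) + 20 * np.log10(fc) + L0   # deterministic part

    pairwise = np.linalg.norm(su_pos[:, None, :] - su_pos[None, :, :], axis=-1)
    C = sigma**2 * np.exp(-pairwise / d_cor)                        # covariance of Eq. (2)
    shadowing = rng.multivariate_normal(np.zeros(len(su_pos)), C)   # correlated X_sigma
    return mean_loss + shadowing
```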
In addition to the regular transmitters, there may be \(N_{\text{jam}}\) jammers active. For the sake of simplicity, we consider only one jammer in this work, but we expect that our approach will also be able to cope with a higher number of jammers, which will be verified in our future work. The jammer (if present) is located at position \(\mathbf{w}\), is equipped with an isotropic antenna, and also transmits at a carrier frequency of 3.7 GHz. The transmit power \(P_{\text{tx,jam}}\) is fixed to 20 dBm as if the jammer would mimic a regular transmitter. An overview of the parameters is given in Table I.
The system model is deliberately kept simple for the proof of concept presented in this work and to be able to draw conclusions that are not tied to a specific scenario. For our future work, we will evaluate the concept in real-world oriented scenarios (see Section VI).
## IV Methods
The detection of anomalies in the radio environment as introduced in this work is based on the construction of a DT and its comparison with real-world measurements. While the building process is described in the first part of this section, the methodology to detect anomalies from the difference between real-world and DT measurements is described in the second part.
### _Building the Digital Twin_
Information from various domains allows building a virtual representation of the radio environment, the so-called DT. As mentioned in Section III, this is basically done by employing propagation modeling for the regular transmitters. For the sake of simplicity, we employ in this work isotropic transmitters. While transmit powers can be measured relatively accurately, radio-based localization in indoor environments is still challenging. Thus, positioning inaccuracies are modeled as follows. According to [17], for uplink time difference of arrival localization in FR1 we can expect a positioning error of less than 2.19 m in 90% of the cases. For modeling, it is assumed that the error in \(x\) and \(y\) direction is normally distributed and uncorrelated, leading to a Rayleigh distribution for the magnitude of the error vector. From this, a standard deviation of the error in \(x\) and \(y\) direction of 1.02 m is derived.
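The derivation of the per-axis standard deviation follows directly from the Rayleigh distribution of the error magnitude; a short sketch (with illustrative names) is given below.

```python
import numpy as np

# If the x/y errors are i.i.d. zero-mean Gaussian, the error magnitude is Rayleigh
# distributed with CDF F(r) = 1 - exp(-r^2 / (2 sigma^2)).
r90 = 2.19                                        # 90th-percentile error in m, from [17]
sigma_xy = r90 / np.sqrt(-2 * np.log(1 - 0.9))    # ~1.02 m per axis

def perturb_positions(true_pos, sigma=sigma_xy, seed=None):
    """Add Gaussian positioning errors to true transmitter locations (shape N x 2)."""
    rng = np.random.default_rng(seed)
    return np.asarray(true_pos) + rng.normal(0.0, sigma, size=np.shape(true_pos))
```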
With the given information on the transmitter, the path loss \(\hat{L}_{i,j}\) between transmitter \(i\) and SU \(j\) is estimated using the log-distance path loss model
\[\hat{L}_{i,j}[\mathrm{dB}]=10\ \alpha\log_{10}\left(\frac{\hat{d}_{i,j}}{ \mathrm{m}}\right)+20\log_{10}\left(\frac{f_{c}}{\mathrm{Hz}}\right)+L_{0} \tag{3}\]
similar to Eq. (1). Because the true distance between the transmitter and the SU is unknown, it is replaced by the estimated distance \(\hat{d}_{i,j}\) between the estimated transmitter location \(\mathbf{\hat{u}}_{i}\) and the true location of SU \(j\). The shadowing term \(X_{\sigma}\) is omitted as it is unknown. The difference between the original radio environment and its DT is visualized in Fig. 2. For a better understanding, not only the RSS at the SU locations (which is required for anomaly detection) is shown, but the radio map of the complete area. Fig. 2(a) shows an example radio map with ten regular transmitters and one jammer present. The radio map of the DT in Fig. 2(b) incorporates the regular transmitters but neither the jammer nor the random shadowing. The resulting difference is shown in Fig. 2(c). Large deviations occur close to the positions of the regular transmitters, due to localization inaccuracy, and at the jammer location, as the jammer is not present in the DT. Note that Fig. 2(c) shows the difference between the original and the DT radio map over the whole area, but only the values at the SU locations (indicated by the black dots) are used as input for the anomaly detection problem.
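To make this construction concrete, the following sketch (under the same simplifying assumptions, with our own function names, placeholder propagation parameters, and uncorrelated shadowing as a stand-in) computes the measured RSS, its DT estimate from noisy transmitter positions with a per-axis standard deviation of 1.02 m, and the difference of Eq. (5).

```python
import numpy as np

def rss_dbm(tx_pos, tx_pow_dbm, su_pos, alpha=2.0, f_c=3.7e9, L0=0.0, shadow_db=0.0):
    """Received power (dBm) at each SU; contributions are summed in linear scale."""
    d = np.linalg.norm(su_pos[:, None, :] - tx_pos[None, :, :], axis=-1)   # (N_SU, N_tx)
    L = 10 * alpha * np.log10(d) + 20 * np.log10(f_c) + L0 + shadow_db
    return 10 * np.log10((10 ** ((tx_pow_dbm - L) / 10)).sum(axis=1))

rng = np.random.default_rng(0)
tx_pos = rng.uniform(0, 40, size=(10, 2))                      # true transmitter locations
su_pos = np.array([[x, y] for x in range(0, 41, 10) for y in range(0, 41, 10)], float)
shadow = rng.normal(0.0, 4.0, size=(len(su_pos), len(tx_pos))) # uncorrelated stand-in

p_meas = rss_dbm(tx_pos, 20.0, su_pos, shadow_db=shadow)       # "real-world" measurements
tx_est = tx_pos + rng.normal(scale=1.02, size=tx_pos.shape)    # localization error
p_dt = rss_dbm(tx_est, 20.0, su_pos)                           # DT estimate: no shadowing
delta = p_meas - p_dt                                          # Eq. (5), detector input
```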
### _Fundamentals of the Anomaly Detection_
Given a collection of data samples, an anomaly is a sample or a group of samples that is rare and differs significantly from the majority of samples in the overall collection [18]. In this work, we consider the presence of a jammer as an anomaly. Jamming can be intentional, i.e., aiming to disturb the operation of the network, but it can also be unintentional. Unintentional jamming might be, for example, out-of-band radiation of a device in a neighboring band that does not meet the regulations, or an interfering neighboring NPN.
The detection of anomalies in the radio environment as proposed in this paper is based on comparing the RSS measured by the SUs at different locations with the RSS expected from the DT of the radio environment. With the fundamentals given in the previous section, the detection problem in the linear domain can be formulated as follows
\[P_{\text{rx},j}[\mathrm{mW}]=\begin{cases}\mathcal{H}_{0}:\ \sum_{i=1}^{N_{\text{reg}}}\frac{P_{\text{tx},i}}{L_{i,j}}\\ \mathcal{H}_{1}:\ \sum_{i=1}^{N_{\text{reg}}}\frac{P_{\text{tx},i}}{L_{i,j}}+\frac{P_{\text{tx,jam}}}{L_{\text{jam},j}}\end{cases}, \tag{4}\]
where \(\mathcal{H}_{0}\) refers to the hypothesis that the received power at receiver \(j\) only sums up from the powers received by all regular transmitters. The alternative hypothesis \(\mathcal{H}_{1}\) refers to the hypothesis that there is a jammer present which contributes additional power.
However, the exact path loss \(L_{i,j}\) is not known. Thus, to further elaborate the detection problem, the vector \(\mathbf{\Delta}\) of differences \(\Delta_{j}\) between the measured RSS values \(P_{\text{rx},j}\) and the RSS values expected from the DT \(\hat{P}_{\text{rx},j}\) at each SU \(j\) is defined. The entries are given by
\[\Delta_{j}[\mathrm{dB}]=P_{\text{rx},j}[\mathrm{dBm}]-\hat{P}_{\text{rx},j}[ \mathrm{dBm}]\,\quad j\in\{1,\ldots,N_{\text{SU}}\}. \tag{5}\]
This vector serves as input for the anomaly detection algorithms. For the detection, the measurements of a single time instant are considered at a time. Based on Eq. (5) and under the assumption that the transmit power \(P_{\text{tx,reg}}\) of the regular transmitters is exactly known, the detection problem in Eq. (4) can be reformulated as
\[\Delta_{j}[\mathrm{dB}]=\begin{cases}\mathcal{H}_{0}:\ 10\log_{10}\left(\sum_{i=1}^{N_{\text{reg}}}\frac{P_{\text{tx},i}}{L_{i,j}}\right)-10\log_{10}\left(\sum_{i=1}^{N_{\text{reg}}}\frac{P_{\text{tx},i}}{\hat{L}_{i,j}}\right)\\ \mathcal{H}_{1}:\ 10\log_{10}\left(\sum_{i=1}^{N_{\text{reg}}}\frac{P_{\text{tx},i}}{L_{i,j}}+\frac{P_{\text{tx,jam}}}{L_{\text{jam},j}}\right)-10\log_{10}\left(\sum_{i=1}^{N_{\text{reg}}}\frac{P_{\text{tx},i}}{\hat{L}_{i,j}}\right)\end{cases}, \tag{6}\]
with powers and losses given in linear scale. This means that the anomaly detection problem boils down to the decision whether the differences in \(\mathbf{\Delta}\) can be purely explained by inaccuracies in path loss modeling (due to shadowing and localization inaccuracy) or whether there is a jammer present.
### _Anomaly Detection Methods_
The procedure to implement the anomaly detection system is as follows. Based on models of the physical environment and propagation models, the DT of the radio environment is established first. In the next phase, denoted as the training phase, training samples are collected during normal operation. These samples are assumed to be free from anomalies and are used to train the algorithms described later in this section. The task is thereby to generalize the statistics of normal data (i.e., of the modeling error) and to identify as anomalies those samples that appear to stem from a different generation process. This approach allows identifying jammers even without any prior knowledge of their characteristics. Thus, the general problem is also often referred to as novelty detection. When the training is finished, the system starts its regular operation to detect anomalies.
In the following, the three applied algorithms are introduced. One-class support vector machine (OCSVM) and local outlier factor (LOF) are well-known unsupervised ML algorithms for anomaly (outlier) detection, while the adapted energy detector (AED) is inspired by the energy detector for signal detection tasks. For OCSVM and LOF, the _scikit-learn_ [19] implementations are used, whereas AED is implemented by ourselves.
#### Iii-C1 Adapted energy detector (AED)
The concept of energy detectors is well known and has already been widely applied, for example in cognitive radio [20]. The task thereby is to detect an unknown signal in the presence of noise. Given the decision problem in Eq. (6), the problem can be approached from a similar perspective. Either \(\Delta_{j}\) originates only from random model inaccuracies in case \(\mathcal{H}_{0}\) or there is a jamming signal present which contributes additional received power. For a perfect DT (i.e., \(L_{i,j}=\hat{L}_{i,j}\)) we could expect either \(\Delta_{j}=0\) in case \(\mathcal{H}_{0}\) or
\[\Delta_{j}[\mathrm{dB}]=10\log_{10}\underbrace{\left(\frac{\sum_{i=1}^{N_{\text{reg}}}\frac{P_{\text{tx},i}}{L_{i,j}}+\frac{P_{\text{tx,jam}}}{L_{\text{jam},j}}}{\sum_{i=1}^{N_{\text{reg}}}\frac{P_{\text{tx},i}}{L_{i,j}}}\right)}_{>1}>0 \tag{7}\]
in case \(\mathcal{H}_{1}\). Due to the previously mentioned random and unknown inaccuracies in the DT, \(\Delta_{j}\) is randomly distributed even in case no jammer is present. The presence of random log-normal components in the path loss, resulting from shadowing, prevents the existence of a straightforward mathematical expression for the distribution of \(\Delta_{j}\)[21]. Still, the findings justify the following approach
\[\overline{\Delta}=\frac{1}{N_{\text{SU}}}\sum_{j=1}^{N_{\text{SU}}}\Delta_{j} \begin{array}{l}\geq\Delta_{\text{th}}\ :\ \text{anomaly}\\ <\Delta_{\text{th}}\ :\ \text{no anomaly}\end{array} \tag{8}\]
with the threshold \(\Delta_{\text{th}}\), which is derived from the statistics of the training data, e.g., as the 90th percentile of \(\overline{\Delta}\).
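A minimal sketch of the AED rule of Eq. (8), assuming the differences of Eq. (5) are available as a NumPy array of shape (n_samples, N_SU); the function names and the toy data are placeholders, not the exact implementation used for the results below.

```python
import numpy as np

def fit_aed_threshold(delta_train, percentile=90.0):
    """Calibrate the AED threshold from anomaly-free training samples (Eq. 8)."""
    return np.percentile(delta_train.mean(axis=1), percentile)

def aed_detect(delta, threshold):
    """Flag a sample as anomaly if the mean difference over all SUs exceeds the threshold."""
    return delta.mean(axis=1) >= threshold

delta_train = np.random.default_rng(1).normal(0, 1, size=(2_000, 25))   # toy stand-in
threshold = fit_aed_threshold(delta_train)
flags = aed_detect(delta_train, threshold)    # roughly 10% flagged on the training data
```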
#### Iii-C2 One-class support vector machine (OCSVM)
The approach of OCSVM is to identify a hypersphere that encloses all (or most) of the data points [22]. This hypersphere is fitted during the training phase. In the test phase, samples are categorized as normal if they fall within the hypersphere or as anomaly otherwise. The default parameters of the OCSVM implementation remain untouched.
#### Iii-C3 Local outlier factor (LOF)
Instead of a binary decision whether a sample is an anomaly or not, LOF assigns each sample a score that can be interpreted as the outlier degree of the given sample. This score is also referred to as LOF and it is calculated by comparing the density around the point under consideration with the density of points in the neighborhood. The user-specified parameter \(k\) thereby defines how many of the closest points belong to the neighborhood of one point [23].
In the scope of this work, \(k\) is set to 100 to compensate for the sparsity of the data set due to the high dimensionality. Furthermore, the parameter _novelty_ is set to _True_ to ensure that the density is calculated based only on the training data. The threshold score above which a sample is identified as an anomaly is varied during the receiver operating characteristics (ROC) analysis.
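On the same difference vectors, the two scikit-learn detectors could be used roughly as follows; `delta_train` and `delta_test` are toy stand-ins for the anomaly-free training data and the test data.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(2)
delta_train = rng.normal(0, 1, size=(2_000, 25))    # toy stand-in for Eq. (5) vectors
delta_test = rng.normal(0, 1, size=(2_000, 25))

ocsvm = OneClassSVM().fit(delta_train)                                     # default parameters
lof = LocalOutlierFactor(n_neighbors=100, novelty=True).fit(delta_train)   # k = 100

# decision_function is larger for normal samples; negating it yields anomaly
# scores whose threshold can be swept for the ROC analysis in Section V.
ocsvm_score = -ocsvm.decision_function(delta_test)
lof_score = -lof.decision_function(delta_test)
```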
Fig. 2: Exemplary radio maps as representation of the radio environment at \(\sigma=$4\,\mathrm{dB}$\). Measurements are only available at the SU positions indicated in (c)
## V Results
The performance of the algorithms under study for anomaly detection as well as the influence of the shadowing level is examined in this section.
To evaluate the performance, a data set with 20 000 samples overall is created according to the system model presented in Section III. Of these, 10 000 normal (i.e., not jammed) samples are used for training and another 10 000 samples for testing, where half of the test samples are anomalies and the other half are normal samples. Different data sets are created to investigate various shadowing levels. Unless otherwise stated in the explanations of the specific algorithms in Section IV-C, no hyperparameters of the algorithms are tuned, as this would require evaluation on the test set and thereby mean a leakage of test data (i.e., knowledge of anomalies) into the training phase of the algorithms. After the training phase, the models are evaluated on the test set by means of the metrics presented in the next paragraph. This procedure is repeated three times to improve the reliability of the results.
The detection is regarded as a binary classification problem, where the _positive_ class is assigned to the anomalies and the _negative_ class to the normal samples. To compare the performance of different algorithms, ROC curves are employed. The ROC curve depicts the tradeoff between the true positive rate (TPR) and the false positive rate (FPR), which are defined in Eq. (9) [24].
\[\text{TPR}=\frac{\text{TP}}{\text{TP}+\text{FN}}\qquad\text{FPR}=\frac{\text{ FP}}{\text{TN}+\text{FP}} \tag{9}\]
To create the ROC, the decision threshold of the algorithms is varied. For the specific algorithms this means: for AED the threshold \(\Delta_{\text{th}}\), for OCSVM the radius of the hypersphere, and for LOF the score that defines an outlier is varied. Through these parameters it can be controlled whether an algorithm acts more 'conservatively', i.e., identifies anomalies only with high confidence at the cost of many missed detections, or more 'liberally', i.e., detects most of the anomalies at the cost of many false alarms. Additionally, the ROC is characterized by the property that it does not change if the ratio between normal and abnormal samples changes [24]. Thus, the evaluation can be done without assuming a specific jamming probability.
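Given continuous anomaly scores, the ROC curve and the AUC used below can be obtained, for instance, with scikit-learn; `y_true` and `scores` here are synthetic placeholders.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
y_true = np.concatenate([np.zeros(5_000), np.ones(5_000)])   # 0: normal, 1: anomaly
scores = rng.normal(0, 1, 10_000) + y_true                   # toy anomaly scores

fpr, tpr, thresholds = roc_curve(y_true, scores)   # sweep over decision thresholds
auc = roc_auc_score(y_true, scores)                # area under the ROC curve
```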
The ROC curves of the three discussed algorithms (see Section IV-C) are shown in Fig. 3(a) for the shadowing levels \(\sigma=0\,\mathrm{dB}\) and \(\sigma=6\,\mathrm{dB}\). A detector without skill, i.e., one that just makes random guesses on whether there is an anomaly, would result in the black dotted line. The better an algorithm performs, the more its ROC curve tends towards the upper left corner. Thus, one can conclude that LOF outperforms the other two algorithms at \(\sigma=0\,\mathrm{dB}\), whereas AED shows the best performance at \(\sigma=6\,\mathrm{dB}\).
Further analysis indicates that the density of the SUs also has a strong impact on the anomaly detection performance. Reducing the grid size to 5 m (i.e., \(N_{\text{SU}}=81\)) significantly improves the performance, as shown exemplarily in Fig. 3(b) for \(\sigma=2\,\mathrm{dB}\). However, there is a tradeoff between increased detection performance on the one hand and increased hardware costs and computational complexity on the other hand, which has to be considered for practical application.
To allow a numeric comparison between the ROC curves of different algorithms, the area under the ROC curve (AUC) is used. It is obtained by integrating the area under a specific ROC curve. Visualizing the AUC values at different shadowing levels as in Fig. 4, one can see that the performance of all three algorithms degrades with an increasing shadowing level, as can be expected. Apart from \(\sigma=0\,\mathrm{dB}\), where LOF has the highest AUC score, AED is the best-performing algorithm at all other shadowing levels. This is achieved by exploiting the general knowledge that every jammer, no matter which type, has to transmit a signal and thereby increases the RSS. At high shadowing levels, the AUC scores of LOF and OCSVM approach 0.5, i.e., there is no big performance difference compared to a detector with no skill, whereas AED still has an AUC greater than 0.65.
Overall, it has been demonstrated in this section that anomalies in wireless networks - in particular jammers - can be detected by employing the introduced DT approach and unsupervised learning. Nevertheless, as can be concluded from Fig. 4, an accurate modeling of the radio environment is required to achieve reliable detection. Otherwise, it is challenging for the algorithms to infer whether large deviations between measured and expected RSS originate from model errors or from jammers.
Fig. 4: AUC scores for different shadowing levels at a grid size of 10 m
Fig. 3: ROC curves for the three algorithms
## VI Future Work
This paper presents a system architecture and an initial proof of concept on how a DT can be used for anomaly detection in wireless networks. Nevertheless, significant steps need to be taken before the concept can be successfully translated into practical application. Thus, we plan to address the following steps in our future research:
#### Vi-1 Detection methodology
For the initial proof of concept presented in this work, basic algorithms for unsupervised learning have been applied. For the future work, we plan to evaluate the performance of more sophisticated algorithms and improved preprocessing (e.g., dimensionality reduction).
#### Vi-2 Ray tracing
To achieve a high modeling accuracy for the DT, the log-distance path loss model shall be replaced with ray tracing. At first glance, the high computational complexity of ray tracing might limit the real-time capability of the proposed system. However, it has already been demonstrated in the literature that ray tracing can be replaced by ML with comparable accuracy but drastically reduced computational complexity [25].
#### Vi-3 Antenna pattern
In our future work, we will integrate different antenna patterns and hence also the orientation of the transmitter.
#### Vi-4 Real-world deployment
Employing ray tracing to build the DT enables us to test the proposed concept also in real-world environments. From this, we expect interesting insights, e.g., about the required modeling accuracy (geometry, materials, ray tracing) for deploying an effective anomaly detection system for the radio environment.
## VII Conclusion
Resilience of the wireless network is key for many of its applications. Therefore, it is essential to constantly monitor the network and identify anomalies to enable a quick reaction to (potentially) critical situations. This work contributes by presenting a novel approach for anomaly detection in wireless networks that employs a DT of the radio environment, an approach that allows incorporating contextual information in the anomaly detection procedure. Furthermore, no prior knowledge of jammer characteristics is required. A system architecture is introduced and the suitability of the approach for anomaly detection is demonstrated based on simulations. This work serves as an initial study of the concept and proposes numerous directions for future studies, e.g., including ray tracing in the DT to be able to cope with real-world scenarios instead of statistical models.
## Acknowledgment
This work was supported by the Federal Ministry of Education and Research, Germany (BMBF) as part of the projects "6G-CampuSens" under contract 16KIS207, "Industrial Radio Lab Germany (IRLG)" under contract 16KIS1010K, and "6G-life" under contract 16KISK001K. The authors alone are responsible for the content of the paper.
|
2310.02774
|
Graph Neural Networks and Time Series as Directed Graphs for Quality
Recognition
|
Graph Neural Networks (GNNs) are becoming central in the study of time
series, coupled with existing algorithms as Temporal Convolutional Networks and
Recurrent Neural Networks. In this paper, we see time series themselves as
directed graphs, so that their topology encodes time dependencies and we start
to explore the effectiveness of GNNs architectures on them. We develop two
distinct Geometric Deep Learning models, a supervised classifier and an
autoencoder-like model for signal reconstruction. We apply these models on a
quality recognition problem.
|
Angelica Simonetti, Ferdinando Zanchetta
|
2023-10-04T12:43:38Z
|
http://arxiv.org/abs/2310.02774v1
|
# Graph Neural Networks and Time Series as Directed Graphs for Quality Recognition
###### Abstract
Graph Neural Networks (GNNs) are becoming central in the study of time series, coupled with existing algorithms as Temporal Convolutional Networks and Recurrent Neural Networks. In this paper, we see time series themselves as directed graphs, so that their topology encodes time dependencies and we start to explore the effectiveness of GNNs architectures on them. We develop two distinct Geometric Deep Learning models, a supervised classifier and an autoencoder-like model for signal reconstruction. We apply these models on a quality recognition problem.
## 1 Temporal convolutional Networks
Convolutional neural networks (CNNs, see [14], [15], [16], [12], [13]) are deep learning algorithms employing so-called _convolutional layers_: these are layers that are meant to be applied on grid-like data, e.g. images. For data organized in sequences, 1d CNNs were developed ([10], [11]) and, more recently, TCNs have become popular in the study of time series (see [1] and the references therein). Throughout the paper, \(\mathrm{TS}(r,m)\) will denote the set of multivariate time series with \(m\) channels and length \(r\) in the temporal dimension. Given \(\mathbf{x}\in\mathrm{TS}(r,m)\) we will denote as \(\mathbf{x}(i)_{j}\) (or simply as \(\mathbf{x}_{ij}\) when no confusion is possible), for \(i=1,...,r\) and \(j=1,...,m\), the \(j\)-th coordinate of the vector \(\mathbf{x}(i)\in\mathbb{R}^{m}\). For a given natural number \(n\), we shall denote as \([n]\) the ordered set \((1,...,n)\). Now, recall that given a filter \(K\in\mathbb{R}^{f}\), we can define a one-channel, one-dimensional (1d) convolution as an operator
\[\mathrm{conv}\mathrm{1D}:\mathrm{TS}(r,1)\rightarrow\mathrm{TS}(l,1)\] \[\mathrm{conv}\mathrm{1D}(\mathbf{x})_{j}=\sum_{i=1}^{f}K_{i} \mathbf{x}_{\alpha(j,i)}+b_{j}\]
where \(\alpha(j,-):[f]\rightarrow\mathbb{Z}\) are injective index functions, \(\mathbf{x}_{i}:=0\) if \(i\notin[r]\) and \(b\in\mathbb{R}^{l}\) is a bias vector. The numbers \(K_{i}\) are called the _parameters_ or _weights_ of the convolution. The most commonly used index functions are of the form \(\alpha(j,i)=(n+d\cdot i)+j\) for some integers \(n,d\). As a consequence, from now on we shall assume that the one-dimensional convolutions we consider have this form. If \(\alpha(j,i)\leq j\) for all \(i,j\), then the convolution is said to be _causal_ as it will look only 'backward'. These are the building blocks of TCNs, which are CNNs where only causal convolutions appear. If \(|d|>1\), the convolution is called _dilated_. One can define multi-channel (i.e. handling multivariate time series) 1d convolutions in two steps. First, we define convolutions taking a multivariate time series to a univariate time series as operators \(\mathrm{conv}:\mathrm{TS}(r,n)\rightarrow\mathrm{TS}(l,1)\) given by \(\mathrm{conv}(\mathbf{x})_{i}=\sum_{j=1}^{n}\mathrm{conv}\mathrm{1D}_{j}(\mathbf{x}(-)_{(j)})\), where the \(\mathrm{conv}\mathrm{1D}_{j}\) are one-channel, one-dimensional convolutions. Then we can define 1d convolutions transforming multivariate time series into multivariate time series as operators \(\mathrm{conv}:\mathrm{TS}(r,n)\rightarrow\mathrm{TS}(l,m)\) that are multi-channel 1d convolutions when co-restricted at each
non-temporal dimension of the output. The usefulness of TCNs in the context of time series arises from the fact that causal convolutions by design are able to exploit temporal dependencies, while not suffering from some of the algorithmic problems of RNNs such as LSTMs: for example, they appear to be faster to train and more scalable (see [1] for a discussion).
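As an illustration, a causal dilated 1d convolution of the kind used as a TCN building block can be written in PyTorch as follows; this is a generic sketch rather than the exact implementation used in our experiments.

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1d convolution that only looks 'backward' in time (causal), with dilation."""
    def __init__(self, in_channels, out_channels, kernel_size, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation          # pad on the left only
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size, dilation=dilation)

    def forward(self, x):                                # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))          # left padding = the past
        return self.conv(x)

y = CausalConv1d(1, 8, kernel_size=7, dilation=2)(torch.randn(4, 1, 640))
print(y.shape)   # torch.Size([4, 8, 640]) -- the temporal length is preserved
```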
## 2 Time series as Directed Graphs
### Generalities.
**Definition 2.1**.: A _directed graph_ (_digraph_) \(G\) is the datum \(G=(V_{G},E_{G},h_{G},t_{G})\) of two sets \(V_{G}\) (the _set of vertices_), \(E_{G}\) (the _set of edges_) and two functions \(h_{G},t_{G}:E_{G}\to V_{G}\) associating to each edge \(e\) its _head_\(h_{G}(e)\) and its _tail_\(t_{G}(e)\) respectively. A morphism \(\varphi:G\to H\) between two directed digraphs \(G\) and \(H\) is the datum of two functions \(\varphi_{V}:V_{G}\to V_{H}\), \(\varphi_{E}:E_{G}\to E_{H}\) such that \(h_{H}\circ\varphi_{E}=\varphi_{V}\circ h_{G}\) and \(t_{H}\circ\varphi_{E}=\varphi_{V}\circ t_{G}\)
From now on, for simplicity we will assume that our digraphs have at most one edge connecting two different nodes (for each direction) and at most one self loop for each node. In this case, given a digraph \(G=(V_{G},E_{G},h_{G},t_{G})\) and an ordering of the vertices \((v_{i})_{i\in[|V_{G}|]}\), we can define the _adjacency matrix of \(G\)_ as the matrix defined by \(A_{ij}=1\) if there exists an edge having \(v_{i}\) as tail and \(v_{j}\) as head, and \(A_{ij}=0\) otherwise. If the adjacency matrix of a graph is symmetric, we say that our graph is _undirected_. We can assign _weights_ to the edges of a graph by letting the entries of the adjacency matrix be arbitrary real numbers. When speaking about weighted digraphs, we shall always assume that there is an edge having \(v_{i}\) as tail and \(v_{j}\) as head if and only if \(A_{ij}\neq 0\). We speak in this case of _weighted digraphs_.
**Definition 2.2**.: A _digraph with features of dimension \(n\)_ is the datum \((G,F_{G}=(h_{v})_{v\in V_{G}})\) of a (weighted or unweighted) digraph \(G\) and vectors \(h_{v}\in\mathbb{R}^{n}\) of _node features_ for each vertex \(v\in V_{G}\). For a given digraph \(G\), we shall denote as \(\operatorname{Feat}(G,n)\) the set of all digraphs with features of dimension \(n\) having \(G\) as underlying digraph.
Real-world graph-structured datasets usually come in the form of one or more digraphs with features. Given a digraph with features \((G,F_{G}=(h_{v})_{v\in V_{G}})\) and a digraph morphism \(\varphi:H\to G\), we can pull back the features of \(G\) to obtain a graph with features \((H,\varphi^{*}F_{G}=(h_{\varphi_{V}(v)})_{v\in V_{H}})\). This defines a function \(\varphi^{*}:\operatorname{Feat}(G,n)\to\operatorname{Feat}(H,n)\). Graph Neural Networks (GNNs, [23]) are models that are used on graph-structured data using as building blocks the so-called _graph convolutions_ ([4], [6]): given a graph, they update each node feature vector by combining the information contained in the feature vectors of adjacent nodes. In general, a graph convolution is a function \(\operatorname{gconv}:\operatorname{Feat}(G,n)\to\operatorname{Feat}(G,m)\) that is permutation invariant in a sense that we shall make precise below. Graph convolutions update node features of a digraph using a _message passing mechanism_ that can be written in the following general form
\[h^{\prime}_{v_{i}}=\sigma(\psi(h_{v_{i}},\oplus_{v_{j}\in N^{\alpha}(v_{i})}\varphi(h_{v_{i}},h_{v_{j}},A_{ij},A_{ji}))) \tag{2.3}\]
where \(\sigma\) is an activation function, \(\alpha\in\{h,t,u\}\), \(N^{h}(v_{i})=\{v_{j}\in V_{G}\mid A_{ji}\neq 0\}\), \(N^{t}(v_{i})=\{v_{j}\in V_{G}\mid A_{ij}\neq 0\}\), \(N^{u}(v_{i})=N^{h}(v_{i})\cup N^{t}(v_{i})\), \(\oplus\) denotes a permutation invariant function and \(\psi\), \(\varphi\) denote differentiable functions (weaker regularity assumptions can be made). Many popular message passing mechanisms are a particular case of the following one:
\[h^{\prime}_{v_{i}}=\sigma(\sum_{v_{j}\in N^{\alpha}(v_{i})}c_{f^{\alpha}(i,j) }A_{f^{\alpha}(i,j)}Wh_{v_{j}}+l_{i}A_{ii}Bh_{v_{i}}) \tag{2.4}\]
here \(\sigma\) is an activation function, \(f^{\alpha}(i,j)=(i,j)\) if \(v_{j}\in N^{t}(v_{i})\) and \((j,i)\) if \(v_{j}\in N^{h}(v_{i})\), \(c_{ij}\), \(l_{i}\) are optional normalization coefficients, \(W,B\) are matrices of weights. For digraphs, the choice of \(\alpha\) should be thought of as whether a node embedding should be updated by looking at the information of the nodes that are sending to it a signal, by looking at the information of the nodes that are receiving from it a signal or both. These are three different semantics, all justifiable depending on the problem at hand. Two important graph convolutions arise as particular cases of the previous formula: Kipf and Welling's graph convolution for undirected graphs (see [9]) and GraphSage convolution ([7]) as it appears in the popular package PyTorch Geometric ([17], [5]). Notice that message passing mechanisms as in (2.3) are _permutation invariant_ in the sense that they do not depend on the ordering given to the set of vertices and only depend on the topology of the underlying graph structure. We remark that in [9] and [7] the above convolutions are described only for undirected graphs, but the formulas also make sense for digraphs. In fact the standard
implementations used by practitioners are already capable of handling digraphs and are being used also in that context. For example, some papers introducing attention mechanisms (e.g. GAT, see [22]) explicitly mention this possibility. However, in the digraph case the underlying mathematical theory is somewhat less mature (see [20] and the references therein; for the roots of the mathematics behind the directed graph Laplacian the reader is referred to [2]).
### Graph Neural Networks for time series as directed graphs.
There are many ways to turn time series into digraphs with features. To start with, we introduce two basic examples.
**Example 1:** A multivariate time series \(\mathbf{x}\in\mathrm{TS}(n,m)\cong\mathbb{R}^{n\times m}\) can be seen as an unweighted digraph with features as follows. To start with, we consider a set of \(n\times m\) nodes \(v_{ij}\), with \(i=1,...,n\) and \(j=1,...,m\). Then, we create edges \(v_{ij}\to v_{lk}\) only if \(l\geq i\) and the edge is not a self loop (i.e. nodes receive information only from the present and the past). We assign the scalar \(\mathbf{x}(i)_{j}\) as feature for each node \(v_{ij}\). This construction results in an unweighted digraph with features \((G_{\mathbf{x}},F_{\mathbf{x}}=(\mathbf{x}_{ij}\in\mathbb{R}))\). One can modify the topology of the graph just constructed. For example, one could create edges \(v_{ij}\to v_{lk}\) only if the edge is not a self-loop, \(l\geq i\) and \(l-i=0,1\), or if the edge is not a self-loop, \(l\geq i\), and \(l-i\) is both divisible by a given positive integer \(d\) and smaller than \(k\cdot d\) for a given positive integer \(k\). This construction results in the directed graph structure pioneered in [24]; see Figure 1.
**Example 2**: A multivariate time series \(\mathbf{x}\in\mathrm{TS}(n,m)\cong\mathbb{R}^{n\times m}\) can be seen as a one-dimensional time series with \(m\) channels. In this case, the time series can be turned into a digraph with features \((G_{\mathbf{x}},F_{\mathbf{x}}=(\mathbf{x}(i)\in\mathbb{R}^{m}))\) by considering a directed graph of \(n\) nodes \(v_{i}\), \(i=1,...,n\), where an edge \(v_{i}\to v_{l}\) is added if it is not a self-loop, \(l\geq i\), and either \(l-i=1\) or \(l-i\) is both divisible by a given positive integer \(d\) and smaller than \(k\cdot d\) for a given positive integer \(k\). We assign the vector \(\mathbf{x}(i)\in\mathbb{R}^{m}\) as the vector of features for each node \(v_{i}\). This completes the construction of the desired directed digraph with features.
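A minimal sketch of the construction in Example 2, assuming PyTorch Geometric; the helper name is ours, and with \(d=4\) and \(k=32\) the lookback window \(k\cdot d=128\) matches the choice made later in Section 4.1.

```python
import torch
from torch_geometric.data import Data

def series_to_digraph(x, d=4, k=32):
    """Example 2: one node per time step; an edge v_i -> v_l is added when
    l - i = 1, or when l - i is divisible by d and smaller than k * d."""
    n = x.shape[0]                       # x: (n, m) multivariate time series
    src, dst = [], []
    for i in range(n):
        for l in range(i + 1, n):        # l > i, so no self-loops
            lag = l - i
            if lag == 1 or (lag % d == 0 and lag < k * d):
                src.append(i)
                dst.append(l)
    edge_index = torch.tensor([src, dst], dtype=torch.long)
    return Data(x=torch.as_tensor(x, dtype=torch.float), edge_index=edge_index)

graph = series_to_digraph(torch.randn(640, 1), d=4, k=32)   # 640 nodes, lookback 128
```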
These examples are of course just a starting point and one can take advantage of further domain knowledge to model the topology of the graph in a more specific way. For instance, one could use auto-correlation functions, usually employed to determine the order of ARMA models (see [21]), to choose the right value for parameters like \(k\) or \(d\). As proved in Lemma 2.5, under certain hypotheses ordinary convolutions on time series can be seen as transformations between graphs, that is, graph convolutions; however, the latter evidently carry a very different meaning than ordinary TCNs and can be more flexible. Thus, thinking of a time series as a digraph opens up a whole new set of possibilities to be explored. For example, graph convolutions can be effective as temporal pooling layers when combined with other algorithms, or they can leverage the message passing mechanism that is thought to be most effective for the task at hand.
**Lemma 2.5**.: _Consider a convolution \(\mathrm{conv1d}:\mathbb{R}^{d}\cong\mathrm{TS}(d,1)\to\mathrm{TS}(d-r,1) \cong\mathbb{R}^{d-r}\), \((\mathrm{conv1d}(\boldsymbol{x}))_{i}=\sum_{j=0}^{r-1}K_{j}\boldsymbol{x}_{i+j}\). Then there exists a weighed digraph \(G\), a graph convolution \(\mathrm{gconv}:\mathrm{Feat}(G,1)\to\mathrm{Feat}(G,1)\) and a subgraph \(\iota:H\subseteq G\) such that \(\mathrm{Feat}(G,1)\cong\mathrm{TS}(d,1)\), \(\mathrm{Feat}(H,1)\cong\mathrm{TS}(d-r,1)\) and, under these bijections, \(\mathrm{conv1d}\) arise as the map \(\iota^{*}\circ\mathrm{gconv}:\mathrm{Feat}(G,1)\to\mathrm{Feat}(H,1)\)._
Proof.: Define \(G\) to be the digraph having \(d\) vertices \(v_{1},...,v_{d}\) and whose weighted adjacency matrix is given by \(A_{ij}=K_{j-i+1}\) if \(1\leq j-i+1\leq r\) and zero otherwise. Let \(\iota:H\subseteq G\) be its weighted subgraph consisting of the vertices \(v_{r},...,v_{d}\). We consider the graph convolution \(\mathrm{gconv}:\mathrm{Feat}(G,1)\to\mathrm{Feat}(G,1)\) arising from the message passing mechanism given by
Figure 1: One possible structure of a time-digraph, as described in Example 1. Here only adjacent connections and all the connections for node \(v\) are shown, and \(d=4\)
formula (2.4) with \(\alpha=h\), \(W=1\), \(l_{i}=0\), \(\sigma=1\) and \(c_{ij}=1\). We define the bijection \(\mathrm{TS}(d,1)\cong\mathrm{Feat}(G,1)\) as follows: for each \(\mathbf{x}\in\mathrm{TS}(d,1)\), \(\mathbf{x}(i)\) becomes the feature of the node \(v_{i}\) in \(G\) (and analogously for \(\mathrm{TS}(d-r,1)\cong\mathrm{Feat}(H,1)\)).
The previous lemma can be extended mutatis mutandis to the case of multivariate time series and contains dilated convolutions as a particular case. The process of learning the weights of a dilated convolution can thus be thought of as the process of learning the weights of the adjacency matrix of a graph.
**Remark 2.6**.: Simple Laplacian-based graph convolutions on undirected graphs can be seen as 1-step Euler discretizations of a heat equation (see for example [6, 4]). In general, a GNN consisting of a concatenation of graph convolutions can be thought of as a diffusive process of the information contained in the nodes along the edges: in our context, directed graph convolutions applied to time-series digraphs "diffuse" the information through time by updating at each step the node features using the information coming from a 'temporal neighbourhood'.
## 3 Our Models
We propose two different types of models, both taking advantage of the time-digraph structure described in the previous section: a supervised classifier/regressor and an unsupervised method made of two separate steps, an autoencoder to reconstruct the time series, followed by a clustering algorithm applied to the reconstruction errors. The core of the algorithm is the same for the two approaches, so we will first focus on this core building block and then proceed to discuss the two models separately.
### Main Building Block
The main building block for all the models presented in this paper is a collection of layers that combines TCNs with GNNs. Inspired by what has already been done in the literature, we propose this main building block in two versions: an encoder and a decoder (they are sketched in Figure 2). They can be used on their own or put together to construct an autoencoder model.
In the encoder version, the input is first given to a block of either \(n\) TCN layers or \(n\) GNN layers (we tested primarily Sage convolution-based layers, as they appeared more effective after some preliminary tries) with skip connections, following the approach proposed in [19]. The effectiveness of skip connections in the model developed in _op.cit_ is due to the fact that stacking together the outputs of dilated convolutions with different dilations allows considering short- and long-term time dependencies at the same time. Skip connections have been used also in the context of GNNs: for example, Sage convolutions ([7]) introduce them already in the message passing mechanism. This motivates the introduction of skip connections in GNNs handling digraphs with features coming from time series: in this context they not only allow bundling together information coming from "temporal neighbourhoods" of different radii, as in the architecture developed in [19], but they also help to reduce the oversmoothing that traditionally curses GNN architectures (see [3] for a discussion and the references therein). The skip-connections block is described in Figure 3 and works as follows. The input goes through the first TCN/GNN layer, followed by a 1-dimensional convolution and an activation function. We call the dimension of the output of this convolution the skip dimension. Then the output is both stored and
Figure 2: The structure of the main building block, both as an encoder and a decoder
passed to the next TCN/GNN layer as input, and so on. In the architectures we have tested, the TCN/GNN layers are simply dilated 1d convolutions or single graph convolutions (followed by an activation function), but more involved designs are possible. At the end, all the stored tensors are stacked together and passed to a series of \(m\) graph convolutions, each one followed by an activation function. We tested the following convolutions: GCN (cfr. [9]), Sage (cfr. [7]), GAT (cfr. [22]).
The graph convolutions are defined to encode in the embedding of a given node, at each pass, the information provided by all the nodes in its neighbourhood. Now, looking at how a time-digraph is defined, one sees that in our setup this translates to building the embedding of a given node, that is, a data point of the time series, using the information given by the data points that are close in time (short-term memory behaviour) or at a certain distance \(d\) away in time (long-term memory behaviour), where \(d\) is set in the construction of the graph.
Finally, the intermediate output produced by the graph convolutions is given to an optional 1d convolution with kernel size 1 to adjust the number of channels, and then to either an average pooling layer or a max pooling layer that shrinks the temporal dimension of the graph. In other words, if we think of the time-graph as a time window of length \(T\), the pooling layer outputs a time window, and therefore a time-graph, of length \(T/s\), thus realizing the characteristic _bottleneck_ of an autoencoder as described for instance in [18, 19, 8]. We will refer to \(s\) as the shrinking factor.
The decoder version changes the order of the blocks we just described, in a symmetric way. It starts with an upsampling of the input time series, which is then passed to the graph convolutions followed by the skip-connections block. It terminates with a final 1d convolution with kernel size 1 that reduces the number of channels, or hidden dimensions, thus giving as output a time series of the same dimensions as the initial input. In the case where the skip-connections block is built with GNN layers, this final convolution can be replaced by a final TCN layer.
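For concreteness, a rough sketch of the GNN variant of the skip-connections block is given below, assuming PyTorch Geometric's SAGEConv; the layer sizes, the choice of storing the outputs of the kernel-size-1 convolutions, and all names reflect our reading of the description above rather than the exact implementation used in our experiments.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import SAGEConv

class GraphSkipBlock(nn.Module):
    """Stack of graph convolutions; after each one, a kernel-size-1 convolution
    maps to the skip dimension, and the per-layer outputs are concatenated."""
    def __init__(self, in_dim, hidden_dims, skip_dim):
        super().__init__()
        dims = [in_dim] + hidden_dims
        self.gconvs = nn.ModuleList([SAGEConv(dims[i], dims[i + 1]) for i in range(len(hidden_dims))])
        self.skips = nn.ModuleList([nn.Conv1d(h, skip_dim, kernel_size=1) for h in hidden_dims])
        self.act = nn.SiLU()

    def forward(self, x, edge_index):       # x: (num_nodes, in_dim)
        collected = []
        for gconv, skip in zip(self.gconvs, self.skips):
            x = self.act(gconv(x, edge_index))
            # treat the node axis as the temporal axis for the 1d convolution
            s = self.act(skip(x.t().unsqueeze(0))).squeeze(0).t()
            collected.append(s)
        return torch.cat(collected, dim=-1)  # stacked skip outputs per node

x = torch.randn(640, 1)
edge_index = torch.tensor([[i for i in range(639)], [i + 1 for i in range(639)]])
out = GraphSkipBlock(1, [32, 32, 32], 16)(x, edge_index)   # shape: (640, 48)
```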
### Regression/Classification
The classifier/regressor model uses the main building block described above as an encoder. The input graph is given to the encoder, which predicts a suitable embedding for each of its nodes. At this point the embeddings of the single nodes are combined into a vector that gives a representation of the whole graph. Recalling that each graph represents a time window, one can think of this first part as a way to extract a small number of features from each time window. These features are then fed to a multi-layer perceptron that outputs a vector of probabilities in the case we use it as a classifier, or a value in the case it is used as a regressor. As for the way the node embeddings are combined, we explored a few possibilities in the context of classification, the two main options being a flattening layer and a mean pooling layer. For ease of notation, from now on we will refer to these models as TCNGraphClassifier/Regressor if the skip-connections block uses TCN layers, and TGraphClassifier/Regressor if it is built with graph convolutions.
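The mean-pooling option followed by the MLP could look roughly as follows; layer sizes are illustrative and do not correspond to the configurations reported in Table 3.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import global_mean_pool

class GraphClassifierHead(nn.Module):
    """Average the node embeddings of each graph and classify the resulting vector."""
    def __init__(self, node_dim, hidden_dim=64, num_classes=2):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(node_dim, hidden_dim), nn.SiLU(),
                                 nn.Linear(hidden_dim, num_classes))

    def forward(self, node_emb, batch):                  # batch maps each node to its graph
        graph_emb = global_mean_pool(node_emb, batch)    # one vector per time window
        return self.mlp(graph_emb)                       # class logits (or a value, if regressing)

head = GraphClassifierHead(node_dim=48)
logits = head(torch.randn(640, 48), torch.zeros(640, dtype=torch.long))   # a single graph
```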
Figure 4: The structure of the classifier (or regressor)
Figure 3: The structure of the block with skip connections
### Autoencoders for unsupervised anomaly detection
The second architecture we propose is an autoencoder model; thus it employs two main building blocks, first used as an encoder and then as a decoder, and the output is the reconstruction of the given time series represented by the time graph. In our experiments we use the signal reconstruction obtained with this architecture for anomaly detection purposes. Let us briefly describe our method. The main idea is that the autoencoder model provides a good reconstruction of the input time series when the signal is normal and worse reconstructions on time windows where an anomaly appears (again we refer to [19], among others, for a similar approach to anomaly detection), as it is constructed to remove noise from a signal. Thus, once we have the reconstructed time series, we compute both the Root Mean Square Error and the Mahalanobis score (see [19] for more details in a similar context), for each given time window, with respect to the original time series. In the case one has to deal with more than one time series bundled together in the time-digraph, there are simple methods to get a single score for each time window. Now we can treat these two measures of the reconstruction error as features of the time windows and use an unsupervised clustering algorithm (we tested both Kmeans and DBscan) to assign a binary label to each window, based on the cluster they fall into (see Section 4.2 for more details).
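A simplified sketch of this second step is given below: per-window RMSE and a Mahalanobis-type score are used as two features and clustered with Kmeans. The Mahalanobis score in our experiments follows [19]; the version below, with mean and covariance estimated directly from the error vectors, is only an illustrative stand-in, and all names are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def reconstruction_features(original, reconstructed):
    """Two features per window: RMSE and a Mahalanobis-type score of the error."""
    err = original - reconstructed                     # (n_windows, window_length)
    rmse = np.sqrt((err ** 2).mean(axis=1))
    mu = err.mean(axis=0)
    inv_cov = np.linalg.pinv(np.cov(err, rowvar=False))
    maha = np.sqrt(np.einsum('ij,jk,ik->i', err - mu, inv_cov, err - mu))
    return np.column_stack([rmse, maha])

rng = np.random.default_rng(4)
x = rng.normal(size=(500, 128))                        # toy windows
x_rec = x + rng.normal(scale=0.1, size=x.shape)        # toy reconstructions
feats = reconstruction_features(x, x_rec)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)   # binary window labels
```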
This approach gives a completely unsupervised method to handle anomaly detection of one or more time series. Again, from now on we will refer to these models as: TCNGraphAE if the skip-connections block uses TCN layers, TGraphAE if it is built with graph convolutions, and TGraphMixedAE if the encoder uses graph convolutions and the decoder uses TCN layers. If the skip-connections blocks consist only of dilated convolutions and we do not have a final graph convolution to filter the signal, we obtain a TCN autoencoder/classifier with a structure similar to the one described in [19]. We call these latter models TCNAE and TCNClassifier. We regard these models as state-of-the-art models in this context and we use them as a benchmark.
## 4 Experiments
For our experiments we used a database made of ECG signals. These signals were recorded with a Butterflive medical device at a sampling rate of 512 Hz. A low-pass Butterworth filter at 48 Hz was applied to each signal. Then every 5-second-long piece of signal was manually labeled according to the readability of the signal: label \(3\) was given to good quality signals, label \(2\) was given to medium quality signals and label \(1\) was given to low quality/unreadable signals. In total we had a database made of 10590 \(5\)-second-long sequences.
We turned the problem into a binary classification: label \(0\) was assigned to signals having label=1 and label \(1\) was assigned to signals having label=2,3. A Train/Valid/Test split was performed on the database with weights \(0.3/0.35/0.35\). Train, Valid and Test sets are made of signals coming from different recordings, and the signals having label \(0\) are approximately 18% of each set. For the final evaluation of our models, we ran the models \(10\) times; then, for each score, the best and the worst results were removed and the mean and standard deviation of the remaining \(8\) runs were computed; they are reported in Tables 1 and 2.
### Supervised Classification
To test how graph convolutions perform for our classification problem using a supervised method, we applied a convolutional smoother with window \(20\) to our dataset, and then we downsampled by a factor of 4 (i.e. we kept one point every 4). We subdivided each Train/Valid/Test set into non-overlapping slices of 5 seconds and we applied a min-max scaler to each sequence independently.
Each 5-second-long time series \(\mathbf{x}\in\mathrm{TS}(640,1)\) was given a simple directed graph structure as in Example 2, consisting of one node per signal point (resulting in 640 nodes). We call \(k\cdot d\) the _lookback window_. We used 128 as our lookback window (1 second) and we set \(d\) to be equal to \(4\). We used the Adam optimizer to train all our models. Results are displayed in Table 1. Further details about the models used are contained
Figure 5: The structure of the autoencoder
in the Appendix. For the graph convolutions involved in model TCNGraphClassifier, the underlying message passings used \(\alpha=t\) as in Formula 2.4: this results in the time dependencies being read in the reversed direction by these layers.
### Unsupervised Anomaly Detection with Autoencoders
For our unsupervised experiments, the data was pre-processed and prepared as in the supervised case, with the only differences that the signals were divided into pieces of one second and that the lookback window was set to 25. Then, we trained the autoencoder to reconstruct these 1-second-long signals. \(d\) was set to either \(4\) or \(8\) depending on the model.
We used the following training procedure: first, each model was trained on the train set for \(50\)-\(100\) epochs. Then the worst reconstructed \(20\)% of signals were discarded. This decreased the percentage of unreadable data in the training set from approximately \(18\)% to \(6\)% for each model. Each model was then retrained from scratch for \(150\)-\(325\) epochs on the refined training set. The rationale of this choice is that, for autoencoders to be used effectively as anomaly detectors, the 'anomalies' should be as few as possible, to prevent overfitting them (see [19], where signals with too many anomalies were discarded). We found that an autoencoder trained for fewer epochs can be employed effectively to reduce the proportion of anomalies in the training set. The trained models were then used to compute, for each signal in the Valid set, the reconstruction loss and the Mahalanobis score as in [19]. Both resulting scores were then averaged and normalized to provide one mean reconstruction error and one mean Mahalanobis score for each labeled \(5\)-second slice of signal. We thus obtained a set of pairs of scores, one for each 5-second-long signal in the Valid set: we will refer to it as the errors Valid set.
We used the errors Valid set as a feature space to train two unsupervised clustering algorithms: Kmeans and DBscan. For DBscan we set the minimum number of points in a cluster equal to 4, as customary for 2-dimensional data, and we used as epsilon the mean of the distances of the points plus 2 times their standard deviation. For both, to get the final labels on the Test set, we used two different techniques. One option we considered is to obtain the final labels for the Test set by repeating exactly the procedure
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{Positive class = label 1} & \multicolumn{3}{c}{Positive class = label 0} \\ & **Precision** & **Recall** & **Accuracy** & **Precision** & **Recall** & **Accuracy** \\ \hline TGraphClassifier & \(0.965\pm 0.002\) & \(0.991\pm 0.002\) & \(0.962\pm 0.003\) & \(0.941\pm 0.012\) & \(0.806\pm 0.011\) & \(0.962\pm 0.003\) \\ TCNGraphClassifier & \(0.939\pm 0.013\) & \(0.988\pm 0.006\) & \(0.936\pm 0.010\) & \(0.912\pm 0.044\) & \(0.653\pm 0.083\) & \(0.936\pm 0.010\) \\ TCNClassifier & \(0.975\pm 0.003\) & \(0.994\pm 0.002\) & \(0.973\pm 0.003\) & \(0.962\pm 0.011\) & \(0.863\pm 0.017\) & \(0.973\pm 0.003\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of the classifiers. Best scores are colored in purple.
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{Positive class = label 1} & \multicolumn{3}{c}{Positive class = label 0} \\ & **Precision** & **Recall** & **Accuracy** & **Precision** & **Recall** & **Accuracy** \\ \hline _Kmeans_, _approach A_ & & & & & & \\ \hline TGraphMixedAE & \(0.973\pm 0.006\) & \(0.974\pm 0.011\) & \(0.955\pm 0.006\) & \(0.854\pm 0.051\) & \(0.847\pm 0.035\) & \(0.955\pm 0.006\) \\ TGraphAE & \(0.960\pm 0.007\) & \(0.993\pm 0.006\) & \(0.959\pm 0.003\) & \(0.952\pm 0.040\) & \(0.765\pm 0.044\) & \(0.959\pm 0.003\) \\ TCNGraphAE1 & \(0.967\pm 0.003\) & \(0.998\pm 0.002\) & \(0.968\pm 0.002\) & \(0.985\pm 0.014\) & \(0.806\pm 0.016\) & \(0.969\pm 0.002\) \\ TCNGraphAE2 & \(0.965\pm 0.002\) & \(0.997\pm 0.001\) & \(0.966\pm 0.002\) & \(0.976\pm 0.007\) & \(0.796\pm 0.012\) & \(0.966\pm 0.002\) \\ TCNAE1 & \(0.966\pm 0.012\) & \(0.995\pm 0.005\) & \(0.964\pm 0.009\) & \(0.966\pm 0.032\) & \(0.798\pm 0.074\) & \(0.964\pm 0.009\) \\ TCNAE2 & \(0.949\pm 0.005\) & \(0.999\pm 0.001\) & \(0.951\pm 0.004\) & \(0.995\pm 0.006\) & \(0.692\pm 0.031\) & \(0.954\pm 0.004\) \\ \hline _Dbsscan_, _approach B_ & & & & & & \\ \hline TGraphMixedAE & \(0.984\pm 0.005\) & \(0.944\pm 0.012\) & \(0.939\pm 0.007\) & \(0.745\pm 0.037\) & \(0.909\pm 0.028\) & \(0.939\pm 0.007\) \\ TGraphAE & \(0.968\pm 0.006\) & \(0.989\pm 0.006\) & \(0.962\pm 0.002\) & \(0.933\pm 0.034\) & \(0.813\pm 0.038\) & \(0.962\pm 0.002\) \\ TCNGraphAE1 & \(0.971\pm 0.004\) & \(0.991\pm 0.005\) & \(0.966\pm 0.003\) & \(0.940\pm 0.028\) & \(0.829\pm 0.022\) & \(0.966\pm 0.003\) \\ TCNGraphAE2 & \(0.979\pm 0.007\) & \(0.985\pm 0.006\) & \(0.967\pm 0.001\) & \(0.913\pm 0.031\) & \(0.877\pm 0.043\) & \(0.967\pm 0.001\) \\ TCNAE1 & \(0.971\pm 0.007\) & \(0.985\pm 0.011\) & \(0.962\pm 0.007\) & \(0.913\pm 0.057\) & \(0.833\pm 0.043\) & \(0.962\pm 0.007\) \\ TCNAE2 & \(0.973\pm 0.006\) & \(0.988\pm 0.005\) & \(0.966\pm 0.002\) & \(0.925\pm 0.027\) & \(0.846\pm 0.039\) & \(0.966\pm 0.002\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of the autoencoder algorithms. Best scores are colored in purple and the second best in blue.
used for the Valid set (approach A). The second technique we used goes as follows: first, we trained an SVM classifier on the errors Valid set, labeled using the clustering provided by the unsupervised method. Then, we obtained an errors Test set by applying the procedure described above for the Valid set, but using the normalizers fitted on the errors Valid set. Finally, we used the trained SVC to predict the labels of the Test signals (approach B). We report the results in Table 2, while the clusters obtained for models TGraphAE and TCNAE1 are displayed in Figure 7.
The signals reconstructed by these models are displayed in Figure 6. Both models reconstruct good signals in a comparable way and fail to properly reconstruct the bad signals, as expected, despite their small number of parameters. Notice that these methods are fully unsupervised and do not require the use of even a few labeled samples. As in the supervised setting, the graph convolutions involved in models TGraphMixedAE, TCNGraphAE1 and TCNGraphAE2 use \(\alpha=t\) in their underlying message passings as in Formula 2.4. As a consequence, in these models time dependencies were read in the right direction by the TCNs and in the reversed one by the graph convolutions, resulting in parameter-efficient "bidirectional" structures.
### Discussion
In the case of the supervised classification, a TCN classifier without graph convolutions proved to be the best performing one. This is probably due to the effect of the final flattening layer, which may provide the best mechanism in this context to link the encoder to the final MLP. The graph-based model had a worse performance, but achieved its best results using a mean pooling mechanism, as can be expected. However, the graph-based classifier obtained good results with fewer than half the parameters of the TCN classifier (see Table 3), thus exhibiting the greater expressive power of the graph convolutions.
In the case of the unsupervised classification, the best
Figure 6: The reconstructed signals with TGraphAE (a) and TCNAE1 (b). For each subfigure, the top image represents the reconstruction of a good signal (label 1) and the bottom one is the reconstruction of a bad signal (label 0).
performing models were on average the TCN-based ones where graph convolutions were added right before and after the bottleneck. This gives a good indication that graph convolutions applied to digraphs with features can serve as good layers to filter the signal coming from different layers. Also the second best performing model overall consists of a graph-based encoder and a TCN decoder, strengthening the hypothesis that graph convolutions can be used to improve the effectiveness of ordinary models, or to considerably reduce the number of parameters of state-of-the-art architectures without decreasing their performance too much. It has to be noted that in the best performing models, the message passing mechanisms of the graph convolutions were particular cases of Formula 2.4 with \(\alpha=t\): as a consequence, these layers learn time dependencies as if the features learnt by the first part of the encoder were reversed in time. Therefore two types of time dependencies were combined in the same algorithm in a parameter-efficient way, mimicking the behaviour of other 'bidirectional' models (such as bidirectional LSTMs).
Summing up, GNNs applied to digraphs with features coming from time series showed their effectiveness in improving established algorithms, as well as their potential to replace them. Moreover, effective fully unsupervised pipelines can be devised to solve anomaly detection and quality recognition problems using the models described in this paper. We plan to continue the study of GNNs applied to time digraphs with features in the context of multivariate time series, constructing more complex time-digraph structures and using more capable message passing mechanisms.
### Reproducibility Statement
The experiments described in this paper are reproducible with additional details provided in Appendix A. This section includes a list of all the hyperparameters used to train our models (see Table 3) and more details on the algorithms' implementation.
### Acknowledgements
The authors wish to thank Prof. Rita Fioresi for many stimulating discussions on the content of this paper and A. Arigliano, G. Faglioni and A. Malagoli of VST for providing them with the labeled database used for this work.
A. Simonetti wishes to thank professor J. Evans for his continuous support. The research of F. Zanchetta was supported by Gnsaga-Indam, by COST Action CaLISTA CA2109, HORIZON-MSCA-2022-SE-01-01 CaLIGOLA, MNESYS PE, GNSAGA and has been carried out under a research contract cofounded by the European Union and PON Ricerca e Innovazione 2014-2020 as in the art. 24, comma 3, lett. a), of the Legge 30 dicembre 2010, n. 240 e s.m.i. and of D.M. 10 agosto 2021 n. 1062.
Figure 7: The clusters obtained with TGraphAE and TCNAE1, as follows: (a) TGraphAE-dbscan, (b) TGraphAE-kmeans, (c) TCNAE1-dbscan, (d) TCNAE1-kmeans. For each subfigure, in the top image the points are colored based on their true label and in the bottom one they are colored based on the predicted cluster (label 1 in orange and label 0 in blue).
## Appendix A Models' hyperparameters and details
The specific hyperparameters of the models described in the previous sections are listed in table 3. Here is a description of this table, to understand all the names and abbreviations there appearing.
_Num Channels_ gives the number of layers used in the skip connections block, together with the output dimension of each layer in the form of a list; the number after the comma indicates the channel dimension of the signal in the bottleneck resulting after the application of the (1D convolution) graph convolution at the end of the encoder.
_Skip dims_ gives the list of the skip dimensions as described in Section 5.1.
_GConv type_ gives the information on the graph convolution that follows the skip connections block, if present: it gives the type of convolution used and the list of its hidden dimensions - in case the convolution is a GAT layer, it also specifies the number of heads chosen (GAT2H, for instance, means that 2 heads were selected).
As for _Pool type_, _Downsample_, _Upsample_, we indicate the type of layer used referring to their standard notation (see for example pytorch geometric libraries).
Finally, for dilated convolutions we used a kernel size of \(7\) for autoencoder models and \(8\) for the supervised models. When skip-connections blocks consist of a sequence of dilated convolutions, we used increasing dilations \(2^{0},2^{1},2^{2},...,2^{n}\), where \(n\) is the number of convolutions appearing in the considered block. We used SiLU activation functions and we employed both batch normalization and dropout between layers. In model TGraphClassifier, a final 1D dilated convolution with dilation \(1\) was applied after the decoder.
|
2303.13999
|
Near coincidence of metal-insulator transition and quantum critical
fluctuations: Electronic ground state and magnetic order in
Fe$_{1-x}$Co$_{x}$Si
|
We present a detailed study of the electronic and magnetic ground state
properties of Fe$_{1-x}$Co$_{x}$Si using a combination of macroscopic and
microscopic experimental techniques. From these experiments we quantitatively
characterize the metal-insulator transition and magnetic/non-magnetic quantum
phase transition occurring at low doping levels in Fe$_{1-x}$Co$_{x}$Si. From
our study, we find a surprising closeness of the critical composition of the
metal-insulator transition at $x_{\mathrm{MIT}} = 0.014$ and the quantum phase
transition at $x_{\mathrm{LRO}} \sim 0.024-0.031$. It suggests that these
effects are cooperative and depend on each other.
|
J. Grefe, P. Herre, Y. Hilgers, F. Labbus, N. Lüer-Epping, N. Radomski, M. A. C. de Melo, F. J. Litterst, D. Menzel, S. Süllow
|
2023-03-24T13:53:14Z
|
http://arxiv.org/abs/2303.13999v2
|
# Near coincidence of metal-insulator transition and quantum critical fluctuations:
###### Abstract
We present a detailed study of the electronic and magnetic ground state properties of Fe\({}_{1-x}\)Co\({}_{x}\)Si using a combination of macroscopic and microscopic experimental techniques. From these experiments we quantitatively characterize the metal-insulator transition and magnetic/non-magnetic quantum phase transition occurring at low doping levels in Fe\({}_{1-x}\)Co\({}_{x}\)Si. From our study, we find a surprising closeness of the critical composition of the metal-insulator transition at \(x_{\rm MIT}=0.014\) and the quantum phase transition at \(x_{\rm LRO}\sim 0.024-0.031\). It suggests that these effects are cooperative and depend on each other.
## I Introduction
The class of \(B20\) materials (Fe,Co,Mn)Si has been studied for decades, representing model compounds in various contexts of modern solid state physics. The materials crystallize in the cubic \(B20\) crystal structure [1] (Fig. 1). It is also labeled the _FeSi_ structure, as early on this compound was the most prominent representative of this crystallographic lattice lacking inversion symmetry [2]. Moreover, FeSi was the first material to attract attention with respect to its electronic and magnetic properties, with initial reports on a "semiconductive and metallic" ground state [3; 4] in the presence of unusual magnetic behavior from "correlated magnetic excitations" [5]. MnSi, instead, was characterized as a ferromagnetic metal, while CoSi was reported as a semimetallic diamagnet [4].
Subsequently, these observations were substantially refined. By now, MnSi has been established as a helimagnetic metal (for a review, see Pfleiderer et al. [6]), where helimagnetism arises from the action of the Dzyaloshinskii-Moriya interaction [7] induced by the lack of inversion symmetry in the lattice. As a result of the interplay of complex magnetic couplings and anisotropies, a novel magnetic state, the _skyrmion lattice_, emerges in certain parameter ranges of the magnetic phase diagram [8]. For CoSi, the description was recently complemented by the realization that a crystal structure lacking inversion symmetry, in the presence of spin-orbit coupling, gives rise to new topological electronic states [9; 10; 11].
Regarding FeSi, for a long time the central scientific issue was the nature of the small-gap semiconducting ground state [12; 13]. It was proposed that this may be understood as signatures of a Kondo insulating state, _i.e._, a semiconducting state arising as result of strong electronic correlations. Subsequent experimental tests of this concept have not produced clear-cut evidence in favor of this scenario [14; 15]. Instead, it appears that single electron band structure modeling is sufficient to account for the observed electronic ground state. More recently, topological aspects of the band structure of FeSi have attracted attention [16]. In the present context, with respect to the electronic properties we will consider the bulk material FeSi as intrinsically gapped material, _i.e._, an insulator in the traditional sense, which has metallic surface states [17]. If these surface states reflect the character of a 3D topological insulator will not be addressed with our study.
Another line of inquiry regarding these \(B20\) compounds is alloying studies. With a full elemental solubility, alloying studies on Fe\({}_{1-x-y}\)Co\({}_{x}\)Mn\({}_{y}\)Si allow one to investigate both zero-temperature (quantum phase) transitions of the electronic and magnetic ground states. Special focus lies upon the phase diagram of Fe\({}_{1-x}\)Co\({}_{x}\)Si,
Figure 1: Cubic \(B20\) crystal structure for FeSi, blue spheres are Fe, grey spheres are Si (lattice parameter \(a\sim 4.48\,\)Å).
where a multitude of studies have been carried out in the course of 50 years of research [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 38; 39; 40; 41; 43; 50; 54; 55; 56; 57].
The general findings with respect to magnetic order are well-established [18; 19; 22; 23; 24; 25; 26; 29; 30; 34; 35; 36; 38; 41; 43; 50; 54; 55; 56; 57]: The series starts with the paramagnetic small gap insulator FeSi. Already alloying in the percentage range with Co closes the gap and induces the onset of long-range helimagnetic order below \(T_{\rm HM}\), with a maximum ordering temperature in the range of a few 10 K. Magnetic order is fully suppressed at \(x=0.8\), and the series further transforms with \(x\) into the topological semimetal CoSi. In the magnetically ordered regime it is possible to identify field induced skyrmionic phases [44; 46; 49; 51; 52; 54]. Since the parameter range of the formation of skyrmions in Fe\({}_{1-x}\)Co\({}_{x}\)Si is very different from that of MnSi, it has enriched the possibilities to quantitatively study the physics of skyrmionic spin textures.
In detail, however, the magnetic and electronic phase diagram of Fe\({}_{1-x}\)Co\({}_{x}\)Si is far from well-established. To illustrate this point, in Fig. 2 we summarize the helimagnetic transition temperatures \(T_{\rm HM}\) for samples of the series Fe\({}_{1-x}\)Co\({}_{x}\)Si reported in the literature. Here, the transition temperatures for a given composition show a very large scatter (by 30% even at the maximum of the \(T_{\rm HM}\)-dome), which in other scientific contexts (high-\(T_{\rm C}\) materials, quantum phase transitions etc.) would be considered unacceptable.
As pointed out by Bauer, Garst and Pfleiderer [54], part of the problem are the different definitions and criteria chosen in literature to define the phase transition temperature \(T_{\rm HM}\). In Ref. [54] this was illustrated by using different criteria on their selected samples Fe\({}_{1-x}\)Co\({}_{x}\)Si to extract \(T_{\rm HM}\), leading to a large error bar in the determination of \(T_{\rm HM}\) (up to \(\pm\)7 K). Still, Fig. 2 demonstrates that it does not fully account for the scatter of the data, and other effects need to be considered. These could be the use of poly- vs. single-crystalline samples, metallurgical treatment, inaccuracies in determining the correct stoichiometry (FeSi has a homogeneity range of formation), or - given that the literature reports span a range of 50 years - simple thermometry issues etc. in certain studies. At any rate, at this point a full, thorough and reproducible determination by well established techniques and criteria of the phase diagram of Fe\({}_{1-x}\)Co\({}_{x}\)Si is lacking. In particular, scientific topics with regard to the relationship of the metal-insulator transition (MIT) occurring in Fe\({}_{1-x}\)Co\({}_{x}\)Si at [18; 34]\(x\sim 0.005-0.018\) and the quantum phase transition (QPT) into a long-range magnetically ordered state at larger Co concentrations are simply not accessible with the published data.
In this situation, we have set out to re-investigate the phase diagram of Fe\({}_{1-x}\)Co\({}_{x}\)Si. Our particular focus lies on the small-\(x\) range, _i.e._, \(x\leq 0.15\), that is the range that encloses the MIT and the QPT. For our set of single-crystalline samples we perform a thorough characterization by various bulk experimental techniques accompanied by the microscopic technique Mossbauer spectroscopy. Taken together, we aim to shed light in particular on the physical phenomena occurring at small values \(x\), that is the regime of magnetic quantum criticality and metal-insulator transition.
## II Experimental details
In the choice of our samples, we restrict ourselves entirely on single-crystalline specimens obtained by the Czochralski-method using a three-arc oven as described previously [14; 15]. In the low-doping regime, between different samples, we choose particularly small variations of \(x\) down to 0.01, to accurately define the details of the magnetic phase diagram and electronic ground state properties. After growth, the samples have been oriented by means of Laue x-ray diffraction and bar-shaped samples have been cut along the cubic main axis from the crystals.
For each of the crystals some material has been ground to powder and checked by powder x-ray diffraction for phase homogeneity, crystal structure and lattice parameters. In the powder diffraction experiments no secondary phases have been detected and the crystal structure was verified as cubic \(B20\) lattice. As an example, in Fig. 3 we depict the x-ray diffraction pattern for FeSi, including a Rietveld refinement of the data. Results of similar quality are observed for the other samples, including some with larger \(x\) to cover the full phase diagram. With the similarity in x-ray scattering cross sections of Fe and Co, no reliable analysis of the actual composition of our specimens can be carried out. Therefore, in the refinements we have used the nominal composition.
From the x-ray analysis we obtain the evolution of the cubic lattice parameter as a function of composition \(x\), shown in Fig. 4.
Figure 2: Helimagnetic ordering temperatures \(T_{\rm HM}\) of Fe\({}_{1-x}\)Co\({}_{x}\)Si, as taken from literature (references listed in the legend); for details see text.
Surprisingly, the data do not simply follow Vegard's law. With the lattice parameter of FeSi, \(a=4.48411(34)\,\mathrm{\AA}\), significantly larger than that of CoSi, \(a=4.44225(50)\,\mathrm{\AA}\), a (close to) linear shrinking of the lattice parameter would be expected with Co doping [58]. Instead, in the low-doping regime we observe a shallow minimum of the lattice parameter. So far, this anomaly has been overlooked, as this alloying range has not been considered in detail in x-ray analysis (see for instance Refs. [34; 38]). The second main free structural parameter in the \(B20\) structure, the \(x,y,z\) position of (Fe/Co) and Si, exhibits a smooth increase from \((x=0)\sim 0.137\) to \((x=1)\sim 0.144\) (experimental error \(\sim 0.002\)).
To characterize the single crystals Fe\({}_{1-x}\)Co\({}_{x}\)Si regarding their electronic and magnetic properties we have carried out a standardized set of experiments. We report on the resistivity, magnetization, susceptibility and Mossbauer spectra, all in the \({}^{4}\)He-temperature range, _i.e._, 1.6 to \(300\,\mathrm{K}\). For the resistivity we used a standard 4-probe \(ac\)-setup. Magnetization and susceptibility have been measured using a commercial SQUID system in fields up to \(5\,\mathrm{T}\).
Mössbauer experiments have been performed in a standard transmission geometry employing a \(50\,\mathrm{mCi}\)\({}^{57}\)Co source in Rh-matrix. The samples obtained from several grinding and polishing runs of single-crystalline plates have been almost spherical platelets (surface perpendicular to [100]) with maximum planar dimensions of \(6\,\mathrm{mm}\) diameter and thickness of about \(50\,\mu\mathrm{m}\). The Mössbauer drive was run in sinusoidal mode minimizing the velocity error. The measurements were carried out in a bath cryostat in under-pressure mode enabling experimental temperatures down to \(1.7\) K. Mössbauer spectra were analyzed using the Mosswinn \(4.0\)i software [59]. The spectra were fitted with two sites using the mixed magnetic and quadrupole static Hamiltonian (single crystal) theory, with the same quadrupole splitting \(QS\), isomer shift \(IS\) and the hyperfine magnetic field \(B_{\mathrm{HF}}\) for both sites and differing only in the angles between \(QS\) and \(B_{\mathrm{HF}}\), between \(QS\) and the gamma ray and \(B_{\mathrm{HF}}\) and the gamma ray. The values of isomer shifts are given relative to \(\alpha\)-Fe at room temperature.
## III Results
Prior to a detailed investigation of the electronic and magnetic phase diagram of Fe\({}_{1-x}\)Co\({}_{x}\)Si, we need to establish the relevance of the metallic surface states for the interpretation of the experimental data. To this end, we utilize the argument put forth for topological insulators that the resistivity may be modeled as a superposition of surface and volume electrical conduction [60]. This observation implies that the relevance of surface conduction depends on the surface-to-volume ratio of a given sample [17]. We prepared two specimens of our single crystal FeSi for resistivity measurements: first, a bar-shaped sample of dimensions \(5\times 1\times 1\,\mathrm{mm}^{3}\), and secondly, a sample of similar length and width, but with a thickness polished down to \(35\,\mu\mathrm{m}\), _i.e._, with a surface-to-volume ratio increased by a factor of about 30.
In Fig. 5 (a) we compare these two samples with respect to the normalized resistivity \(\rho/\rho_{300\,\mathrm{K}}\) in a log-log-representation. From the figure it is evident that the normalized resistivity of the thin plate deviates from that of the bar-shaped sample below about \(100\,\mathrm{K}\). This finding is qualitatively in line with the observation of Fang et al. [17] of a metal-to-semiconductor transition in differently sized single crystals FeSi. It verifies the existence of a significant
Figure 3: Powder x-ray diffraction pattern as function of angle for FeSi. Rietveld refinement carried out using the \(B20\) structure with lattice parameter indicated.
Figure 4: Cubic lattice parameter \(a\) of the \(B20\) structure of Fe\({}_{1-x}\)Co\({}_{x}\)Si, obtained from powder x-ray diffraction. In the inset the evolution of the lattice parameter for the low doping regime is enlarged.
electrical surface conductivity in FeSi, which at low temperatures partially masks the insulating behavior of the bulk of the sample. As we show below, doping with Co in Fe\({}_{1-x}\)Co\({}_{x}\)Si substantially increases bulk conductivity, and thus reduces the relevance of surface conductivity. Effectively, we find that we can disregard conduction from such surface states in a resistivity measurement at least for doping with Co of more than one percent.
We have also measured the magnetic susceptibility for our two samples FeSi, plotted in Fig. 5 (b). It has previously been noted that even for single-crystalline FeSi there is always a low-temperature upturn of the susceptibility [5; 13]. It is usually associated to magnetic (Fe) impurities, although it has been impossible to suppress or diminish this impurity contribution by different preparation techniques. As can be seen, also for our bar-shaped crystal FeSi we observe the typical behavior [5] with a broad susceptibility maximum around \(500\,\mathrm{K}\) and the Curie tail at low temperatures. Remarkably, our thin plate sample has a substantially (an order of magnitude at low \(T\)) increased Curie-like susceptibility background. First, this observation may suggest that polishing the sample damages the surface to the effect that free Fe particles are produced, giving rise to a larger Curie tail. Secondly, and more exotically, these magnetic particles will reside on the surface of the sample, that is in the spatial range of the conducting surface states. It raises the question about the interplay of electronic and magnetic properties in particular at the surface of FeSi, and the possibility that the existence and residual coupling of magnetic moments is associated to a local metallic environment.
Having thus characterized the relevance of metallic surface states, we proceed with the zero field resistivity of our single-crystalline bar-shaped samples Fe\({}_{1-x}\)Co\({}_{x}\)Si. In Fig. 6 we plot the resistivity along the cubic main axis [100] on a logarithmic and linear scale as function of temperature \(T\). Globally, the behavior is in full accordance with previous observations: FeSi itself exhibits a gapped behavior, implying it to be in an insulating state at \(T=0\,\mathrm{K}\). To quantify the charge gap, we fit the high temperature data \(>200\,\mathrm{K}\) (to minimize the influence of the surface states) by \(\propto\exp\left(\Delta_{\mathrm{g}}/2k_{\mathrm{B}}T\right)\). This approach yields a gap \(\Delta_{\mathrm{g}}\sim 700\,\mathrm{K}\), in good agreement with for instance Ref. [13] (fit not included in the graph).
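As an illustration of this fitting step, a short scipy sketch of the activated fit \(\rho\propto\exp\left(\Delta_{\rm g}/2k_{\rm B}T\right)\) is given below; the temperature and resistivity arrays are synthetic placeholders, and the gap is handled in kelvin (\(\Delta_{\rm g}/k_{\rm B}\)) as in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

k_B = 8.617e-5  # Boltzmann constant in eV/K

def rho_activated(T, rho0, delta_K):
    """Activated resistivity rho0 * exp(delta/(2 T)), with the gap delta_K given in kelvin."""
    return rho0 * np.exp(delta_K / (2.0 * T))

# synthetic placeholder data standing in for the measured rho(T) above ~200 K
T_data = np.linspace(200.0, 300.0, 30)
rho_data = rho_activated(T_data, 1e-6, 700.0) * (1 + 0.02 * np.random.randn(30))

popt, pcov = curve_fit(rho_activated, T_data, rho_data, p0=(1e-6, 500.0))
print(f"charge gap Delta_g/k_B = {popt[1]:.0f} K (~{popt[1] * k_B * 1e3:.0f} meV)")
```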
Alloying with Co induces an MIT, apparent from the drastic change of the overall behavior of \(\rho\) from insulating to (badly) metallic, with a residual resistivity of \(250\,\mu\Omega\,\mathrm{cm}\) for \(x=0.15\) at the lowest temperatures. For \(x\geq 0.02\), the low temperature resistivity now has a metallic character \(\mathrm{d}\rho/\mathrm{d}T>0\), while a broad resistive maximum in an intermediate temperature range \(\sim 50\,\mathrm{K}\) has been associated with a reminiscence of the narrow gap band structure of FeSi [31; 41] (Fig. 6). Notably, from around \(200\,\mathrm{K}\) upwards the resistivity for all samples is of similar magnitude, implying that all gap-related features in the resistivity are overcome by thermal excitations across the gap and/or by the closing of the gap.
To quantify the MIT we examine the conductivity \(\sigma(T)=\rho^{-1}(T)\) plotted for Fe\({}_{1-x}\)Co\({}_{x}\)Si in Fig. 7. From these data we extract the zero temperature conductivity \(\sigma(T\to 0)=\sigma_{0}\) presented in Fig. 8 as function of alloying \(x\) (red left scale). This plot visualizes the fundamental change in behavior from insulating to metallic around \(x=0.01\), in agreement with the Refs. [18; 34; 38; 64], with those \(\sigma_{0}\)-data included in the plot (orange stars). Increasing the Co concentration beyond the MIT significantly increases the conductivity, with the absolute value of \(\sigma_{0}\) increasing by an order of magnitude with varying \(x\) from 0.01 to 0.02 (Fig. 7).
It was reported [18] that at temperatures \(<1\,\mathrm{K}\) there is a residual zero-temperature conductivity \(\sigma_{0}\sim 4\,(\Omega\mathrm{m})^{-1}\). As we do not cover this temperature range in our experiments, we cannot verify this value, though it may
Figure 5: (a) Comparison of the normalized resistivity \(\rho/\rho_{300\,\mathrm{K}}\) of single crystalline FeSi measured on a bar-shaped and a thin-plate sample. (b) Susceptibility for the same two samples FeSi; for details see text.
Figure 6: Temperature dependent zero field resistivity \(\rho(T)\) of Fe\({}_{1-x}\)Co\({}_{x}\)Si, \(0\leq x\leq 0.15\), (a) plotted on a logarithmic and (b) linear scale; for details see text.
well also be the case for our crystals. In view of recent studies [17; 61] on FeSi and SmB\({}_{6}\) and our own findings, it appears to reflect conducting surface states.
To parametrize the MIT accurately, we draw on the observation of a similarity to classical semiconductors by fitting within critical scaling theory [62; 63; 64] our doping dependence of the zero-temperature conductivity with \(\sigma_{0}(x)=\sigma(0)(x-x_{\rm MIT})^{\nu}\). Certainly, the finite metallic surface conductivity might slightly affect the outcome of our fitting procedure. Still, from a fit to the data we obtain values \(\sigma(0)=8.2(1.2)\cdot 10^{5}\,(\Omega\mathrm{m})^{-1}\), \(x_{\rm MIT}=0.014(4)\) and \(\nu=0.50(6)\), consistent with critical scaling theory and similar to previous reports [18; 34; 38; 64] (fit included in the figure as solid red line).
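The scaling analysis can be reproduced with a few lines of scipy; the composition and conductivity values below are illustrative placeholders rather than the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigma0_scaling(x, sigma_c, x_mit, nu):
    """Critical scaling of the zero-temperature conductivity, sigma0 = sigma_c * (x - x_MIT)^nu."""
    return sigma_c * np.clip(x - x_mit, 0.0, None) ** nu

# illustrative composition / conductivity pairs on the metallic side of the MIT
x = np.array([0.02, 0.03, 0.05, 0.08, 0.10, 0.15])
sigma0 = 8.2e5 * (x - 0.014) ** 0.5 * (1 + 0.05 * np.random.randn(x.size))

popt, _ = curve_fit(sigma0_scaling, x, sigma0, p0=(1e6, 0.01, 0.5),
                    bounds=([0.0, 0.0, 0.1], [1e8, 0.019, 2.0]))
print("sigma(0) = %.2e (Ohm m)^-1, x_MIT = %.3f, nu = %.2f" % tuple(popt))
```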
As the next step, knowledge of the underlying magnetic state is necessary. Therefore, in Fig. 9 we plot the susceptibility \(\chi\) (on a logarithmic scale) and inverse susceptibility \(\chi^{-1}\) (on a linear scale) of single-crystalline Fe\({}_{1-x}\)Co\({}_{x}\)Si on different temperature scales measured in \(0.01\,\mathrm{T}\). For some of the samples (\(x=0.01,0.02,0.03\)) we observe a weak structure in \(\chi(T)\) in an intermediate temperature range \(\sim 50-150\,\mathrm{K}\). Zero-field cooled vs. field cooled measurement routines reveal a slight history dependence of these signatures, suggesting that they arise from a small amount of ferro-/ferrimagnetic particles in our samples. Using a toy model for a simple estimate, we might assume that these spurious signals arise for instance from single crystal grain boundaries which might locally produce small grain boundary Fe inclusions. Then, already less than \(0.04\,\%\) of such grain boundary clusters would be sufficient to account for the history dependence of \(\chi(T)\). Therefore, these weak additional magnetic signatures are extrinsic and we will not consider them further.
Starting with FeSi, the well-known paramagnetic susceptibility, with a maximum at higher temperatures \(\sim 500\,\mathrm{K}\), is observed, together with a low-temperature Curie-like upturn, so far attributed to a minute amount of magnetic impurities [13]. Using the argument from Ref. [13], the Curie tail would be accounted for by less than \(0.2\,\%\) per formula unit of \(S=\frac{3}{2}\) impurity moments.
The maximum in the susceptibility of FeSi was attributed to an activated behavior across an energy barrier \(\Delta_{\rm m}\) in the spin excitation spectrum [13]. For temperatures \(T\ll\Delta_{\rm m}/k_{\rm B}\) it allows to fit the susceptibility by \(\chi(T)=(C/T)\cdot\exp{(-\Delta_{\rm m}/k_{\rm B}T)}\), with \(C\) as a constant that in principle measures the spin of the magnetic moments. Accordingly, we can fit the data for FeSi at temperatures above \(\sim 150\,\mathrm{K}\) with a gap of \(515\,\mathrm{K}\), in agreement with Ref. [13] (not shown).
With Co alloying, the high temperature susceptibility maximum broadens and/or shifts to lower temperatures, and has been replaced by an essentially Curie-Weiss-like
Figure 8: Composition dependence \(x\) of the zero temperature conductivity \(\sigma_{0}\) (left scale, red bullets and orange stars), magnetic ordering temperature \(T_{\rm HM}\) (first right scale, blue diamonds), induced magnetic moment \(M_{\rm ST}\) at \(5\,\mathrm{T}\) (second right scale, green squares) and Curie-Weiss temperature \(\Theta_{\rm CW}\) (upper panel) of Fe\({}_{1-x}\)Co\({}_{x}\)Si, \(0\leq x\leq 0.15\). In the \(\sigma_{0}\)-plot we have included the values determined by Chernikov et al. [18] and Manyala et al. [34; 38; 64] for their polycrystalline samples (orange stars); for details see text.
Figure 7: Temperature dependent zero field conductivity \(\sigma(T)\) of Fe\({}_{1-x}\)Co\({}_{x}\)Si, \(0\leq x\leq 0.15\). Arrows denote magnetic ordering temperatures \(T_{\rm HM}\), for details see text.
Figure 9: (a) Temperature dependent susceptibility \(\chi(T)\), plotted on a logarithmic scale, and (b) inverse susceptibility \(\chi^{-1}(T)\) of Fe\({}_{1-x}\)Co\({}_{x}\)Si, \(0\leq x\leq 0.15\); for details see text.
susceptibility already at 5% Co-doping. If we assume that a shift of the maximum to lower temperatures produces such behavior, a corresponding gap fit applied to Fe\({}_{0.99}\)Co\({}_{0.01}\)Si leads to a substantially reduced gap of \(\sim 358\) K (although the matching with the experiment is substantially worse than for FeSi). It would imply that also with respect to the magnetic properties a minute amount of Co doping is sufficient to suppress gap-like features in the spin excitation spectra.
At low temperatures all samples exhibit a Curie-Weiss-like upturn of the susceptibility. If we take the approach that this Curie-Weiss-like behavior is associated to magnetic moments with a residual magnetic coupling, then, according to the Curie-Weiss law, the extrapolated intercept of the inverse susceptibility with the temperature axis, the Curie-Weiss temperature \(\Theta_{\rm CW}\), is a measure of the coupling strength. Within this concept, we find all samples Fe\({}_{1-x}\)Co\({}_{x}\)Si with \(x\geq 0.02\) to have a positive \(\Theta_{\rm CW}\), corresponding essentially to a finite ferromagnetic coupling of these samples (see \(\chi^{-1}(T)\) in Fig. 9). As shown in the purple upper panel of Fig. 8, where we display the \(x\) dependence of \(\Theta_{\rm CW}\), in this compositional range a linear rise of \(\Theta_{\rm CW}\) with \(x\) attests to the strengthening of the magnetic coupling.
Conversely, for FeSi and Fe\({}_{0.99}\)Co\({}_{0.01}\)Si the same construction leaves us with antiferromagnetic Curie-Weiss temperatures \(\Theta_{\rm CW}\) of about -20 K. As pointed out, these samples have spin excitation gaps much larger than the corresponding values \(\Theta_{\rm CW}\), implying that here we consider diluted magnetic moments in an insulator, _i.e._, a different type of magnetic coupling. Taking these observations together, based on the global behavior of the susceptibility alone, in the metallic regime of the alloying phase diagram we find a finite ferromagnetic coupling, implying that we should observe signatures of long-range magnetic order for these compositions.
To verify this point firmly, we have analyzed the susceptibility and magnetization data for Fe\({}_{1-x}\)Co\({}_{x}\)Si to extract the ordering temperature \(T_{\rm HM}\) using various approaches as follows: a.) for our susceptibility data \(\chi(T)\) we have determined the second temperature derivative, choosing the inflection point as \(T_{\rm HM}\); b.) we have performed a modified Arrott-plot analysis to derive \(T_{\rm HM}\); c.) we have parametrized the critical behavior close to \(T_{\rm HM}\) within the framework of the Heisenberg model [55]; d.) we have used the Inoue-Shimizu model [65] in the generalization by Brommer [66] as extension of the Landau-description of phase transitions to establish \(T_{\rm HM}\).
The experimental basis of these analyses is susceptibility data (see above) and magnetization measurements \(M(H)\) on our samples Fe\({}_{1-x}\)Co\({}_{x}\)Si. As an example, in Fig. 10 we present \(M(H)\) at the base temperature of 1.7 K in fields up to 5 T. Globally, the figure illustrates the expected behavior: for \(x\leq 0.01\) the magnetization is basically flat and close to zero, but for larger \(x\) it rises continuously and almost linearly with concentration. Given that the helimagnetic order in Fe\({}_{1-x}\)Co\({}_{x}\)Si is easily field-polarized, the basic field dependence of \(M(H)\) for \(x\geq 0.02\) is essentially that of a soft ferromagnet. Notably, only in the insulator-to-metal crossover range \(0.01\to x\to 0.03\) is there some curvature in the doping evolution of \(M(H)\). This is visualized in Fig. 8, where we include the doping dependence of the induced magnetic moment \(M_{\rm 5T}\) at 5 T and 1.7 K (green outer right scale).
In the following, we present the different types of analysis and corresponding results for an exemplary case Fe\({}_{1-x}\)Co\({}_{x}\)Si, \(x=0.05\), with an extended description of the analysis in the supplement [67]. Aside from the four approaches we have attempted related types of analysis such as the common Arrott-plot or using different types of criticality (Ising etc.). In result, we find that these approaches either work very badly (Arrott-plot) or do not improve the data parametrization as compared to the approaches discussed here in detail. Overall, the different types of data analysis lead to slightly different ordering temperatures \(T_{\rm HM}\), which we will discuss in more detail below.
We start with the determination of \(T_{\rm HM}\) from the inflection point of the susceptibility \(\chi(T)\) for Fe\({}_{0.95}\)Co\({}_{0.05}\)Si (Fig. 11 (a)). The inverse susceptibility \(\chi^{-1}(T)\) in a field of \(\mu_{0}H=0.01\) T suggests a magnetic coupling strength somewhat below 10 K (Fig. 9). For low magnetic fields and ignoring demagnetization effects, the inflection point of \(\chi(T)\) for a material with a ferri-/ferromagnetic susceptibility signature represents an approximation of the onset of long-range (sublattice) magnetic order. It is derived by numerically calculating the zero intercept of the second derivative \({\rm d}^{2}\chi(T)/{\rm d}T^{2}\) included in Fig. 11 (a). This way, from the figure we obtain a transition temperature \(T_{\rm HM}^{\rm susz}\left(\chi\right)=4.86\) K.
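A minimal numerical recipe for this inflection-point criterion is sketched below; the susceptibility curve is a synthetic placeholder with its inflection placed near 4.86 K.

```python
import numpy as np

def t_inflection(T, chi):
    """Estimate T_HM as the inflection point of chi(T): the zero crossing
    of the numerical second derivative d^2(chi)/dT^2."""
    d2 = np.gradient(np.gradient(chi, T), T)
    i = np.where(np.diff(np.sign(d2)))[0][0]          # first sign change of d2
    # linear interpolation between the two points bracketing the zero crossing
    return T[i] - d2[i] * (T[i + 1] - T[i]) / (d2[i + 1] - d2[i])

# illustrative data: a smooth ferromagnetic-like step with inflection at ~4.86 K
T = np.linspace(2.0, 10.0, 200)
chi = 1.0 / (1.0 + np.exp((T - 4.86) / 0.4))
print(f"T_HM ~ {t_inflection(T, chi):.2f} K")
```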
Next, as a simple mean-field Arrott-plot analysis does not properly parametrize the experimental data, we have performed a modified Arrott-plot analysis [68]. In this approach, by plotting \(M^{y}\) vs. \((H/M)^{z}\), with \(H\) the magnetic
Figure 10: Magnetization \(M\) as function of field \(H\) measured at 1.7 K of Fe\({}_{1-x}\)Co\({}_{x}\)Si, \(0\leq x\leq 0.15\); for details see text.
field strength, the free parameters \(y\) and \(z\) are varied to maximize the data range of a linear dependence \(M^{y}=A+B\cdot(H/M)^{z}\), with \(A,B\) the derived free fit parameters. In the spirit of the Arrott-plot analysis, the temperature where \(A\) becomes zero is then taken as \(T_{\rm HM}\). It is a phenomenological approach to incorporate the scaling laws of critical phenomena in a comparatively simple data handling procedure. In our case, we find as the optimum solution a plot \(M^{2.43}\) vs. \((H/M)^{0.37}\) for our magnetization data on Fe\({}_{0.95}\)Co\({}_{0.05}\)Si (Fig. 11 (b)). This in turn leads to an ordering temperature \(T_{\rm HM}^{\rm mod}=3.48\,\)K.
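The exponent search of the modified Arrott-plot analysis can be sketched as a simple grid scan; here the linearity of the transformed data is scored with an \(R^{2}\) proxy rather than the data-range criterion used above, and the magnetization curve is a placeholder, so the recovered exponents are purely illustrative.

```python
import numpy as np

def best_modified_arrott(M, H, y_grid, z_grid):
    """Grid search over the exponents (y, z) of a modified Arrott plot
    M^y = A + B*(H/M)^z, choosing the pair that makes the transformed
    data most linear (scored here by the correlation R^2)."""
    best = (None, None, -np.inf)
    for y in y_grid:
        for z in z_grid:
            u, v = (H / M) ** z, M ** y
            r2 = np.corrcoef(u, v)[0, 1] ** 2
            if r2 > best[2]:
                best = (y, z, r2)
    return best

# placeholder isotherm, not measured data
H = np.linspace(0.1, 5.0, 50)
M = 0.05 * H ** 0.4
y, z, r2 = best_modified_arrott(M, H, np.linspace(1.5, 3.5, 21), np.linspace(0.2, 1.0, 17))
print(f"best exponents: y = {y:.2f}, z = {z:.2f} (R^2 = {r2:.4f})")
```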
The concept of scaling laws acting close to a critical temperature \(T_{\rm HM}\) is made explicit by observing that \(\left(\frac{H}{M}\right)^{\frac{1}{\gamma}}=a\left(\frac{T-T_{\rm HM}}{T_{\rm HM}}\right)+b\cdot M^{\frac{1}{\beta}}\), with critical exponents \(\gamma,\beta\). Choosing 3D-Heisenberg criticality, we use \(\gamma=1.386\) and \(\beta=0.365\), resulting in a corresponding plot \(M^{\frac{1}{\beta}}\) vs. \(\left(\frac{H}{M}\right)^{\frac{1}{\gamma}}\) in Fig. 12 (a). From this procedure we obtain \(T_{\rm HM}^{\rm Heisenberg}=5.87\,\)K.
Finally, for the Landau parametrization of magnetic phase transitions the free energy \(F\) of a magnetic material is expanded in multiples of the square of the magnetization \(M^{2}\). This approach can be refined for mixed compounds etc. by coupling of the different subsystems [65; 66]. It results in calculating a Lagrange multiplier \(\lambda\) from the magnetization derivative of the free energy of the system, \({\rm d}F/{\rm d}M=\lambda\), with \(\lambda=c_{1}M+c_{3}M^{3}+c_{5}M^{5}\) parametrized using the magnetization. As set out in detail in Ref. [66], the minimization procedure yields the parameters \(c_{1}(T)\), \(c_{3}(T)\) and \(c_{5}(T)\), with the minimum of \(c_{1}(T)\) defining \(T_{HM}\) and the sign of \(c_{3}(T)\) detailing the character of the phase transition (1st or 2nd order). We have carried out this analysis for our samples Fe\({}_{1-x}\)Co\({}_{x}\)Si, with the temperature dependence of \(c_{1}(T)\) depicted in Fig. 12 (b) for the sample \(x=0.05\). From this Inoue-Shimizu analysis we obtain a transition temperature \(T_{\rm HM}^{\rm IS}=5.89\,\)K.
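A least-squares version of this Landau-type parametrization might look as follows; the isotherms are synthetic placeholders, the expansion is truncated at \(M^{5}\) as in the text, and the details of the generalized Inoue-Shimizu treatment of Ref. [66] are not reproduced.

```python
import numpy as np

def landau_coefficients(M, H):
    """Least-squares fit of one isotherm to H = c1*M + c3*M^3 + c5*M^5; returns (c1, c3, c5)."""
    X = np.column_stack([M, M**3, M**5])
    coeffs, *_ = np.linalg.lstsq(X, H, rcond=None)
    return coeffs

def t_hm_from_isotherms(isotherms):
    """Take T_HM as the temperature where c1(T) is minimal, given {T: (M, H)} isotherms."""
    temps = sorted(isotherms)
    c1 = [landau_coefficients(*isotherms[T])[0] for T in temps]
    return temps[int(np.argmin(c1))], np.array(c1)

# minimal usage with two synthetic isotherms (placeholder numbers)
M = np.linspace(0.01, 0.2, 30)
isotherms = {5.0: (M, 0.1 * M + 2.0 * M**3), 6.0: (M, 0.3 * M + 2.0 * M**3)}
T_hm, c1_of_T = t_hm_from_isotherms(isotherms)
print(T_hm)  # -> 5.0, the isotherm with the smaller c1
```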
Combining the results from the different types of analysis for our example Fe\({}_{0.95}\)Co\({}_{0.05}\)Si, we thus obtain a set of transition temperatures \(T_{\rm HM}\) ranging from \(3.48\,\)K to \(5.89\,\)K. For further discussion, we decide to take the highest of the four determined values as ordering temperature \(T_{\rm HM}^{*}\). For graphic representation in Fig. 8 (blue inner right scale) we include the alloying dependence \(T_{\rm HM}(x)\) of Fe\({}_{1-x}\)Co\({}_{x}\)Si in the form of \(T_{\rm HM}^{*}-\Delta T_{\rm HM}\), with \(\Delta T_{\rm HM}\) chosen so that all values \(T_{\rm HM}\) derived from the different types of analysis are within the error bar (for \(x=0.05\): \(\Delta T_{\rm HM}=2.41\,\)K). Regarding the coefficient \(c_{3}(T)\) specifying if a transition is of 1st or 2nd order, we find that in the extended Inoue-Shimizu model the value of \(c_{3}(T)\) changes sign within the range of the uncertainty \(\Delta T_{\rm HM}\) for all samples. We therefore can not draw a definite conclusion on the nature of the magnetic transition based on the sign of \(c_{3}(T)\).
The dependence \(T_{\rm HM}(x)\) now verifies our observation that for samples Fe\({}_{1-x}\)Co\({}_{x}\)Si close to \(x=0.02\) long range magnetic order develops. A fit for instance of the transition temperatures with \(T_{\rm HM}^{*}\propto\left(x-x_{\rm LRO}\right)^{\eta}\) yields \(x_{\rm LRO}=0.026(2)\) and \(\eta=0.92(4)\) (solid blue line in Fig. 8). A similar fit, but now taking \(T_{\rm HM}\) as mean value of the determined temperature range with a symmetric error bar \(\Delta T_{\rm HM}\) yields \(x_{\rm LRO}=0.031(5)\) and \(\eta=0.92(13)\). Altogether, the analysis results in a close-to-linear (\(\eta\sim 0.9-1\)) concentration \(x\) dependence of the ordering temperature \(T_{\rm HM}\) and a critical concentration \(x_{\rm LRO}\) of onset of magnetic order just one percent above the concentration \(x_{\rm MIT}\) of the MIT. Actually, if we consider the error bars of \(x_{\rm LRO}\) and \(x_{\rm MIT}\), the two concentrations almost overlap.
As pointed out above, we have prepared thin slices of single-crystalline samples Fe\({}_{1-x}\)Co\({}_{x}\)Si through polishing, with a thickness of a few tens of \(\mu\mathrm{m}\), to perform Mössbauer spectroscopy as a microscopic probe of the magnetic and
Figure 12: (a) Determination of the transition temperature \(T_{\rm HM}\) of Fe\({}_{0.95}\)Co\({}_{0.05}\)Si from the magnetization \(M(H)\) assuming 3D-Heisenberg criticality. (b) Parameter \(c_{1}\) as function of temperature from an analysis of \(M(H)\) using the Inoue-Shimizu model to extract \(T_{\rm HM}\); for details see text.
electronic properties. In Fig. 13 we present an overview of the experimental results on these samples at a base temperature of 1.7 K. Here we plot the normalized intensity \(I/I_{0}\), with \(I_{0}\) the count intensity far away from the absorption lines. Since it was not possible to define the actual sample thickness in the polishing process to an accurately defined value, there is effectively a slight thickness variation between different samples by a factor of up to 2. This in turn leads to slight variations in the depth of the absorption pattern that can be seen in this raw data plot. Still, since the depth of the absorption pattern after all varies by only up to 20 %, all experiments have been carried out in the same absorber thickness limit, which from the density calculation is the thin absorber limit.
Qualitatively, for all samples a symmetric doublet spectrum with a finite isomer shift is detected. For FeSi, in previous Mossbauer spectroscopy experiments on polycrystalline powder, this absorption spectrum was attributed to a quadrupole splitting from a non-zero electric field gradient, consistent with the Fe site symmetry in the \(B20\)-structure [3]. Upon alloying with Co, for small concentrations \(x\leq 0.03\) the doublet spectrum persists, while starting with \(x=0.04\) a broadening of the doublet is observed. The broadening stems from helical magnetic order, as this behavior is consistent with the magnetic phase diagram depicted in Fig. 8, where at an experimental temperature of 1.7 K static local magnetic fields at the Fe site ought to be first observable at \(x=0.04\).
Following the procedure set out in the Refs. [3; 20; 47] we analyze the data starting with FeSi. In our fit we use the free parameters \(IS\) and \(QS\), with values \(IS=0.157\) mm/s, \(QS=0.744\) mm/s (experimental line width 0.421 mm/s) in good agreement with previous reports [3; 20; 47]. Next, we use the same fit protocol for the samples \(x\neq 0\), and in addition allow for a line broadening from \(B_{\rm HF}\) for the samples \(x\geq 0.04\). Strictly speaking, this way we model the helical magnetic state of Fe\({}_{1-x}\)Co\({}_{x}\)Si, \(x\geq 0.04\), as a ferromagnetic one. However, as can be seen from Fig. 13 and 14 the line broadening is small to the effect that the local field distribution in a helical magnet can not be distinguished in Mossbauer spectroscopy from a weak ferromagnetic one (see discussion of a similar situation in NbFe\({}_{2}\)[69; 70]). We note that in the process, even while allowing the experimental line width as a free fit parameter, it hardly varies with \(x\) around its value \(\sim 0.4\) mm/s. In result, we can fit the experimental data for all samples (solid lines in Fig. 13) and obtain the alloying \(x\) dependence of the fit parameters \(IS\), \(QS\) and \(B_{\rm HF}\) summarized in Fig. 14.
Regarding the alloying dependence of the isomer shift and quadrupole splitting, both quantities exhibit a slightly anomalous behavior at small alloying levels \(x\sim 0.02\): the quantity \(IS\) exhibits a maximum, while \(QS\) is slightly irregular. While the scatter is significant, we speculate that there might be an association to the irregular evolution of the lattice parameter in this range, and that for low concentrations \(x\) close to the metal-insulator transition there might be some as yet unresolved structural anomaly occurring in Fe\({}_{1-x}\)Co\({}_{x}\)Si. Aside from this observation, the general trend of \(IS(x)\) and \(QS(x)\) is consistent with the findings in previous works [20; 47].
The magnetic hyperfine field exhibits an alloying dependence
Figure 14: Compositional dependence \(x\) of \(IS\), \(QS\) and \(B_{\rm HF}\) for single-crystalline Fe\({}_{1-x}\)Co\({}_{x}\)Si at base temperature (1.7 K); for details see text.
Figure 13: Mössbauer spectra on single-crystalline Fe\({}_{1-x}\)Co\({}_{x}\)Si at base temperature (1.7 K); for details see text.
fully consistent with the magnetic phase diagram derived from the bulk magnetic properties (Fig. 8). In the temperature range available for the experiment, static magnetic order in the volume of the samples is observable using a microscopic technique for alloying values \(x=0.04\) and above. Consistent with a neutron scattering study on Fe\({}_{1-x}\)Co\({}_{x}\)Si [30], the derived internal magnetic fields in the sub-Tesla-range reflect very small ordered magnetic moments \(\sim 0.01-0.1\,\mu_{\rm B}/(\mathrm{Fe}/\mathrm{Co})\)-atom and thus weak magnetic order inherent to the vicinity of a magnetic quantum critical point.
Next, we characterize the magnetic order parameter by studying the hyperfine field for selected samples. In Fig. 15 we plot the Mössbauer spectra taken for Fe\({}_{0.85}\)Co\({}_{0.15}\)Si as a function of temperature. Starting around \(23\,\mathrm{K}\), the doublet spectra broaden due to the onset of magnetic order. Following the above fitting routine, from the data we extract the temperature dependence of \(B_{\mathrm{HF}}\) depicted in Fig. 16. A critical fit to the data close to the ordering temperature \(B_{\mathrm{HF}}\propto(T_{\mathrm{HM}}-T)^{\gamma}\) yields \(T_{\mathrm{HM}}=23.03(7)\,\mathrm{K}\) and \(\gamma=0.27(4)\) (solid line in Fig. 16). Most importantly, the value of \(T_{\mathrm{HM}}\) experimentally obtained from the microscopic probe Mössbauer spectroscopy is in good agreement with the values derived from the magnetization/susceptibility analysis ranging from \(24.12\,\mathrm{K}\) to \(25.83\,\mathrm{K}\), thus validating the analysis of the bulk magnetic data.
## IV Discussion
Summarizing our experimental findings for Fe\({}_{1-x}\)Co\({}_{x}\)Si, we have established a close coincidence of the metal-insulator transition at a composition \(x_{\mathrm{MIT}}=0.014(4)\) and a quantum critical magnetic-non-magnetic transition at \(x_{\mathrm{LRO}}\sim 0.026-0.031\). If there truly is a difference between \(x_{\mathrm{MIT}}\) and \(x_{\mathrm{LRO}}\), it would represent a very small section \(0.014<x\lesssim 0.031\) of the phase diagram that constitutes a regime of a low carrier metal with a large magnetic susceptibility and quite unusual physical properties such as the possible formation of magnetic polarons etc. (Fig. 8). However, the question that needs to be addressed first is if \(x_{\mathrm{MIT}}\) and \(x_{\mathrm{LRO}}\) can experimentally be firmly distinguished, _i.e._, the two transitions can be considered to be distinct.
As detailed in Ref. [70] for the quantum critical/weakly ferromagnetic system NbFe\({}_{2}\), given that both MIT and LRO in Fe\({}_{1-x}\)Co\({}_{x}\)Si occur in the very dilute Co alloying limit we may assume that a small statistical distribution of local compositions exists in our samples. As worked out in Ref. [70] for a basic atomic mixing model, for an alloying value \(x=0.014\) (\(0.026-0.031\)) we may expect a distribution of local compositions of \(\pm 0.002\) (\(0.003\)).
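For orientation, the quoted spread is consistent with a simple binomial estimate \(\sigma_{x}=\sqrt{x(1-x)/N}\); the sampling volume of \(N\approx 3.5\times 10^{3}\) (Fe/Co) sites used below is our own illustrative assumption and not necessarily the mixing model of Ref. [70].

```python
import numpy as np

def composition_spread(x, n_sites=3500):
    """Standard deviation of the local composition for a random (binomial)
    distribution of Co atoms over n_sites (Fe/Co) sites; n_sites is an assumed value."""
    return np.sqrt(x * (1.0 - x) / n_sites)

for x in (0.014, 0.026, 0.031):
    print(f"x = {x:.3f}: local spread ~ +/-{composition_spread(x):.3f}")
# -> roughly +/-0.002 for x = 0.014 and +/-0.003 for x ~ 0.026-0.031
```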
In an experimental study such as ours this might slightly affect the actually determined compositional value \(x\) of any such transition. For instance, for a magnetic-non-magnetic transition a distribution of local compositions will tend to promote short range order at the expense of long-range magnetic order. In other words, the experimentally determined value \(x_{LRO}\) will be shifted towards the LRO side of the phase diagram, _i.e._, might be slightly too large in our case. Conversely, for a metal-insulator transition studied by means of conductivity measurements, in samples with a local compositional distribution a percolative metallic conductivity path may form, masking insulating behavior in the bulk of the samples. Therefore, the value \(x_{\mathrm{MIT}}\) will be shifted towards the insulating side of the MIT, _i.e._, might be slightly too small in our case.
Hence, for Fe\({}_{1-x}\)Co\({}_{x}\)Si a systematic shift of \(x_{\mathrm{MIT}}\) (\(x_{\mathrm{LRO}}\)) to smaller (larger) compositional values may occur. Then, correcting for this systematic shift and
Figure 16: Temperature dependence of \(B_{\mathrm{HF}}\) of single-crystalline Fe\({}_{0.85}\)Co\({}_{0.15}\)Si. The solid line indicates a critical fit to the data close to \(T_{\mathrm{HM}}\); for details see text.
Figure 15: Temperature dependent Mössbauer spectroscopy data for single-crystalline Fe\({}_{0.85}\)Co\({}_{0.15}\)Si; for details see text.
including the experimental error detailed above, we would arrive at values \(x_{\rm MIT}=0.016(4)\) and \(x_{\rm LRO}\sim 0.023(2)-0.028(5)\), _i.e._, matching values within error bars. Therefore, for all practical purposes we have to conclude that MIT and LRO critical compositions are probably experimentally not clearly distinguishable. This observation raises the question about the mechanism(s) behind these (joint) MIT/QPT in Fe\({}_{1-x}\)Co\({}_{x}\)Si, which relates back to the issue of the magnetic character of the small gap semiconductor FeSi.
Two main concepts have been put forth to account for the magnetic behavior of FeSi. On the one hand, spin fluctuation theory has been invoked to account for the basic magnetic properties [71; 72; 73; 74]. Taken in combination with band structure calculations [75] it was reported that a single electron picture captures the essential properties of the small gap semiconductor FeSi. Further, extending the band structure calculations by incorporating a Coulomb interaction \(U\) revealed an instability of the band structure towards a metallic magnetic state, with the proposal of a field induced MIT to occur in FeSi [76; 77].
This concept was further worked out [78; 79; 80] to account for the properties of the alloying series FeSi\({}_{1-x}\)Ge\({}_{x}\). For this series a first-order insulator-to-ferromagnetic metal transition was reported for \(x\approx 0.25\), which was interpreted as a result of a tuning of the strength of the Coulomb interaction \(U\) with \(x\). It reflects the instability of the band structure of FeSi towards a metallic magnetic state noted in Ref. [76].
On the other hand, the concept of FeSi as a Kondo insulator was proposed, raising the prospect of novel correlation physics apparent in FeSi [81; 82; 12; 83]. Notably, also the first-order insulator-to-ferromagnetic metal transition was presented within this framework [84]. So far, however, direct tests of the Kondo insulator scenario have failed to produce firm evidence for this approach. Only more recently, attempts have been undertaken to merge the different views into a combined picture of correlation effects in a band insulator by using more advanced theoretical tools such as density functional and dynamical mean-field theory [85].
In the context of these previous observations our findings regarding the electronic and magnetic properties of Fe\({}_{1-x}\)Co\({}_{x}\)Si stand out in various aspects. At this point, four types [86] of "controlled" doping experiments (that is isoelectronic alloying or changing electron count by one) have been performed on FeSi, that is [63; 64; 78; 63; 64; 79; 84]: Fe\({}_{1-x}\)Co\({}_{x}\)Si, Fe\({}_{1-x}\)Mn\({}_{x}\)Si, FeSi\({}_{1-x}\)Al\({}_{x}\) and FeSi\({}_{1-x}\)Ge\({}_{x}\). For these series, both Fe\({}_{1-x}\)Mn\({}_{x}\)Si and FeSi\({}_{1-x}\)Al\({}_{x}\) exhibit MITs at low alloying levels, while FeSi\({}_{1-x}\)Ge\({}_{x}\) transforms in a 1st order transition into a ferromagnetic metal with alloying.
With respect to the MIT the behavior of Fe\({}_{1-x}\)Co\({}_{x}\)Si is quite similar to Fe\({}_{1-x}\)Mn\({}_{x}\)Si and FeSi\({}_{1-x}\)Al\({}_{x}\). For all systems the critical concentrations \(x_{\rm MIT}\) for the MITs are in the low percentage range. The \(x\)-dependence of the zero-temperature conductivity \(\sigma_{0}\) can be parameterized within critical scaling theory - in other words, the MITs appear to behave in a rather common fashion. Regarding the transition from a non-magnetic to a magnetic state, however, the behavior of Fe\({}_{1-x}\)Co\({}_{x}\)Si is in stark contrast to that of FeSi\({}_{1-x}\)Ge\({}_{x}\). While regarding the alloying dependence the first series exhibits the typical behavior of quantum criticality, for the latter the transition is discontinuous as a function of \(x\) and of 1st order nature. Most remarkably, in Fe\({}_{1-x}\)Co\({}_{x}\)Si the two critical transitions MIT and QPT (almost) coincide regarding their alloying dependence, strongly suggesting a common cause of their appearance.
More specifically, qualitatively, the \(x\) dependence of the QPT clearly bears resemblance to related phenomena in weak ferromagnets close to a magnetic instability [88], which conceptually is in principle accounted for by the self-consistent renormalization theory of spin fluctuations [89]. The experimental data suggest that upon approaching the QPT from the LRO side the ordering temperature \(T_{\rm HM}\) and ordered moment \(\mu_{\rm ord}\) vanish/become very small. The unusual aspect is that quantum criticality must occur in the limit of a very small carrier density as a result of the MIT. Such behavior may be qualitatively in line with the modeling put forth in Ref. [85].
This modeling for FeSi implies that spin and charge response are closely linked, this way reproducing the strong temperature dependence of various physical properties and the concomitant crossover from low temperature insulating to high temperature metallic behavior. Applying this view to our sample series Fe\({}_{1-x}\)Co\({}_{x}\)Si, Co alloying appears to suppress both spin and charge gap in a similar and quite dramatic fashion. This way, with the closing of the spin and charge gaps the ground state of the system transforms via a QPT into a LRO state. Taken together, with the detailed experimental description of the associated properties presented here, we believe that Fe\({}_{1-x}\)Co\({}_{x}\)Si lends itself for a thorough microscopic theoretical study of the underlying physical mechanisms, this in particular in the parameter range of a vanishing carrier density.
###### Acknowledgements.
We acknowledge fruitful discussions with U. K. Rossler and J. Aarts.
|
2308.00329
|
Inclusive, prompt and non-prompt $\rm{J}/ψ$ identification in
proton-proton collisions at the Large Hadron Collider using machine learning
|
Studies related to $\rm{J}/\psi$ meson, a bound state of charm and anti-charm
quarks ($c\bar{c}$), in heavy-ion collisions, provide genuine testing grounds
for the theory of strong interaction, quantum chromodynamics (QCD). To better
understand the underlying production mechanism, cold nuclear matter effects,
and influence from the quark-gluon plasma, baseline measurements are also
performed in proton-proton ($pp$) and proton-nucleus ($p$--A) collisions. The
inclusive $\rm{J}/\psi$ measurement has contributions from both prompt and
non-prompt productions. The prompt $\rm{J}/\psi$ is produced directly from the
hadronic interactions or via feed-down from directly produced higher charmonium
states, whereas non-prompt $\rm{J}/\psi$ comes from the decay of beauty
hadrons. In experiments, $\rm{J}/\psi$ is reconstructed through its
electromagnetic decays to lepton pairs, in either $e^{+}+e^{-}$ or
$\mu^{+}+\mu^{-}$ decay channels. In this work, for the first time, machine
learning techniques are implemented to separate the prompt and non-prompt
dimuon pairs from the background to obtain a better identification of the
$\rm{J}/\psi$ signal for different production modes. The study has been
performed in $pp$ collisions at $\sqrt{s} = 7$ and 13 TeV simulated using
PYTHIA8. Machine learning models such as XGBoost and LightGBM are explored. The
models could achieve up to 99\% prediction accuracy. The transverse momentum
($p_{\rm T}$) and rapidity ($y$) differential measurements of inclusive,
prompt, and non-prompt $\rm{J}/\psi$, its multiplicity dependence, and the
$p_{\rm T}$ dependence of fraction of non-prompt $\rm{J}/\psi$ ($f_{\rm B}$)
are shown. These results are compared to experimental findings wherever
possible.
|
Suraj Prasad, Neelkamal Mallick, Raghunath Sahoo
|
2023-08-01T07:08:17Z
|
http://arxiv.org/abs/2308.00329v2
|
Inclusive, prompt and non-prompt \(\mathrm{J/\psi}\) identification in proton-proton collisions at the Large Hadron Collider using machine learning
###### Abstract
Studies related to \(\mathrm{J/\psi}\) meson, a bound state of charm and anti-charm quarks (\(c\bar{c}\)), in heavy-ion collisions, provide genuine testing grounds for the theory of strong interaction, quantum chromodynamics (QCD). To better understand the underlying production mechanism, cold nuclear matter effects, and influence from the quark-gluon plasma, baseline measurements are also performed in proton-proton (\(pp\)) and proton-nucleus (\(p\)-A) collisions. The inclusive \(\mathrm{J/\psi}\) measurement has contributions from both prompt and non-prompt productions. The prompt \(\mathrm{J/\psi}\) is produced directly from the hadronic interactions or via feed-down from directly produced higher charmonium states, whereas non-prompt \(\mathrm{J/\psi}\) comes from the decay of beauty hadrons. In experiments, \(\mathrm{J/\psi}\) is reconstructed through its electromagnetic decays to lepton pairs, in either \(e^{+}+e^{-}\) or \(\mu^{+}+\mu^{-}\) decay channels. In this work, for the first time, machine learning techniques are implemented to separate the prompt and non-prompt dimuon pairs from the background to obtain a better identification of the \(\mathrm{J/\psi}\) signal for different production modes. The study has been performed in \(pp\) collisions at \(\sqrt{s}=7\) and 13 TeV simulated using PYTHIA8. Machine learning models such as XGBoost and LightGBM are explored. The models could achieve up to 99% prediction accuracy. The transverse momentum (\(p_{\mathrm{T}}\)) and rapidity (\(y\)) differential measurements of inclusive, prompt, and non-prompt \(\mathrm{J/\psi}\), its multiplicity dependence, and the \(p_{\mathrm{T}}\) dependence of fraction of non-prompt \(\mathrm{J/\psi}\) (\(f_{\mathrm{R}}\)) are shown. These results are compared to experimental findings wherever possible.
## I Introduction
Over the last couple of decades, two of the world's most powerful particle accelerators, the Large Hadron Collider (LHC), CERN, and the Relativistic Heavy-Ion Collider (RHIC), Brookhaven National Laboratory, USA have studied the hot and dense state of deconfined partons, known as the quark-gluon plasma (QGP) by colliding heavy-ions at ultra-relativistic speeds. These studies are crucial to understand the physics of the early Universe, and the phase transition between the partonic and hadronic matter. Due to the nature of the strong interaction, QGP is extremely short-lived. Therefore, to study the properties of QGP, several indirect signatures are investigated. One such signature is the melting of heavy quarkonia (\(q\bar{q}\)) in QGP, also known as the quarkonia suppression, where the color force responsible for binding the quarks into hadrons is screened in the presence of deconfined partons [1; 2; 3; 4; 5; 6]. The production of heavy quarkonia pairs (\(c\bar{c}\) and \(b\bar{b}\)) follow the perturbative QCD (pQCD) calculations, whereas the evolution to a bound colorless state is a nonperturbative process. Due to their high mass, heavy-quarks are produced via partonic interactions in the early stages of the collision, and experience the full evolution of QGP. Thus, they are sensitive probes to study the properties of QGP and the theory of strong interaction [7].
\(\mathrm{J/\psi}\) is the lightest charm vector meson, which is the bound state of a charm and an anti-charm quark (\(c\bar{c}\)). The studies related to \(\mathrm{J/\psi}\) meson, in heavy-ion collisions provide genuine testing grounds for QCD [8; 9]. To better understand the underlying production mechanism, cold nuclear matter effects, and influence from the quark-gluon plasma, baseline measurements are also performed in proton-proton (\(pp\)) and proton-nucleus (\(p\)-A) collisions [10; 11]. The inclusive \(\mathrm{J/\psi}\) production can have contributions from three sources. The first one is the direct prompt production, in which \(\mathrm{J/\psi}\) is produced directly from the hadronic/nuclear collisions; the second one is the indirect prompt production via feed-down from directly produced higher charmonium states (_i.e._ from \(\chi_{c}\) and \(\psi\)(2S)), and the third one is the non-prompt production which comes from the decay of beauty hadrons [12; 13]. Since the rest mass of \(\mathrm{J/\psi}\) is larger than the other decay daughters of beauty hadron, the momentum of \(\mathrm{J/\psi}\) is closer to the decaying beauty mesons, thus non-prompt \(\mathrm{J/\psi}\) gives a better handle to study the production of these beauty mesons [14]. Another important implication of separating the non-prompt \(\mathrm{J/\psi}\) from prompt \(\mathrm{J/\psi}\) comes from the fact that their spin state polarization is conceptually and effectively different [15; 16]. The measurement of non-prompt \(\mathrm{J/\psi}\) can also provide direct determination of the nuclear modification of beauty hadrons.
In experiments, \(\mathrm{J/\psi}\) is reconstructed through its electromagnetic decay to lepton pairs, in either \(e^{+}+e^{-}\) or \(\mu^{+}+\mu^{-}\) decay channels. By reconstructing the invariant mass spectra of these lepton pairs (\(m_{ee}\) or \(m_{\mu\mu}\)), one can extract the signal for inclusive \(\mathrm{J/\psi}\) by fitting a suitable signal function and subtracting the background continuum. Usually, a Crystal Ball Function [17] is used as the signal function. To further estimate the non-prompt contribution in the inclusive \(\mathrm{J/\psi}\) signal, one has to rely
on the non-prompt production topology. As the beauty hadrons undergo weak decay, the resulting J/\(\psi\) will originate from a decay vertex that is displaced from the primary interaction vertex. For this, the pseudoproper decay length (\(c\tau\)) of the candidate is estimated, which is given in Eq. 2. The \(c\tau\) Probability Density Functions (p.d.f.) for the prompt (\(F_{\rm prompt}(c\tau)\)) and non-prompt (\(F_{\rm B}(c\tau)\)) production can be obtained from Monte Carlo separately. By using an unbinned 2-dimensional likelihood fit as described in detail in Refs. [8, 12], the ratio of the non-prompt to inclusive J/\(\psi\) production (\(f_{\rm B}\)) can be estimated, which can be used to calculate the non-prompt and prompt production cross-sections (\(\sigma_{\rm J/\psi}\)), as given below.
\[\sigma_{\rm non\text{-}prompt\;J/\psi}=f_{\rm B}\cdot\sigma_{\rm J/\psi},\qquad\sigma_{\rm prompt\;J/\psi}=(1-f_{\rm B})\cdot\sigma_{\rm J/\psi}\tag{1}\]
Machine learning techniques are in use in the field of nuclear and particle physics over the last couple of decades [18, 19]. Recently, with the advancement of superior hardware and smart algorithms, it has gained its rightful popularity in the big data community. By construction, machine learning is trained to learn the mapping from the input features to the output class. The algorithm helps to learn the correlations between the input and output by optimizing the model parameters on the training data. This is practically useful when the mapping function is not trivial, or sometimes it can not be defined. In such cases, machine learning helps to do the mapping in a faster and more efficient manner, without compromising the quality of the result. The successful application of machine learning techniques in collider experiments is well proven by now. It has been used to tackle many varieties of problems. Some of them include the impact parameter estimation [20, 21, 22, 23, 24], particle identification and track reconstruction [25, 26, 27], jet tagging [28, 29, 30, 31], anisotropic flow measurements [32, 33, 34], etc. Interested readers may refer to some of the recent reviews on machine learning in high energy physics [35, 36, 37, 38]. In this work, for the first time, machine learning techniques are implemented to separate the prompt and non-prompt dimuon pairs from the background to obtain a better identification of the J/\(\psi\) signal for different production modes. The study has been performed in \(pp\) collisions at \(\sqrt{s}=7\) and 13 TeV simulated using PYTHIA8. Machine learning models such as XGBoost and LightGBM are explored. Some of the motivations of this work are as follows. This technique provides a faster and more efficient method to identify the inclusive, prompt, and non-prompt J/\(\psi\) signal than the conventional template fitting method discussed above. It can be applied to identify J/\(\psi\) meson in the entire range of transverse momentum (\(p_{\rm T}\)) and rapidity (\(y\)), thus allowing us to probe the production fraction (\(f_{\rm B}\)) of non-prompt J/\(\psi\) easily for very fine bins in \(p_{\rm T}\) and \(y\). This method has another advantage, as it can directly identify the dimuon pairs, hence it can tag them to one of the three sources, prompt, non-prompt, or background. This identification of the dimuon level tags can help in studying many aspects of charmonia and bottomonia production, which are almost impossible using conventional methods. One such application would be the effect of polarization on prompt and non-prompt J/\(\psi\) production. Apart from these motivations, the novelty of this work also lies in the fact that the attempt to separate prompt versus non-prompt production for J/\(\psi\) is never attempted before using the machine learning approach.
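As a sketch of the classification step, the snippet below trains a gradient-boosted decision-tree classifier (here XGBoost) to tag dimuon pairs as background, prompt, or non-prompt. The feature list and the randomly generated training table are placeholders standing in for the PYTHIA8-derived inputs of this work (e.g. dimuon invariant mass, \(p_{\rm T}\), rapidity, pseudoproper decay length), so the printed accuracy is meaningless here; with real simulated labels the models reach the accuracies quoted above.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder feature table standing in for PYTHIA8-derived dimuon observables,
# e.g. [m_mumu, pT, y, pseudoproper decay length]; labels: 0 = background,
# 1 = prompt J/psi, 2 = non-prompt J/psi.  Random numbers for illustration only.
rng = np.random.default_rng(42)
X = rng.normal(size=(10_000, 4))
y = rng.integers(0, 3, size=10_000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```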
The paper is organized as follows. It begins with a brief introduction in Sec. I. The methodology, including the data generation using PYTHIA8, and the description of the machine learning models, are described in Sec. II. The training, evaluation, and quality assurance of the models are discussed in Sec. III followed by the results and discussions in Sec. IV. Finally, the paper concludes by summarizing the findings in Sec. V.
## II Methodology
pQCD-based particle production, such as jets and charm and bottom hadrons, is well described by the PYTHIA8 Monte Carlo model. In the current work, we use the PYTHIA8 event generator to simulate the data sets required to train the machine learning models to identify the prompt and non-prompt dimuon signals from the background dimuon pairs. This section provides a brief description of PYTHIA8, along with the different models used in the study.
### Pythia8
PYTHIA is a pQCD-based Monte Carlo event generator used to generate ultra-relativistic \(pp\) collisions at RHIC and LHC collision energies. PYTHIA8 contains a library of soft and hard processes and models for initial- and final-state parton showers, multiple parton-parton interactions, beam remnants, string fragmentation, and particle decays [39, 40]. PYTHIA8 is an improved version of PYTHIA6 where \(2\to 2\) hard processes are implemented along with MPI-based scenarios to produce the charm and beauty hadrons. In this study, we have used the 4C-tune of PYTHIA8 (see Ref. [41] for details) version 8.308 to simulate 20 billion events with inelastic and non-diffractive components (HardQCD:all = on) of the total collision cross section in \(pp\) collisions at \(\sqrt{s}=13\) TeV and 1 billion minimum bias events in \(pp\) collisions at \(\sqrt{s}=7\) TeV. The simulation involves a \(p_{\rm T}\) cut-off of \(p_{\rm T}>0.5\) GeV/c (using PhaseSpace:pTHatMinDiverge available in PYTHIA) to avoid the divergence of QCD processes that may occur in the limit \(p_{\rm T}\to 0\). Since this study involves charm and beauty quark production, we have allowed all the charmonia and bottomonia production processes (using
"Charmonium:all=on" and "Bottomonium:all=on") in PYTHIA8. In addition, we have allowed the spread of the interaction vertex according to a simple Gaussian distribution (Beams:allowVertexSpread=on) where offset and sigma of the spread of the vertices in each of the cartesian axes are taken from Ref. [42], and are mentioned in Table 1. Here, \(V_{x}\), \(V_{y}\), and \(V_{z}\) are the beam interaction vertex distance from the global origin (0,0,0) in the \(x\), \(y\) and \(z\) directions, respectively. We have put an additional cut in the z-vertex, as \(|V_{z}|<10\) cm, to be consistent with the experiments. The produced J/\(\psi\) are allowed to decay in the dimuon channel only, _i.e._\(J/\psi\to\mu^{+}+\mu^{-}\) and all other decay modes of J/\(\psi\) are switched off.
Figure 1 shows the comparison of the transverse momentum spectra for inclusive, prompt, and non-prompt J/\(\psi\) from PYTHIA8 with the corresponding measurements reported by LHCb [43]. All the track cuts for muons and dimuon pairs are kept the same as reported in Ref. [43]. A factor of 0.47 is applied to the PYTHIA8 inclusive and prompt J/\(\psi\) yields, since PYTHIA8 overestimates the experimental data. PYTHIA8 follows the experimental trend of the \(p_{\mathrm{T}}\) spectra up to \(p_{\mathrm{T}}<6\) GeV/c and starts to deviate towards higher values of \(p_{\mathrm{T}}\). One can note that the yield of J/\(\psi\) from b-hadron decays is almost ten times lower than the prompt production; however, this difference in production yield between prompt and non-prompt J/\(\psi\) gets smaller towards high \(p_{\mathrm{T}}\). The overall trend produced by PYTHIA8 with the tunes and settings mentioned above is reasonable when compared to the experiment. The scaling factors are only applied in this plot to match the trend of the experimental data. For all other plots in this work, no such scaling is used and the results are taken directly from PYTHIA8.
### Machine learning models
Ultra-relativistic collisions at the LHC and RHIC produce complex and non-linear systems that demand powerful analysis techniques. Conventional analysis techniques may require substantial computational resources and yet provide results with significant uncertainties. On the other hand, with the advent of machine learning tools, one can extract insightful results from vast amounts of experimental data with ease and smaller uncertainty by learning the correlation between the input and target variables. In collider physics experiments, ML models can be exploited in many aspects. One of the complex problems in collider physics experiments is understanding the different underlying physical processes that contribute to particle production. However, the final-state particles sometimes carry distinct kinematic signatures that can help identify their production mechanism and parent particles. For example, in experiments, identifying prompt and non-prompt J/\(\psi\) mesons relies on the statistical separation method already described in Sec. I. However, using machine learning, one can train a model on some of the kinematic features of the decay daughters to easily reject the uncorrelated pairs and identify the signal and the source of the parent J/\(\psi\). Popular ML models for such tasks include gradient-boosted decision trees for regression and classification, owing to their simplicity, robustness, and efficiency in handling extensive data [44; 45]. The name gradient boosting refers to the combination of the gradient descent algorithm with the boosting method [45]. In this study, we apply gradient-boosted decision tree based ML techniques to segregate prompt and non-prompt from the uncorrelated dimuon pairs using the kinematics of all the final-state dimuon (\(\mu^{+}+\mu^{-}\)) pairs, as discussed below.
#### iii.2.1 XGBoost
XGBoost (XGB) [46] stands for Extreme Gradient Boosting, and it is one of the most popular and widely
\begin{table}
\begin{tabular}{|l|l|l|} \hline & mean (mm) & sigma (mm) \\ \hline \(V_{x}\) & -0.35 & 0.23 \\ \hline \(V_{y}\) & 1.63 & 0.27 \\ \hline \(V_{z}\) & -4.0 & 40.24 \\ \hline \end{tabular}
\end{table}
Table 1: Offset and sigma values of the primary interaction vertex from the origin.
Figure 1: Comparison of PYTHIA8 results for inclusive, prompt, and non-prompt production of the J/\(\psi\) meson with the experimental measurements [43] in _pp_ collisions at \(\sqrt{s}=13\) TeV. A constant multiplicative factor of 0.47, 0.47, and 1.0 is applied to the PYTHIA8 results for inclusive, prompt, and non-prompt production, respectively.
used ML algorithms due to its efficiency in handling large data sets and outstanding performance in classification and regression problems. It is an upgraded version of the gradient-boosting decision trees (GBDT). It has several enhancements, such as parallel computing and tree pruning, to speed up the training process. This lets it handle large datasets in a reasonable amount of time. XGB also provides a wide variety of hyperparameters that can be optimized for better model performance [47].
#### ii.2.2 LightGBM
Light Gradient Boosting Machine (LightGBM or LGBM) [48] is another enhanced version of the GBDT with improved speed and performance. Along with parallel computing, it uses a leaf-wise splitting of the tree rather than level-wise to increase the model's speed and reduce memory usage. Traditional level-wise splitting of a tree leads to the formation of unnecessary nodes that carry very little information; these nodes use up memory but do not contribute to the overall learning process. In contrast, splitting a tree leaf-wise reaches the most informative splits faster and thus reduces the number of nodes formed, making the training process faster [49].
## III Training and Evaluation
In this section, we discuss our machine-learning models in detail. We begin with the description of the inputs to the models, then preprocessing of the data set, and discuss the model architecture. Finally, we discuss the training and evaluation process with the required quality
Figure 3: Confusion matrices for XGB (top) and LGBM (bottom), showing the agreement and discrepancies between the true and predicted classes for prompt, non-prompt, and background dimuon pairs.
Figure 2: Learning curve (loss versus number of decision trees) for both training (blue) and validation (orange) for both XGB (top) and LGBM (bottom).
assurance figures.
### Input to the machine
The training of the ML models requires a data set with well-correlated input and target variables. Here, the invariant mass of the reconstructed dimuon pairs (\(m_{\mu\mu}\)) can significantly help in separating the uncorrelated background from the signal dimuons coming from the J/\(\psi\) meson. On the other hand, prompt and non-prompt production of J/\(\psi\) can have different production topologies. The production of the prompt J/\(\psi\) would be closer to the primary vertex, whereas the J/\(\psi\) formed from the weak decays of b-hadrons would have a displaced decay vertex with a finite decay length with respect to the primary interaction vertex. One such quantity that is used to differentiate the topological production of the J/\(\psi\) by taking the production vertex into account is the pseudoproper decay length defined in Eq. 2 below [50].
\[c\tau=\frac{c\;m_{\mathrm{J/\psi}}\;\vec{L}\cdot\vec{p_{\mathrm{T}}}}{|\vec{p_{ \mathrm{T}}}|^{2}}. \tag{2}\]
Here, \(\vec{L}\) is a vector pointing from the primary vertex to the J/\(\psi\) decay vertex. \(c\) is the velocity of light, \(m_{\mathrm{J/\psi}}\) is the mass of J/\(\psi\) meson taken from the Particle Data Group (PDG) [51]. For each dimuon pair, we require its invariant mass (\(m_{\mu\mu}\)), transverse momentum (\(p_{\mathrm{T},\mu\mu}\)), pseudorapidity (\(\eta_{\mu\mu}\)), and the pseudoproper decay length (\(c\tau\)) as the input to the models. All these inputs can be obtained in experiments as well. Now, following Eq. 2, we need the quantity \(\vec{L}\) from PYTHIA8, which is obtained using the method described below. One can calculate the secondary decay vertex for the dimuon pairs using the Eq. 3.
\[S_{x}=\frac{(t_{1}+x_{1}m_{1}/p_{x,1})-(t_{2}+x_{2}m_{2}/p_{x,2})}{m_{1}/p_{x,1} -m_{2}/p_{x,2}} \tag{3}\]
Here, \(S_{x}\) stands for the reconstructed secondary vertex in the \(x\)-direction for two particles with masses \(m_{1}\) and \(m_{2}\), which fly off from the secondary vertex to distances \(x_{1}\) and \(x_{2}\), in times \(t_{1}\) and \(t_{2}\), with momenta \(p_{x,1}\) and \(p_{x,2}\). Similar expressions hold for \(S_{y}\) and \(S_{z}\). After obtaining the secondary vertex coordinates, one can estimate \(\vec{L}=\vec{S}-\vec{V}\), consistent with the definition of \(\vec{L}\) as the vector pointing from the primary vertex to the decay vertex. Here, \(\vec{V}=(V_{x},V_{y},V_{z})\) are the primary vertex coordinates defined in Sec. II.1 and \(\vec{S}=(S_{x},S_{y},S_{z})\) is the secondary vertex position of the reconstructed dimuon pair, obtained using Eq. 3.
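As an illustration of Eqs. 2 and 3, a minimal numpy sketch is given below. It assumes natural units with \(c=1\), uses the PDG J/\(\psi\) mass of 3.0969 GeV/c\(^2\), and follows the definition of \(\vec{L}\) as the vector from the primary to the dimuon decay vertex; it is not the authors' analysis code.

```python
# Minimal sketch of Eqs. 2 and 3 (illustrative, not the authors' analysis code).
import numpy as np

M_JPSI = 3.0969  # GeV/c^2, PDG value

def secondary_vertex_coord(t, x, m, px):
    """Eq. 3 for one coordinate; t, x, m, px are length-2 arrays for the two muons."""
    a = m / px
    return ((t[0] + x[0] * a[0]) - (t[1] + x[1] * a[1])) / (a[0] - a[1])

def pseudoproper_decay_length(primary, secondary, pt_dimuon):
    """Eq. 2 with c = 1: ctau = m_JPsi * (L . pT) / |pT|^2, L from primary to decay vertex."""
    L = np.asarray(secondary, float) - np.asarray(primary, float)
    pt = np.asarray(pt_dimuon, float)          # (px, py) of the dimuon pair
    return M_JPSI * np.dot(L[:2], pt) / np.dot(pt, pt)
```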
The target labels for the prompt, non-prompt J/\(\psi\), and the background dimuon pairs are represented with the numeric tags as 0, 1, and 2, respectively. For the training of the model, the input features are obtained for the opposite sign dimuon pairs in the whole pseudorapidity and transverse momentum range in the minimum bias \(pp\) collisions at \(\sqrt{s}=13\) TeV using PYTHIA8.
### Preprocessing and training
Classification models need to be trained on a similar number of training instances for each of the output classes. These instances are referred to as training examples. Any imbalance in the examples during the training may bias the output towards the majority class. This is often regarded as the "class imbalance problem", where the model shows high accuracy just by predicting the majority class. In this study, the majority class is the background, followed by the prompt J/\(\psi\). The ratio of background:prompt:non-prompt is \(\approx\) 20:10:1. Thus, the models would be biased mainly towards the background data and would mostly misclassify the prompt and the non-prompt J/\(\psi\). To overcome this data sample imbalance, sampling techniques like undersampling and oversampling are used. Undersampling removes some instances of the majority class, while oversampling adds instances to the minority classes to balance the number of data points in each class. Nevertheless, a drawback of undersampling is that it leads to data loss, since instances from the majority class are discarded. Therefore, we prefer to balance the data sets by oversampling. A random oversampling technique from the _imblearn_ library [52] is implemented on the training set, wherein both minority classes (prompt and non-prompt) are resampled to match the majority class (background). We use 90% of the entire data for training and the remaining 10% for testing. Further, the resampling is per
Figure 4: Training importance scores (%) of pseudoproper decay length (\(c\tau\)), reconstructed dimuon mass (\(m_{\mu\mu}\)), transverse momentum (\(p_{\mathrm{T},\mu\mu}\)) and pseudorapidity (\(\eta_{\mu\mu}\)) for LGBM (orange), and XGB (blue).
formed on the training set, which solves the class imbalance issue, and then 10% of the data from the training sample is used as the validation set.
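A compact sketch of this preprocessing, using the random oversampler from _imblearn_ and scikit-learn splits, is shown below. The feature matrix `X` (columns \(m_{\mu\mu}\), \(p_{\mathrm{T},\mu\mu}\), \(\eta_{\mu\mu}\), \(c\tau\)), the label vector `y` (tags 0/1/2 as defined above), and the random seeds are illustrative assumptions, and the ordering of the validation split relative to the resampling follows the description above only approximately.

```python
# Sketch of the class-balancing and splitting steps (X and y are assumed inputs).
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import RandomOverSampler

# 90% training / 10% testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.10, random_state=0)

# random oversampling of the two minority classes on the training set only
X_train, y_train = RandomOverSampler(random_state=0).fit_resample(X_train, y_train)

# 10% of the (balanced) training sample kept aside for validation
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=0.10, random_state=0)
```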
Now, we proceed to define the model architecture and the training process. Model parameters such as the loss function, learning rate, sub-sample, number of trees, and maximum depth are tuned for each model. The best parameters are selected through a grid search method, which is listed in Table 2.
In Table 2, the learning rate is a hyperparameter that governs the pace with which the model learns and updates its weights. The subsample indicates the fraction of the data that the model will sample before growing trees, which occurs in every boosting iteration and prevents overfitting. Increasing the maximum depth would make the model more complex. Objective indicates the function that guides the training process, which quantifies the model's performance and reduces the prediction error. In both models, we have used _softmax_ objective for the multiclass classification, available as _'multi:softmax'_ and _'multiclass'_, for XGB and LGBM, respectively [47, 49]. The metric is the function that evaluates the model's performance in each training iteration. In both models, we have used the _logloss_ metric function for the multiclass classifications, the definition of which can be found in Refs. [47, 49]. All the other hyperparameters are kept as their default values for both models.
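With the Table 2 values, the two classifiers can be instantiated through their scikit-learn wrappers roughly as follows. This is a hedged sketch rather than the authors' training script: it assumes recent versions of xgboost and lightgbm, the variable names continue the preprocessing sketch above, and any parameter not listed in Table 2 is left at its library default.

```python
# Illustrative instantiation of the two models with the grid-searched parameters of Table 2.
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

xgb = XGBClassifier(objective="multi:softmax", eval_metric="mlogloss",
                    learning_rate=0.3, subsample=1.0, n_estimators=60, max_depth=3)
lgbm = LGBMClassifier(objective="multiclass", metric="multi_logloss",
                      learning_rate=0.1, subsample=1.0, n_estimators=60, max_depth=3)

# the eval_set records the loss per boosting iteration, i.e. the learning curves of Fig. 2
xgb.fit(X_tr, y_tr, eval_set=[(X_tr, y_tr), (X_val, y_val)], verbose=False)
lgbm.fit(X_tr, y_tr, eval_set=[(X_tr, y_tr), (X_val, y_val)])
```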
### Quality assurance
Figure 2 shows the learning curve for XGB (top) and LGBM (bottom) for both training and validation, i.e., the evolution of the loss as a function of the number of decision trees. For good training, the loss decreases with the increase in the number of decision trees and saturates at a particular loss value, indicating that the
\begin{table}
\begin{tabular}{|l|l|l|} \hline & XGB & LGBM \\ \hline Learning rate & 0.3 & 0.1 \\ \hline Sub-sample & 1.0 & 1.0 \\ \hline No. of trees & 60 & 60 \\ \hline Maximum depth & 3 & 3 \\ \hline Objective & _softmax_ & _softmax_ \\ \hline Metric & _mlogloss_ & _multi\_logloss_ \\ \hline \end{tabular}
\end{table}
Table 2: Parameters used in XGB and LGBM with corresponding values obtained through the grid search method.
Figure 5: The top panel shows the transverse momentum spectra for the inclusive, prompt and non-prompt J/\(\psi\) in _pp_ collisions at \(\sqrt{s}=13\) TeV measured in the midrapidity (\(|y|<0.9\)) and forward rapidity (\(2.5<y<4\)), and _pp_ collisions at \(\sqrt{s}=7\) TeV in the midrapidity (\(|y|<0.9\)) using PYTHIA8 along with the predictions from the XGB and LGBM models. The middle panel shows the ratio of XGB to PYTHIA8, and the bottom panel shows the ratio of LGBM to PYTHIA8.
Figure 6: Fraction of J/\(\psi\) produced from b-hadron decays (\(f_{\text{B}}\)) as a function of transverse momentum at the midrapidity in minimum bias _pp_ collisions at \(\sqrt{s}=13\) TeV using PYTHIA8 and the predictions from XGB and LGBM, compared with the corresponding experimental results from ALICE [50].
training can be stopped. Another essential training benchmark can be deduced by comparing the curves for training and validation. For reasonable training, the learning curves for training and validation should be close; a big difference between them can arise due to overfitting or underfitting. One can infer from Fig. 2 that the loss values for validation and training decrease with the increase in the number of trees and saturate at around 25 trees for XGB and at around 45 trees for LGBM. In addition, for both XGB and LGBM, the curves for validation and training lie on top of each other, indicating no overfitting by the models.
Another essential benchmark of the classification models can be inferred from the confusion matrix, sometimes called the error matrix. Each row of a typical confusion matrix represents the instances of a true class, while each column represents the instances of a predicted class. The confusion matrix as a whole shows how the model confuses the different classes. In Fig. 3, the normalized confusion matrix is shown for XGB and LGBM with the three output classes, _i.e._, prompt, non-prompt, and background. Both XGB and LGBM have similar predictions; the background and the non-prompt dimuon pairs are identified correctly with 100% accuracy; however, the models misidentify 2% of the dimuons coming from the prompt J/\(\psi\) as non-prompt dimuons. As the ratio of prompt to non-prompt is around 10:1, this discrepancy in the identification has little effect on the prompt yield, but it may enhance the non-prompt production yield. Initially, this 2% of prompt J/\(\psi\) misclassified as non-prompt was suspected to originate from indirect prompt production, _i.e._, decays from higher excited states of charmonia, since these might not be produced and decay exactly at the primary vertex and may therefore travel a finite pseudoproper decay length before decaying. This probable cause is discarded, as a similar prediction is obtained when using a data set containing only indirectly produced prompt J/\(\psi\). Thus, this misclassification error is inherent to the model itself.
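The row-normalized matrix of Fig. 3 can be reproduced from a trained model with scikit-learn; the snippet below continues the illustrative sketches above (the variables `xgb`, `X_test`, and `y_test` are those assumptions, not the authors' objects).

```python
# Row-normalized confusion matrix (true classes along rows), as displayed in Fig. 3.
from sklearn.metrics import confusion_matrix

class_names = ["prompt", "non-prompt", "background"]        # tags 0, 1, 2
cm = confusion_matrix(y_test, xgb.predict(X_test), normalize="true")
for name, row in zip(class_names, cm):
    print(f"{name:>10}: " + "  ".join(f"{v:.2f}" for v in row))
```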
Figure 4 shows the percentage importance score of each feature during training for both XGB and LGBM models. In the context of decision trees, the importance score for a feature is defined as the number of times the feature is used to split a node. The importance score shown in the figure indicates how useful or valuable each feature is during the construction of the boosted decision trees. As one can infer from the figure, the input features that carry the most information about the production species of the reconstructed dimuon pairs are \(m_{\mu\mu}\) and \(c\tau\), and hence, these are the crucial features for this classification task. In the LGBM model, the order of relative importance to the classification task is \(m_{\mu\mu}>c\tau>p_{\mathrm{T},\mu\mu}>\eta_{\mu\mu}\). In contrast, XGB requires only \(m_{\mu\mu}\) and \(c\tau\) to make a prediction, whereas the model discards the contribution of \(p_{\mathrm{T},\mu\mu}\) and \(\eta_{\mu\mu}\). Another aspect to learn from this figure is that for the same classification task, different models can learn from the same input features with different importance scores. However, for this classification task, \(m_{\mu\mu}\) and \(c\tau\) hold the highest importance scores in both models.
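Continuing the same sketch, the importance scores of Fig. 4 can be read off the fitted models as shown below; for XGBoost the split-count ("weight") importance type is selected to match the definition used here, while LightGBM counts splits by default. The feature names and their ordering are an assumption.

```python
# Illustrative extraction of the training importance scores plotted in Fig. 4.
features = ["m_mumu", "pT_mumu", "eta_mumu", "ctau"]    # assumed training column order

xgb_scores = xgb.get_booster().get_score(importance_type="weight")   # split counts per feature
lgbm_scores = dict(zip(features, lgbm.feature_importances_))          # split counts by default

for name, scores in [("XGB", xgb_scores), ("LGBM", lgbm_scores)]:
    total = sum(scores.values())
    print(name, {k: round(100.0 * v / total, 1) for k, v in scores.items()})
```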
## IV Results
Figure 5 shows the transverse momentum (\(p_{\mathrm{T}}\)) spectra for the inclusive, prompt and non-prompt J/\(\psi\) in minimum bias _pp_ collisions at \(\sqrt{s}=13\) TeV in midrapidity (\(|y|<0.9\)) and forward rapidity (\(2.5<y<4\)). Additionally, the \(p_{\mathrm{T}}\)-spectra for _pp_ collisions at \(\sqrt{s}=7\) TeV in midrapidity (\(|y|<0.9\)) are also added. These results include PYTHIA8 (true), and the predictions from both the trained models _i.e._ XGB and LGBM, which are trained with minimum bias _pp_ collisions at \(\sqrt{s}=13\) TeV data. Here, J/\(\psi\rightarrow\mu^{+}+\mu^{-}\) channel is used to reconstruct the \(p_{\mathrm{T}}\)-spectra. At first glance, one notices that the J/\(\psi\) produced from the b-hadron decays have a significantly lower yield in the low-\(p_{\mathrm{T}}\) region than the prompt J/\(\psi\). However, this difference in their production yield tends to decrease as one moves towards high-\(p_{\mathrm{T}}\). These observations using PYTHIA8 are consistent with the experimental measurements [50; 53; 54]. It is seen that both the machine learning models, XGB and LGBM, can accurately identify the inclusive and prompt dimuons originating from J/\(\psi\), and thus, their predictions for the \(p_{\mathrm{T}}\)-spectra match well with the results obtained from PYTHIA8 (true). However, some discrepancy arises when both XGB and LGBM models try to identify the dimuon pairs coming from the non-prompt J/\(\psi\). Both models consistently overestimate the yield of non-prompt J/\(\psi\). The predictions from the LGBM model are slightly worse at low-\(p_{\mathrm{T}}\) for the midrapidity case as compared to the XGB model, whereas in the intermediate to high-\(p_{\mathrm{T}}\), both the models are fairly comparable in accuracy. As discussed earlier in the description of Fig. 3, this overestimation of the yield of the non-prompt J/\(\psi\) predicted by both the models is a direct consequence of the misidentification of the dimuons coming from the prompt J/\(\psi\) as the non-prompt dimuons.
In addition, both XGB and LGBM models are found to be robust for the energy dependence predictions of inclusive, prompt, and non-prompt J/\(\psi\)\(p_{\mathrm{T}}\)-spectra as seen in Fig. 5 for _pp_ collisions at \(\sqrt{s}=7\) TeV. It is important to note that the models are trained with \(\sqrt{s}=13\) TeV data, while they can still make predictions for \(\sqrt{s}=7\) TeV. While XGB retains its accuracy of prediction in the entire \(p_{\mathrm{T}}\) range for the inclusive and prompt J/\(\psi\) in _pp_ collisions at \(\sqrt{s}=7\) TeV, a similar discrepancy for the non-prompt case is observed in _pp_ collisions at \(\sqrt{s}=7\) TeV as seen in _pp_ collisions at \(\sqrt{s}=13\) TeV. On the other hand, although LGBM retains its accuracy for the inclusive and prompt J/\(\psi\), it starts to deviate much from the true values towards the lower transverse momentum regions. The success of the models in learning and predicting the energy dependence of inclusive, prompt, and
non-prompt production demonstrates the robustness and accuracy of the models. This could be attributed to the fact that most of its learning comes from the invariant mass and the pseudoproper decay length of the dimuon pairs, which are independent of the collision energy.
Figure 6 represents the fraction of J/\(\psi\) produced from b-hadron decays (\(f_{\rm B}\)) as a function of transverse momentum at midrapidity in minimum bias _pp_ collisions at \(\sqrt{s}=13\) TeV using PYTHIA8. The results are compared with the predictions from XGB and LGBM. The experimental data from ALICE [50] are added. Here, the trend of \(f_{\rm B}\) as a function of \(p_{\rm T}\) is similar to the experimental observations, where the value of \(f_{\rm B}\) is found to be increasing with \(p_{\rm T}\) in the range \(5.0\leq p_{\rm T}\leq 20.0\) GeV/c. It is seen that the value of \(f_{\rm B}\) remains almost flat and independent of \(p_{\rm T}\) in the \(p_{\rm T}<5.0\) GeV/c and \(p_{\rm T}>20.0\) GeV/c ranges. Using the machine learning models, we can directly identify the source of the dimuon pairs; hence, it becomes easy to estimate \(f_{\rm B}\) in very fine bins of \(p_{\rm T}\), which leads to this observation. As the production fraction of non-prompt J/\(\psi\) becomes larger at high \(p_{\rm T}\), it is natural that the difference in the \(p_{\rm T}\)-spectra between prompt and non-prompt J/\(\psi\) becomes smaller at high \(p_{\rm T}\), as seen in Fig. 5.
Figure 7 represents the rapidity spectra for inclusive, prompt, and non-prompt J/\(\psi\) in minimum bias _pp_ collisions at \(\sqrt{s}=13\) TeV and \(\sqrt{s}=7\) TeV using PYTHIA8, including the predictions from the XGB and LGBM models. The inclusive and prompt J/\(\psi\) are found to have a flat, rapidity-independent yield in the region \(|y|<2.5\), after which the yield starts to decrease. On the other hand, for the non-prompt case, the yield is independent of rapidity only over a smaller rapidity coverage, _i.e._, \(|y|<1.0\). These features of the rapidity spectra for the different production modes of J/\(\psi\) using PYTHIA8 are consistent with the experimental measurements reported in Ref. [50]. Interestingly, the predictions from both XGB and LGBM agree with the PYTHIA8 values for the inclusive and prompt J/\(\psi\), while the values for the non-prompt J/\(\psi\) are slightly overestimated. Such a study over a broad range of rapidity is meant to demonstrate the usefulness and validity of the machine learning models used. However, an experimental measurement involving muons is not practical at midrapidity for a multi-purpose experiment dealing with particle identification, like ALICE at the LHC and STAR at RHIC.
One can observe that the magnitude of the disagreement between the XGB and LGBM predictions for the non-prompt J/\(\psi\) yield and the true values from the simulation is similar to that for the \(p_{\rm T}\)-spectra shown in Fig. 5 for both collision energies. For the case of non-prompt J/\(\psi\), the yield ratio of XGB to PYTHIA8 is almost constant with a value of 1.3; however, the yield ratio of LGBM to PYTHIA8 is slightly higher at midrapidity and decreases slowly while moving to forward rapidity. These observations are similar for both collision energies.
We suspect these discrepancies in the prediction for the non-prompt J/\(\psi\) are due to the same misidentification of prompt as non-prompt as discussed already in Section III.3. However, this discrepancy in the values for the prompt and non-prompt J/\(\psi\) can be fixed by considering the magnitude of mispredictions in Fig. 3. This is discussed in detail in the Appendix (Sec. VI).
Figure 8 depicts the normalized \(p_{\rm T}\)-integrated J/\(\psi\) yield for the inclusive, prompt, and non-prompt J/\(\psi\) as
Figure 7: Top panel shows the rapidity spectra for inclusive, prompt, and non-prompt production of J/\(\psi\) in minimum bias _pp_ collisions at \(\sqrt{s}=13\) TeV (left) and \(\sqrt{s}=7\) TeV (right) using PYTHIA8 and including the predictions from XGB and LGBM models. The middle panel shows the ratio of XGB to PYTHIA8, and the bottom panel shows the ratio of LGBM to PYTHIA8.
a function of normalized charged particle density at mid-pseudorapidity using PYTHIA8 which includes the predictions from XGB and LGBM models for _pp_ collisions at \(\sqrt{s}=13\) TeV and \(\sqrt{s}=7\) TeV. Figure 8 also includes the ALICE data comparison for inclusive J/\(\psi\) yield (measured in the di-electron channel at the mid-rapidity) in _pp_ collisions at \(\sqrt{s}=13\) TeV measured in the V0 region (multiplicity measurement), _i.e._, \(-3.7<\eta<-1.7\) and \(2.8<\eta<5.1\)[55]. The normalized yields for inclusive, prompt, and non-prompt J/\(\psi\) from PYTHIA8 are found to increase with the increase in the normalized charged particle density for both the collision energies. The increase in yield is significantly enhanced for the non-prompt J/\(\psi\), which is consistent with the values reported in Ref. [56; 57]. While PYTHIA8 slightly overestimates the experimental data, it almost maintains the overall trend of the normalized yield for the inclusive J/\(\psi\). Towards higher multiplicities in the final state, J/\(\psi\) from b-decays show an increasing trend with non-linear behavior. The slopes of these multiplicity-dependent yields of inclusive, prompt and non-prompt J/\(\psi\) show energy dependence with higher slopes at higher collision energies. The predictions from XGB and LGBM give an overall good estimation for the PYTHIA8 while deviating around 10% towards the lower multiplicity for the non-prompt J/\(\psi\) cases for both collision energies.
## V Summary
In this work, an effort is made to disentangle the inclusive, prompt, and non-prompt J/\(\psi\) from the uncorrelated background dimuon pairs using machine learning tools. We use experimentally available inputs for the models. The J/\(\psi\) mesons are reconstructed in the \(\mu^{+}+\mu^{-}\) decay channel. For each dimuon pair, we require its invariant mass (\(m_{\mu\mu}\)), transverse momentum (\(p_{\rm T,\mu\mu}\)), pseudorapidity (\(\eta_{\mu\mu}\)), and pseudoproper decay length (\(c\tau\)) as the inputs to the models. We use the XGBoost and LightGBM models for this classification task. The training of the models is performed with minimum bias _pp_ collisions at \(\sqrt{s}=13\) TeV simulated with PYTHIA8. The predictions from both models are tested for _pp_ collisions at \(\sqrt{s}=13\) TeV and \(\sqrt{s}=7\) TeV. Both models show accuracies of up to 98%; however, they misidentify 2% of the prompt J/\(\psi\) as non-prompt. The transverse momentum (\(p_{\rm T}\)) and pseudorapidity (\(\eta\)) differential measurements of inclusive, prompt, and non-prompt J/\(\psi\), their multiplicity dependence, and the \(p_{\rm T}\) dependence of the non-prompt fraction (\(f_{\rm B}\)) are shown. These results are compared to experimental findings wherever possible.
This study presents a unique method to separate the production of prompt and non-prompt J/\(\psi\) from the uncorrelated background dimuon pairs. As the models do not involve any fitting of the \(p_{\rm T}\)-differential spectra, they can be applied to identify each dimuon pair separately, with any value of \(p_{\rm T}\) and in any rapidity range, and thus allow us to probe the production fraction \(f_{\rm B}\) of non
Figure 8: Top panel shows the normalized \(p_{\rm T}\)-integrated inclusive, prompt and non-prompt J/\(\psi\) yield as a function of normalized charged particle pseudorapidity density at the mid pseudorapidity region with multiplicity selection at the V0 region (V0M) for minimum bias _pp_ collisions at \(\sqrt{s}=13\) TeV (left) and \(\sqrt{s}=7\) TeV (right) using PYTHIA8 and includes the predictions from XGB and LGBM models, and comparison with experimental data measured at ALICE [55]. The middle panel shows the ratio of XGB to PYTHIA8, and the bottom panel shows the ratio of LGBM to PYTHIA8.
prompt J/\(\psi\) even in fine bins of \(p_{\rm T}\) and \(\eta\). The direct identification of dimuon pairs as prompt or non-prompt can help study many aspects of charmonia and bottomonia production, which are almost impossible using conventional methods. One such application would be the effect of polarization on prompt and non-prompt J/\(\psi\) production.
In addition, ALICE has reported the non-linearity in the normalized J/\(\psi\) yield at the midrapidity in the dielectron channel towards higher final state normalized multiplicity [58]. As seen in this present study, such behavior is an outcome of the non-prompt J/\(\psi\) both at the mid- and forward rapidities. The present method can be used in the experiments to separate prompt from non-prompt J/\(\psi\) and hence study the related production dynamics.
## Acknowledgements
S.P. acknowledges the financial support from UGC, the Government of India. The authors sincerely acknowledge the DAE-DST, Government of India funding under the Mega-Science Project - "Indian participation in the ALICE experiment at CERN" bearing Project No. SR/MF/PS-02/2021-IITI (E-37123). The authors gratefully acknowledge the usage of resources of the LHC grid Tier-3 computing facility at IIT Indore. The authors would like to thank Mr. Preet Bhanjan Pati, Master's student from IISER Tirupati for the initial exploration of this work, the preliminaries of which formed his master's thesis.
|
2303.05123
|
Dominating Set Database Selection for Visual Place Recognition
|
This paper presents an approach for creating a visual place recognition (VPR)
database for localization in indoor environments from RGBD scanning sequences.
The proposed approach is formulated as a minimization problem in terms of
dominating set algorithm for graph, constructed from spatial information, and
referred as DominatingSet. Our algorithm shows better scene coverage in
comparison to other methodologies that are used for database creation. Also, we
demonstrate that using DominatingSet, a database size could be up to 250-1400
times smaller than the original scanning sequence while maintaining a recall
rate of more than 80% on testing sequences. We evaluated our algorithm on
7-scenes and BundleFusion datasets and an additionally recorded sequence in a
highly repetitive office setting. In addition, the database selection can
produce weakly-supervised labels for fine-tuning neural place recognition
algorithms to particular settings, improving even more their accuracy. The
paper also presents a fully automated pipeline for VPR database creation from
RGBD scanning sequences, as well as a set of metrics for VPR database
evaluation. The code and released data are available on our web-page --
https://prime-slam.github.io/place-recognition-db/
|
Anastasiia Kornilova, Ivan Moskalenko, Timofei Pushkin, Fakhriddin Tojiboev, Rahim Tariverdizadeh, Gonzalo Ferrer
|
2023-03-09T09:12:21Z
|
http://arxiv.org/abs/2303.05123v3
|
# Dominating Set Database Selection for Visual Place Recognition
###### Abstract
This paper presents an approach for creating a visual place recognition (VPR) database for localization in indoor environments from RGBD scanning sequences. The proposed approach is formulated as a minimization problem in terms of a dominating set algorithm for a graph constructed from spatial information, and is referred to as _DominatingSet_. Our algorithm shows better scene coverage in comparison to other methodologies that are used for database creation. Also, we demonstrate that, using DominatingSet, a database can be up to 250-1400 times smaller than the original scanning sequence while maintaining a recall rate of more than 80% on testing sequences. We evaluated our algorithm on the 7-scenes and BundleFusion datasets and on an additionally recorded sequence in a highly repetitive office setting. In addition, the database selection can produce weakly-supervised labels for fine-tuning neural place recognition algorithms to particular settings, improving their accuracy even more.
The paper also presents a fully automated pipeline for VPR database creation from RGBD scanning sequences, as well as a set of metrics for VPR database evaluation. The code and released data are available on our web-page -- [https://prime-slam.github.io/place-recognition-db/](https://prime-slam.github.io/place-recognition-db/).
## I Introduction
Visual place recognition (VPR) is nowadays an essential component for solving the localization problem purely from image data. The applications are immense, and not only the Robotics community but also the Computer Vision community is actively investigating new approaches. One approach to a VPR system requires two components: (i) a database that contains a set of images and their corresponding 3D poses, and (ii) an algorithm that finds the most resembling image in the database for a query image and estimates its pose with respect to the database image. This paper studies VPR from the perspective of selecting a small subset of images (the database) from a sequential stream of sensor data such that it still yields an accurate VPR estimate.
Scanning the environment requires collecting visual data and its corresponding locations, which indoors is usually done with SLAM [1, 2] or Bundle Adjustment [3] algorithms. The problem that arises after the scanning is the choice of images to be included in the database, since the number of observations after the scanning is usually huge (thousands of observations and more) and highly redundant. The database should be small to make optimal use of computational and memory resources during localization. At the same time, the images in the database should cover the whole scene for correct VPR.
The VPR approach is especially reasonable indoors, where GNSS is not available and other global localization equipment, such as radio or Wi-Fi beacons, is difficult and expensive to maintain. In recent years, much attention has been devoted to developing robust place recognition algorithms using available benchmarks and datasets, such as
Mapillary [4], Nordland [5], Pittsburgh [6], Tokyo24/7 [7], RobotCar Seasons [8, 9].
The problem of database creation for VPR is solved implicitly in SLAM pipelines at the keyframe selection step. However, keyframe selection is steered by different criteria than creating an optimal database, as we propose here. In general, existing works do not address the resources required for the database or how the database's size and properties correlate with the quality of modern VPR algorithms.
In this work, we propose an approach for VPR database creation in indoor environments that employs RGBD information and chooses the optimal subset of images from the scanning sequence. To achieve this, we formulate a minimization problem for VPR databases and propose a solution based on the _dominating set_ algorithm [10] applied to a graph built from spatial information. A byproduct of this solution is the clustering of localized images around the selected ones, with applications to related tasks such as the automatic creation of weakly-supervised image sets of the same "place" for fine-tuning neural algorithms to a specific environment. We demonstrate that the database size can be 250-1400 times smaller than the size of its original scanning sequence, whereas the recall of the image retrieval task on test sequences remains above 80%.
The main contributions of the paper are as follows:
* formal definition of an optimal VPR database in terms of size and coverage and approach for its calculation;
* fully automated pipeline for VPR database creation from a RGBD scanning sequence released as a publicly available library;
* set of metrics for VPR database evaluation;
* fully automated end-to-end methodology from sequence scanning to VPR fine-tuning for a specific environment.
## II Related Work
VPR algorithms solve the problem of finding the best image match from a _database_ describing an environment given the so-called _query_ observation captured in the same environment. In general, this problem is known as _Information Retrieval_, and it is common in many fields, such as Natural Language Processing, Computer Vision or Robotics.
VPR requires a way to define a _descriptor_ of an image and a measure of the _similarity_ between a pair of descriptors (images). One of the classic approaches to solve this task requires the calculation of local image features that are then aggregated into a global image descriptor using bag of words [11], VLAD (vector of locally aggregated descriptors) [12, 13], or differentiable NetVLAD [14, 15]. Local image features can be calculated using handcrafted algorithms[16, 17, 18] or by employing learnable approaches for keypoint detection and description [19, 20, 21].
The calculation of this global description is an active topic of research, and related algorithms are capable of achieving unprecedented performance. For instance, Hloc [22] learns to simultaneously predict global and local features or CosPlace [23] provides a learned global descriptor without an intermediate step on local feature aggregation. We will make use of state-of-the-art global image descriptors for VPR as a tool. Arguably, it is only after the powerful methods for VPR that the size of the dataset can be drastically reduced, as we are proposing in this paper.
The development of existing VPR approaches is mainly focused on urban outdoor environments and datasets [4, 5, 6, 7, 8, 9]. The main reason is the availability of training data: in the outdoor scenarios reference poses can be directly obtained using GNSS technologies and even relatively huge pose errors (up to a meter) from the GNSS sensor can provide good enough associations to be used for building correspondences between images. In this case, a database and training correspondences are already provided by the creators of such datasets.
There are a couple of works targeting an indoor visual place recognition and the creation of a database for it. RISE [24] shows a way of using a 3D laser to build associations between database and query images using a spatial intersection of point clouds, and it also provides a custom dataset. Liu et al. [25] build a database of images for visual-magnetic localization by splitting building space into 60x60 cm cubes and capturing a database image and random query images for each cube.
Map database creation is also indirectly tackled in SLAM pipelines, where it is referred to as _keyframe selection_; the keyframes are used as a basis for SLAM-graph optimization and loop closure. ORB-SLAM [1] and its sequels [26, 27] use bag-of-words with local features and define an image as a keyframe when it observes a sufficiently large number of new local features with respect to the map. BundleFusion [28] considers the data stream as chunks with an equal number of RGBD images and takes one keyframe per chunk. Das et al. [29] provide two approaches based on image entropy with respect to a map in order to estimate whether those keyframes will improve the map or not. Alonso et al. [30] exploit image quality criteria based on blurriness and brightness, and criteria for semantic content based on the CNN MiniNet. Sheng et al. [31] propose joint learning of keyframe detection and visual odometry. iMap [32] and NICE-SLAM [33] use the depth overlap with respect to the map in order to measure the amount of new information that a keyframe can bring.
## III Methodology
The general scheme of the proposed methodology for creating an optimal database for VPR is depicted in Fig. 1. In the first step, the environment is scanned using RGBD sensors. Then, a 3D map of the environment is estimated using the approaches available for this task, for instance, RGBD ORB-SLAM [1]. In the next step, the 3D map is split into voxels, and the spatial overlap for every pair of images is estimated via the intersection between the voxel sets of the images. This information allows us to build a graph that encodes the connections between the images based on the calculated overlap. Finally, an optimal database for VPR is taken as a dominating set of this graph. Optionally, this methodology allows splitting the remaining images from the
scanning sequence into database classes for VPR algorithm training/fine-tuning.
### _Problem formulation for optimal database_
Suppose we have a set of color images that scan the environment \(C=\{c_{1},\ldots,c_{N}\}\). Let \(s(c_{i},c_{j})\in[0,1]\) be an _overlap measure_ between two color images \(c_{i}\) and \(c_{j}\) which defines how the view scopes of the images intersect. The _Coverage loss_ function \(f(\cdot)\) for a subset of color images \(\widetilde{C}\subset C\) counts how many images are left uncovered by this subset:
\[f_{C}(\widetilde{C})=\sum_{c\in C}\begin{cases}0&\text{if }\exists\widetilde{c} \in\widetilde{C}\text{ s.t. }s(c,\widetilde{c})>\mu\\ 1&\text{else}\end{cases} \tag{1}\]
where \(\mu\) is an overlap threshold. We are interested in finding a subset \(\widetilde{C}\) of minimal size that covers all frames in \(C\):
\[C_{db}=\min_{\begin{subarray}{c}\widetilde{C}\subset C\\ f_{C}(\widetilde{C})=0\end{subarray}}|\widetilde{C}|. \tag{2}\]
In this case, one can define \(C_{db}\) as an **optimal image database** that is a reduced representation of color information from the environment digitization.
When the overlap measure is symmetrical, i.e. \(s(c,\widetilde{c})=s(\widetilde{c},c)\), this formulation resembles a _minimum dominating set problem_ for a graph where the images are vertices that are connected if the overlap measure is greater than the given threshold. The original formulation of dominating set problem aims to find a minimal subset of graph vertices so that every vertex is either in the set or is connected to a vertex from it.
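As a concrete illustration of this formulation (not the authors' implementation), the graph construction and the database selection can be expressed with networkx, whose `dominating_set` routine returns a valid, though not necessarily minimum, dominating set. The `overlap` function and the threshold `mu` stand for a symmetric measure such as the voxel IoU introduced in Eq. 5 below.

```python
# Sketch of the database selection as a dominating-set problem (illustrative only).
import itertools
import networkx as nx

def select_database(frames, overlap, mu):
    """frames: any per-image representation; overlap: symmetric measure in [0, 1]."""
    G = nx.Graph()
    G.add_nodes_from(range(len(frames)))
    for i, j in itertools.combinations(range(len(frames)), 2):
        if overlap(frames[i], frames[j]) > mu:
            G.add_edge(i, j)
    # every frame is then either in the returned set or adjacent to a member of it,
    # so the coverage loss of Eq. 1 evaluates to zero for this subset
    return nx.dominating_set(G)
```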
### _Spatial overlap measure_
In our approach, to estimate an overlap between two images we propose to use spatial information collected from the depth camera. An alternative approach could be to use color information only, for example by using local features and matches between them. Unfortunately, this approach tends to generate incorrect edges in cases when different locations have similar textures and patterns (visual aliasing).
Since depth information is available for every color frame, we can build a 3D map of the environment by using the image poses. Then, the 3D-map can be viewed as a set of voxels \(V=\{v_{1},..,v_{M}\}\). The sequence of color images \(C\) is associated with the voxels \(D=\{d_{1},\ldots,d_{N}\}\) where \(d_{i}=\{v_{i_{1}},..,v_{i_{k}}\}\) is a subset of voxels observed on frame \(c_{i}\). In this formulation, the overlap measure \(s(d_{i},d_{j})\) is defined based on the intersections of the voxel sets. Such measure provides a better estimation of overlap than when only the color information is considered. Finally, we are interested in finding the following subset \(\widetilde{D}\subset D\):
\[D_{db}=\min_{\begin{subarray}{c}\widetilde{D}\subset D\\ f_{D}(\widetilde{D})=0\end{subarray}}|\widetilde{D}|. \tag{3}\]
### _Overlap measure_
In order to define what is a good overlap measure let us consider examples depicted in Fig. 2. A database image is defined as good with respect to a query image when it covers a major part of the query image. Consequently, a database image is defined as bad when it covers a relatively small part of the query image. In general, this principle is not symmetric -- a query can occupy a minor part of a database image, but still this database image provides enough coverage for the query. So, formally this measure on voxel sets can be defined as:
\[s(d_{q},d_{db})=\frac{|d_{q}\cap d_{db}|}{|d_{q}|}, \tag{4}\]
where \(|\cdot|\) is set cardinality and \(d_{q}\) and \(d_{db}\) are voxel sets corresponding to the query and database images respectively.
As stated above, when the overlap measure is symmetric, the minimization problem can be solved via dominating set algorithms and the available solvers [34]. In our work, we propose to use the intersection over union (IoU) of voxel sets:
\[s_{IoU}(d_{q},d_{db})=\frac{|d_{q}\cap d_{db}|}{|d_{q}\cup d_{db}|} \tag{5}\]
This metric is symmetric and imposes stricter constraints on the graph, since:
\[s_{IoU}(d_{q},d_{db})\leq s(d_{q},d_{db}) \tag{6}\]
Stricter constraints result in fewer edges in the graph, which might increase the size of the database resulting from the dominating set solution in comparison to the original problem formulation.
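Both measures reduce to elementary set operations once each image is represented by the set of voxel indices it observes; a short sketch is given below (representing voxel sets as Python sets is an implementation assumption).

```python
# Eq. 4 (asymmetric query coverage) and Eq. 5 (symmetric IoU) on voxel-index sets.
def coverage(d_q: set, d_db: set) -> float:
    return len(d_q & d_db) / len(d_q)          # fraction of the query covered by the database image

def iou(d_q: set, d_db: set) -> float:
    return len(d_q & d_db) / len(d_q | d_db)   # symmetric, never larger than coverage()
```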
Fig. 2: Examples of different overlaps between a query and a database image interpreted as good or bad. _Left:_ the database image covers a major part of the query image. _Right:_ the query image is not well covered by the database image.
### _Graph processing_
To build such graph, we need to check an overlap for every pair of images. When the scanning sequence is huge, the intersection analysis between \(\frac{N(N-1)}{2}\) pairs of voxel sets results in heavy computational load. Also, there are a lot of pairs of sets that do not intersect with each other. To optimize this process, we propose the following algorithm:
1. associate every map voxel with indices of the frames that cover it;
2. for each voxel generate pairs of all frames that are associated with it;
3. for each pair count the number of voxels where the same pair occurs -- this value represents the amount of voxels in the corresponding intersection;
4. finally, calculate IoU for the constructed pairs using the values listed above.
This approach allows us to consider only those voxel sets that have intersections.
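A compact sketch of this four-step scheme is given below; it assumes each frame is represented by a set of voxel indices and is meant only to make the inverted-index idea concrete.

```python
# Pairwise IoU via an inverted voxel-to-frame index (only intersecting pairs are produced).
from collections import Counter, defaultdict
from itertools import combinations

def pairwise_iou(frame_voxels):
    """frame_voxels: list of voxel-index sets, one per frame."""
    voxel_to_frames = defaultdict(list)
    for f, voxels in enumerate(frame_voxels):            # step 1: invert the map
        for v in voxels:
            voxel_to_frames[v].append(f)
    inter = Counter()
    for frames in voxel_to_frames.values():              # steps 2-3: count co-occurrences
        inter.update(combinations(sorted(frames), 2))
    return {                                             # step 4: IoU from the counts
        (i, j): n / (len(frame_voxels[i]) + len(frame_voxels[j]) - n)
        for (i, j), n in inter.items()
    }
```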
## IV Experimental results
In this section, we compare the quality of the VPR database built using our method with strategies for database creation proposed in other works. The evaluation is performed on popular RGBD sequences from 7-scenes and BundleFusion datasets. Also, we provide our evaluation sequence (Skoltech campus) that scans an office environment with repeated similar patterns and objects. Additionally, we demonstrate the performance of state-of-the-art VPR algorithms adapted for the database generated with our methodology.
### _Datasets_
To estimate the quality of a database and of a VPR algorithm adapted to it, an evaluation dataset should contain more than one pass over the environment to resemble a real-world situation. Namely, it should be splittable into a "scanning" sequence, used for database creation and VPR algorithm adaptation, and a "test" sequence, on which the quality of place recognition is evaluated. To meet these requirements, we consider the two largest scenes from the 7-scenes dataset (RedKitchen and Office) [35, 36] and the two largest sequences from BundleFusion (office0, office1) [28]. For the 7-scenes dataset, the sequences marked as "train" in the original dataset are used as scanning sequences, while the others are used for testing. For BundleFusion, we manually split the sequences into a scanning route and a test route.
Also, we have recorded a set of sequences in the Skoltech campus using Azure Kinect DK sensor. This environment contains repeated structures with similar design of desks, walls and doors and therefore provides more challenging conditions for a VPR algorithm in the indoor environment. Some examples of visual ambiguity in this scene are depicted in Fig. 3. To build a 3D map, the trajectory produced by RGBD ORB-SLAM [1] is used. Since some sensors do not provide full depth coverage per color frame as presented in Fig. 4, we implement a mechanism to extend it using 3D map reprojection to the exact frame.
### _Database size and reduction rate_
Our algorithm for database creation has two hyper-parameters: voxel size for map voxelization and overlap threshold for graph processing. In our experiments, voxel size is equal to 0.3 m, which provides a reasonable approximation of the spatial properties of environments for indoors sequences captured by RGBD sensors. The considered overlap thresholds for IoU measure are [0.1, 0.3, 0.5]. In Tab. I we present statistics on the scanning sequence size and the database size for different overlap threshold values. Also, the table reflects the reduction rate and the percentage of spatial coverage by the database. Spatial coverage is measured as a percentage of map voxels occupied by voxels covered by the database.
The results demonstrate that the scanning sequence can be reduced by up to 250-1400 times using our approach. The reduction rate depends on how slowly and thoroughly the data from the environment are recorded. It can be noticed that the spatial coverage of some constructed databases is modest, covering 25-40% of the voxels. Still, the invariant used in our approach guarantees that the frames chosen for the database have enough overlap with the remaining images, which cover the otherwise untouched voxels.
### _Database overlap comparison with other strategies_
In our evaluation, we consider three other strategies used in previous works to create databases. The first one, _EveryNth_, takes every \(N\)th frame from the scanning sequence [5]. The second one, _CubeDivision_, splits the trajectory into cubes and takes one representative frame from the cube as a database image [24]. The third one is called _DistanceVector_, it splits the trajectory path into segments by distance and takes frames corresponding to the edges of these segments. To compare the environment coverage provided by a database, for each scene we generate databases using the described strategies. Hyper-parameters of the considered
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & & \multicolumn{3}{c}{Overlap threshold} \\ & & 0.1 & 0.3 & 0.5 \\ \cline{3-5} Dataset & Sequence size & \multicolumn{3}{c}{Database size} \\ & & \multicolumn{3}{c}{Reduction rate} \\ & & \multicolumn{3}{c}{Spatial coverage} \\ \hline \multirow{3}{*}{7-Scenes Office} & \multirow{3}{*}{4800} & 5 & 15 & 80 \\ & & x960 & x320 & x60 \\ & & 43\% & 56\% & 75\% \\ \hline \multirow{3}{*}{7-Scenes RedKitchen} & \multirow{3}{*}{5600} & 4 & 25 & 97 \\ & & x1400 & x224 & x58 \\ & & 24\% & 54\% & 68\% \\ \hline \multirow{3}{*}{BundleFusion office0} & \multirow{3}{*}{3547} & 14 & 33 & 94 \\ & & x253 & x108 & x38 \\ & & 61\% & 72\% & 82\% \\ \hline \multirow{3}{*}{BundleFusion office1} & \multirow{3}{*}{3622} & 12 & 30 & 86 \\ & & x301 & x120 & x42 \\ & & 47\% & 65\% & 75\% \\ \hline \multirow{3}{*}{Sk campus} & \multirow{3}{*}{6849} & 20 & 46 & 105 \\ & & x342 & x148 & x105 \\ \cline{1-1} & & 50\% & 65\% & 72\% \\ \hline \hline \end{tabular}
\end{table} TABLE I: Statistics on the scanning sequence size and the size of the resulting databases
methods are tuned to have the same amount of frames in each database.
To estimate how well the constructed database covers the images from the scanning sequence, we aggregate statistics on the maximum overlap between each frame from the scanning sequence and the database images. The measure defined in (4) is used to estimate the overlap. Statistics for the different ways of database creation are depicted in Fig. 5. First of all, it can be noticed that, for small-sized and middle-sized databases, all methods except _DominatingSet_ (ours) have frames with zero or very small overlap. It means that those databases do not cover some parts of the scene at all, which may result in errors during VPR operation. Secondly, although the median overlap for DominatingSet is in general lower than for the other methods, its lower bound is always higher. This demonstrates a more uniform distribution of database images over the scanning sequence.
### _Visual place recognition_
We also provide evaluations of state-of-the-art visual place recognition algorithms on databases generated with our methodology.
The following methods, which show top performance on popular VPR datasets, are considered: NetVLAD [15], CosPlace [23], a SuperGlue-based approach [37], and a combination of NetVLAD and SuperGlue. The SuperGlue approach counts the number of keypoint matches between a query image and each image in the database, and the image with the maximum number of matches is taken as the retrieval result. The combination of NetVLAD and SuperGlue takes the top-5 database predictions from NetVLAD and chooses among them the one with the maximum number of SuperGlue matches.
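The re-ranking logic of the NetVLAD + SuperGlue combination can be summarized as below. The helper functions `netvlad_descriptor` and `superglue_num_matches` are hypothetical placeholders for the respective model wrappers and are not part of any library API referenced in this paper.

```python
# Sketch of NetVLAD retrieval re-ranked by SuperGlue matches (placeholders, not real APIs).
import numpy as np

def retrieve(query_img, db_imgs, db_descriptors, k=5):
    q = netvlad_descriptor(query_img)                    # hypothetical global-descriptor wrapper
    top_k = np.argsort(np.linalg.norm(db_descriptors - q, axis=1))[:k]   # NetVLAD top-5
    # keep the candidate with the largest number of SuperGlue keypoint matches
    return max(top_k, key=lambda i: superglue_num_matches(query_img, db_imgs[i]))   # hypothetical
```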
For NetVLAD and CosPlace two types of models are considered during the evaluation: the original pre-trained models from the authors and models that we have fine-tuned
Fig. 4: Depth extension for Azure Kinect DK data. _Left, Middle:_ original data produced by the sensor (color and depth), where the original depth map does not cover the whole color frame — this can prevent a proper overlap estimation. _Right:_ depth extended via reprojection of the 3D map onto the color frame.
Fig. 5: Statistics on overlap between database images and images from scanning sequence for different database sizes.
Fig. 3: Examples of different scenes captured in Skoltech campus having similar structures. _Left:_ the map of the Skoltech campus sequence. _Right:_ pairs of images that are very similar visually but were captured in different locations.
for each database and scene. The following data preparation pipeline is used. We extract every fifth frame from the scanning sequence for the validation set and keep the remaining images for training. Since _DominatingSet_ provides labels of classes, the algorithm trains on this subset of data until an "early stop" happens.
All evaluations are performed on test sequences that were not included in the database creation pipeline or the VPR algorithm fine-tuning. As the evaluation metric, Recall@1 is used, the classical metric for the VPR task. Results are presented in Tab. II. In general, all methods provide high performance, comparable to state-of-the-art results for VPR on common benchmarks. The SuperGlue-based and NetVLAD-SuperGlue approaches both demonstrate superior performance, except in several cases on small-sized databases. In particular, SuperGlue performs poorly on the Skoltech campus sequence, where repetitive patterns might confuse local feature matching. Fine-tuned NetVLAD and CosPlace models give an improvement over the pre-trained models. This enhancement is essential on small-sized databases, where the fine-tuned models often outperform SuperGlue. The most promising results are achieved on databases with a high overlap threshold. In those cases, database images better describe local peculiarities of the scene and consequently have more semantic overlap with query images.
### _Performance_
Finally, the performance of the VPR approaches on databases created by DominatingSet has been estimated. For evaluation, a machine with the following characteristics was used: AMD Ryzen 7 4800H with Radeon Graphics (16) @ 2.900GHz, 16 GB RAM, NVIDIA GeForce GTX 1660 Ti Mobile. The BundleFusion office0 sequence was used since it has the median number of frames in the databases among all sequences. For all neural networks, we set the batch size to 1. For methods based on image-descriptor matching (CosPlace and NetVLAD), the database descriptors were precomputed. Performance statistics are presented in Tab. III. Methods based on global descriptors (CosPlace and NetVLAD) have almost equal processing time, which does not depend on the database size, since they only require computing a descriptor for the query image and then matching it against the precomputed database descriptors. SuperGlue, on the contrary, does not keep any precomputed information and must process the query image against each database image to estimate the number of feature matches. Its processing time therefore grows with the database size.
## V Conclusion
We have presented the _DominatingSet_ algorithm for selecting an optimal subset of images, the database, for VPR. To do so, our approach defines a measure of overlap between pairs of frames in a 3D voxel map and embeds this information along with spatial information into a graph. The reduction in the size of the database is substantial (more than 200 times), and it is easily controlled by a single hyper-parameter, the similarity coefficient.
We have evaluated our method together with other methods for selecting images into a database in the task of VPR, showing promising results in all five tested environments. As a byproduct, we have produced weakly-supervised labels of "places" for fine-tuning a neural place recognition algorithm, improving its accuracy further.
|
2301.04444
|
Entanglement properties of a quantum-dot biexciton cascade in a chiral
nanophotonic waveguide
|
We analyse the entanglement properties of deterministic path-entangled
photonic states generated by coupling the emission of a quantum-dot biexciton
cascade to a chiral nanophotonic waveguide, as implemented by {\O}stfeldt et
al. [PRX Quantum 3, 020363 (2022)]. We model the degree of entanglement through
the concurrence of the two-photon entangled state in the presence of realistic
experimental imperfections. The model accounts for imperfect chiral
emitter-photon interactions in the waveguide and the asymmetric coupling of the
exciton levels introduced by fine-structure splitting along with time-jitter in
the detection of photons. The analysis shows that the approach offers a
promising platform for deterministically generating entanglement in integrated
nanophotonic systems in the presence of realistic experimental imperfections.
|
Eva M. González-Ruiz, Freja T. Østfeldt, Ravitej Uppu, Peter Lodahl, Anders S. Sørensen
|
2023-01-11T13:01:35Z
|
http://arxiv.org/abs/2301.04444v2
|
# Entanglement properties of a quantum-dot biexciton cascade in a chiral nanophotonic waveguide
###### Abstract
We analyse the entanglement properties of deterministic path-entangled photonic states generated by coupling the emission of a quantum-dot biexciton cascade to a chiral nanophotonic waveguide, as implemented by Østfeldt et al. [PRX Quantum **3**, 020363 (2022)]. We model the degree of entanglement through the concurrence of the two-photon entangled state in the presence of realistic experimental imperfections. The model accounts for imperfect chiral emitter-photon interactions in the waveguide and the asymmetric coupling of the exciton levels introduced by fine-structure splitting along with time-jitter in the detection of photons. The analysis shows that the approach offers a promising platform for deterministically generating entanglement in integrated nanophotonic systems in the presence of realistic experimental imperfections.
## I Introduction
The generation of high-fidelity entanglement is key for the development of modern quantum technologies [1; 2]. Entangled states of photons have been widely generated probabilistically by employing spontaneous parametric down-conversion (SPDC) [3], but the probabilistic nature of this process is a major obstacle for scaling up to high photon numbers. The possibility of entanglement generation on demand is of utmost importance for a wide range of quantum information applications, such as measurement-based quantum computing [4; 5]. The biexciton cascade from quantum-dot (QD) photon sources has been investigated as an on-demand entanglement generator [6; 7; 8; 9]. The emitted states are, however, entangled in the polarisation degree of freedom, which is incompatible with implementations in integrated photonic circuits [10] that typically support only a single polarisation mode. This poses a challenge for future integration and scalability of quantum technologies [11] relying on biexciton-cascade entanglement sources.
A solution to the integration of the biexciton source into nanophotonic devices was presented in Ref. [12]. Here the photon emission from a cascaded-biexciton decay from InGaAs quantum dots was coupled to a chiral nanophotonic waveguide [13] (see Fig. 1). The polarisation-dependent directional emission enabled by chiral coupling of dipoles in these waveguides enables a promising route for on-chip, path-entangled photon generation. Two-photon excitation of the quantum dot prepares the system in the biexciton state \(|XX\rangle\) with energy \(\omega_{XX}+\omega_{X}\), which decays through two possible channels to the exciton levels \(|X_{\pm}\rangle\) (see Fig. 1(a)). In a homogeneous medium, the biexciton decays radiatively to one of the exciton levels, emitting a photon with either right (\(\sigma_{+}\)) or left (\(\sigma_{-}\)) circular polarisation. The two exciton levels, \(|X_{+}\rangle\) and \(|X_{-}\rangle\), are degenerate with energy \(\omega_{X}\) and decay to the ground state \(|g\rangle\) emitting photons with opposite circular polarisation to that emitted during the biexciton decay due to angular momentum conservation. The two emitted photons are thus entangled in polarisation as there is no information regarding which decay path the system followed. To turn this into a chip-compatible, path-entangled photon source, the QD is placed in a single-mode chiral photonic crystal waveguide which allows converting the polarisation of the transition dipole moment to the emission direction of the photon, i.e. \(\sigma_{-}\) dipoles emit to the left (path A) and \(\sigma_{+}\) dipoles emit to the right (path B). The polarisation-entangled state created by the biexciton cascade is thus translated into path encoding that can be used in integrated photonic circuits. Ref. [12] reported on experimental measurements of the dynamics by out-coupling the photons from the waveguide and frequency-filtering them in order to separate photons emitted on the biexciton and exciton transitions. The desired correlations were then measured through a Hanbury-Brown-Twiss (HBT) experiment [14], as shown in Fig. 1.
While an ideal QD that is precisely positioned at a chiral point could generate maximally entangled, path-encoded photon pairs, imperfections in the QD as well as in the chiral coupling could impact the degree of entanglement. In particular, intrinsic asymmetry of the QD could lead to coupling between the exciton states \(|X_{\pm}\rangle\) through a spin-flip oscillation with a frequency \(S\) that is known as the fine-structure splitting (FSS) of the QD. In this work we provide a full theoretical analysis of the entanglement properties of the path-entangled state accounting for all these imperfections. This analysis already successfully described the experimental findings in Ref. [12], but here we provide the full details of the theory and apply it to systematically analyse the impact of various errors on the degree of entanglement. In particular, the aforementioned FSS induces a frequency splitting of the exciton levels (see Fig. 1), which effectively creates
a time dependence of the entangled polarisation states. This can reduce the quality of entanglement when imperfect time detection of photons is taken into account. Moreover, since the photons emitted in the two different decay paths in Fig. 1(b) have different polarisations, the two paths may occur with different probabilities in photonic nanostructures, given by the polarisation dependent local density of states. These effects, together with imperfect chiral coupling to the waveguide, can reduce the amount of entanglement. The analysis and understanding of these effects will be important for further explorations of the biexciton cascade as an on-demand source of path-entangled photons in integrated quantum information platforms.
## II Analysis
We start our analysis by introducing the Hamiltonian of the system and a wavefunction ansatz for the state generated by means of the light-matter interaction with the QD. The state is then fully characterised by studying its evolution through Schrodinger's equation.
### The Hamiltonian and wavefunction ansatz
The biexciton level structure can be expressed in two different bases. In the linear polarisation basis (Fig. 1(b)), the emitted photons are linearly polarised (either horizontally or vertically, with \(\gamma_{x}\) and \(\gamma_{y}\) decay rates, respectively), while in the circular basis (Fig. 1(a)) the photons are circularly polarised (with right- and left-circularly polarised photons, and \(\gamma_{+}\) and \(\gamma_{-}\) decay rates, respectively). In the linear basis the two exciton levels have different energies, split by the FSS \(S\), while in the circular basis the levels are degenerate. In the latter basis, there is a time-dependent oscillation between the two exciton levels at a frequency \(S\).
The full system is described by the total Hamiltonian \(\hat{H}=\hat{H}_{0}+\hat{H}_{f}+\hat{H}_{int}\), which can be decomposed into the
Figure 1: Entanglement generation scheme and level structure of the quantum dot (QD). The QD (yellow semisphere) is placed in a chiral nanowaveguide, and is excited from above, perpendicularly to the nanostructure plane. The photons emitted by the QD can couple to the left (A path) or to the right (B path). The light is collected and frequency-filtered to separate between the biexciton (\(\omega_{XX}\)) and exciton (\(\omega_{X}\)) photons in order to measure the desired temporal correlations as a function of \(\tau\), the difference between the two emission times. The level structure of the QD can be expressed in two different bases: a _Circular basis_. The biexciton level \(|XX\rangle\), with energy \(\hbar\omega_{XX}\), emits two opposite circularly-polarised photons (circular right and left polarised with \(\gamma_{\pm}\) decay rates, respectively). In this picture, the two exciton levels \(|X_{+}\rangle\) and \(|X_{-}\rangle\) have the same energy \(\hbar\omega_{X}\), but are coupled at a frequency equal to the fine-structure splitting \(S\), which makes the state time-dependent. The exciton \(|X_{+}\rangle\) decays at rate \(\gamma_{-}^{\prime}\) to the ground state \(|g\rangle\), and so does \(|X_{-}\rangle\) at a rate \(\gamma_{+}^{\prime}\). b) _Linear basis_. The biexciton level \(|XX\rangle\) has the same energy as in the circular basis, but the two emitted photons have opposite linear polarisation (horizontal and vertical with \(\gamma_{x}\) and \(\gamma_{y}\) decay rates, respectively). The two exciton levels are no longer degenerate, but split into the exciton levels \(|X_{x}\rangle\) and \(|X_{y}\rangle\), that are stationary in time. The exciton level \(|X_{x}\rangle\) (\(|X_{y}\rangle\)) couples to the \(x\) (\(y\)) in-plane dipole component and has energy \(\omega_{x}+S/2\) (\(\omega_{x}-S/2\)). It decays to the ground state \(|g\rangle\) at a rate \(\gamma_{x}^{\prime}\) (\(\gamma_{y}^{\prime}\)).
free energy of the emitter \(\hat{H}_{0}\), the free field \(\hat{H}_{f}\), and the interaction \(\hat{H}_{\text{int}}\) Hamiltonians. These are given by
\[\begin{split}\hat{H}_{0}&=\hbar\left(\omega_{XX}+ \omega_{X}\right)\ket{XX}\bra{XX}\\ &+\hbar\left(\omega_{X}+\frac{S}{2}\right)\ket{X_{x}}\bra{X_{x} }+\hbar\left(\omega_{X}-\frac{S}{2}\right)\ket{X_{y}}\bra{X_{y}}\\ \hat{H}_{f}&=\hbar\int\left(\omega_{X,\mathbf{k}} \hat{a}_{\mathbf{k}}^{\dagger}\hat{a}_{\mathbf{k}}+\omega_{XX,\mathbf{k}}\hat{a }_{\mathbf{k}}^{\prime\dagger}\hat{a}_{\mathbf{k}}^{\prime}\right)d\mathbf{k} \\ \hat{H}_{\text{int}}&=-\frac{q}{m_{0}}\hat{\mathbf{A}} \cdot\hat{\mathbf{p}}\,,\end{split} \tag{1}\]
where we have chosen the Coulomb gauge with vector potential \(\mathbf{A}\). The QD is described by the coordinate \(\mathbf{r}\) with the conjugate variable or generalised momentum \(\mathbf{p}\), charge \(q\) and mass \(m_{0}\) [15]. Ideally, the energy of the biexciton (\(\ket{XX}\)) and exciton (\(\ket{X_{\alpha}}\) with \(\alpha=x,y\)) levels is given by \(\hbar\omega_{XX}\) and \(\hbar\omega_{X}\), respectively. The FSS \(S\), however, splits the exciton levels into \(\hbar(\omega_{X}\pm S/2)\) in the linear polarisation basis. Note that we express the total Hamiltonian in a linear polarisation basis as it simplifies the temporal dynamics of the system. The field annihilation operators \(\hat{a}_{\mathbf{k}}\) are momentum dependent, where \(\mathbf{k}\) expresses the corresponding wavevector, and the prime indicates whether it annihilates a biexciton (\(\hat{a}_{\mathbf{k}}^{\prime}\)) or an exciton (\(\hat{a}_{\mathbf{k}}\)) photon. The biexciton and exciton binding energies are assumed to be sufficiently different to treat them as two independent reservoirs. This assumption is motivated by the 2-3 meV energy splitting between the exciton and biexciton binding energies observed in QDs, which is over three orders of magnitude larger than the natural linewidths of these transitions [16].
To put the interaction Hamiltonian into a simpler form, the conjugate variable \(\mathbf{p}\) (proportional to the dipole operator) can be expressed in terms of the transition matrix elements \(\mathbf{\hat{p}}=\sum_{l,m}\bra{l}\hat{\mathbf{p}}\ket{m}\ket{l}\bra{m}\), where the indexes \(l\) and \(m\) represent the excited and ground states of the transition, respectively. This allows us to express the interaction Hamiltonian as
\[\hat{H}_{\text{int}}=\sum_{l,m,\mathbf{k}}\bra{l}\hat{\mathbf{p}}\ket{m}\cdot \mathbf{U}_{\mathbf{k}}(\mathbf{r})\hat{a}_{\mathbf{k}}\ket{l}\bra{m}+\text{H.c.}\,, \tag{2}\]
where \(\mathbf{U}_{\mathbf{k}}(\mathbf{r})\) is the mode function of the field. We consider that the field propagates in the waveguide along the \(x\) direction. Following Bloch's theorem we thus have \(\mathbf{U}_{k}(\mathbf{r})=\mathbf{e}_{k}(\mathbf{r})e^{ikx}\), where \(\mathbf{e}_{k}(\mathbf{r})\) is the Bloch function describing the electric field with wavenumber \(k\) at the QD position \(\mathbf{r}\). Moreover, we assume that the QD only interacts within a narrow frequency range around the resonance frequency, with wavenumbers \(\pm k_{0}\), yielding
\[\hat{H}_{\text{int}}=\sum_{\begin{subarray}{c}l,m\\ k\approx\pm k_{0}\end{subarray}}\bra{l}\hat{\mathbf{p}}\ket{m}\cdot\mathbf{e}_ {k}(\mathbf{r})e^{ikx}\hat{a}_{k}\ket{l}\bra{m}+\text{H.c.}\,, \tag{3}\]
where for brevity we have taken only the non primed annihilation operators, with the sign of \(k\) indicating whether the field propagates to the right (\(+k_{0}\)) or to the left (\(-k_{0}\)). By assuming the same wavenumber in both directions, we implicitly assume time-reversal symmetry for the propagation of the field in the waveguide (i.e. without the QDs). This is valid as long as we can e.g. neglect the intrinsic Faraday effect of the waveguide. Since waveguides are very broad band this is typically an excellent approximation and does not exclude any possible violation of time-reversal symmetry of the QD if an external magnetic field was applied.
The polarisation of the emitted light is determined by the symmetry of the states, which results in the following matrix elements for the dipole forbidden transitions in the linear polarisation basis
\[\begin{split}\bra{XX}\hat{p}_{x}\ket{X_{y}}&= \bra{XX}\hat{p}_{y}\ket{X_{x}}\\ &=\bra{X_{x}}\hat{p}_{y}\ket{g}=\bra{X_{y}}\hat{p}_{x}\ket{g}=0 \,,\end{split} \tag{4}\]
as the \(x\) (\(y\)) component of the dipole only couples to the horizontally (vertically) polarised light. Moreover, the allowed transitions from the exciton levels have a dipole moment defined as \(P\),
\[\bra{X_{x}}\hat{p}_{x}\ket{g}=\bra{X_{y}}\hat{p}_{y}\ket{g}=P\,, \tag{5}\]
whereas the two possible biexciton decay transitions are given by [15]
\[\bra{XX}\hat{p}_{x}\ket{X_{x}}=\bra{XX}\hat{p}_{y}\ket{X_{y}}=\sqrt{2}P\,. \tag{6}\]
We now insert these dipole transitions in the interaction Hamiltonian from Eq. (3) and calculate its Fourier transform. For now we only consider the modes propagating to the right, yielding
\[\begin{split}\hat{H}_{\text{int}}=-P\cdot\bigg{[}\sqrt{2}\bigg{(} \epsilon_{k_{0},x}(\mathbf{r})\ket{XX}\bra{X_{x}}+\epsilon_{k_{0},y}(\mathbf{r} )\ket{XX}\bra{X_{y}}\bigg{)}e^{ik_{0}x_{0}}\hat{a}_{B}(x_{0})\\ +\bigg{(}\epsilon_{k_{0}^{\prime},x}(\mathbf{r})\ket{X_{x}}\bra{g }+\epsilon_{k_{0}^{\prime},y}(\mathbf{r})\ket{X_{y}}\bra{g}\bigg{)}e^{ik_{0}x_ {0}}\hat{a}_{B}^{\prime}(x_{0})\bigg{]}+\text{H.c.},\end{split} \tag{7}\]
where the position-dependent annihilation operator \(\hat{a}_{n}(x)\) is defined as
\[\hat{a}_{n}(x)=\frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}\hat{a}_{n,\pm k}e^{i(k-k _{0})x}dk\,, \tag{8}\]
with \(n=B(A)\) denoting fields propagating to the right (left) and the sign being positive (negative) for path \(B\) (\(A\)) and \(x_{0}\) is the position of the emitter. We note that since we separate the annihilation operator into left and right propagating modes (\(A\) and \(B\)) the limit of the integration is \(k=0\). In practice, however, we only expect the annihilation operator to give a contribution for \(k\approx\pm k_{0}\). We can therefore extend the limit of integration to \(-\infty\) yielding the commutator
\[[\hat{a}_{n}(x),\hat{a}_{n^{\prime}}^{\dagger}(x^{\prime})]=\delta_{n,n^{ \prime}}\delta(x-x^{\prime})\,. \tag{9}\]
We further note that with the definition in Eq. (8) we make the convention that both left and right propagating fields are traveling towards positive \(x\), i.e. the direction of the \(x\)-axis is reversed for the left propagating modes.
To relate the coupling of the right propagating modes with the left propagating modes we again invoke time-reversal symmetry of the waveguide modes. If the local electric field \(\epsilon_{k_{0},x}(\mathbf{r})\) is a solution for the waveguide, then by time-reversal symmetry the solution for a wave propagating in the opposite direction is given by \(\epsilon_{-k_{0}}(\mathbf{r})=\epsilon_{k_{0}}^{*}(\mathbf{r})\). This allows us to obtain the full interaction Hamiltonian by combining Eq. (7) with the corresponding expression for back-propagating waves. This results in
\[\hat{H}_{\text{int}} =-\hbar\sum_{\alpha}\left[\left(g_{A,\alpha}\hat{a}_{A}(0)+g_{B, \alpha}\hat{a}_{B}(0)\right)\left|XX\right\rangle\left\langle X_{\alpha}\right|\right. \tag{10}\] \[\left.+\left(g^{\prime}_{A,\alpha}\hat{a}^{\prime}_{A}(0)+g^{ \prime}_{B,\alpha}\hat{a}^{\prime}_{B}(0)\right)\left|X_{\alpha}\right\rangle \left\langle g\right|+\text{H.c.}\right],\]
where we have set \(x_{0}=0\) for simplicity and defined the complex coupling constants \(g_{n,\alpha}=|g_{n,\alpha}|e^{i\phi_{n,\alpha}}\) and their phases in relation to the local electric field components \(\epsilon_{\pm k_{0},i}\) as
\[g_{A,x} =\sqrt{2}P\epsilon_{k_{0},x}^{*}(\mathbf{r}), g_{A,y} =\sqrt{2}P\epsilon_{k_{0},y}^{*}(\mathbf{r}), \tag{11}\] \[g_{B,x} =\sqrt{2}P\epsilon_{k_{0},x}(\mathbf{r}), g_{B,y} =\sqrt{2}P\epsilon_{k_{0},y}(\mathbf{r}),\] \[g^{\prime}_{A,x} =P\epsilon_{k_{0},x}^{*}(\mathbf{r}), g^{\prime}_{A,y} =P\epsilon_{k_{0},y}^{*}(\mathbf{r}),\] \[g^{\prime}_{B,x} =P\epsilon_{k_{0},x}(\mathbf{r})\,, g^{\prime}_{B,y} =P\epsilon_{k_{0},y}(\mathbf{r})\,.\]
The coupling constants in Eq. (11) describe the light-matter interaction between the QD and the waveguide field, including the chirality. In particular, their magnitude describes the coupling of a horizontally or vertically polarised photon (through the \(x\) and \(y\) components of the dipole, respectively) to the left or right path. From Eq. (11) we note that \(|g_{A,\alpha}|=|g_{B,\alpha}|\) for \(\alpha=x,y\), so that linearly polarized dipoles always have the same coupling constant and hence the same decay rate in both directions \(A\) and \(B\). This does not, however, exclude that circular dipoles can interact chirally and predominantly decay in one direction. The existence of such chiral interactions is encoded in the relative phase of the coupling constants. From Eq. (11) we find that the phase difference \(\Phi\) between the phases of the \(x\) and \(y\) components of the electric field is
\[\Phi\equiv\phi_{x}-\phi_{y}=\phi_{A,x}-\phi_{A,y}=-\left(\phi_{B,x}-\phi_{B,y }\right)\,, \tag{12}\]
and similarly for the exciton phase difference \(\Phi^{\prime}\). Consider now a circularly polarized state \(\left|X_{\pm}\right\rangle=(\left|X_{x}\right\rangle\pm i\left|X_{y}\right\rangle )/\sqrt{2}\). We can calculate the coupling constants \(g^{\prime}_{n,+}\) for the decay of these states into the \(n=A,B\) directions from the interaction Hamiltonian (10), yielding
\[g^{\prime}_{n,\pm}=\frac{1}{\sqrt{2}}(g^{\prime}_{n,x}\mp ig^{\prime}_{n,y})\,. \tag{13}\]
If \(|g^{\prime}_{n,x}|=|g^{\prime}_{n,y}|=g^{\prime}\), the decay rate of the circular states into the two directions will thus fulfill
\[\gamma^{\prime}_{A,\pm} \propto|g^{\prime}_{A,\pm}|^{2}={g^{\prime}}^{2}(1\pm\sin\Phi^{ \prime}) \tag{14}\] \[\gamma^{\prime}_{B,\pm} \propto|g^{\prime}_{B,\pm}|^{2}={g^{\prime}}^{2}(1\mp\sin\Phi^{ \prime})\,.\]
For \(\Phi^{\prime}=\pi/2\) the \(x\) and \(y\) components of the field in the waveguide are phase-shifted corresponding to circular polarisation. Furthermore, whether the waveguide mode is left- or right-hand circularly polarized is linked to the propagation direction of the light. As a consequence, the system exhibits perfect chiral coupling with the circularly polarized states coupling only to a single propagation direction, i.e. \(\gamma^{\prime}_{A,+}\neq 0\) and \(\gamma^{\prime}_{B,+}=0\), with the directions reversed for the opposite circular state. Complete absence of chirality occurs when \(\Phi^{\prime}=0\), where the field in the waveguide is linearly polarized. Thus, the parameters \(\Phi\) and \(\Phi^{\prime}\) represent the degree of chirality of the system, which we employ in the subsequent sections of this article.
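As an illustration of Eqs. (13)-(14), the short sketch below evaluates the directional decay rates of the circular exciton states for a given chirality phase; the common prefactor \(g'^2\) is set to one purely for illustration (an assumption, not a value from the text).

```python
import numpy as np

def directional_rates(phi_prime):
    """Decay rates of |X_+> into paths A and B, following Eq. (14) with g'^2 = 1."""
    gamma_A_plus = 1.0 + np.sin(phi_prime)   # emission into path A
    gamma_B_plus = 1.0 - np.sin(phi_prime)   # emission into path B
    return gamma_A_plus, gamma_B_plus

print(directional_rates(np.pi / 2))  # perfect chirality: (2.0, 0.0)
print(directional_rates(0.0))        # no chirality: (1.0, 1.0)
```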
To describe the emission into the waveguide, it is convenient to change the Hamiltonian into the position basis. While the Fourier transform of the free energy term in the total Hamiltonian (1) is itself, Fourier transforming the free field term yields
\[\hat{H}_{f}=\sum_{n}\bigg{[}i\hbar\int\bigg{(}v_{gXX}\frac{\partial \hat{a}_{n}^{\dagger}(x)}{\partial x}\hat{a}_{n}(x)\\ +v_{gX}\frac{\partial\hat{a}_{n}^{\prime\dagger}(x)}{\partial x}\hat{a} ^{\prime}_{n}(x)\bigg{)}dx\\ +\hbar\int\bigg{(}\omega_{X,k}\hat{a}_{n,k}^{\dagger}\hat{a}_{n,k}+ \omega_{XX,k}\hat{a}_{n,k}^{\prime\dagger}\hat{a}_{n,k}^{\prime}\bigg{)}dk \bigg{]}\,,\]
where the group velocities associated with the biexciton and exciton energy levels are given by \(v_{g,XX}=\partial\omega_{XX,k}/\partial k\) and \(v_{g,X}=\partial\omega_{X,k}/\partial k\), respectively. Note that these two group velocities could be different due to the dispersion of the waveguide at the different emission wavelengths of the exciton and biexciton levels.
We can now write a wavefunction ansatz for the total state of the system in the real space domain. The state should describe that up to two photons can be emitted by the biexciton decay and that they couple into the left- or right-propagating waveguide modes. Based on the methods from Ref. [17] (with similar methods being developed in Refs. [18; 19; 20]) we use the following ansatz:
\[\begin{split}|\psi(t)\rangle=e^{-i(\omega_{XX}+\omega_{X})t}& (c_{XX}(t)\ket{XX}\ket{\emptyset}+\sqrt{v_{gXX}}\sum_{\alpha,n}\int dt_{XX}\psi_{ \alpha,n}(t,t_{XX})\hat{a}_{n}^{\dagger}(v_{gXX}(t-t_{XX}))\ket{X_{\alpha}} \ket{\emptyset}\\ &+\sqrt{v_{gXX}v_{gX}}\sum_{n,m}\iint dt_{XX}dt_{X}\psi_{n,m}(t, t_{XX},t_{X})\hat{a}_{n}^{\dagger}(v_{gXX}(t-t_{XX}))\hat{a}_{m}^{\prime\dagger}(v_{gX} (t-t_{X}))\ket{g}\ket{\emptyset})\,,\end{split} \tag{16}\]
where \(t_{X}\) and \(t_{XX}\) are the two emission times with \(t_{XX}<t_{X}\). This state describes that with an amplitude \(c_{XX}(t)\) the system is in the biexciton state with the field being in the vacuum state \(\ket{\emptyset}\). Since the system is initially excited to this state we have \(c_{XX}(t=0)=1\). The amplitude \(\psi_{\alpha,n}(t,t_{XX})\) describes the state after the emission of a photon in the direction \(n=A,B\) at time \(t_{XX}\) by the decay into the exciton state \(\ket{X_{\alpha}}\). Since the photon propagates in the waveguide, this is associated with a photon at position \(x=v_{gXX}(t-t_{XX})\). As this state still evolves in time the amplitude has an explicit dependence on time \(t\) with the amplitude vanishing before the emission, \(\psi_{\alpha,n}(t,t_{XX})=0\) if \(t\leq t_{XX}\). Finally, after the emission of both photons, the system is in the ground state \(\ket{g}\) and the two photons are emitted in directions \(n,m\) with amplitude \(\psi_{n,m}(t,t_{XX},t_{X})\). This amplitude vanishes unless \(t\geq t_{X}\geq t_{XX}\). It should be noted that both for the left and right propagation directions in the waveguide \(x\in[0,\infty]\), i.e. the reference frame is placed such that in both directions \(x\) is positive after the QD.
### Solving the Schrodinger equation
The wavefunctions \(\ket{\psi(t)}\) from Eq. (16) should be calculated to describe the state. We thus apply Schrodinger's equation \(i\hbar\partial\ket{\psi}/\partial t=\hat{H}\ket{\psi}\) to the wavefunction ansatz using the space-domain Hamiltonian. Following the procedure from Ref. [17], we obtain the set of coupled differential equations:
\[\begin{split}\dot{c}_{XX}(t)&=-\frac{i}{\sqrt{v_{gXX}\hbar}}\sum_{\alpha,n}g_{\alpha,n}\psi_{\alpha,n}(t,t),\\ \dot{\psi}_{x,n}(t,t_{XX})&=\frac{iS}{2\hbar}\psi_{x,n}(t,t_{XX})-\frac{ig_{x,n}^{*}c_{XX}(t)}{\sqrt{v_{gXX}\hbar}}\delta(t-t_{XX})\\ &\quad-\frac{i}{\sqrt{v_{gX}\hbar}}\sum_{m}g_{x,n}^{\prime}\psi_{n,m}(t,t_{XX},t),\\ \dot{\psi}_{y,n}(t,t_{XX})&=-\frac{iS}{2\hbar}\psi_{y,n}(t,t_{XX})-\frac{ig_{y,n}^{*}c_{XX}(t)}{\sqrt{v_{gXX}\hbar}}\delta(t-t_{XX})\\ &\quad-\frac{i}{\sqrt{v_{gX}\hbar}}\sum_{m}g_{y,n}^{\prime}\psi_{n,m}(t,t_{XX},t),\\ \dot{\psi}_{n,m}(t,t_{XX},t_{X})&=-\frac{i}{\sqrt{v_{gX}\hbar}}\sum_{\alpha}g_{\alpha,n}^{\prime*}\psi_{\alpha,n}(t,t_{XX})\delta(t-t_{X})\,.\end{split} \tag{17}\]
We then apply the Laplace transform to the nine equations in Eq. (17), with the system initially prepared in the biexciton state (\(c_{XX}(t=0)=1\)). Inverting the Laplace transformation now yields
\[\begin{split}\dot{\psi}_{x,n}(t,t_{XX})&=-\frac{ig_{x,n}^{*}c_{XX}(t)}{\sqrt{v_{gXX}\hbar}}\delta(t-t_{XX})\\ &-\left(\frac{-iS+\gamma_{x}^{\prime}}{2\hbar}\right)\psi_{x,n}(t,t_{XX})-\frac{\Gamma}{2\hbar}\psi_{y,n}(t,t_{XX})\\ \dot{\psi}_{y,n}(t,t_{XX})&=-\frac{ig_{y,n}^{*}c_{XX}(t)}{\sqrt{v_{gXX}\hbar}}\delta(t-t_{XX})\\ &-\left(\frac{iS+\gamma_{y}^{\prime}}{2\hbar}\right)\psi_{y,n}(t,t_{XX})-\frac{\Gamma^{*}}{2\hbar}\psi_{x,n}(t,t_{XX})\,,\end{split} \tag{18}\]
with the spontaneous emission rates given by
\[\gamma_{\alpha}^{(\prime)}=\sum_{n}\gamma_{\alpha,n}^{(\prime)}\equiv\sum_{n} \frac{|g_{\alpha,n}^{(\prime)}|^{2}}{v_{gX}}\,. \tag{19}\]
A coupling between the \(\ket{X_{x}}\) and \(\ket{X_{y}}\) states mediated by the local electric field of the waveguide is captured by the cross terms with coupling coefficient
\[\Gamma=\frac{g_{A,x}^{\prime}g_{A,y}^{\prime*}+g_{B,x}^{\prime}g_{B,y}^{\prime*}}{v_{gX}}\,, \tag{20}\]
which is real due to time-reversal symmetry (11). This coupling is important if e.g. the local electric field in the waveguide is diagonally polarized, which leads to \(\Gamma=\gamma_{x}^{\prime}=\gamma_{y}^{\prime}\).
When solving the coupled set of differential equations (18) it is convenient to work in a basis that diagonalizes the dynamics, i.e. where the equations decouple. For a rotationally symmetric system, this is the case for any basis, but it is no longer the case once the symmetry is broken. The FSS is induced by the asymmetry of the QD and is assumed to be in the \(x\) and \(y\)-directions such that Eqs. (18) decouple in that basis. On the other hand, the local waveguide field may have a different orientation, which also breaks the symmetry and thus leads to a coupling between the equations, i.e. \(\Gamma\neq 0\). In practice, however, we typically have \(S\gg\Gamma\), e.g. in the experimental implementation in Ref. [12] the fine structure splitting \(S\) was an order of magnitude larger than the exciton emission rate (\(\gamma_{x}^{\prime}+\gamma_{y}^{\prime}\))/2. The coupling between the exciton levels (\(\ket{X_{x}}\) and \(\ket{X_{y}}\)) can therefore be neglected and we set \(\Gamma=0\). We note that this assumption
may lead to inconsistencies in the obtained results due to incorrect normalization of the state in QDs with small FSS, i.e., \(S\) comparable to \((\gamma_{x}^{\prime}+\gamma_{y}^{\prime})/2\). In the subsequent sections, we use \(S=4(\gamma_{x}^{\prime}+\gamma_{y}^{\prime})/2\), for which we find that the norm of the state differs from unity by less than 6%.
We now solve the two coupled differential equations from Eq. (18) by taking the aforementioned limit \(\Gamma=0\), such that the equations decouple. We can then straightforwardly solve them by again applying the Laplace transform, obtaining
\[\begin{split} c_{XX}(t)&=e^{-\frac{1}{2\hbar}(\gamma_{x}+\gamma_{y})t}\\ \psi_{x,n}(t,t_{XX})&=-i\sqrt{\gamma_{x,n}}e^{-\frac{1}{2\hbar}(\gamma_{x}+\gamma_{y})t_{XX}-\frac{1}{2\hbar}\left(\gamma_{x}^{\prime}+iS\right)(t-t_{XX})-i\phi_{x,n}}\theta(t-t_{XX})\\ \psi_{y,n}(t,t_{XX})&=-i\sqrt{\gamma_{y,n}}e^{-\frac{1}{2\hbar}(\gamma_{x}+\gamma_{y})t_{XX}-\frac{1}{2\hbar}\left(\gamma_{y}^{\prime}-iS\right)(t-t_{XX})-i\phi_{y,n}}\theta(t-t_{XX})\\ \psi_{n,m}(t,t_{XX},t_{X})&=-e^{-\frac{1}{2\hbar}(\gamma_{x}+\gamma_{y})t_{XX}}\Bigg{(}\sqrt{\gamma_{x,n}\gamma_{x,m}^{\prime}}e^{-\frac{1}{2\hbar}\left(\gamma_{x}^{\prime}+iS\right)(t_{X}-t_{XX})-i\left(\phi_{x,n}+\phi_{x,m}^{\prime}\right)}\\ &\qquad+\sqrt{\gamma_{y,n}\gamma_{y,m}^{\prime}}e^{-\frac{1}{2\hbar}\left(\gamma_{y}^{\prime}-iS\right)(t_{X}-t_{XX})-i\left(\phi_{y,n}+\phi_{y,m}^{\prime}\right)}\Bigg{)}\theta(t-t_{X})\theta(t_{X}-t_{XX})\,.\end{split} \tag{21}\]
We now calculate the probability of detecting two photons simultaneously at the output of the waveguide in order to analyse the quality of the entanglement. To do so we correlate the biexciton and exciton photons with a time delay \(\tau\) in two different settings: when both are coupled to the forward or back-propagating direction (noted as \(A_{X}A_{XX}\) and \(B_{X}B_{XX}\) respectively) and when they couple to opposite directions (\(A_{X}B_{XX}\) and \(B_{X}A_{XX}\)):
\[\begin{split}& P_{n,m}(t,t_{XX},t_{XX}+\tau)=\\ &v_{gXX}v_{gX}\left\langle\psi(t)\right|\hat{a}_{n}^{\dagger}(v_{gXX}t)\hat{a}_{n}(v_{gXX}t)\\ &\qquad\cdot\hat{a}_{m}^{\prime\dagger}\left(v_{gX}(t-\tau)\right)\hat{a}_{m}^{\prime}\left(v_{gX}(t-\tau)\right)|\psi(t)\rangle\;.\end{split} \tag{22}\]
With the wavefunction ansatz Eq. (16) and the results from Eq. (21) we obtain
\[P_{n,m}=|\psi_{n,m}(t,t_{XX},t_{XX}+\tau)|^{2}\\ =e^{-(\gamma_{x}+\gamma_{y})t_{XX}/\hbar}\bigg{[}\gamma_{x,n}\gamma_{x,m}^{\prime}e^{-\gamma_{x}^{\prime}\tau/\hbar}+\gamma_{y,n}\gamma_{y,m}^{\prime}e^{-\gamma_{y}^{\prime}\tau/\hbar}\\ +2\sqrt{\gamma_{x,n}\gamma_{y,n}\gamma_{x,m}^{\prime}\gamma_{y,m}^{\prime}}e^{-\frac{1}{2\hbar}(\gamma_{x}^{\prime}+\gamma_{y}^{\prime})\tau}\\ \cdot\cos\left(S\tau+(\phi_{x,n}-\phi_{y,n})+(\phi_{x,m}^{\prime}-\phi_{y,m}^{\prime})\right)\bigg{]}\\ \cdot\theta(t-t_{XX}-\tau)\,. \tag{23}\]
### Entanglement generation
The state produced by the biexciton cascade coupled to the chiral waveguide has two different degrees of freedom: the path followed (to the left, \(A\), or to the right, \(B\)) and the respective times of emission of the biexciton (\(t_{XX}\)) and exciton (\(t_{X}\)) photons. We project this state in time space by fixing the two times of detection \(t_{X}-t_{XX}\equiv\tau>0\). Note that the characteristics of the state produced depends only on the time difference \(\tau\).
From our wavefunction ansatz in Eq. (16), we post-select the two-photon emission terms by conditioning on detecting photons at times \(t=t_{XX}\) and \(t=t_{X}\), thus obtaining the state
\[\begin{split}|\psi(\tau)\rangle=\frac{1}{\sqrt{N}}& (\psi_{AA}(\tau)\ket{AA}+\psi_{AB}(\tau)\ket{AB}\\ &+\psi_{BA}(\tau)\ket{BA}+\psi_{BB}(\tau)\ket{BB})\,,\end{split} \tag{24}\]
where,
\[\begin{split} N=&|\psi_{AA}(\tau)|^{2}+|\psi_{AB}( \tau)|^{2}+|\psi_{BA}(\tau)|^{2}+|\psi_{BB}(\tau)|^{2}\,,\end{split} \tag{25}\]
is the normalisation factor. Note that we dropped the explicit subscripts for exciton \(X\) and biexciton \(XX\) photons on the direction index. Instead, we utilize time-ordered emission in the simplified notation, i.e. the subscript \(AB\) should be read as \(A_{XX}B_{X}\).
In general the two possible decay channels do not have the same spontaneous emission rates, i.e., \(\gamma_{x}\neq\gamma_{y}\), due to differences of the local electric field components in the waveguide. However, to achieve a high degree of chirality in the waveguide, the two exciton decay rates have to be similar, \(\gamma_{x}\approx\gamma_{y}\). This was also the case in the recent experiment in Ref. [12]. For most of the article we therefore set \(\gamma_{x}=\gamma_{y}\) and \(\gamma_{x}^{\prime}=\gamma_{y}^{\prime}\), but investigate the influence of differences in the rates in Sec. III.4. Moreover, the biexciton and exciton spontaneous emission rates are given by
\[\gamma_{x}+\gamma_{y}\equiv\gamma_{XX},\quad\gamma_{x}^{\prime}=\gamma_{y}^{ \prime}\equiv\gamma_{X}\,. \tag{26}\]
Since the biexciton decays twice as fast according to Eq. (6), if we assume identical group velocities we have
that \(\gamma_{X}=\gamma_{XX}/2\). In the rest of the article, we assume this relation between the spontaneous emission rates.
The difference in the phase of the transition dipoles for biexciton and exciton decays, \(\Phi\) and \(\Phi^{\prime}\) respectively, satisfies Eq. (12). Moreover as the optical wavelengths of the photons emitted from biexciton and the exciton decay channels are comparable, we can approximate the phase differences to be equal, i.e. \(\Phi=\Phi^{\prime}\). Under these assumptions, the total probability of detecting the first photon at time \(t=t_{XX}\) is
\[P(t=t_{XX})=(\gamma_{x}+\gamma_{y})e^{-(\gamma_{x}+\gamma_{y})t_ {XX}/\hbar}\\ =2\gamma_{X}e^{-2\gamma_{X}t_{XX}/\hbar}\,. \tag{27}\]
We can thus calculate the path-dependent, two-photon emission probabilities to be
\[P_{AA} =\frac{\gamma_{X}}{4}e^{-\gamma_{X}\tau/\hbar}\left(1+\cos\left( S\tau+2\Phi\right)\right) \tag{28}\] \[P_{BB} =\frac{\gamma_{X}}{4}e^{-\gamma_{X}\tau/\hbar}\left(1+\cos\left( S\tau-2\Phi\right)\right)\] \[P_{AB} =P_{BA}=\frac{\gamma_{X}}{4}e^{-\gamma_{X}\tau/\hbar}\left(1+\cos \left(S\tau\right)\right)\,.\]
A QD with \(S=0\) that is perfectly chiral coupled to the waveguide, i.e., \(\Phi=\Phi^{\prime}=\pi/2\), results in \(P_{AA}=P_{BB}=0\). In this case we thus have the ideal entangled state \((\left|AB\right\rangle+\left|BA\right\rangle)/\sqrt{2}\), where the emission direction of the two photons is perfectly anticorrelated, as shown with blue and orange lines in Fig. 2. Note that, for \(S=0\), our model can only accurately represent the perfect chiral coupling case and will lead to erroneous conclusions if \(\Phi\neq\pi/2\) since this leads to \(\Gamma\neq 0\). For the general case of \(S>0\), we can calculate the resulting entangled two-photon state by conditioning the solution in Eq. (21) on the detection of a photon at time \(t=t_{XX}\). For perfect chiral coupling the state is
\[\begin{split}\left|\psi(\tau)\right\rangle_{\Phi=\pi/2}& =\frac{1}{2}(\cos\left(\frac{S\tau}{2}\right)(\left|AB\right\rangle +\left|BA\right\rangle)\\ &\quad+i\sin\left(\frac{S\tau}{2}\right)(\left|AA\right\rangle+ \left|BB\right\rangle))\,,\end{split} \tag{29}\]
To understand the entanglement in this state we rewrite it as
\[\left|\psi(\tau)\right\rangle_{\Phi=\pi/2}=\frac{1}{2}(\left|A\right\rangle \left|\xi\right\rangle+\left|B\right\rangle\left|\xi^{\prime}\right\rangle)\,, \tag{30}\]
which is in fact a maximally entangled state, with \(\left|\xi\right\rangle=\cos\left(S\tau/2\right)\left|B\right\rangle+i\sin \left(S\tau/2\right)\left|A\right\rangle\) and \(\left|\xi^{\prime}\right\rangle=\cos\left(S\tau/2\right)\left|A\right\rangle+ i\sin\left(S\tau/2\right)\left|B\right\rangle\). For perfect chirality the entanglement is thus maximal regardless of the detection time, although the specific entangled state varies with the emission time, resulting in a time varying detection pattern in Fig. 2. In contrast, if the waveguide interaction is not chiral (\(\Phi=0,\pi\)) the state is given by
\[\begin{split}\left|\psi(\tau)\right\rangle_{\Phi=0,\pi}& =\frac{1}{2}\left(\left|AB\right\rangle+\left|BA\right\rangle+ \left|AA\right\rangle+\left|BB\right\rangle\right)\\ &=\frac{1}{2}\left(\left|A\right\rangle+\left|B\right\rangle \right)_{X}\left(\left|A\right\rangle+\left|B\right\rangle\right)_{XX}\,,\end{split} \tag{31}\]
which is a separable state. As a consequence all detection patterns of two photons are equally probable. In real experimental settings, the directional (chiral) coupling could lie in between these two extreme cases depending on the local electric field at the location of the QD within the waveguide. This imperfect chirality will lower the entanglement quality of the source, which is quantified in the next section.
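A small numerical sketch of the coincidence probabilities in Eq. (28) is given below; setting \(\hbar=1\) and measuring time in units of \(1/\gamma_X\) is an assumption made purely for illustration. It reproduces the qualitative behaviour shown in Fig. 2 and Fig. 3(a).

```python
import numpy as np

def two_photon_probabilities(tau, gamma_X=1.0, S=4.0, Phi=np.pi / 2):
    """Path-resolved coincidence probabilities of Eq. (28), with hbar = 1."""
    env = 0.25 * gamma_X * np.exp(-gamma_X * tau)
    P_AA = env * (1.0 + np.cos(S * tau + 2.0 * Phi))
    P_BB = env * (1.0 + np.cos(S * tau - 2.0 * Phi))
    P_AB = env * (1.0 + np.cos(S * tau))          # equals P_BA
    return P_AA, P_AB, P_BB

tau = np.linspace(0.0, 5.0, 501)                   # in units of 1/gamma_X
P_AA, P_AB, P_BB = two_photon_probabilities(tau)   # S = 4*gamma_X, perfect chirality
```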
## III Results
As we have seen in the previous section, the emitted two-photon entangled state depends on the time difference \(\tau\) between the biexciton and exciton emission times. We thus expect that any uncertainty in the emission times will affect the entanglement quality of the state. Moreover, imperfect chirality of the waveguide reduces the directionality of emission, thereby leading to non-perfect conversion into path encoding of the entangled state. In this section, we quantify the effect of imperfections on the entanglement quality of the state.
To this end, we employ the concurrence \(C\) as the entanglement measure to characterise the quality of the state. The concurrence of any quantum state with a density matrix \(\rho\) is given by [21]
\[C(\rho)=\max\{0,\lambda_{1}-\lambda_{2}-\lambda_{3}-\lambda_{4}\}\,, \tag{32}\]
where \(\{\lambda_{i}\}\) are the square roots of the eigenvalues of \(\rho\tilde{\rho}\)
Figure 2: Time correlations \(P_{AA}\) and \(P_{AB}\) as a function of the difference in the emission time \(\tau\). The blue and orange lines have been calculated with perfect symmetry between the exciton levels (\(S=0\)) while the yellow and purple lines have been obtained for \(S=4\gamma_{X}\). The waveguide coupling is perfectly chiral (\(\Phi=\pi/2\)) in all cases. _Inset_. Concurrence \(C\) of the state as a function of the difference in emission times \(\tau\). The concurrence remains unity at all times for any value of \(S\). This shows that the state is maximally entangled independently of the time \(\tau\), as also shown in Eq. (29).
in descending order and \(\tilde{\rho}=(\hat{\sigma}_{y}\otimes\hat{\sigma}_{y})\rho^{*}(\hat{\sigma}_{y} \otimes\hat{\sigma}_{y})\). We calculate the density matrix \(\rho\) that represents the path-encoded state obtained from the biexciton cascade to be
\[\rho(\tau)=\sum_{\begin{subarray}{c}n,n^{\prime}\\ m,m^{\prime}\end{subarray}}\psi_{n,m}(\tau)\psi_{n^{\prime},m^{\prime}}^{*}( \tau)\ket{n,m}\bra{n^{\prime},m^{\prime}}\,. \tag{33}\]
By calculating the resulting eigenvalues \(\{\lambda_{i}\}\), we obtain the concurrence using Eq. (32)
\[C(\tau)=\frac{2}{N}|\psi_{AA}(\tau)\psi_{BB}(\tau)-\psi_{AB}(\tau)\psi_{BA}( \tau)|\,. \tag{34}\]
Inserting the wavefunctions from Eq. (21) and approximating \(\gamma_{x}=\gamma_{y}\) as discussed earlier (cf. Eq. (26)), the dependence of \(C\) on the chiral phase \(\Phi\) and the time delay between biexciton and exciton emissions \(\tau\) is found to be
\[C(\Phi,\tau)=\frac{\sin^{2}\left(\Phi\right)}{1+\cos\left(S\tau\right)\cos^{2 }\left(\Phi\right)}\,. \tag{35}\]
We obtain perfect concurrence \(C=1\) when the waveguide is perfectly chiral (\(\Phi=\pi/2\)) as discussed above. Furthermore, if the waveguide is completely non-chiral (\(\Phi=0,\pi\)) the concurrence vanishes \(C=0\), agreeing with the separable state obtained in Eq. (31). In the following subsections we will independently analyse the effect of each of the imperfections in more detail.
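For completeness, the concurrence of Eq. (32) can be evaluated numerically for any two-qubit (path-encoded) density matrix as sketched below; this is a generic implementation of the standard definition, not code provided by the authors.

```python
import numpy as np

def concurrence(rho):
    """Concurrence of a 4x4 two-qubit density matrix, following Eq. (32)."""
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    flip = np.kron(sy, sy)
    rho_tilde = flip @ rho.conj() @ flip
    # lambda_i: square roots of the eigenvalues of rho * rho_tilde, in descending order
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# The ideal state (|AB> + |BA>)/sqrt(2) is maximally entangled:
psi = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2.0)
print(concurrence(np.outer(psi, psi.conj())))   # ~1.0
```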
### Fine-structure splitting
In this subsection, we analyse the effect of the FSS on the entanglement quality of the path-entangled state. Non-zero FSS leads to a spin-flip between the exciton levels (\(|X_{\pm}\rangle\)), and it is therefore convenient to describe the decay in the linear polarisation basis with \(x\)- and \(y\)-polarized states, \(|X_{x}\rangle\) and \(|X_{y}\rangle\) respectively (c.f. Fig. 1(b)). In this basis, the states are decoupled and the FSS-induced spin-flip frequency \(S\) corresponds to an energy splitting between the exciton levels. The splitting makes the emitted photons distinguishable in energy, and crucially their frequencies are correlated with their polarisations. This leads to "which-way" information about the polarisation state, which reduces the degree of entanglement. To overcome this issue Ref. [22] has proposed using electro-optical modulators that rotate the polarisation of the biexciton and exciton photons separately to effectively erase the information gained from the splitting in the polarisation-encoded state. A phase modulator could similarly be applied to improve path-entangled states. Alternatively, narrow spectral filtering in between the two frequency components of either the exciton or biexciton emission can be implemented to erase the "which-path" information, however, at the expense of significantly reducing the entanglement generation rate [7]. Another approach is to implement QDs with improved symmetry in order to obtain a smaller splitting \(S\) [23].
The reference situation corresponds to an ideal system without fine structure splitting and perfect directional
Figure 3: a) Time correlations \(P_{AA}\), \(P_{AB}\), \(P_{BA}\), \(P_{BB}\) as a function of the difference in the emission time \(\tau\) with a FSS of \(S=4\gamma_{X}\) and waveguide chirality of \(\Phi=\pi/3\). We observe that \(P_{AA}\) and \(P_{BB}\) are out of phase with each other. This may seem surprising, since the two directions are naively the same, but occurs due to an interplay between the imperfect chirality and the sign of the FSS (see text). This effect was experimentally observed in Ref. [12]. _Inset_. Concurrence \(C\) of the state as a function of the difference in emission times \(\tau\). The concurrence oscillates between \(0\) and \(1\), as the state evolves. b) Colour map of the concurrence \(C\) of the state as a function of the difference in phase \(\Phi\) and the difference in emission times \(\tau\). The concurrence oscillates in time for a given chirality, with the exception of perfect chirality (\(\Phi=\pi/2\) which gives \(C=1\)) and non-chiral waveguide coupling (\(\Phi=0,\pi\) which gives \(C=0\)). The red “\(\times\)”-markers indicate the points investigated in Figure 4(a).
(chiral) coupling (\(S=0\) and \(\Phi=\pi/2\)). This situation is easily understood from the level structure in Fig. 1(a), where emission occurs with two oppositely polarized circular dipoles (\(\sigma_{-}\) and \(\sigma_{+}\)). With perfect chiral coupling these decay in opposite directions, creating the maximally entangled state \((|AB\rangle+|BA\rangle)/\sqrt{2}\). As a consequence, the probability of detecting both photons on the same side of the waveguide vanishes (blue line in Fig. 2). The probability of detecting one photon at each of the opposite ends of the waveguide decays exponentially with the exciton spontaneous emission rate \((\gamma_{x}^{\prime}+\gamma_{y}^{\prime})/2\) (orange line in Fig. 2), as expected from the lifetime of the exciton states.
We now consider a scenario where the FSS creates an asymmetry between the exciton levels (\(S\neq 0\)), while the chiral coupling is still ideal (\(\Phi=\pi/2\)). This generates an oscillation between two maximally entangled states as discussed below Eq. (30). The corresponding probabilities of the various detection patterns are shown with the yellow and purple lines in Fig. 2. The amplitude of oscillations decays exponentially with the time constant set by the exciton spontaneous emission rate. As discussed in the previous section, although the emitted state changes over time, it remains maximally entangled, i.e., \(C(\tau\geq 0)=1\), and it is a superposition of standard Bell states.
### Imperfect chirality
We now analyse the joint effect of both imperfect chirality (\(\Phi\neq\pi/2\)) and non-zero FSS (\(S\neq 0\)). An example of the detection probability for this situation is shown in Fig. 3(a). Curiously, the probabilities \(P_{AA}\) and \(P_{BB}\) are out of phase, meaning that with a given time delay there is a difference in the probabilities of detecting two photons at the two ends of the waveguide. This effect happens due to an interplay of the imperfect chirality and the FSS. A decay from the biexciton state and subsequent detection of the photon at one end creates a coherent superposition between the two exciton states \(|X_{x}\rangle\) and \(|X_{y}\rangle\) with a phase \(\mp\Phi\) depending on where the photon was detected. The subsequent dynamics induced by the FSS \(S\) may then evolve the state towards or away from the relative phase \(\pm\Phi\), which gives the maximal emission in the same direction.
With non-perfect chirality, the concurrence \(C\) of the path-entangled, bi-photon state emitted by the biexciton cascade is reduced since the imperfect chirality limits the directional coupling of the QD emission. The dependence of \(C(\tau)\) on \(\Phi\) is shown in Fig. 3(b). We observe that \(C\) is independent of \(\tau\) only if \(\Phi=n\pi/2\), where \(n\) is a non-zero integer. If \(n\) is even, \(C(\tau\geq 0)=0\) and corresponds to the completely non-chiral case. If \(n\) is odd, we reproduce the results of the perfect chiral case that results in a maximally entangled state with \(C(\tau\geq 0)=1\) as discussed in the previous subsection. For partial chirality \(\Phi\neq\pi/2\), the FSS induces oscillations between non-maximally entangled states and \(C\) oscillates as a function of the detection time \(\tau\). In general, \(C\) is below unity except for \(S\tau=\pi\), where the concurrence is unity for all \(\Phi\neq 0,\pi\).
### Timing jitter
In this subsection we analyse the effect of uncertainty in the timing of photodetection events on the entanglement quality. We model the uncertainty in detection time by averaging the density matrix (33) elements \(\rho_{n,n^{\prime},m,m^{\prime}}\) with a Gaussian probability distribution with standard deviation \(\sigma\)
\[\begin{split}\bar{\rho}_{n,n^{\prime},m,m^{\prime}}(\tau)=\int_ {0}^{\infty}& d\tau^{\prime}\exp\left[-\frac{(\tau^{\prime}-\tau )^{2}}{2\sigma^{2}}\right]\\ &\times\psi_{n,m}(\tau^{\prime})\psi_{n^{\prime},m^{\prime}}^{*} (\tau^{\prime})\,.\end{split} \tag{36}\]
Figure 4: a) Concurrence \(C\) of the state as a function of the detection timing jitter, quantified by the Gaussian RMS width \(\sigma\), for several values of chirality \(\Phi\) and time difference \(\tau\) (the selected values are marked with red \(\times\)-symbols in Fig. 3(b)). b) Concurrence \(C\) of the state for different degree of chirality, quantified by the phase \(\Phi\), with a timing jitter of \(\sigma=0.3/\gamma_{X}\). The concurrence can be larger at negative than positive time intervals (grey shaded region) since such events effectively have less timing uncertainty than those with positive time intervals (see text). c) Corresponding probability density \(\bar{N}\) of the detection time for the situation analysed in (b). Note that the probability density quickly approaches zero for \(\tau<0\) (grey) in contrast to the increase in concurrence. Throughout the figure we assume a non-zero FSS of \(S=4\gamma_{X}\).
The time-averaged density matrix \(\bar{\rho}(\tau)\) is then given by
\[\bar{\rho}(\tau)=\frac{1}{N(\tau)}\begin{pmatrix}\bar{\rho}_{AAAA}(\tau)&\bar{ \rho}_{AAAB}(\tau)&\ldots&\bar{\rho}_{AABB}(\tau)\\ \bar{\rho}_{ABAA}(\tau)&\ddots&&\vdots\\ \vdots&&\ddots&\vdots\\ \bar{\rho}_{BBAA}(\tau)&\ldots&\ldots&\bar{\rho}_{BBBB}(\tau)\end{pmatrix}\,, \tag{37}\]
where \(\bar{N}=\int_{-\infty}^{\infty}d\tau^{\prime}\exp\bigl{[}-(\tau^{\prime}-\tau) ^{2}/(2\sigma^{2})\bigr{]}N\) is a normalisation constant equal to the probability density of the detection time and \(N\) is given by Eq. (25). From this density matrix we can then calculate the concurrence \(C\).
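The Gaussian smearing of Eqs. (36)-(37) can be carried out numerically as sketched below; here `amplitudes(tau)` is an assumed callable returning the four path amplitudes \(\psi_{AA},\psi_{AB},\psi_{BA},\psi_{BB}\) of Eq. (24) at a given true delay, and a simple Riemann sum replaces the integral.

```python
import numpy as np

def jitter_averaged_rho(amplitudes, tau, sigma, n_grid=2001):
    """Time-jitter-averaged 4x4 density matrix, following Eqs. (36)-(37)."""
    # the true delay tau' is non-negative, so the integration starts at 0
    taus = np.linspace(0.0, tau + 6.0 * sigma, n_grid)
    w = np.exp(-((taus - tau) ** 2) / (2.0 * sigma ** 2))
    amps = np.array([amplitudes(t) for t in taus])       # shape (n_grid, 4)
    rho = np.einsum('t,ti,tj->ij', w, amps, amps.conj()) * (taus[1] - taus[0])
    return rho / np.trace(rho).real                      # normalisation by N-bar
```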
Figure 4(a) shows the dependence of the concurrence \(C\) on the detection timing jitter \(\sigma\) at different combinations of chirality and time delay. As seen in the figure, the concurrence drops when the uncertainty in detection time becomes comparable to the oscillation period \(1/S\). This highlights the importance of keeping track of the time dependence for the quality of the final path-entangled state. Unlike the jitter-free case, even systems with perfect chirality (\(\Phi=\pi/2\)) exhibit \(C<1\) for non-zero values of \(\sigma\), since we do not know precisely which state we have. The asymptote of \(C\) with increasing time jitter is observed to depend only on the phase \(\Phi\): when the time jitter is comparable to or larger than the spread in emission time, the precise detection time is not important. Figure 4(b) shows the time evolution of \(C\) for a fixed timing jitter \(\sigma=0.3/\gamma_{X}\) for different values of the chiral phase and \(S=4\gamma_{X}\). Note that a peculiar effect occurs for \(\Phi=\pi/2\) (Fig. 4(b)), where we observe that \(C\) increases at negative time delays (grey shaded region). Since the emission of the exciton always occurs after the biexciton emission (\(t_{X}-t_{XX}=\tau>0\)), negative detection intervals (\(\tau<0\)) correspond to the case where the emission of the photon must have occurred close to \(\tau=0\), i.e. with minimal time delay, but was measured to be at a negative value due to the time jitter. Therefore the uncertainty in the emission time, which is otherwise given by the detection time jitter, is effectively reduced for negative detection times, leading to a higher concurrence. The probability of measuring the state at negative time intervals, however, decays very rapidly as \(\tau\) decreases, as shown in Fig. 4(c). The larger concurrence at small positive time delays (\(0<\tau\lesssim\sigma\)) compared to later times can be understood with similar arguments. On the other hand, for \(\Phi=\pi/4\) and \(\Phi=\pi/8\) the concurrence in the absence of time jitter is lower around \(\tau=0\) than at later times, c.f. Fig. 3(b). As a consequence the peak concurrence still occurs around \(S\tau=\pi\). The probability density in Fig. 4(c) decays with the decay rate \(\gamma_{X}\) of the exciton states. On top of this it oscillates with increasing amplitude as the system becomes less chiral (\(\Phi\to 0\)). The reason is that the polarisation of the waveguide modes becomes linear as the system loses chirality. After the decay of the biexciton, the polarisation of the exciton state rotates due to the FSS \(S\) and may thus be more or less aligned with the waveguide polarisation. In contrast, for the chiral case the waveguide polarisation is circular and the rotation of the polarisation does not affect the decay rate.
### Asymmetric exciton decay
In the experiments presented in Ref. [12], the decay rates of the \(x\) and \(y\)-polarized exciton levels were nearly identical (i.e. \(\gamma_{x}\approx\gamma_{y}\)). However, in general these two decay rates may differ depending on the position of the QD in the waveguide, with the asymmetry more dominant at locations with a low degree of directional emission, i.e., far from perfect chirality. In this subsection we analyse how this asymmetry can affect the quality of entanglement.
Figure 5(a,b) shows the impact of the asymmetry \(\epsilon\equiv(\gamma_{x}-\gamma_{y})/(\gamma_{x}+\gamma_{y})\) on the concurrence for the case of \(\epsilon=-0.4\). As the decay rates of the \(x\)- and the \(y\)-polarized exciton levels are different, one can gain "which path" information about the photon decay from the photodetection time, i.e. the highest decay rate would result in an increased likelihood of early detection of the photon, and vice versa. This extra information about the emission process reduces the entanglement. Furthermore, the difference in decay rates of the biexciton state creates a difference in populations of the \(|X_{x}\rangle\) and \(|X_{y}\rangle\) states. However, if the difference in the emission time is comparable to the difference in decay rates, the "which path" information arising from the asymmetric decay rates is erased and the entanglement is recovered. This interplay between the difference in emission times and the asymmetry \(\epsilon\) leads to an optimal time delay \(\tau\) that maximizes the concurrence, as observed in Fig. 5(a,b). In addition to this optimality, we still observe that \(C\) oscillates with emission time delay due to the non-zero \(S\), as discussed in Sec. III.A.
For a systematic study of the effect of asymmetry, we calculate the average concurrence \(\bar{C}\) over all detection times \(\tau\), defined as
\[\bar{C}=\int_{-\infty}^{\infty}P(\tau)C(\tau)d\tau\,, \tag{38}\]
where \(P(\tau)\) is the corresponding probability density of the state at time \(\tau\). The dependence of \(\bar{C}\) on the asymmetry parameter \(\epsilon\) and the phase difference \(\Phi\) is shown in Fig. 5(c), which highlights that \(\bar{C}\) is maximized for symmetric decay of the exciton dipoles, i.e. \(\epsilon=0\).
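The time-averaged concurrence of Eq. (38) can likewise be approximated numerically, assuming callables for \(P(\tau)\) and \(C(\tau)\) and that \(P(\tau)\) is negligible for \(\tau<0\):

```python
import numpy as np

def average_concurrence(P, C, t_max=20.0, n_grid=4001):
    """Probability-weighted average of C(tau) over the detection-time density P(tau)."""
    taus = np.linspace(0.0, t_max, n_grid)   # assumes P(tau) ~ 0 for tau < 0
    w = np.array([P(t) for t in taus])
    c = np.array([C(t) for t in taus])
    # dividing by the integral of P guards against an imperfectly normalised density
    return np.trapz(w * c, taus) / np.trapz(w, taus)
```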
## IV Conclusion
We have provided an in-depth analysis of the entanglement properties of a QD biexciton cascade embedded in a chiral nanophotonic waveguide, as experimentally realised in Ref. [12]. We have calculated how the biexciton cascade can deterministically prepare a path-encoded state mediated by the chiral-coupling of the waveguide.
The entanglement of the state is, however, affected by errors unavoidably present in the experimental implementation of the system. In particular, we have shown how the time dependence of the state induced by the FSS plays a crucial role in determining the generated entanglement. The amount of path-entanglement generated by the biexciton cascade can strongly depend on the emission time, while the presence of detection time jitter reduces the concurrence of the state. Finally, imperfect directional coupling in the waveguide reduces the concurrence of the path-encoded entangled state as well. Our work quantifies the role of such imperfections and lays out a route to a deterministic source of path-encoded entangled photons of high entanglement quality. We hope our work will motivate further experimental improvements of this novel entanglement source.
## V Acknowledgments
We acknowledge the support of Danmarks Grundforskningsfond (DNRF 139, Hy-Q Center for Hybrid Quantum Networks).
|
2310.04305
|
INVALS: An Efficient Forward Looking Inventory Allocation System
|
We design an Inventory Allocation System (INVALS) that, for each item-store
combination, plans the quantity to be allocated from a warehouse replenishing
multiple stores using trailers, while respecting the typical supply-chain
constraints. We formulate a linear objective function which when maximised
computes the allocation by considering not only the immediate store needs, but
also its future expected demand. Such forward-looking allocation significantly
improves the labour and trailer utilisation at the warehouse. To reduce
overstocking, we adapt from our objective to prioritise allocating those items
in excess which are sold faster at the stores, keeping the days of supply (DOS)
to a minimum. For the proposed formulation, which is an instance of Mixed
Integer Linear Programming (MILP), we present a scalable algorithm using the
concepts of submodularity and optimal transport theory by: (i) sequentially
adding trailers to stores based on maximum incremental gain, (ii) transforming
the resultant linear program (LP) instance to an instance of capacity
constrained optimal transport (COT), solvable using double entropic
regularization and incurring the same computational complexity as the Sinkhorn
algorithm. When compared against the planning engine that does the allocation
only for immediate store needs, INVALS increases on average the labour
utilization by 34.70 and item occupancy in trailers by 37.08. The DOS
distribution is also skewed to the left indicating that higher demand items are
allocated in excess, reducing the days they are stocked. We empirically
observed that for ~ 90% of replenishment cycles, the allocation results from
INVALS are identical to the globally optimal MILP solution.
|
Shiv Krishna Jaiswal, Karthik S. Gurumoorthy, Etika Agarwal, Shantala Manchenahally
|
2023-10-06T15:05:14Z
|
http://arxiv.org/abs/2310.04305v1
|
# INVALS: An Efficient Forward Looking Inventory Allocation System
###### Abstract
We design an **Inv**entory **Al**location **S**ystem (INVALS) that, for each item-store combination, plans the quantity to be allocated from a warehouse replenishing multiple stores using trailers, while respecting the typical supply-chain constraints. We formulate a linear objective function which, when maximised, computes the allocation by considering not only the immediate store needs, but also its future expected demand. Such forward-looking allocation significantly improves the labour and trailer utilization at the warehouse. To reduce over-stocking, we adapt our objective to prioritise allocating those items in excess which are sold faster at the stores, keeping the days of supply (DOS) to a minimum. For the proposed formulation, which is an instance of Mixed Integer Linear Programming (MILP), we present a scalable algorithm using the concepts of submodularity and optimal transport theory by: (i) sequentially adding trailers to stores based on maximum incremental gain, (ii) transforming the resultant linear program (LP) instance to an instance of capacity constrained optimal transport (COT), solvable using double entropic regularization and incurring the same computational complexity as the Sinkhorn algorithm. When compared against the planning engine that does the allocation only for immediate store needs, INVALS increases on average the labour utilization by \(34.70\) and item occupancy in trailers by \(37.08\). The DOS distribution is also skewed to the left, indicating that higher demand items are allocated in excess, reducing the days they are stocked. We empirically observed that for \(\approx 90\%\) of replenishment cycles, the allocation results from INVALS are identical to the globally optimal MILP solution.
_Index terms--_ inventory allocation, replenishment, linear program, submodularity, optimal transport
## 1 Introduction
A retail giant typically consists of thousands of stores dispersed over a vast geographic area, with each store offering tens of thousands of products. Each store receives its replenishment regularly from the warehouse to which it is mapped. For every (item, store) combination, the forecasting models predict the expected demand for multiple days into the future. Using these forecast values and the current inventory on hand, each store determines, for every item, the quantity it would require to meet the customer demand for different days in the future. This is the store demand/need sent to the warehouse, which, based on the requirements received from multiple stores, decides a plan to allocate the inventory, known as the _inventory allocation plan_. Once an allocation plan of items to stores is created, shipments from the warehouse are moved to stores with the help of trailers. Trailers are dispatched to stores only on certain days of the week based on the store's replenishment cycle, and a store may not receive trailers every day. The time between two consecutive trailer receipts is called the _coverage period_. The granularity of allocation is a box level known as _warehouse-packs_ (whpacks), each consisting of a fixed number of units of the same item. A stock-out at a store occurs if the inventory for a particular product at the store goes to zero. This is undesirable as it not only leads to lost sales but also to a poor customer experience. Holding costs are incurred by stores to maintain on-hand inventory of products. These comprise costs such as electricity for the store, refrigeration for food items, and storage area maintenance, and they increase with the amount of inventory kept on hand. As the warehouse has limited inventory and labour availability, allocation of items must be carefully done and prioritised to the stores requiring them the most in the current replenishment cycle.
The store need comprises two parts: (i) Coverage/immediate need: the quantity required for the current replenishment cycle that needs to be shipped to prevent stock-outs at the stores, and (ii) Pull-forward (PF) need: the need for the next few days after the coverage period. While it is not necessary to allocate items beyond the immediate need for the current replenishment cycle, doing so can prove beneficial on multiple counts. As an example, it is often seen that store demand for Mondays is generally low and peaks during weekends. If the inventory planned for Monday only caters to Monday's need, then the workforce available at the warehouse will be under-utilized during this planning cycle. As fewer items are loaded onto the trailer, its capacity utilization will be equivalently low, resulting in a higher per-item transportation cost. A contrasting picture emerges during the weekend planning, where the warehouse lacks the manpower to process the higher demand, leading at times to even the coverage need being unmet and resulting in stock-outs.
In this work, we argue that the planning system should be able to consider requirements from upcoming days while deciding the allocation for the current replenishment cycle, provided no business constraints are violated. We term this additional allocation from future demand as _Pull Forward_. As we show in our experiments, PF leads to higher labour usage and improved trailer utilization.
### Contributions
Following is a brief summary of our contributions.
1. Design of a new forward-looking inventory allocation system (INVALS), which decides to allocate items meeting future demand in addition to fulfilling the immediate store needs, for better and more efficient utilization of labour and trailers at the warehouse.
2. Representing INVALS through the lens of an optimisation framework which trades off allocating more items to stores against minimising the storage of excess inventory for long durations of time.
3. Development of a fast algorithm using the concepts of submodularity and optimal transport theory to solve the optimisation framework. We demonstrate that the resultant linear program (LP) instance can be transformed to an instance of capacity constrained optimal transport (COT) by the inclusion of multiple pseudo variables. This transformation process could be of independent interest and open a separate thread of research in identifying the structure of LP instances amenable to such reductions. This assumes significance given that LP formulations appear in a plethora of problems studied in operations research.
## 2 Replenishment Systems and Related Work
The replenishment algorithms generally studied in the literature identify the best replenishment strategy for a single item, to maintain proper inventory levels under different settings (Bosman et al., 2022). Some of the popular methods include: (i) _Reorder point_, where the current inventory level triggers the action to replenish the item, (ii) _Periodic_, where the inventory is replenished at specific time intervals at predetermined review points, (iii) _Actual Demand_, where the item is restocked as and when the demand arises, and (iv) _Top off_, where restocking is done to fixed levels during business lean time (Khan and Jain, 1999). Though our work is based on a periodic replenishment strategy, determining the allocation plan simultaneously for tens of thousands of products across multiple stores while satisfying a plethora of constraints is a novel problem, and to the best of our knowledge it has received very little attention. Approaches in (Martino et al., 2016) and the references therein are specific to the fashion retail industry, where the replenishment problem is to maximize profit, defined as the difference between revenue and incurred costs, while satisfying a whole-market demand that follows a uniform distribution, and subject to a budget constraint. Replenishment of multiple items is studied as a Joint Replenishment Problem (JRP) (Muriel, Chugh, and Prokle, 2022; Goyal and Satir, 1989; Khouja and Goyal, 2008; Peng, Wang, and Wang, 2022; Zhuang et al., 2023) when certain fixed costs are incurred regardless of the mix or quantities of the items ordered. The objective of these approaches is usually to minimize a total monetary cost while satisfying demand. In contrast, our work does not have an explicit monetary cost minimisation objective. We introduce a new concept of pulling forward items from future demand into the current replenishment cycle to maximize labour and trailer usage.
### Existing Replenishment System
The planning systems deployed in large retail chains are rule-based in nature, drafting an allocation plan that only focuses on the coverage need and considers only the inventory availability at the warehouse. With no visibility into labour or trailer usage, these allocation plans often turn out to be infeasible for ground operations to execute. The reasons could be a shortage of labour, exceeding the trailer capacity available to transport the allocated items, and in many cases severe under-utilization of trailer capacity. To counter this, multiple downstream systems update this plan, each making it feasible only with respect to a specific category. Operating sequentially based on rules, these systems lack visibility into the overall replenishment system and introduce their own quantum of sub-optimality to labour and trailer usage. Further, maintaining multiple such systems is inefficient and costly.
### Invals
INVALS is a forward-looking planning engine that creates the inventory allocation plan by looking not only at the immediate store needs for the current replenishment cycle, but also multiple days into the future, while optimizing for labour and trailer usage, reducing out-of-stock events, and minimizing excess inventory. By maximizing the tailor-made linear objective function (3.1) stated in Sec. 3.2 and incorporating the various business constraints described in Sec. 3.3, INVALS solves a unified problem leading to improved operational efficiencies.
The primary challenge in designing INVALS lies in formulating the right objective function, whose maximisation leads to higher labour and trailer usage by not only pulling forward items, but also ensuring that the coverage need for any store is always prioritised before allocating items for the future need of other stores, in order to reduce stock-outs. Allocating more items can have the adverse effect of increased inventory at the stores and the ensuing holding costs. To reduce these costs, our formulation should allocate only those surplus items to stores which will be sold faster, keeping the number of days the items last, known as days of supply (DOS), at the store to a minimum. The framework should have the flexibility to accommodate business constraints where trailers can be prioritised to stores with longer replenishment cycles, and also guarantee that trailers are loaded with a minimum quantity of items to justify their transportation cost. The second challenge is in developing a fast optimisation algorithm respecting the standard supply-chain constraints, the solution to which determines the inventory allocation plan.
## 3 Mathematical Formulation
### Notations
Let \(\mathcal{I}\) and \(\mathcal{J}\) denote the set of items and stores, respectively, indexed by \(i\) and \(j\). Items are often categorized into multiple categories, such as those that can and cannot be moved via a conveyor belt. We denote by \(\mathcal{L}\) the set of such available categories. For \(l\in\mathcal{L}\), \(\mathcal{I}_{l}\) represents the items belonging to channel \(l\), and \(H_{l}\) denotes the available labour capacity, in terms of the number of whpacks that can be processed per day at the warehouse, for items in \(\mathcal{I}_{l}\). We represent the trailer max-capacity and min-capacity by \(M\) and \(m\), the shelf-capacity of the \(i^{th}\) item in store \(j\) by \(C_{ij}\), and the maximum number of trailers that can be dispatched to store \(j\) by the integer \(R_{j}\). The coverage period is denoted by time \(t=0\), and the number of future-need time periods of the \(i^{th}\) item at the \(j^{th}\) store by \(t_{ij}\). Based on the store needs \(D_{ij}^{t}\) for each time \(t\) in \(\mathcal{T}_{ij}=\{0\}\cup\{1,2,\ldots t_{ij}\}\) for \(i\in\mathcal{I},j\in\mathcal{J}\), INVALS does the inventory allocation to satisfy the immediate need at \(t=0\) and performs PF by looking forward into the needs of the future \(t_{ij}\) days. Let \(S_{i}\) denote the available inventory of the \(i^{th}\) item at the warehouse, \(p_{j}\) represent the store priority, and \(q_{ij}^{t}\) indicate the item-store priority for sending the \(i^{th}\) item to the \(j^{th}\) store on the \(t^{th}\) day. The quantities \(p_{j}\) and \(q_{ij}^{t}\) are discussed below in Sec. 3.2. The hyper-parameters are denoted by \(\alpha[t],t\in\mathcal{T}_{ij}\), which is an exponentially decreasing function of \(t\), and large positive constants \(\beta\) and \(\gamma\) with \(\gamma>\beta\).
Our aim is to determine the variables: (i) \(\mathcal{D}=\{d_{ij}^{t}\}\), indicating the quantity of the \(i^{th}\) item allocated to the \(j^{th}\) store corresponding to the \(t^{th}\) day, and (ii) \(\mathcal{X}=\{x_{j}\}\), representing the number of trailers dispatched to each store \(j\). For simplicity, we introduce additional variables \(s_{ij}=\sum_{t\in\mathcal{T}_{ij}}d_{ij}^{t}\), \(s_{j}=\sum_{i\in\mathcal{I}}s_{ij}\), \(y_{j}=\min(1,x_{j})\), and \(b_{j}\), denoting respectively the allocation of item \(i\) to store \(j\) over all time periods, the total inventory allocation to store \(j\), an indicator of whether any trailer is assigned to store \(j\), and the extent of the trailer min-capacity violation in the last trailer dispatched to the \(j^{th}\) store. We define the set \(\mathcal{B}=\{b_{j}\}\) comprising the min-capacity breach slack variables.
### Utility Function
We propose to _maximize_ the following utility function:
\[Obj(\mathcal{D},\mathcal{X},\mathcal{B})=\left(\sum_{\begin{subarray}{c}i\in\mathcal{I},j\in\mathcal{J}\\ t\in\mathcal{T}_{ij}\end{subarray}}\alpha[t]q_{ij}^{t}d_{ij}^{t}\right)+\beta\left(\sum_{j\in\mathcal{J}}p_{j}x_{j}\right)-\gamma\left(\sum_{j\in\mathcal{J}}b_{j}\right) \tag{3.1}\]
subject to the various constraints described in Sec. 3.3. Our objective comprises three terms, namely: (i) the item-to-store allocation utility, (ii) the store prioritization utility, and (iii) the cost due to LTMC (less than min capacity) trailers, each described and motivated below.
The first term is the _allocation utility_, made up of three quantities: (i) the allocation (\(d_{ij}^{t}\)) of the \(i^{th}\) item to the \(j^{th}\) store for the \(t^{th}\) day, (ii) a quantification of the importance of the \(i^{th}\) item to the \(j^{th}\) store at time \(t\) (\(q_{ij}^{t}\)), based on how fast the item is sold at the store, as explained in Sec. 5, and (iii) the exponentially decreasing function \(\alpha[t]\) to prioritise allocating the immediate need of any store over the future need of others. Without this function, the plan could result in surplus allocation of an item (quantity of the item more than the coverage-period need) at one store and a stock-out (quantity of the item less than the coverage-period need) of the same item at another store.
The middle term is the _store prioritization utility_. As trailers are dispatched to stores only on specific days of the week, not sending items to stores with a longer coverage period can impact the business more than not sending to those which receive trailers more frequently. It is also important from a business perspective that stores opened in the recent past are given more priority. To model this, we introduce the store prioritization term \(p_{j}\in\mathbb{N}\), representing how critical it is for the store to receive a trailer as INVALS is planning the item allocation. The higher the value of \(p_{j}\), the greater the importance of assigning trailers to the store. We control the impact of this term with a parameter \(\beta>0\).
The last term is the _Less Than Min Capacity (LTMC)_ trailer cost, appearing with a negative sign. INVALS encourages maximal utilization of the trailer capacity. However, due to scarcity of labour and inventory at the warehouse, or insufficient need from a store, trailers may not be filled completely, resulting in a higher transportation cost per item. Hence we desire a minimum shipment quantity \(m\) that should be loaded on each trailer assigned to a store. Nevertheless, at times it may still be necessary for the trailer to be dispatched with fewer than \(m\) shipments for a smooth customer buying experience at the store. To avoid infeasibility in the formulation, we introduce a slack variable \(b_{j}\) to represent the min-capacity breach of the last trailer headed towards the \(j^{th}\) store. By defining a parameter \(\gamma\), we control the extent to which the min-capacity breach can be allowed. Setting \(\gamma=\infty\) is equivalent to disallowing min-capacity breaches, and any other non-negative \(\gamma\) implies that the trailer min-capacity can be breached provided the total benefit of allocating inventory using this trailer is at least \(\gamma\).
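To make the roles of the three terms concrete, the following minimal sketch (with hypothetical array shapes and hyper-parameter values, not taken from the paper) evaluates the utility (3.1) for a given candidate plan:

```python
import numpy as np

def objective(d, q, alpha, x, p, b, beta, gamma):
    """Evaluate the utility (3.1) for a candidate plan.

    d, q  : arrays of shape (items, stores, days) -- allocations d_ij^t and priorities q_ij^t
    alpha : array of shape (days,)                -- exponentially decaying weights alpha[t]
    x, p  : arrays of shape (stores,)             -- trailers x_j and store priorities p_j
    b     : array of shape (stores,)              -- min-capacity breach slacks b_j
    """
    allocation_utility = np.einsum("t,ijt,ijt->", alpha, q, d)   # sum_t alpha[t] q_ij^t d_ij^t
    store_priority_utility = beta * np.dot(p, x)                  # beta * sum_j p_j x_j
    ltmc_cost = gamma * b.sum()                                   # gamma * sum_j b_j
    return allocation_utility + store_priority_utility - ltmc_cost

# Hypothetical toy instance: 3 items, 2 stores, coverage day + 2 PF days.
rng = np.random.default_rng(0)
d = rng.integers(0, 5, size=(3, 2, 3)).astype(float)
q = rng.random((3, 2, 3))
alpha = 0.5 ** np.arange(3)          # exponentially decreasing in t
x = np.array([1.0, 2.0])
p = np.array([2.0, 1.0])
b = np.zeros(2)
print(objective(d, q, alpha, x, p, b, beta=100.0, gamma=1000.0))
```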
### Constraints
The following constraints need to be respected for the allocation plan to be operationally feasible.
**Inventory Constraint.** For any item \(i\), the sum of its allocation across all stores should be within the inventory available at the warehouse, i.e., \(\sum_{j\in\mathcal{J}}s_{ij}\leq S_{i},\forall i\in\mathcal{I}\).
**Warehouse Labour Constraint.** As mentioned earlier, for every labour channel \(l\) the total processed quantity of items requiring that labour type should be within the available workforce \(H_{l}\), represented by the constraint \(\sum_{i\in\mathcal{I}_{l},j\in\mathcal{J}}s_{ij}\leq H_{l},\forall l\in\mathcal{L}\).
**Planned trailer constraint.** The condition \(x_{j}\leq R_{j},x_{j}\in\mathbb{N}\cup\{0\},\forall j\in\mathcal{J}\) bounds the number of trailers \(x_{j}\) dispatched to store \(j\) to be within a pre-specified constant \(R_{j}\). This condition is related to the availability of store labour to unload these items upon arrival.
**Trailer max-capacity constraint.** The condition \(s_{j}\leq Mx_{j},\forall j\in\mathcal{J}\) ensures that the total quantity shipped to the \(j^{th}\) store is within the available trailer capacity.
**Trailer min-capacity constraint.** As mentioned before, we desire that each trailer is loaded with at least \(m\) whpacks for the transportation costs to be justified. However, sometimes the allocation plan may breach the min-capacity and dispatch a trailer with fewer than \(m\) whpacks due to labour or inventory shortage at the warehouse, or inadequate store need. If multiple trailers are dispatched to a store, the min-capacity breach can only occur in the last trailer, as all earlier trailers have to be fully loaded to their max-capacity \(M\) before an additional trailer is assigned. We represent this by introducing a non-negative slack variable \(b_{j}\leq m\) for each store and enforce that the total store allocation \(s_{j}\) should satisfy \(s_{j}\geq M(x_{j}-y_{j})+my_{j}-b_{j}\). The desirable solution is to have \(b_{j}=0,\forall j\), avoiding min-capacity breaches.
**Shelf capacity constraint.** We impose \(s_{ij}\leq C_{ij},\forall i\in\mathcal{I},j\in\mathcal{J}\) to make sure that the total (item, store) allocation \(s_{ij}\) does not exceed the available shelf-capacity \(C_{ij}\) at the store to hold the inventory upon arrival.
**Demand constraint.** The condition \(0\leq d_{ij}^{t}\leq D_{ij}^{t},\forall i\in\mathcal{I},j\in\mathcal{J},t\in\mathcal{T}_{ij}\) ensures that the allocation for any day \(t\) does not exceed the store need \(D_{ij}^{t}\). Ideally, one should enforce that the allocations \(d_{ij}^{t}\) and the min-capacity breach slack variables \(b_{j}\) also satisfy an integrality constraint and assume only non-negative integer values. This would not scale, as the number of such decision variables is typically in the millions for a large retail network. We choose to relax this constraint as our problem structure inherits some properties of total unimodularity (TU) [10]. As stated below in (4.1), the item allocation problem involving \(d_{ij}^{t}\) is an instance of LP once the variables \(x_{j}\) are fixed, and can be succinctly expressed as \(\max_{\mathbf{d}}\{\mathbf{p}^{T}\mathbf{d}\,|\,A\mathbf{d}\leq\mathbf{q}\}\), for vectors \(\mathbf{d},\mathbf{p},\mathbf{q}\) and matrix \(A\). It can be verified that the entries of \(A\) are in \(\{0,1,-1\}\) and \(\mathbf{q}\) is integral. This does not necessarily guarantee that \(A\) is a TU matrix endowing the LP instance with integral optima, where the integrality constraints on the variables \(d_{ij}^{t}\) and \(b_{j}\) would be implicitly met. However, in our experiments we observed that for an overwhelming \(99.99\%\) of the variables \(d_{ij}^{t}\), the solution is integer valued even without the integrality constraint. A theoretical analysis of this relaxation is beyond the scope of this work.
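As a toy illustration of this relaxation (a sketch with randomly generated data, not the paper's actual instance), one can solve a small LP of the form \(\max_{\mathbf{d}}\{\mathbf{p}^{T}\mathbf{d}\,|\,A\mathbf{d}\leq\mathbf{q},\,\mathbf{d}\geq 0\}\) with an off-the-shelf solver and check how many entries of the optimum happen to be integral:

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP:  max p^T d  s.t.  A d <= q,  d >= 0, with a 0/1 constraint matrix and
# integral right-hand side; purely illustrative, not the paper's formulation.
rng = np.random.default_rng(1)
n_vars, n_cons = 12, 6
A = rng.choice([0.0, 1.0], size=(n_cons, n_vars))
A = np.vstack([A, np.ones(n_vars)])                     # budget row keeps the LP bounded
q = np.append(rng.integers(1, 10, size=n_cons), 30).astype(float)
p = rng.random(n_vars)

res = linprog(c=-p, A_ub=A, b_ub=q, bounds=[(0, None)] * n_vars, method="highs")
d = res.x
near_integral = np.isclose(d, np.round(d), atol=1e-6).mean()
print(f"LP optimum: {-res.fun:.3f}, fraction of (near-)integral variables: {near_integral:.2%}")
```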
## 4 Proposed Solution
The formulation in (3.1) is an instance of MILP and can be solved using any off-the-shelf solver. The presence of the integer variables \(x_{j}\) makes the problem NP-hard, and the run time for obtaining the global optimum of the MILP instance grows exponentially in the number of such variables. However, once the number of trailers assigned to each store, namely the variables \(x_{j}\), is determined, solving for the \(d_{ij}^{t}\) variables to compute the item allocation to stores reduces to an instance of LP. Let \(\mathcal{X}^{s}=\{x_{1}^{s},x_{2}^{s},\ldots,x_{J}^{s}\}\) be the set of store-trailer mappings with entries corresponding to the number of trailers assigned to each store, respecting the condition that \(x_{j}^{s}\leq R_{j}\). Recall that \(y_{j}^{s}=\min(1,x_{j}^{s})\). The linear program objective is:
\[\mathbb{P}_{1}:g(\mathcal{X}^{s})=\max_{d_{ij}^{t};b_{j}|\mathcal{X}^{s}} \left(\sum_{i,j,t}\alpha[t]q_{ij}^{t}d_{ij}^{t}\right)-\gamma\left(\sum_{j}b_ {j}\right) \tag{4.1}\]
subject to all the constraints detailed in Sec. 3.3 barring the planned trailer constraint, which depends only on \(x_{j}^{s}\). We assume \(g(\mathcal{X}^{s})=0\) when the feasible set is empty. Bearing this in mind, we develop an incremental iterative algorithm leveraging the concepts of submodularity and optimal transport theory. In this approach we incrementally grow the value \(x_{j}\) of the number of trailers dispatched to any store \(j\in\mathcal{J}\) by posing the optimisation problem as the selection of the best store assignments for a sequence of trailers. Our methodology consists of three steps, namely: (i) determine the store to assign the \((n+1)^{th}\) trailer given the assignment \(\mathcal{X}^{n}\) of the first \(n\) trailers, (ii) efficiently compute the objective value for the item allocation problem \(\mathbb{P}_{1}\) in (4.1) for each candidate store \(k\) in the previous step, by transforming the LP instance into an instance of capacity constrained optimal transport and using the double regularization method (DRM) [20], and (iii) once the final number of trailers to be dispatched to each store is determined, solve the item allocation in (4.1) using any standard LP solver just for this one last instance to compute the item allocation \(d_{ij}^{t}\). We use an LP solver for the last step since, in addition to the solution being optimal given \(\mathcal{X}\), we empirically observed \(99.99\%\) of the \(d_{ij}^{t}\) to be integer valued. As DRM is an approximate algorithm, it may not necessarily yield such an overwhelmingly integral solution. We describe these steps below.
### Incremental Trailer Assignment
The aim of this module is to find the number of trailers assigned to each store. Let \((\mathcal{D}_{\mathcal{X}}^{s},\mathcal{B}_{\mathcal{X}}^{s})\) be the optimal point for (4.1). The objective in (3.1) can be rewritten as:
\[Obj\left(\mathcal{D}_{\mathcal{X}}^{s},\mathcal{X},\mathcal{B}_{\mathcal{X}}^ {s}\right)=f(\mathcal{X})=g(\mathcal{X})+\beta*h(\mathcal{X}) \tag{4.2}\]
where \(g(\mathcal{X})\) is defined in (4.1) and \(h(\mathcal{X})=\sum_{j\in\mathcal{J}}p_{j}x_{j}\). Our goal is to maximize \(f(\mathcal{X})\) subject to the planned trailer constraint \(x_{j}\leq R_{j},\forall j\in\mathcal{J}\). The function \(g(\mathcal{X})\) satisfies the properties of submodularity and non-negativity for chosen values of the hyper-parameters \(\alpha[t]\), \(\beta\) and \(\gamma\), and \(h(\mathcal{X})\) is a non-negative, modular function. It then follows that \(f(.)\) is a non-negative, submodular function. These concepts and a discussion of submodularity are detailed in Sec. C of the Appendix.
Submodularity implies diminishing returns, where the incremental gain in adding a new element to a set \(\mathcal{A}\) is at least as high as that of adding it to a superset \(\mathcal{B}\)[14, 15]. In our context, it implies that it is more beneficial to assign a trailer to a store when there are fewer trailers assigned. Maximisation of submodular set functions is a well-studied problem. As it is NP-hard in the general form, a variety of algorithms have been proposed to find approximate solutions to submodular optimisation problems. One of the most popular categories of algorithms is the variants of incremental selection of a set
using greedy approaches Nemhauser, Wolsey, and Fisher (1978); Buchbinder et al. (2014); Buchbinder, Feldman, and Schwartz (2017); Kuhnle (2019); Sakaue (2020). The methods in Buchbinder et al. (2014); Kuhnle (2019); Sakaue (2020) provide approximation guarantees when the submodular function in (4.1) is non-negative, even when it is not monotone.
A faster variation of the greedy algorithm, called the _stochastic greedy algorithm_ Mirzasoleiman et al. (2015), progresses as follows. Let \(\mathcal{X}^{n}=\{x_{1}^{n},x_{2}^{n},\ldots,x_{J}^{n}\}\) be the number of trailers assigned to each store at the end of iteration \(n\). The property of diminishing returns enables us to assign trailers in an incremental fashion, where starting from the set \(\mathcal{X}^{0}=\{0,0,\ldots,0\}\), in every iteration \(n+1\), given the allocation of stores for the first \(n\) trailers, we identify the best store \(k_{n+1}\) for the next trailer from a random subset \(\mathcal{J}^{n+1}\) of the eligible candidate stores \(\mathcal{A}^{n+1}=\{j\in\mathcal{J}:x_{j}^{n}<R_{j}\}\). The selection is made to maximize the incremental gain (IG) \(f_{\mathcal{X}^{n}}(k)=f\left(\mathcal{X}^{n}\uplus\{k\}\right)-f(\mathcal{X}^{n})\). The set \(\mathcal{X}^{n}\) is grown incrementally to \(\mathcal{X}^{n+1}=\mathcal{X}^{n}\uplus\{k_{n+1}\}\) by including the chosen store \(k_{n+1}\) with the highest IG. We slightly abuse the notation and define \(\mathcal{X}^{n+1}=\mathcal{X}^{n}\uplus\{k\}\) as the set of store-trailer mappings with \(x_{k}^{n+1}=x_{k}^{n}+1\), and \(x_{j}^{n+1}=x_{j}^{n},\forall j\neq k\). This way of incrementally selecting the stores eliminates the need for integrality constraints and is a major factor in reducing the solution complexity.
**Batching.** For large values of \(\beta\), stores in the candidate set \(\mathcal{A}^{n+1}\) with a higher store-priority value \(p_{j}\) are most likely to yield the highest IG because of the factor \(\beta*p_{j}\). To this end, we create a batch \(\mathcal{P}_{p}=\{j\in\mathcal{J}:p_{j}=p\}\) by grouping stores with the same store-priority \(p\), and in every iteration \(n\) we assign the next trailer only to stores in higher-priority batches before the lower-priority ones. In other words, the random set \(\mathcal{J}^{n}\subset\mathcal{A}^{n}\) consists only of stores from the currently considered high-priority batch. However, because of the min-capacity breach term \(-\gamma\sum_{j\in J}b_{j}\), the function \(f(.)\) is not necessarily monotonically increasing, and hence the incremental gain obtained by mapping the next trailer to a store could be negative. In any iteration \(n\), if all store-trailer assignments to a high-priority batch \(\mathcal{P}_{p}\) yield negative IG, the diminishing-returns property of submodular functions eliminates the need for testing trailer assignments to the same batch in subsequent iterations. We proceed to the next lower-priority batch \(\mathcal{P}_{p-1}\) and choose the store from that batch with the highest IG. This process is repeated until all the eligible stores in the least-priority batch \(\mathcal{P}_{1}\) yield negative IG. Once all the batches are processed, we obtain the number of trailers assigned to all the stores. The pseudo-code is given in Sec. D of the Appendix.
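The following is a minimal Python rendering of this batched stochastic greedy procedure; all interfaces are hypothetical (`g` stands in for a value oracle solving (4.1), and `priorities`/`R` are plain dictionaries), so it is a sketch rather than the production implementation:

```python
import random

def assign_trailers(g, priorities, R, beta, sample_size):
    """Batched stochastic greedy trailer assignment (sketch).

    g(x)        : value oracle returning the LP objective (4.1) for trailer counts x
    priorities  : dict store -> priority p_j
    R           : dict store -> maximum number of trailers R_j
    """
    stores = list(R)
    x = {j: 0 for j in stores}                      # current trailer counts x_j
    f = lambda x_: g(x_) + beta * sum(priorities[j] * x_[j] for j in stores)
    current = f(x)
    # Process batches of equal store priority, highest priority first.
    for p in sorted(set(priorities.values()), reverse=True):
        batch = [j for j in stores if priorities[j] == p]
        while True:
            eligible = [j for j in batch if x[j] < R[j]]
            if not eligible:
                break
            candidates = random.sample(eligible, min(sample_size, len(eligible)))
            best_j, best_gain = None, 0.0
            for j in candidates:
                x[j] += 1                           # tentatively add one trailer
                gain = f(x) - current               # incremental gain (IG)
                x[j] -= 1
                if gain > best_gain:
                    best_j, best_gain = j, gain
            if best_j is None:                      # all sampled IGs non-positive: next batch
                break
            x[best_j] += 1                          # commit the best assignment
            current += best_gain
    return x
```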
### IG Determination by Transformation to Optimal Transport Problem
Greedy algorithms for the optimal set determination problem measure the computational complexity in terms of the number of calls to a _value oracle_ Buchbinder et al. (2017); Kuhnle (2019); Sakaue (2020). Given the store-trailer mapping set \(\mathcal{X}\), a _value oracle_ is a system or an algorithm that computes the value of the submodular function \(g(\mathcal{X})\) in (4.1). As the greedy algorithm and its variants require repeated evaluation of the incremental gain, the _value oracle_ is used multiple times in the set selection process. For the solution of our objective in (3.1) using the stochastic greedy algorithm, the _value oracle_ must solve the optimal item allocation problem \(\mathbb{P}_{1}\) (4.1), an instance of LP, once for every candidate store \(k\) in the random store set \(\mathcal{J}^{n}\) across multiple iterations \(n\), and hence can take a significant amount of time for a large-scale supply chain network involving millions of decision variables in \(\mathcal{D}\). In this section, we propose to solve the item allocation problem by transforming it to an instance of COT Korman and Mccann (2015), for which a fast, good-quality solution can be obtained via DRM Wu et al. (2022).
**Theorem 4.1**.: _Let \(P_{ij}^{t}=\alpha[t]q_{ij}^{t}\) represent the profit for unit allocation of item \(i\) to the \(j^{th}\) store on day \(t\). Denote \(M_{j}=Mx_{j}\) as the maximum possible allocation to store \(j\) for the current number of assigned trailers \(x_{j}\). By adding pseudo items and pseudo stores to produce the super-sets \(\tilde{\mathcal{I}}\supset\mathcal{I}\) and \(\tilde{\mathcal{J}}\supset\mathcal{J}\), and by creating pseudo item inventories \(S_{i}\), max store allocations \(M_{j}\), demands \(D_{ij}^{t}\) and profits \(P_{ij}^{t}\) corresponding to these pseudo items and stores, the LP instance in (4.1) can be reduced to an instance of COT:_
\[\mathbb{P}_{2}:g(\mathcal{X})=\max_{(d_{ij}^{t})}\sum_{i,j,t}P_{ij}^{t}d_{ij}^{ t} \tag{4.3}\]
_subject to the constraints:_
\[0\leq d_{ij}^{t}\leq D_{ij}^{t}\quad\forall i\in\tilde{\mathcal{I}},j\in\tilde{\mathcal{J}},t\in\mathcal{T}_{ij},\] \[\sum_{j\in\tilde{\mathcal{J}}}\sum_{t\in\mathcal{T}_{ij}}d_{ij}^{t}=S_{i}\quad\forall i\in\tilde{\mathcal{I}},\] \[\sum_{i\in\tilde{\mathcal{I}}}\sum_{t\in\mathcal{T}_{ij}}d_{ij}^{t}=M_{j}\quad\forall j\in\tilde{\mathcal{J}},\]
_satisfying \(K=\sum_{i\in\tilde{\mathcal{I}}}S_{i}=\sum_{j\in\tilde{\mathcal{J}}}M_{j}\)._
Sec. A of the Appendix contains the proof, where we also show that the shelf-capacity constraint \(s_{ij}\leq C_{ij}\) can be dropped by suitably adjusting the demand values \(D_{ij}^{t}\). The DRM technique is an approximate algorithm and approaches COT by imposing double entropic regularization for both the lower and upper bounds on the transport plan \(d_{ij}^{t}\). The idea is similar to the well-known Sinkhorn algorithm Cuturi (2013), popular in OT theory Peyre and Cuturi (2019); Villani (2009), where by adding an entropy maximization term \((-\mu\sum_{i,j,t}d_{ij}^{t}\log\left(d_{ij}^{t}\right))\), the transport plan can be determined through an alternating iterative scheme involving simple matrix operations. The computation is efficient as the cubic computational complexity of standard LP solvers is replaced by a linear-time solution Cuturi (2013); Abid and Gower (2018). However, because of the additional capacity constraints, in DRM the matrix-vector multiplication operations of the Sinkhorn algorithm are replaced by finding the unique zero point of several single-variable monotonic functions, determined using any root-finding method Press et al. (2007). We refer to Wu et al. (2022) for a detailed description of the algorithm, where the authors argue that the DRM method incurs the same computational complexity as the
Sinkhorn iterations. The mathematical formulation is given in Sec. B of the Appendix for completeness.
We wish to emphasize the following important note. In our framework, the primary application of DRM is to quickly identify the candidate store \(k_{n+1}\) producing the highest IG, given the current store-trailer mapping set \(\mathcal{X}^{n}\), in order to assign the next trailer \(n+1\). After solving the COT instance in (4.3) for every randomly sampled candidate store \(k\in\mathcal{J}^{n+1}\) using DRM, we only need to compare the resultant objective values to determine \(k_{n+1}\). As the intermediate item allocation solutions \(d^{t}_{ij}\) are not used in the trailer assignment process described in Sec. 4.1, the accuracy required from an approximate method like DRM is only that of _getting the right ordering on the objective values of (4.3) for different candidate stores._ This allows us to choose a relatively high value for the parameter \(\mu\) controlling the entropy regularization, resulting in faster computation for each COT instance. Such flexibility may not be available when standard LP solvers are used instead.
## 5 Experiments
The experiments are run on x86_64 architecture, Intel(R) Xeon(R) [email protected], 8-core, 64 GB virtual machines, on the true data used in the business operations of a global retailer, for a real supply chain network where a sample warehouse allocates the inventory of \(\approx 9000\) items to 158 stores. We use 19 different data-sets, each corresponding to a replenishment cycle, to validate our proposed inventory allocation approach.
As described in Sec. 2.2, the challenge of designing INVALS is in framing the right objective function, which makes the inventory allocation plan effective and improves business operations. To showcase that our objective in (3.1) meets these requirements, we perform multiple experiments to evaluate INVALS with respect to the following aspects: (i) the importance of PF for higher labour and trailer utilization, (ii) the need to incorporate the store-item priority \(q^{t}_{ij}\) to reduce DOS, (iii) the role played by the LTMC trailer cost in improving the trailer utilization and avoiding min-capacity breaches, (iv) experimental validation of Theorem 4.1, (v) the improvement in computation speed from using the DRM method to solve the COT instance in (4.3) over standard LP solvers, and (vi) the quality of the incremental trailer assignment described in Sec. 4.1 against the global optimum from an off-the-shelf MILP solver. To this end, we run the following four experiments:
computed over the pull-forwarded items across replenishment cycles. The DOS distribution is further to the right of the time axis in exp. **C** compared to exps. **Da-Dd**, where \(q_{ij}^{t}=1\) in the former and inversely proportional to store-item DOS in the latter. With an average reduction in DOS by \(1.35\) days, the impact on the savings in inventory holding costs from INVALS will be significant. The box-plot of DOS distribution over the pull-forwarded items for one replenishment cycle is shown in Fig. 2, where again the DOS values from exp. **C** are higher compared to exps. **Da-Dd**.
From Figs. 1(a)-(e) it is evident that the four optimisation algorithms in exps. **Da-Dd** yield almost identical results. Even at the level of individual item-store allocations, their values match for all item-stores for \(17\) of the \(19\) data-sets (\(\approx 90\%\)). Recall that once these techniques assign an equal number of trailers to each store, they compute the solution for \(d_{ij}^{t}\) from the same (or transformed) LP instance in (4.1). The solutions from exps. **Db** and **Dc** differed from exp. **Da** for 1 data-set due to the stochasticity involved in selecting the set \(\mathcal{J}^{n}\) of candidate stores. We further verified that for each LP instance, invoked for every candidate store \(k\) over multiple iterations \(n\), exps. **Db** and **Dc** gave identical results for all item-store allocations for all data-sets, empirically corroborating our Theorem 4.1. Similarity with the MILP solution validates our approach of assigning and determining store-trailer counts in an incremental fashion. Only for \(2\) data-sets does the total number of trailers differ by \(2\) between the solutions from exps. **Da** and **Dd**. This introduced negligible differences of \(0.1\) in the trailer utilization and \(0.02\%\) in the quantity of inventory allocated between them.
The advantage of employing DRM over an LP solver such as GLOP [10] is seen in Fig. 1(f), where we show the box-plot of the time taken by exps. **Db**, **Dc** and **Dd** for the largest LP instances of (4.1) and (4.3), for each data-set. Comparing absolute times could in fact be biased against DRM, as our implementation is pitted against a highly optimised GLOP tool. Nevertheless, we observed that DRM takes \(61.16\%\) less time on average than GLOP for the COT instance in (4.3). As the addition of pseudo variables increases the instance size in (4.3) compared to (4.1), the run time for GLOP is longer in **Dc** than in **Db**. We believe that an optimised version of our implementation and the usage of recently proposed root-finding algorithms such as [14] could further reduce the run time of DRM.
## 6 Conclusion
We studied the problem of inventory allocation from a warehouse to stores and proposed a novel forward-looking allocation system, INVALS. We formulated a suitable utility function which, when maximised under different business constraints, computes the inventory allocation plan. Exploiting the submodularity of the objective function, we designed an iterative algorithm for finding the number of trailers dispatched to each store from the warehouse, thus eliminating the need to directly solve for integer-valued variables and to use MILP solvers. We presented a transformation
Figure 1: (a) Labour utilization, (b) Trailer utilization, and (c) Normalised allocated quantity for different experiments, (d) Normalized dispatched trailers, (e) Mean PF-DOS distribution, and (f) Runtime comparison
Figure 2: Box-plot of the DOS distribution over the pull-forwarded items for one replenishment cycle.
of the resultant LP problem to a COT instance, and leveraged the recently proposed DRM to efficiently compute the IG and select the best store for the next trailer. We thoroughly investigated the importance of each term in our objective function and analysed the accuracy of our proposed iterative, approximate algorithm by experimenting with 7 different variants of our framework on 19 data-sets.
As part of our future work, we would like to develop a similar optimisation framework for a multi-echelon system [1], and determine the end-to-end inventory flow from the suppliers through the warehouses to the stores. This could be a challenging initiative, as a large retailer typically sources products from \(10\)'s of suppliers, supplying \(100\)'s of warehouses, and generating inventory plans for \(1000\)'s of stores. Another research avenue is to extend our formulation to incorporate store demand variability instead of using only expected values.
## Appendix A Proof of Theorem 4.1
Before we proceed with the proof, the following lemma is useful.
**Lemma A.1**.: _The shelf-capacity constraint \(s_{ij}\leq C_{ij}\) for every (item, store) can be dropped altogether by suitable adjustment of the store demands \(D_{ij}^{t}\)._
Proof.: Let \(D_{ij}=\sum\limits_{t\in\mathcal{T}_{ij}}D_{ij}^{t}\) be the total demand for the \(i^{th}\) item from store \(j\). If \(D_{ij}\leq C_{ij}\), then the shelf-capacity constraint is inconsequential as the condition is implicitly true. Otherwise, consider any two time periods \(t_{1}<t_{2}\) and observe that if \(d_{ij}^{t_{2}}>0\), then \(d_{ij}^{t_{1}}=D_{ij}^{t_{1}}\) for the following reason. Since \(\alpha[t]\) is a monotonically decreasing function of \(t\), for every (item, store) the objective is maximized by completely allocating the demand for an earlier time period before allocating for later days. This also makes sense from a business perspective, as the allocation of any item to a store should first meet the demands of days in the immediate future, before being assigned for later time periods. Bearing this in mind, the demand for the last PF period \(D_{ij}^{t_{ij}}\) can be decreased till it reaches zero or \(D_{ij}=C_{ij}\), whichever comes earlier. In the former case, we proceed with decreasing the demand for the time period \(t_{ij}-1\) and continue the process till \(D_{ij}=C_{ij}\).
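A small sketch of this demand adjustment (hypothetical list-based representation, with index 0 as the coverage period): quantities are removed from the latest pull-forward day first, until the total demand fits within the shelf capacity.

```python
def clip_demands_to_shelf(demands, shelf_capacity):
    """Trim the per-day demands D_ij^t (index 0 = coverage period) so that their
    sum does not exceed the shelf capacity C_ij, removing quantity from the
    latest pull-forward days first."""
    demands = list(demands)
    excess = sum(demands) - shelf_capacity
    t = len(demands) - 1
    while excess > 0 and t >= 0:
        removed = min(demands[t], excess)
        demands[t] -= removed
        excess -= removed
        t -= 1
    return demands

# Example: total demand 14 against a shelf capacity of 10.
print(clip_demands_to_shelf([5, 4, 3, 2], 10))   # -> [5, 4, 1, 0]
```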
Hence we assume \(D_{ij}\leq C_{ij}\) and drop the shelf-capacity constraint for all (item, store) pairs. Recall that the linear program instance to obtain the item allocations \(\mathcal{D}=\{d_{ij}^{t}\}\) is invoked for every candidate store \(k\) in the randomly chosen set \(\mathcal{J}^{n+1}\) for every iteration \(n+1\). Let \(s_{ij}^{n}=\sum\limits_{t}d_{ij}^{t,n}\), \(y_{j}^{n}\), \(b_{j}^{n}\) denote the solution values of the corresponding decision variables and, as before, let \(\mathcal{X}^{n}=\{x_{1}^{n},x_{2}^{n},\ldots,x_{J}^{n}\}\) be the number of trailers assigned to each store at the end of iteration \(n\). For the candidate store \(k\) in iteration \(n+1\) we have \(x_{k}^{n+1}=x_{k}^{n}+1\), and for \(j\neq k\), \(x_{j}^{n+1}=x_{j}^{n}\) and \(y_{j}^{n+1}=y_{j}^{n}\). Let \(M_{j}=Mx_{j}^{n+1}\), \(\forall j\in\mathcal{J}\), be the maximum possible allocation to any store using all the trailers assigned to it.
Without loss of generality, we assume a suitably high value of \(\gamma\), for which a positive IG for any candidate store implies that the last trailer assigned to each store is loaded at least to its min-capacity \(m\), and all the breach slack variables \(b_{j}\) equal zero at the solution. This is not a restriction on our proof and only simplifies the discussion around the suitable definition of pseudo variables, their corresponding quantities, and the existence of a non-empty feasible set. We describe the transformation process below by handling the individual constraints in each subsection.
### Trailer capacity constraints
**Min-capacity constraint.** The negative cost \(-\gamma\sum\limits_{j\in\mathcal{J}}b_{j}\) in the linear objective can be handled by the following additions:
* Introduce pseudo item \(b\) with inventory \(S_{b}=M\).
* Set the profit \(P_{bj}^{0}=-\gamma\) for the coverage period \(t=0\), and \(P_{bj}^{t}=0\) when \(t>0\) for the PF period, \(\forall j\in\mathcal{J}\).
* Impose no upper bound on the demand \(D_{bj}^{0}\) for \(t=0\), and set \(D_{bj}^{t}=0\) for \(t>0\), \(\forall j\in\mathcal{J}\).
The straightforward way to handle the constraint \(0\leq b_{j}\leq m\) would be to set \(D_{bj}^{0}=m\) so that the allocation of the pseudo item \(b\) to any store does not exceed the bound \(m\). This would entail that \(S_{b}=m*|\mathcal{J}|\). We prove in Section A.5 that for a sufficiently high value of \(\gamma\), setting \(S_{b}=M\) and no specific bound on \(D_{bj}^{t}\) is enough to guarantee feasibility and the identification of the best store \(k_{n+1}\) having the highest IG\(>0\).
**Max-capacity constraint.** Recall that the max-capacity constraint \(s_{j}\leq Mx_{j}^{n+1}\) enforces that the total allocation to store \(j\) across all items does not exceed the available trailer capacity. This condition can be handled as follows:
* Introduce pseudo item \(z\) with inventory \(S_{z}=(M-m)*|\mathcal{J}|\).
* Set profit \(P_{zj}^{t}=0\), \(\forall j\in\mathcal{J},\forall t\geq 0\).
* Set demand \(D_{zj}^{0}=M-m\), \(D_{zj}^{t}=0\) when \(t>0\), \(\forall j\in\mathcal{J}\).
* Enforce that all trailers are loaded to their maximum capacity \(M\) to meet the maximum possible allocation \(M_{j}\).
Let \(\tilde{\mathcal{I}}=\mathcal{I}\uplus\{b,z\}\). The pseudo item \(z\) acts as the filler to reach the capacity \(M\) in the last trailer assigned to each store. The upper bound on the demands \(D_{zj}^{0}\) ensures that \(z\) is not used more than \(M-m\), i.e., the difference between the maximum and minimum trailer capacities. If item \(z\) were to be used, it does not add any profit to the objective function as \(P_{zj}^{t}=0\). Using the real inventories \(i\in\mathcal{I}\), assume that the allocation \(s_{j}^{n+1}\) is deficient in the sense \(s_{j}^{n+1}\leq M\left(x_{j}^{n+1}-y_{j}^{n+1}\right)+m\). Then the leftover trailer space \(M\left(x_{j}^{n+1}-y_{j}^{n+1}\right)+m-s_{j}^{n+1}\) needs to be filled with item \(b\), incurring a unit cost of \(-\gamma\). This is precisely the characteristic of the LP instance in (4.1). As each \(D_{zj}^{0}=M-m\), we set \(S_{z}=(M-m)|\mathcal{J}|\).
### Labour constraint
For each category \(l\in\mathcal{L}\), we repeat the steps below:
* Let \(A_{l}=\sum_{i\in\mathcal{I}_{l}}S_{i}\) be the total inventory available across all items belonging to the \(l^{th}\) category.
* Create a pseudo store \(h_{l}\) with need \(M_{h_{l}}=\max(0,A_{l}-H_{l})\), profits \(P^{t}_{ih_{l}}=0\), \(\forall t\geq 0\), \(\forall i\in\tilde{\mathcal{I}}\).
* Set demand \(D^{t}_{ih_{l}}=0\)\(\forall t\geq 0\), \(\forall i\in\tilde{\mathcal{I}}\setminus\mathcal{I}_{l}\).
* For \(i\in I_{l}\) impose no upper bound on the demand \(D^{0}_{ih_{l}}\), and set \(D^{t}_{ih_{l}}=0\) for \(t>0\).
These additions ensure that at most a total of \(H_{l}\) units of the items in \(\mathcal{I}_{l}\) is available for allocation to real stores, thereby satisfying the warehouse labour constraint \(\sum_{i\in\mathcal{I}_{l}}\sum_{j\in\mathcal{J}}s_{ij}\leq H_{l}\).
### Surplus inventory
Given that we are in iteration \(n+1\), where we add a trailer to store \(k\), the highest IG must have been positive in iteration \(n\). As mentioned earlier, for a sufficiently high value of \(\gamma\) this positive IG implies that the allocations \(s^{n}_{ij}\) satisfy:
\[\sum_{i\in\mathcal{I}}s^{n}_{ij}\geq M(x^{n}_{j}-y^{n}_{j})+m,\quad\forall j\in\mathcal{J},\] (A.1) \[\sum_{i\in\mathcal{I}_{l}}\sum_{j\in\mathcal{J}}s^{n}_{ij}\leq\min\left(\sum_{i\in\mathcal{I}_{l}}S_{i},H_{l}\right),\quad\forall l\in\mathcal{L},\ \ \text{and}\] (A.2) \[d^{t,n}_{ij}\leq D^{t}_{ij}\quad\forall i\in\mathcal{I},j\in\mathcal{J},t\in\mathcal{T}_{ij}.\] (A.3)
Recalling that \(S_{z}=(M-m)|\mathcal{J}|\) and \(S_{b}=M\) we have,
\[\sum_{j\in J}\sum_{i\in\mathcal{I}}s^{n}_{ij}+S_{z}+S_{b}\geq\sum_{j\in J}\left[M(x^{n}_{j}-y^{n}_{j})+m+(M-m)\right]+M\geq\left(\sum_{j\neq k}Mx^{n}_{j}\right)+Mx^{n+1}_{k}=\sum_{j\in J}M_{j}.\]
Adding the surplus inventory \(\left(\sum\limits_{i\in\mathcal{I}}S_{i}-\sum\limits_{i\in\mathcal{I}}\sum \limits_{j\in\mathcal{J}}s^{n}_{ij}\right)\) to both sides we get,
\[\sum_{i\in\mathcal{I}}S_{i}\geq\sum_{j\in J}M_{j}+\sum_{i\in\mathcal{I}}S_{i}-\sum_{i\in\mathcal{I}}\sum_{j\in\mathcal{J}}s^{n}_{ij}\] \[\stackrel{{\text{(a)}}}{{\geq}}\sum_{j\in J}M_{j}+\sum_{i\in\mathcal{I}}S_{i}-\sum_{l\in\mathcal{L}}\min\left(\sum_{i\in\mathcal{I}_{l}}S_{i},H_{l}\right)\] \[=\sum_{j\in J}M_{j}+\sum_{l\in\mathcal{L}}\left(\sum_{i\in\mathcal{I}_{l}}S_{i}\right)+\sum_{l\in\mathcal{L}}\max\left(-\sum_{i\in\mathcal{I}_{l}}S_{i},-H_{l}\right)\] \[=\sum_{j\in J}M_{j}+\sum_{l\in\mathcal{L}}\max\left(0,\sum_{i\in\mathcal{I}_{l}}S_{i}-H_{l}\right)\] \[=\sum_{j\in J}M_{j}+\sum_{l\in\mathcal{L}}M_{h_{l}},\]
where the inequality denoted by (a) follows from (A.2). The above result establishes that when \(\gamma\) is high, the total available inventory is always at least the maximum allocation possible to both real and pseudo stores.
To handle this surplus inventory, we introduce one more pseudo store \(e\) with need \(M_{e}=\sum_{i\in\tilde{\mathcal{I}}}S_{i}-\sum_{j\in\mathcal{J}}M_{j}-\sum_{l\in\mathcal{L}}M_{h_{l}}\) to absorb real and pseudo item inventories whose allocation is less than their respective \(S_{i}\). For this store \(e\):
* Set all profits \(P^{t}_{ie}=0\), \(\forall i\in\tilde{\mathcal{I}}\), \(\forall t\geq 0\).
* Impose no upper bound on the demand \(D^{0}_{ie}\) for \(t=0\), and set \(D^{t}_{ie}=0\) for \(t>0\), \(\forall i\in\tilde{\mathcal{I}}\).
Henceforth, let \(\tilde{\mathcal{J}}=\mathcal{J}\uplus\{h_{l}\}_{l\in\mathcal{L}}\uplus\{e\}\) represent the set of both real and pseudo stores. For small values of \(\gamma\), it is possible that the total item inventory is less than the maximum possible store allocation. This setting can be handled in a similar fashion by introducing a pseudo item \(e\) instead of a store.
### Optimal transport problem
Armed with this step, the linear program instance for every candidate store \(k\in\mathcal{J}^{n+1}\) in the iteration \(n+1\) can be re-expressed as:
\[\mathbb{P}_{2}:\max_{\left(d^{t}_{ij}\right)}\sum_{i,j,t}P^{t}_{ij}d^{t}_{ij}\] (A.4)
subject to the constraints:
\[0\leq d^{t}_{ij} \leq D^{t}_{ij}\quad\forall i\in\tilde{\mathcal{I}},j\in\tilde{ \mathcal{J}},t\in\mathcal{T}_{ij}\] (A.5) \[\sum_{j\in\tilde{\mathcal{J}}}\sum_{t\in\mathcal{T}_{ij}}d^{t}_{ij} =S_{i}\quad\quad\forall i\in\tilde{\mathcal{I}},\] (A.6) \[\sum_{i\in\tilde{\mathcal{I}}}\sum_{t\in\mathcal{T}_{ij}}d^{t}_{ij} =M_{j}\quad\forall j\in\tilde{\mathcal{J}}.\] (A.7)
and satisfying \(K=\sum\limits_{i\in\tilde{\mathcal{I}}}S_{i}=\sum\limits_{j\in\tilde{ \mathcal{J}}}M_{j}\).
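As a minimal sketch of the final balancing step (dictionary-based toy data; the pseudo items \(b\), \(z\) and stores \(h_{l}\) are assumed to have been added already), the surplus-absorbing pseudo store \(e\) is appended so that total supply equals total store capacity:

```python
def add_surplus_store(supplies, capacities):
    """Append the pseudo store `e` with need M_e = sum_i S_i - sum_j M_j,
    so that the COT instance is balanced (K = sum S_i = sum M_j)."""
    surplus = sum(supplies.values()) - sum(capacities.values())
    assert surplus >= 0, "total inventory must be in surplus (high-gamma setting)"
    capacities = dict(capacities)
    capacities["e"] = surplus
    return capacities

# Example: three items with 30 units total against stores needing 24 in total.
supplies = {"i1": 10, "i2": 12, "i3": 8}
capacities = {"j1": 14, "j2": 10}
print(add_surplus_store(supplies, capacities))   # -> {'j1': 14, 'j2': 10, 'e': 6}
```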
### Feasibility
We now show that the COT instance always has a non-empty feasible set for a sufficiently high value of \(\gamma\). To this end, we prove the following lemma using the inequality (A.1) for the allocations \(s^{n}_{ij}\) obtained as the solution to the previous iteration \(n\).
**Lemma A.2**.: _As before, let \(s^{n}_{j}=\sum_{i\in\mathcal{I}}s^{n}_{ij}\), \(\forall j\in\mathcal{J}\). There exists a solution involving min-capacity breach variables \(b^{n+1}_{j}\geq 0\) satisfying the condition \(\sum_{j\in\mathcal{J}}b^{n+1}_{j}=M\) such that \(s^{n}_{j}+b^{n+1}_{j}\geq M\left(x^{n+1}_{j}-y^{n+1}_{j}\right)+m,\forall j\in\mathcal{J}\)._
Proof.: Consider \(j\neq k\). Setting \(b^{n+1}_{j}=0\) and using the condition that \(s^{n}_{j}\geq M(x^{n}_{j}-y^{n}_{j})+m\), we have \(s^{n}_{j}+b^{n+1}_{j}\geq M(x^{n+1}_{j}-y^{n+1}_{j})+m\). We consider two cases for the store \(k\).
**case 1**: Let \(x^{n+1}_{k}>1\) implying \(y^{n+1}_{k}=y^{n}_{k}\). Setting \(b^{n+1}_{k}=M\) we get \(s^{n}_{k}+b^{n+1}_{k}\geq M\left(x^{n}_{k}-y^{n}_{k}\right)+m+M=M\left(x^{n+1}_{k}- y^{n+1}_{k}\right)+m\).
**case 2**: If \(x_{k}^{n+1}=1\) then \(x_{k}^{n}=y_{k}^{n}=s_{k}^{n}=0\), and \(y_{k}^{n+1}=1\). Setting \(b_{k}^{n+1}=M\) we find \(s_{k}^{n}+b_{k}^{n+1}\geq M\left(x_{k}^{n+1}-y_{k}^{n+1}\right)+m\).
The proof follows.
We now present a feasible solution for the problem (A.4) where the variables \(b_{j}^{n+1}\) are set to values as stated in Lemma A.2.
* All the variables \(d_{ij}^{t}\) whose upper bound \(D_{ij}^{t}=0\) will assume the value \(0\).
* Set \(d_{ij}^{t,n+1}=d_{ij}^{t,n}\), \(\forall i\in\mathcal{I}\), \(\forall j\in\mathcal{J}\), \(\forall t\in\mathcal{T}_{ij}\) satisfying (A.3) for real items and stores. Then \(s_{ij}^{n+1}=s_{ij}^{n}\), \(\forall i\in\mathcal{I}\), \(\forall j\in\mathcal{J}\). Letting \(d_{zj}^{0,n+1}=M_{j}-\left(s_{j}^{n+1}+b_{j}^{n+1}\right)\leq M-m\) be the allocation of slack item \(z\) to store \(j\) for day \(0\), we have \(\sum_{i\in\mathcal{\tilde{I}}}s_{ij}^{n+1}=M_{j}\), \(\forall j\in J\).
* From the inequality (A.2) it follows that the allocations \(s_{ij}^{n+1}\) also respect the labour constraint. Then for each category \(l\in\mathcal{L}\) with the corresponding store need \(M_{h_{l}}>0\), there exists non-negative variables \(d_{ih_{l}}^{0,n+1}\leq S_{i}-\sum_{j\in\mathcal{\tilde{I}}}s_{ij}^{n+1}\) for \(i\in\mathcal{I}_{l}\), such that \(\sum_{i\in\mathcal{I}_{l}}s_{ih_{l}}^{n+1}=M_{h_{l}}\). For those categories whose need \(M_{h_{l}}=0\), set \(d_{ih_{l}}^{0,n+1}=0\,\forall i\in\mathcal{I}_{l}\).
* Finally, set \(d_{ie}^{0,n+1}=S_{i}-\sum_{j\in\mathcal{\tilde{J}}}s_{ij}^{n+1}-\sum_{l\in \mathcal{L}}s_{ih_{l}}^{n+1}\), \(\forall i\in\mathcal{\tilde{I}}\) assigning the left over inventories to pseudo store e. By construction of the store need \(M_{e}\) we have \(\sum_{i\in\mathcal{\tilde{I}}}s_{ie}^{n+1}=M_{e}\).
For the best store \(k_{n+1}\) having the highest IG\(>0\), the solution will satisfy the condition \(s_{j}^{n+1}\geq M(x_{j}^{n+1}-y_{j}^{n+1})+m\) for all real stores \(j\in\mathcal{J}\). Item \(z\) will be used as a filler in the last trailer so that the store need of \(M_{j}=Mx_{j}^{n+1}\) is met. For all \(j\in\mathcal{J}\) the allocation \(s_{bj}^{n+1}=0\), and for the dummy store \(e\), \(s_{be}^{n+1}=M\) to absorb the inventory of the pseudo item \(b\). As \(P_{be}^{t}=0\), no negative cost is added to the objective (A.4) from item \(b\).
## Appendix B Mathematical Formulation of the DRM
For the COT instance in Section A.4, we can replace \(S_{i}\gets S_{i}/K\), \(M_{j}\gets M_{j}/K\) and \(D_{ij}^{t}\gets D_{ij}^{t}/K\) to make the total inventory flow in the network sum up to 1. The resultant solution \(d_{ij}^{t}\) can be scaled back by \(K\) to obtain the true inventory allocation. The COT instance differs from classical OT because of the presence of a point-wise upper bound \(d_{ij}^{t}\leq D_{ij}^{t}\) on each element of the transport plan, limiting the mass transported between each pair of item (source) and store (sink). Further, the presence of multiple days \(t\) adds an extra, third time dimension to the COT problem. However, as no constraints are present along the time axis, because the shelf-capacity constraints can be omitted as shown in Lemma A.1, the complexity of problem (A.4) is the same as the setting studied in (Wu et al., 2022).
Define the sets
\[\mathcal{M} =\{(i,j,t):D_{ij}^{t}>0\text{ or unspecified}\},\] \[\mathcal{N} =\{(i,j,t):(i,j,t)\in\mathcal{M}\text{ \& }D_{ij}^{t}\text{ is finite}\}\]
For \((i,j,t)\notin\mathcal{M}\), \(d_{ij}^{t}=0\) as the demand \(D_{ij}^{t}=0\). By introducing entropic regularization for both the lower and upper demand bounds on the allocation variables \(d_{ij}^{t}\) and the addition of Lagrange parameters, the DRM method in (Wu et al., 2022) considers the modified objective:
\[\begin{split}\mathcal{O}=&\sum_{\mathcal{M}}P_{ij}^{t}d_{ij}^{t}-\mu\sum_{\mathcal{M}}d_{ij}^{t}\ln\left(d_{ij}^{t}\right)-\mu\sum_{\mathcal{N}}\left(D_{ij}^{t}-d_{ij}^{t}\right)\ln\left(D_{ij}^{t}-d_{ij}^{t}\right)\\ &-\sum_{i}\alpha_{i}\left(\sum_{jt}d_{ij}^{t}-S_{i}\right)-\sum_{j}\beta_{j}\left(\sum_{it}d_{ij}^{t}-M_{j}\right).\end{split}\tag{B.1}\]
By taking partial derivatives with respect to \(d_{ij}^{t}\) and setting \(\frac{\partial\mathcal{O}}{\partial d_{ij}^{t}}=0\), we get
\[d_{ij}^{t}=\begin{cases}D_{ij}^{t}\left[1-\frac{1}{1+\phi_{i}K_{ij}^{t}\psi_{j} }\right],&\text{if }(i,j,t)\in\mathcal{N}\\ \frac{\phi_{i}K_{ij}^{t}\psi_{j}}{\mathrm{e}},&\text{if }(i,j,t)\in\mathcal{M}- \mathcal{N}\end{cases}\]
where \(\phi_{i}=\mathrm{e}^{-\frac{\alpha_{i}}{\mu}}\), \(\psi_{j}=\mathrm{e}^{-\frac{\beta_{j}}{\mu}}\) and \(K_{ij}^{t}=\mathrm{e}^{\frac{P_{ij}^{t}}{\mu}}\). The problem reduces to identifying \(|\tilde{I}|\) values \(\phi_{i}\) and \(|\tilde{J}|\) values \(\psi_{j}\), for a total of \(|\tilde{I}|+|\tilde{J}|\) constants, such that the supply and demand constraints in (A.6) and (A.7) are simultaneously met. DRM determines these constants via an alternating iteration scheme similar to the Sinkhorn algorithm, as described below.
For some large constant \(\zeta\), we can set \(D_{ij}^{t}=\zeta\) for \((i,j,t)\in\mathcal{M}-\mathcal{N}\) and without loss of generality assume \(\mathcal{N}=\mathcal{M}\). Setting \(\frac{\partial\mathcal{O}}{\partial\alpha_{i}}=\frac{\partial\mathcal{O}}{\partial\beta_{j}}=0\) for the objective in (B.1), we have
\[S_{i} =\sum_{jt}\left(D_{ij}^{t}-\frac{D_{ij}^{t}}{1+\phi_{i}K_{ij}^{t} \psi_{j}}\right)\] \[M_{j} =\sum_{it}\left(D_{ij}^{t}-\frac{D_{ij}^{t}}{1+\phi_{i}K_{ij}^{t} \psi_{j}}\right).\]
Consider an alternating minimization scheme where \(\{\psi_{j}^{m}\}\) are the values at the end of iteration \(m\). Then each \(\phi_{i}^{m+1}\) can be determined as the _unique zero point_ of its corresponding single-variable monotonic function \(g_{i}(\phi_{i})\) given by:
\[g_{i}(\phi_{i})=S_{i}-\sum_{jt}D_{ij}^{t}+\sum_{jt}\frac{D_{ij}^{t}}{1+\phi_{i}K_ {ij}^{t}\psi_{j}^{m}}.\] (B.3)
Observe that \(g_{i}(0)>0\) and \(g_{i}(\infty)<0\) and \(\phi_{i}^{m+1}\) is the point where \(g_{i}\left(\phi_{i}^{m+1}\right)=0\). Similarly, given \(\{\phi_{i}^{m}\}\), each \(\psi_{j}^{m+1}\) is obtained as the unique zero point of its corresponding single-variable monotonic function \(h_{j}(\psi_{j})\) given by:
\[h_{j}(\psi_{j})=M_{j}-\sum_{it}D_{ij}^{t}+\sum_{it}\frac{D_{ij}^{t}}{1+\phi_{i} ^{m}K_{ij}^{t}\psi_{j}}.\] (B.4)
The zero points can be computed using any one-variable root-finding algorithm, such as Newton's method, bisection, ITP (interpolate, truncate, and project), Dekker's, or Brent's (Press et al., 2007), and the alternating scheme is proven to converge (Wu et al., 2022).
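As an illustration of this scheme, the sketch below implements the alternating updates (B.3)-(B.4), assuming dense NumPy arrays for the profits \(P_{ij}^{t}\), finite bounds \(D_{ij}^{t}\) (i.e., \(\mathcal{N}=\mathcal{M}\)), supplies \(S_{i}\) and store needs \(M_{j}\); the use of `scipy.optimize.brentq` with a doubling bracket search is an illustrative choice rather than the reference DRM implementation.

```
import numpy as np
from scipy.optimize import brentq


def drm_alternating(P, D, S, M, mu=1.0, iters=50):
    """Alternating Sinkhorn-like scheme solving (B.3)-(B.4).

    P, D: arrays of shape (I, J, T); S: shape (I,); M: shape (J,).
    """
    K = np.exp(P / mu)                      # K_ij^t = exp(P_ij^t / mu)
    I, J, T = D.shape
    phi, psi = np.ones(I), np.ones(J)

    def unique_zero(fn):
        # fn(0) > 0 and fn(x) becomes negative for large x, so expand the bracket
        hi = 1.0
        while fn(hi) > 0:
            hi *= 2.0
        return brentq(fn, 0.0, hi)

    for _ in range(iters):
        for i in range(I):                  # update phi_i as the zero of g_i in (B.3)
            g = lambda x: S[i] - D[i].sum() + (D[i] / (1.0 + x * K[i] * psi[:, None])).sum()
            phi[i] = unique_zero(g)
        for j in range(J):                  # update psi_j as the zero of h_j in (B.4)
            h = lambda y: M[j] - D[:, j].sum() + (D[:, j] / (1.0 + phi[:, None] * K[:, j] * y)).sum()
            psi[j] = unique_zero(h)

    # closed-form allocation for entries with finite bounds
    d = D * (1.0 - 1.0 / (1.0 + phi[:, None, None] * K * psi[None, :, None]))
    return d, phi, psi
```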
## Appendix C Submodularity of the Item Allocation Linear Program
**Definition C.1** (Incremental Gain).: _For a set function \(f(\cdot)\), a subset \(\mathcal{A}\subseteq\mathcal{V}\) and an element \(i\in\mathcal{V}\), the incremental gain is defined as:_
\[f_{\mathcal{A}}\left(i\right)=f\left(\mathcal{A}\cup\left\{i\right\}\right)\;- f\left(\mathcal{A}\right)\] (C.1)
**Definition C.2** (Modularity, Submodularity, Non-negativity).: _Consider any two sets \(\mathcal{A}\subseteq\mathcal{B}\subseteq\mathcal{V}\). A set function \(f(\cdot)\) is submodular iff for any \(i\notin\mathcal{B}\), \(f_{\mathcal{A}}(i)\geq f_{\mathcal{B}}(i)\), non-negative when \(f(\mathcal{A})\geq 0\) and modular when \(f_{\mathcal{A}}(i)\) is independent of \(\mathcal{A}\)._
Recall our representation \(\mathcal{X}=\left\{x_{1},x_{2},\ldots,x_{|\mathcal{J}|}\right\}\) and the slight abuse of notation \(\mathcal{Z}=\mathcal{X}\uplus\left\{k\right\}\) as the set of store-trailer mappings with \(z_{k}=x_{k}+1\) and \(z_{j}=x_{j}\) for \(j\neq k\). As \(h_{\mathcal{X}}(k)=p_{k}\), it is a modular function.
We now study the submodularity of the item allocation linear program \(g(\mathcal{X})\) defined in (4.1). Consider two sets \(\mathcal{X}\) and \(\mathcal{Y}=\left\{y_{1},y_{2},\ldots,y_{|\mathcal{J}|}\right\}\) where \(\mathcal{X}\subseteq\mathcal{Y}\) in the sense that \(x_{j}\leq y_{j},\forall j\). To prove the submodular property, we need to show that the diminishing-returns property
\[g(\mathcal{X}\uplus\left\{k\right\})-g(\mathcal{X})\geq g(\mathcal{Y}\uplus \left\{k\right\})-g(\mathcal{Y})\] (C.2)
holds after the addition of a new trailer to any store \(k\), given the current store-trailer assignments. Under a much more restricted setting of \(R_{j}=1\), \(|\mathcal{T}_{ij}|=1\), uniform item inventory of \(S_{i}=\kappa,\forall i\), no constraints on labour \(H_{l}\) or trailer min-capacity \(m\), and in particular no point-wise capacity constraint \(D^{t}_{ij}\) limiting the allocation between each pair of item and store, the LP instance in (4.1) reduces to a _relaxed assignment problem_, which is proven to be submodular in (Kulik et al., 2019).
For our generalized formulation, we do not yet have a formal mathematical proof of this property. It appears to be an arduous task primarily due to the individual capacity constraints \(D^{t}_{ij}\). To see this, consider the interesting case where the incremental gain \(g_{\mathcal{Y}}(k)=g(\mathcal{Y}\uplus\left\{k\right\})-g(\mathcal{Y})\) is positive. As discussed before, a positive IG for our parameter setting of high \(\gamma\) would imply that at the solution \(\left(\mathcal{D}^{*}_{\mathcal{Y}\uplus\left\{k\right\}},\mathcal{B}^{*}_{ \mathcal{Y}\uplus\left\{k\right\}}\right)\) corresponding to the set \(\mathcal{Y}\uplus\left\{k\right\}\) for the LP instance in (4.1), all trailers are loaded to their min-capacities, and the allocations \(s^{\mathcal{Y}\uplus\left\{k\right\}}_{ij}\) satisfy (A.1). The min-capacity slack variables \(b^{\mathcal{Y}\uplus\left\{k\right\}}_{j}=0,\forall j\in\mathcal{J}\). If the inventory allocation plan for a super-set \(\mathcal{Y}\) does not involve min-capacity breaches, then the allocation plan for any subset \(\mathcal{X}\subseteq\mathcal{Y}\) should also respect the min-capacity constraint with \(b^{\mathcal{X}}_{j}=0,\forall j\).
Consider the equivalent COT instance stated in Sec. A.4. Then the solution for each of the four store-trailer assignment sets \(\mathcal{X}\), \(\mathcal{X}\uplus\left\{k\right\}\), \(\mathcal{Y}\), and \(\mathcal{Y}\uplus\left\{k\right\}\) will have the structure that \(s_{be}=M\), and \(s_{bj}=0,\forall j\in\mathcal{J}\), with the inventory of pseudo item \(b\) allocated to the pseudo store \(e\) in entirety as detailed in Sec. A.5. It becomes redundant to add the pseudo inventory \(b\) and its corresponding pseudo quantities in the transformation to the COT instance. Then the profit matrix with entries \(P^{t}_{ij}\) in (A.4) will all be non-negative. Further, the total item inventory of \(K=\sum_{i\in\mathcal{I}}S_{i}\) is a constant, independent of the store-trailer mapping set \(\mathcal{X}\). It is only the maximum store allocation values \(M_{j}\) and the pseudo store need \(M_{e}\) that vary with the COT instances. Our problem structure gradually begins to take the shape of the assignment problem studied in (Kulik et al., 2019). It is the presence of the capacity constraints \(d^{t}_{ij}\leq D^{t}_{ij}\) that makes the COT instance fundamentally different, engendering complexity to the proof of submodularity.
However, based on our experimental validations, we firmly believe that \(g(\mathcal{X})\) is submodular. To this end, we created \(760\) pairs of store-trailer mappings \(\mathcal{X}\subset\mathcal{Y}\) such that the LP instance in (4.1) always had a feasible solution for \(\mathcal{Y}\) (and hence for \(\mathcal{X}\)). For each pair \((\mathcal{X},\mathcal{Y})\), we considered \(10\) different choices of candidate stores \(k\) to assign the next trailer while ensuring that \(\mathcal{Y}\uplus\left\{k\right\}\) had a feasible solution. For each triplet \((\mathcal{X},\mathcal{Y},k)\), we computed the incremental gains \(g_{\mathcal{X}}(k)\) and \(g_{\mathcal{Y}}(k)\) and generated a distribution plot of the difference \(g_{\mathcal{X}}(k)-g_{\mathcal{Y}}(k)\) consisting of \(7600\) data-points. The normalised plot is shown in Fig. 3, where the differences are divided by the largest observed value of \(g_{\mathcal{X}}(k)-g_{\mathcal{Y}}(k)\), so that the latter is set to 1. _We observe that every difference is non-negative_, thus empirically corroborating the diminishing-returns property (C.2) of the function \(g(\cdot)\).
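A compact sketch of this empirical check is given below; `lp_value` stands for a hypothetical oracle that solves the item allocation LP (4.1) for a given store-trailer assignment vector and returns its optimal value, so the snippet only illustrates the bookkeeping around the diminishing-returns test (C.2).

```
import numpy as np


def incremental_gain(lp_value, x, k):
    """g_X(k) = g(X with one more trailer at store k) - g(X)."""
    x_plus = x.copy()
    x_plus[k] += 1
    return lp_value(x_plus) - lp_value(x)


def diminishing_returns_gaps(lp_value, x, y, candidates):
    """Differences g_X(k) - g_Y(k) for X contained in Y (element-wise x <= y).

    If g is submodular, every returned value is non-negative."""
    assert np.all(x <= y)
    return np.array([incremental_gain(lp_value, x, k) - incremental_gain(lp_value, y, k)
                     for k in candidates])
```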
## Appendix D Pseudo-code of the Incremental Trailer Assignment Method
Below we present the pseudo-code of the batched iterative algorithm to determine the best store for assigning the next trailer. To keep it simple, we have not incorporated the _laziness_ advantage of greedy algorithms (Minoux, 1978; Leskovec et al., 2007) in our description. It can be included by using a max-heap data structure and further constraining the stores randomly selected in step 3 of Algorithm 2.
Figure 3: Distribution of normalised \(g_{\mathcal{X}}(k)-g_{\mathcal{Y}}(k)\)
```
1: Set \(x_{j}=0,\ \forall j\)
2: Choose \(\rho\in(0,1]\)  \(\triangleright\) Store selection probability
3: for \(p\) in decreasing store priority do
4:     \(\mathcal{X}=\{x_{1},x_{2},\ldots,x_{|\mathcal{J}|}\}\)
5:     \(\mathcal{P}_{p}\leftarrow\{j\in\mathcal{J}:p_{j}=p\}\)
6:     while \(\mathcal{P}_{p}\neq\emptyset\) do
7:         \((k,\mathcal{F})=\) NextBestStore(\(\mathcal{X},\mathcal{P}_{p},\rho\))  \(\triangleright\) Determine the best store to assign the next trailer
8:         \(\mathcal{P}_{p}\leftarrow\mathcal{P}_{p}\setminus\mathcal{F}\)  \(\triangleright\) Remove stores with negative or zero IG
9:         if \(k\neq\varnothing\) then
10:            \(x_{k}\gets x_{k}+1\)
11:            if \(x_{k}=R_{k}\) then  \(\triangleright\) Planned trailer constraint met
12:                \(\mathcal{P}_{p}\leftarrow\mathcal{P}_{p}\setminus\{k\}\)
13:            end if
14:        end if
15:    end while
16: end for
17: return \(\mathcal{X}\)
18: Output: \(\mathcal{X}\) - Final set of store-trailer assignments
```
**Algorithm 1** Batched stochastic greedy algorithm
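For concreteness, a direct Python transcription of Algorithm 1 could look as follows; `next_best_store` stands for the (not shown) Algorithm 2 and is assumed to return the store with the highest positive incremental gain together with the set of candidate stores whose gain was zero or negative, so that the pool empties once no store remains profitable.

```
def batched_stochastic_greedy(stores, priority, R, next_best_store, rho=0.5):
    """stores: iterable of store ids; priority: dict store -> priority level;
    R: dict store -> planned number of trailers; rho: store selection probability."""
    x = {j: 0 for j in stores}                              # current store-trailer assignment
    for p in sorted(set(priority.values()), reverse=True):  # decreasing store priority
        pool = {j for j in stores if priority[j] == p}
        while pool:
            k, failed = next_best_store(x, pool, rho)
            pool -= failed                                  # drop stores with zero/negative IG
            if k is not None:
                x[k] += 1
                if x[k] == R[k]:                            # planned trailer constraint met
                    pool.discard(k)
    return x
```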
|
2307.10967
|
ESASCF: Expertise Extraction, Generalization and Reply Framework for an
Optimized Automation of Network Security Compliance
|
The Cyber threats exposure has created worldwide pressure on organizations to
comply with cyber security standards and policies for protecting their digital
assets. Vulnerability assessment (VA) and Penetration Testing (PT) are widely
adopted Security Compliance (SC) methods to identify security gaps and
anticipate security breaches. In the computer networks context and despite the
use of autonomous tools and systems, security compliance remains highly
repetitive and resources consuming. In this paper, we proposed a novel method
to tackle the ever-growing problem of efficiency and effectiveness in network
infrastructures security auditing by formally introducing, designing, and
developing an Expert-System Automated Security Compliance Framework (ESASCF)
that enables industrial and open-source VA and PT tools and systems to extract,
process, store and re-use the expertise in a human-expert way to allow direct
application in similar scenarios or during the periodic re-testing. The
implemented model was then integrated within the ESASCF and tested on different
size networks and proved efficient in terms of time-efficiency and testing
effectiveness allowing ESASCF to take over autonomously the SC in Re-testing
and offloading Expert by automating repeated segments SC and thus enabling
Experts to prioritize important tasks in Ad-Hoc compliance tests. The obtained
results validate the performance enhancement notably by cutting the time
required for an expert to 50% in the context of typical corporate networks
first SC and 20% in re-testing, representing a significant cost-cutting. In
addition, the framework allows a long-term impact illustrated in the knowledge
extraction, generalization, and re-utilization, which enables better SC
confidence independent of the human expert skills, coverage, and wrong
decisions resulting in impactful false negatives.
|
Mohamed C. Ghanem, Thomas M. Chen, Mohamed A. Ferrag, Mohyi E. Kettouche
|
2023-07-20T15:51:23Z
|
http://arxiv.org/abs/2307.10967v1
|
ESASCF: Expertise Extraction, Generalization and Reply Framework for an Optimized Automation of Network Security Compliance
###### Abstract
The Cyber threats exposure has created worldwide pressure on organizations to comply with cyber security standards and policies for protecting their digital assets. Vulnerability assessment (VA) and Penetration Testing (PT) are widely adopted Security Compliance (SC) methods to identify security gaps and anticipate security breaches. In the computer networks context and despite the use of autonomous tools and systems, security compliance remains highly repetitive and resources consuming. In this paper, we proposed a novel method to tackle the ever-growing problem of efficiency and effectiveness in network infrastructures security auditing by formally introducing, designing, and developing an Expert-System Automated Security Compliance Framework (ESASCF) that enables industrial and open-source VA and PT tools and systems to extract, process, store and re-use the expertise in a human-expert way to allow direct application in similar scenarios or during the periodic re-testing. The implemented model was then integrated within the ESASCF and tested on different size networks and proved efficient in terms of time-efficiency and testing effectiveness allowing ESASCF to take over autonomously the SC in Re-testing and offloading Expert by automating repeated segments SC and thus enabling Experts to prioritize important tasks in Ad-Hoc compliance tests. The obtained results validate the performance enhancement notably by cutting the time required for an expert to 50% in the context of typical corporate networks first SC and 20% in re-testing, representing a significant cost-cutting. In addition, the framework allows a long-term impact illustrated in the knowledge extraction, generalization, and re-utilization, which enables better SC confidence independent of the human expert skills, coverage, and wrong decisions resulting in impactful false negatives.
+
Footnote †: Corresponding author: Mohamed C. Ghanem (e-mail: [email protected]).
## 1 Introduction
In the cyber era we are living in, our lives are becoming more and more accustomed to the presence of IT equipment, devices, and systems. This emerging technology is associated with objects that, through their connection to the internet and data transmission, make everyone's life more comfortable. Nonetheless, this comfort comes at a cost, as IT networks are increasingly large, complex, and interconnected to ensure a wide range of tasks for the benefit of users and organizations [9] & [20]. In parallel to this evolution in networking, cyber threats are becoming more frequent, complex, and sophisticated, creating more opportunities for cybercriminals to launch malicious attacks in the hopes of gaining access to sensitive data for their own gain [1]. This flexibility comes at a huge cost, as cyber-security practitioners, experts and researchers have noticed that cyber threats are becoming more
frequent, complex, and sophisticated, following the general rule of attack-surface evolution [10]. Protecting complex networks and critical assets from cyber threats has pushed network security professionals into the trap of bolting on more and more security layers and policies [6]. The resulting Defense-In-Depth approach is complex: the multiple security levels it adds are often vulnerable when faced with a high-calibre attacker because of the weaknesses they contain due to human errors, misconfigurations, and system flaws. Thus, ensuring that the applied security measures are effective is a major concern of the cyber-security community, and several approaches have been proposed and adopted over time. The offensive approach has proven to be the best and most reliable method and the one most favourably adopted by security experts [34]. At its core, cybersecurity compliance is a well-established security auditing method that aims to ensure adherence to standards, regulatory requirements, and laws [29]. Since the introduction of GDPR and related legislation across the world, organizations are legally required to achieve compliance by establishing risk-based controls that protect the confidentiality, integrity, and availability (CIA) of their digital assets (computers, networks, web applications, servers, etc.) by identifying vulnerabilities and measuring the associated risk [1]. In this paper, we are concerned with making Security Compliance and Penetration Testing more efficient by enabling industrial tools and systems to observe, capture and replay human expertise in future cases, relying on a novel representation of the practice and the use of a rule-based Expert System.
### _Background on Security Compliance_
Security Compliance constitutes a central and mandatory component of the cyber-security audit and embeds all standard auditing and testing tasks starting from information gathering, analysis, planning, and testing the appropriate attacks targeting the identified vulnerabilities. Such assessments are considered the most effective method to identify exactly how effective the existing security controls are against a skilled adversary and validate the efficacy of defensive mechanisms, as well as end-user adherence to security policies [20].
ISO/IEC 27001 is a neutral and worldwide approved standard for information security management systems (ISMS); along with PCI-DSS (Payment Card Industry Data Security Standard) in the financial sector and HIPAA (Health Insurance Portability and Accountability Act) in the healthcare sector, they constitute a cornerstone of security compliance standardization. In fact, Security Compliance is formalized through these three industry standards, namely ISO-27001, PCI-DSS, and HIPAA, and is designed to be a comprehensive, multi-phase practice carried out by experts, usually involving versatile tools, systems, and frameworks to accomplish different tasks. For instance, the information gathering phase typically involves utilizing tools such as traffic monitoring, port scanning, and OS fingerprinting in order to gather relevant information that can be used to map the target system's defences and therefore determine if it contains a vulnerability that can be exploited [19]. On the other hand, the exploitation phase (if required) employs a set of frameworks, add-on modules, and scripts in order to customize and execute the selected exploits, which can vary from pieces of code to data payloads, with the ultimate aim of taking advantage of the discovered vulnerability and causing unintended behaviour in the system or compromising the target to gain additional privileged access. In addition, once an exploit execution is successful, post-exploitation tools and frameworks are heavily utilized in order to maintain the breach and work toward further penetration [10]. Finally, SC also involves versatile testing scenarios and contexts with tested assets that differ immensely; in each case the same general phases are followed but the executed tasks differ significantly [10] & [20]. VA and PT are methodological approaches which involve an active extraction, analysis, and exploitation of the assessed assets and their potential vulnerabilities [33]. Being the industry's standard Security Compliance methods, PT and subsequently VA rely on a set of classic tools that automate repetitive and complex tasks [27]. PT tests are often initiated and carried out from the position of a potential attacker and involve active exploitation of security vulnerabilities. Real-time exploration and decision-making as the practice evolves are key [28]. The human expert's knowledge, decision-making, and reasoning are a cornerstone of PT and VA [11]. Currently, PT and VA tools and systems are developed with the aim of making the practice efficient and allowing regular and systematic testing without a prohibitive amount of human labour, along with reducing the precious consumed time and network downtime [19]. Additionally, they are designed to offload human experts from heavy tasks and help them focus on more special and complex situations such as unusual vulnerabilities or non-obvious combinations (application flaws, improper configurations, risky end
Figure 1: Penetration Testing and Vulnerability Assessment are standard methods for assessing network defence and achieving security compliance by following sequential and interactive multi-phase procedures starting by gathering information and ending by reporting the obtained results.
user behaviours) which require particular attention in order to produce the best results [22]. Additionally, the wide variety of assets and vectors such as servers, endpoints, web applications, wireless networks, network devices, mobile devices and other potential points of exposure are also playing against the pen-tester breaking through the network firewall and evolving beyond by pivoting across networks machines, systems and applications and attempting to find a new path of attack or revealing how chains of exploitable vulnerabilities to progress further within the target network critical systems and data [28]. Figure 3 illustrates the versatility of security compliance practice.
### _Research Motivation_
This research is rooted in a real-world problem that experts and technicians working in the offensive cyber security field are continuously facing. In fact, the need for PT is increasing, making it a central and mandatory component of cyber-security audit and compliance with different standards and regulations worldwide [27] & [7]. This research seeks to propose a scientific solution to a real-world problem by investigating practice automation, electing the most adequate AI approach, and proposing a versatile framework which produces an intelligent and optimized penetration test in a network context while remaining autonomous and self-learning [25]. The VA and PT practices have significantly evolved to keep pace with cyber adversaries, and this led to the appearance of dozens of commercial and professional systems and frameworks which all aim to offer automation of the different activities, tasks and sub-tasks [10] & [7]. Nonetheless, the existing automation remains either local (specific to one activity such as vulnerability scanning) or not optimized (blindly covering all cases, including irrelevant ones). For these reasons, current VA and PT systems such as Metasploit and Nessus are used as tools fully controlled by the expert, only executing tasks launched by the human according to his/her decisions, which often lack prioritization and optimization. The expert uses the output to analyze, plan and request the execution of the required tasks, and those systems only execute the expert's instructions [8]. Furthermore, the Security Compliance practice's repetitive nature is becoming problematic, especially during periodic or Ad-Hoc compliance where most of the workload remains unchanged, and this problem worsens in large IT assets [15]. All the reasons enumerated in this section triggered this research, and the expert system choice is backed by the lack of knowledge extraction, re-usability and improvement in manual Security Compliance, which is the main reason behind the poor efficiency of expert VA and PT [2].
### _Research Challenges_
All organizations across the world are witnessing an increase in connectivity and online resources, exposing a higher number of machines online and thus creating a larger attack surface [11], with attacks that can range in scale from massive state attacks to simple attacks on individuals and SMEs in the hopes of gaining credentials or financial details [22] & [3]. In addition, other issues arise with the use of such automated systems, in combination with issues raised on the manual approach, notably:
1. The high cost of regular and Ad-Hoc security audits in terms of human resources and cost, consumed time and the impact on the IT assets' performances and systems downtime during working hours.
2. The high volume in terms of data produced by comprehensive non-targeted testing is often wasted and unexploited properly.
3. The nature of the PT environment where the high threats' emergence and fast-changing rate along with assets continuous security protection evolution and update which
4. The evolving attacks complexity with more evasive threats launched in which hackers adopt complex and indirect attack routes, techniques, technologies, this results in unlikely paths being used to squeeze through the security layers which is difficult to be imitated during PT and VA.
5. The huge amount of repeatability as most of the performed activities and tasks are repeated with hardly any change and this is representing a significant part of testers' time, often repeating does not require PT human expert decision-making or manual intervention which results in decreasing the performances.
6. The common high degree of obfuscation in large infrastructures notably in the corporate and financial sectors where organizations tend to use in-house developed security systems making the coverage of the whole assets challenging.
## II Methodology
This section provides an outline of the research methodology followed and the approaches chosen in our journey towards an ES-led security compliance framework. This research started by reviewing the state of the art in the domain of VA and PT automation and optimization, identifying key elements of the current practice requiring improvement [20]. This survey and critical evaluation of existing methods led us to consider the suitability of many AI techniques, settle on a rule-based Expert System, and then proceed with designing, developing, testing and evaluating the proposed ESASCF. In summary, the proposed methodology is expected to address in a scientific manner the real-world problem of efficiency and effectiveness related to current VA and PT automation. The research methodology's steps are summarized as follows:
* Grasping the VA and PT domains and components and understanding the interaction between the different entities and the human expert.
* Reviewing the state of the art of current methods of VA and PT automation at the different phases of the practice, such as information gathering, discovery, vulnerability assessment and exploitation, to fully digest and analyze the functioning mechanisms of each and the reasons why they fail to meet PT expectations in terms of efficiency and accuracy.
* Studying the cyber security auditor and experts' (eg. Certified Ethical Hackers) methods, operations and approaches when performing security compliance tests. This includes a detailed understanding of activities, tasks and sub-tasks that experts perform from the initial reconnaissance and data gathering to the exploiting and post-exploitation tasks.
* Investigating the suitability of rule-based reasoning and how the Expert System can reduce or even replace human intervention in the sequential decision process in VA and PT and which approach is more suitable and likely to produce results.
* Producing an initial Expert System using CLIPS which capture, process, generalize and reuse expertise from human-led network PT and VA activities, the developed ES is then integrated as a separate module within ESASCF.
* Testing the proposed solution and evaluating its contribution in terms of efficiency and accuracy in real-world large security compliance cases and subsequently introducing the appropriate changes in due course.
This adopted methodology aims to achieve the research's final output which is a novel ES-led security compliance framework ESASCF that will offload the human expert in performing Security Compliance and covering the entire spectrum of activities, tasks and sub-tasks.
## III Expert System for Security Compliance
### _Expert Systems Overview_
An Expert System is a rule-based decision-tree program that utilizes Artificial Intelligence technologies to simulate the judgment and behaviour of a human or an organization that has expertise and experience in a particular field [4]. Expert systems are usually intended to complement, not completely replace, human experts [26] and [11]. Expert Systems are intended to model human expertise or knowledge either by receiving it (at implementation) or by capturing it directly from human experts, while being aware of the environmental parameters under which the corresponding decisions were taken [2].
In practice, this is done typically in three different ways:
1. Rules: which is mainly intended for capturing and modelling human expert decision-making in the form of a state-action format which reflects knowledge representation based on experience.
2. Functions: defined and generic functions which are primarily intended for procedural knowledge.
3. Object-oriented: which is a programming oriented mainly intended for procedural knowledge with accepted features including classes, message-handlers, abstraction, encapsulation, and inheritance.
The C Language Integrated Production System (shortly annotated as CLIPS) is an expert system-building tool, a simple and complete environment for the development and implementation of rule-based expert systems [17]. CLIPS is particularly efficient and is designed to provide a low-cost option for deploying expert system applications across
Figure 3: Expert System Functioning diagram in the context of human
Figure 2: The versatility in penetration testing and vulnerability assessment in terms of tasks, methods and domains of practice.
resource-constrained hardware platforms. Following its first release, CLIPS has undergone several upgrades and improvements to become one of the most attractive rule-based expert systems in applied research works. CLIPS's main strength is its ability to facilitate software development to model human knowledge or expertise [25].
### _Security Compliance Expertise Modeling and Representation_
In this subsection, we will detail the method used in our research to model SC activities, tasks and sub-tasks as processes. We will also detail the representation of this expertise in the form of rule-based ES inspired by a deep understanding of the human technical expertise and knowledge role in the VA and PT practice. This enabled us to implement these activities, tasks, and sub-task in a CLIPS expert system. The activities in VA and PT are divided into a sequence of tasks in order to methodically and comprehensively identify existing vulnerabilities and perform a set of tasks to assess and test if the target is vulnerable or could be compromised by running exploits against identified vulnerabilities.
In our quest to design the CLIPS Expert System, we followed a rigorous examination of the security compliance activities, tasks, and sub-tasks. In fact, at this stage, we attempted to grasp the domain fully. We noticed that VA and PT experts adopt a multi-phase operating mode which includes reconnaissance, vulnerability scanning, identification, validation, and optionally exploitation for all computers, equipment, networking, and security devices constituting the assessed network [27]. As a result, we concatenated previous research output and elaborated a novel universal workflow that accounts for and represents all activities, tasks, and sub-tasks in network security compliance as shown in Figure 5.
We introduce here a novel algorithm that constitutes the main component of ESASCF and covers expertise identification, extraction and validation based on predefined criteria. In practice, this algorithm is virtually separated into two tasks: extracting the expertise in the form of attack vectors, and then evaluating this expertise against similar past expertise, validating it only if it exceeds the past expertise in terms of the likelihood of being the optimal decision flow, as explained in figure 10. We define the following notions:
* S is the network states space including topology, machines configuration, and running services details.
* A is the possible actionable tasks and sub-tasks that the SC expert can perform.
* E and V are respectively the list of possible exploits and vulnerabilities that apply to the network context imported and processed from the CVE database.
* C is the possible states of compromised machines within the network.
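Purely as an illustration (not the authors' implementation), these notions can be thought of as simple typed containers, e.g.:

```
from dataclasses import dataclass, field


@dataclass
class NetworkState:                 # element of the state space S
    topology: dict                  # links between machines
    machine_configs: dict           # machine id -> OS, services, open ports
    compromised: set = field(default_factory=set)   # subset of C


@dataclass
class Action:                       # element of A: an actionable task or sub-task
    task: str                       # e.g. "port-scan", "exploit", "pivot"
    target: str
    parameters: dict = field(default_factory=dict)


@dataclass
class Vulnerability:                # element of V, typically a CVE entry
    cve_id: str
    service: str
    exploits: list = field(default_factory=list)     # applicable elements of E
```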
### Rule-based Expert System for Security Compliance
We detail here the method adopted to define expertise from the PT and VA perspectives. The proposed rule-based expert system takes knowledge from a human Certified Ethical Hacker and converts it into a set of hard-coded rules to be applied in future tests, which will ultimately result in fully autonomous PT systems that rely on a well-defined Expert System to emulate the decision-making ability of a human expert. The proposed ES will be developed in a modular way to enable future integration with previously developed modules to form a PoC ES-led Automated Security Compliance Framework (ESASCF). In order to put the ES into practice, the definition of SC expertise is the cornerstone of the process. We opted for the most realistic method of defining expertise, mimicking the human experts and respecting the PT and VA workflow as illustrated in figure 6.
The proposed rule-based expert system is written in CLIPS which is a data-driven program where the facts, and objects if
Figure 4: Proposed representation of Cyber Security Compliance in the form of activities, tasks and sub-tasks
desired, are the data that stimulate execution via the inference engine [25]. In CLIPS ES, rules are defined using the def-rule construct and are composed of an antecedent and a consequent. The antecedent of a rule is a set of conditions or conditional elements which must be satisfied for the rule to be applicable [12]. We opted for CLIPS as an efficient approach to implementing our proposed ES as it provides the basic elements of an expert system. The first component of our ES is the domain knowledge composed of fact-list and instance-list which represent the main memory pool for data to be used, The domain knowledge is knowledge about the machine configuration such as Operating System, Running Services, Open Ports, Security defence and Storage nature [26].
The second component is the knowledge base, which contains all the rules captured, validated and generalized from monitoring human CEH activities and written following the defined rule-base format [18]. The third and last component is the inference engine, which is in charge of controlling the overall execution of rules and communicating with the VA or PT tool, respectively Nessus and Metasploit. The inference engine decides which rules should be executed and then launches the execution. In terms of programming, our ES program written in CLIPS consists of rules, facts, and objects [24] and [16]. Finally, we opted to represent knowledge and expertise directly captured from human CEHs in our CLIPS ES through the use of simple or multiple IF-THEN rules; this approach is widely adopted in cyber security in general as it mirrors the real-world situation where the human expert acts (performs tasks or sub-tasks) when a set of conditions are
Figure 5: ES Security Compliance Expertise Extraction form of Vectors Algorithm.
Figure 6: Modeling SC activities in the form of attack vectors covering each the full assessed machines including reconnaissance, probing, exploiting and privileges escalations
met, as illustrated in figure 6. The first step in implementing the learning process in the form of a decision tree in CLIPS was to decide which knowledge should be represented and how. Since the CLIPS rules' tree should learn, the tree is also represented as facts and not as rules, to make editing and changing the tree easier [13]. In addition, we opted to use implemented CLIPS rules to traverse the decision tree by implementing the solve-tree-and-learn algorithm following a rule-based approach. Finally, we utilized the built-in CLIPS pattern matching on facts and objects, which can be called from a procedural language, perform its function, and then return control back to the calling program. Therefore, procedural code can be defined as external functions and called from CLIPS, and when the external code completes execution, control returns to CLIPS [14][24].
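To make the IF-THEN representation concrete, the toy example below (not taken from the actual ESASCF rule base) shows how a captured decision, "if a host exposes HTTP on port 80, the next task is a web vulnerability scan", could be expressed and fired from Python, assuming the clipspy CFFI binding mentioned above:

```
import clips

env = clips.Environment()
env.build('(deftemplate host (slot ip) (slot port) (slot service))')
env.build("""
(defrule probe-web-service
  (host (ip ?ip) (port 80) (service "http"))
  =>
  (printout t "next task: web vulnerability scan on " ?ip crlf))
""")

# Assert a fact describing an observed asset, then let the inference engine fire rules.
env.assert_string('(host (ip "10.0.0.5") (port 80) (service "http"))')
env.run()
```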
## IV Proposed ESASCF Framework
In this section, we will detail the design and implementation of the proposed ESASCF with a special emphasis on the integration of the CLIPS expert system module alongside the processing module. We also detail the construction of virtual test-bed networks out of data collected from real-world corporate networks. This research will produce a proof-of-concept (PoC) framework along with its practical implementation, which will assist the human expert in performing security compliance in an efficient and effective manner.
In practice, security compliance activities vary from case to case but generally start with the information-gathering phase, where the expert explores the web using OSINT (open-source intelligence) tools and techniques to gather information about the target system. The latter was implemented in an independent data gathering, processing and structuring module during our past research work, which we reuse directly as part of ESASCF [20]. We developed several scripts in C integrated with CLIPS through a bidirectional Python-to-C Foreign Function Interface (CFFI) that facilitates the translation of CLIPS capabilities within the Python ecosystem [23]. These scripts are used to capture certified human experts' (Certified Ethical Hackers or Certified Information Systems Security Professionals) decisions, along with the asset parameters that led the human expert to make such decisions. For legal and ethical purposes, we also enabled the human expert to assess and control the ESASCF autonomous functioning in order to validate or reject the made decisions. Figure 7 shows the proposed rule-based ES functioning in terms of capturing, processing, validating, generalizing and storing expertise for future usage.
### ESASCF Architecture
In ESASCF, we opted for a modular framework that covers the security compliance activities through all VA and PT tasks and sub-tasks. The choice is justified by the nature of VA and PT activities. Figure 5 illustrates the proposed ES-led Automated Security Compliance Framework (ESASCF), including the pre-processing module, the rule-based Expert System and the VA/PT core. The system consists of the VA module, the RBES and Memory module, as well as the proactive testing and auditing systems module incorporating the Interface, Metasploit and Nessus. These modules are represented in Figure 10.
The framework development started by building the first module based on the existing ESASCF which is our previous research work output [20]. The vulnerability assessment module uses input data from information gathering, discovery and vulnerability assessment phases to represent it as POMDP environments. The second core component of the framework is the Expert System and framework memory.
Figure 7: Expertise Construction, Evaluation and Generalization process
In this module, we opted to represent knowledge in CLIPS through the use of simple or multiple IF-THEN rules, which is widely used in Expert Systems and security programs in general; this approach mirrors the real-world situation where the human expert acts (performs tasks or sub-tasks) when a set of conditions is met. Vulnerability assessment data is collated with all data acquired and formatted during the pre-processing and feature extraction functions, which work together as independent scripts. The ES interacts directly with the ESASCF memory, which serves as the main memory for the framework and for the expert system in charge of expertise capturing, generalization, storing and replaying. In ESASCF, Metasploit and Nessus are considered as a whole module of ESASCF and consist of interfaces, libraries, MSF modules, tools and plugins, which will all be controlled by ESASCF through Python scripts relying on CLIPS. Finally, it is worth mentioning that in our proposed rule-based expert system we opted for a graphical user interface (GUI), and we implemented a simple exchange and display mechanism between the expert system (ES), Metasploit (MSF) and the human expert using Python scripts and temporary text files.
## V Testing, Results and Discussion
### _Setup of Experiments_
The experiments are run on an HP Z2 tower with an Intel Xeon E7-4809v3 CPU (8 cores, 20MB cache, 2.00GHz), 64GB of unbuffered DDR4 memory, and an NVIDIA Quadro P4000 8GB graphics card. This machine runs Linux Calculate 20 (kernel 5.4.6), a fast and resource-efficient Linux distribution based on Gentoo that maintains an optimal balance between state-of-the-art processing libraries and renowned stability. The rule-based Expert System is developed in CLIPS 6.40 with the help of its Python CFFI binding, which enables us to translate CLIPS capabilities within the Python ecosystem. Furthermore, we implemented all of our memory and data handlers in Python.
### _Research Data Input_
This section aims to describe the method used in our research to collect data from real LANs and recreate equivalent virtual networks to be then used to test and validate the ESASCF framework. The starting point which serves as input for this research is 53 different size virtual LANs which were recreated out of data imported from real financial institution
Figure 8: An example of Expert System rules definition on CLIPS covering PT and VA tasks
networks. The collected data include networking, functioning and security data, which were used to recreate the virtual equivalent of these networks on a VirtualBox platform. Computer machines and servers were included in the virtual networks by directly downloading virtual equivalents from the specialized open-source website 'vulnhub.com', which serves as a repository and provides materials that allow ethical hackers to experience digital security, computer software and network administration using virtual appliances. Security mechanisms including firewalls, routers and intrusion detection systems were also imported along with the associated configurations (implemented security policy) and included in the virtual networks by considering them as machines and forcing the traffic to transit through them in a specific way to reflect real-world scenarios; this approach was unavoidable as the virtual environment is restricted in terms of networking. To sum up, we constructed 53 different networks with sizes varying from 2 to 250 machines, categorized as follows: 2-50 small LANs, 55-100 medium LANs and 105-250 large LANs. Even though our research focuses on medium and large networks, we were obliged to start from a small LAN to test the framework. Finally, it is worth mentioning that the 250-machine limitation is purely for operational purposes and larger LANs can also be accommodated with adequate hardware. Figure 10 shows an example large LAN.
Figure 11: Proposed ESASCF framework overall architecture.
Figure 10: Metasplot interaction with ESASCF framework
### Evaluation and Optimization Criteria
Currently, security auditing and compliance efficiency, including that of PT and VA, is measured following several quantitative and qualitative metrics which are widely adopted and standardized as performance measurement criteria. Nonetheless, the operational cost and the reliability of the results remain the most relevant ones. In terms of relevance and accuracy, we elaborated a hierarchical function that calculates the value of the extracted expertise and its relevance alongside the extraction process outlined in Figure 8. To tackle the cost aspect,
we assume that security testing and auditing tools and system licensing constitute 1/10 of the total cost [10]. The remaining cost is allocated to pay the human experts conducting compliance assessment and testing activities [22] & [2]. Therefore, we simplified the efficiency evaluation metric to only account for the average running time (which is reflected in cost, as experts are often paid hourly). The second metric is compliance coverage, measured by the number of performed assessments and tests, which in our research is measured by the number of covered machines, including low-risk machines often neglected by human experts and which ESASCF covers fully.
## 6 Experimental Results and Discussion
ESASCF testing was carried out in two stages: first, we tested the framework efficiency in different security compliance situations where ESASCF observes and captures expertise from a human CEH performing initial VA and PT using Nessus and Metasploit respectively; then ESASCF is used to repeat the security compliance after a few changes were introduced.
### Obtained Results
Figures 13 and 14 illustrate the huge contribution of ESASCF in compliance scenarios where VA and PT are repeated periodically or after introducing a few changes (e.g. 25%). The impact in terms of time is less significant in VA, as the assessment practice is more deterministic and more automated. Nonetheless, the re-testing efficiency enhancement is far more important, with the running time of the practice representing, in large LANs, a fifth (1/5) of the time normally required for testing when only 25% or less of configuration change has been introduced to the LANs, which in fact represents the most likely real-world situation in IT.
Finally, we compared the ESASCF performances with full blind automation and human expert (CEH) performances in terms of retesting the same LANs after introducing the 25% changes, and Figure 15 illustrates the obtained results.
From the obtained results, we confirmed that ESASCF outperforms the human expert as well as the blind automation, which validates the contribution of ES-led security compliance. The unanimous results reflect the contribution of expertise capturing and reuse in cyber security compliance. In addition to the quantitative results brought by ESASCF to the
Figure 14: ESASCF performances in network Penetration Re-Testing using Metasploit on different size LANs
Figure 12: Example of a large LAN network used as test-bed for this research
Figure 13: ESASCF performances in network Vulnerability Re-Assessing using Nessus on different size LANs
security compliance practice, and specifically Vulnerability Assessment (VA) and Penetration Testing (PT), the proposed ES-led solution produces a compliance quality similar to that of highly qualified and certified human experts. Figure 16 illustrates the qualitative impact of ESASCF on the security compliance practice, notably by enabling high-quality expertise extraction and reuse. From the obtained results, it is clearly highlighted that ESASCF security testing coverage outperforms any human expert, with an attack coverage that is far larger and more precise in the sense that only the relevant scenarios were covered; in the large network this included running 15 exploits and 6 post-exploitation payloads, and resulted in compromising five high-value target computers or servers, as illustrated in figure 16 where each coloured line represents an extracted and validated attack vector.
## VII Conclusion and Future Works
This paper investigated the enhancement of Security Compliance performance through the use of a rule-based expert system within industrial VA and PT tools and systems; this enables industrial systems to acquire, generalize and re-use the expertise learned from human experts and to prioritize its use in future relevant scenarios, notably similar cases and re-testing/re-assessing. The proposed expert system is based on an expertise identification and extraction model and covers all network and infrastructure VA and PT, which optimizes the SC practice and enhances the efficiency and effectiveness of current industry tools and systems such as Metasploit and Nessus. The main contribution of the proposed framework built upon the introduced model is to safely replace (or reduce to the minimum) the human expert intervention in the SC practice and make it accessible to non-experts. On the other hand, ESASCF allows efficient and accurate SC in terms of consumed time, testing coverage, resource use and impact on the assessed assets. The obtained results are unanimous and surpass human-led and fully automated security compliance assessment and testing performances in terms of consumed time, which reflects the cost of the practice in general. This improvement is particularly obvious in medium and large network contexts. The learning process is the second strength of the proposed model, notably in the case of re-assessing and retesting the same LAN after a few changes were introduced, which represents the real-world context in security. Here again, the performance enhancement and the reuse of previously extracted expertise are enormous, especially in large LANs, which translates into further performance gains and practically confirms the suitability of our proposed approach.
Finally, although this work opened the door to the use of ES-led security compliance, the proposed framework can be further enhanced, notably by addressing current limitations of CLIPS, such as the single-level rule sets, which pushed us to arrange rule sets in a hierarchy for loop sub-tasks such as port probing and service detection. The second issue faced in CLIPS relates to matching rules and objects, as it is not possible to embed rules in objects, which remains problematic in some aspects of security compliance such as changing the pivot for re-scanning or re-testing. In addition, CLIPS lacks an explicit agenda mechanism, making forward chaining the only available approach to control flow and therefore pushing toward manipulating tokens in working memory as the only alternative for implementing other kinds of reasoning. One of the future improvements is the migration of the ES towards NExpert Object, which is highly reliable and portable; it also includes facilities for designing graphical interfaces and enables the use of a scripting language in the front end.
## Footnotes
### Ethical approval
Not Applicable
### Funding
No Funding.
### Availability of data and materials
ESASCF code and Virtual LANs data sets used can be available upon request.
### Competing interests
The authors declare that they have no known competing interests or personal relationships that could have appeared to influence the work reported in this paper.
|
2301.01514
|
PENDANTSS: PEnalized Norm-ratios Disentangling Additive Noise, Trend and
Sparse Spikes
|
Denoising, detrending, deconvolution: usual restoration tasks, traditionally
decoupled. Coupled formulations entail complex ill-posed inverse problems. We
propose PENDANTSS for joint trend removal and blind deconvolution of sparse
peak-like signals. It blends a parsimonious prior with the hypothesis that
smooth trend and noise can somewhat be separated by low-pass filtering. We
combine the generalized quasi-norm ratio SOOT/SPOQ sparse penalties
$\ell_p/\ell_q$ with the BEADS ternary assisted source separation algorithm.
This results in a both convergent and efficient tool, with a novel Trust-Region
block alternating variable metric forward-backward approach. It outperforms
comparable methods, when applied to typically peaked analytical chemistry
signals. Reproducible code is provided.
|
Paul Zheng, Emilie Chouzenoux, Laurent Duval
|
2023-01-04T10:11:21Z
|
http://arxiv.org/abs/2301.01514v2
|
# PENDANTSS: PENalized Norm-ratios Disentangling Additive Noise, Trend and Sparse Spikes
###### Abstract
Denoising, detrending, deconvolution: usual restoration tasks, traditionally decoupled. Coupled formulations entail complex ill-posed inverse problems. We propose PENDANTSS for joint trend removal and blind deconvolution of sparse peak-like signals. It blends a parsimonious prior with the hypothesis that smooth trend and noise can somewhat be separated by low-pass filtering. We combine the generalized quasi-norm ratio SOOT/SPOQ sparse penalties \(\ell_{p}/\ell_{q}\) with the BEADS ternary-assisted source separation algorithm. This results in a both convergent and efficient tool, with a novel Trust-Region block alternating variable metric forward-backward approach. It outperforms comparable methods, when applied to typically peaked analytical chemistry signals. Reproducible code is provided.
Blind deconvolution, sparse signal, trend estimation, non-convex optimization, forward-backward splitting, alternating minimization, source separation
## I Introduction and background
Restoration recovers information from observations with amplitude distortion, level displacement or random disturbance. We seek estimates \(\widehat{\mathbf{s}}\), \(\widehat{\mathbf{t}}\) and \(\widehat{\mathbf{\pi}}\) from observation \(\mathbf{y}\), under the discrete additive-convolutive degradation:
\[\mathbf{y}=\overline{\mathbf{s}}*\overline{\mathbf{\pi}}+\overline{\mathbf{t}}+\mathbf{n}. \tag{1}\]
Among \(N\) sample values, a series of _spikes_ (also called impulses, events, "diracs" or spectral lines) models the **first component**, the sought sparse signal \(\overline{\mathbf{s}}\in\mathbb{R}^{N}\). Its convolution with an unknown short-support _kernel_\(\overline{\mathbf{\pi}}\in\mathbb{R}^{L}\) -- typically peak-shaped -- yields the _peak-signal_\(\overline{\mathbf{x}}=\overline{\mathbf{s}}*\overline{\mathbf{\pi}}\in\mathbb{R}^{N}\). The **second component**\(\overline{\mathbf{t}}\in\mathbb{R}^{N}\) offsets the reference level, harming quantitative estimations. It can be called baseline, background, continuum, drift, or wander. We opt for _trend_, a reference above which peaks are detected, evaluated and measured. "Trends" address slowly varying amplitude shifts (due to seasonality, calibration distortion, sensor decline...), challenging its automated removal. **Third component**\(\mathbf{n}\in\mathbb{R}^{N}\) (_noise_) gathers stochastic residuals. Given (1), the goal is to perform jointly denoising, detrending and deconvolution. Namely, given \(\mathbf{y}\), retrieve estimations of the spiky signal, the kernel and the trend. Fig. 1 is reminiscent of standard spectral subtraction [1], and motivated here by peak-signal retrieval in separative analytical chemistry (AC): chromatography, spectrometry, spectroscopy [2], where peak localization, amplitude, width or area provide useful chemical quantitative information.
Whether acquired in its natural domain [3] or after sparsification [4], noise/trend/spike models (1) cover many multidimensional issues: signal (1D), image (2D), video, volume (3D+). We focus here on 1D data common to diverse domains: Fourier spectral analysis, econometrics, stock prices, biomedical measurements (ECG, EEG, EMG), environmental observations, astronomical spectroscopy, etc.
On the one hand, joint denoising and detrending is a long-standing preprocessing question, ranging from time series analysis to imaging. Background issues are commonly solved using a host of filling, fitting and filtering methods. We refer to overviews in [5, 6], and for AC to background corrections backcor [7] and BEADS [8].
On the other hand, joint denoising and blind deconvolution matters from channel estimation in communications [9] to image deblurring [10]. We refer to [11, 12], and especially emphasize on sparsity-promoting methods like SOOT [13] and SPOQ [14], using smoothed "scale-invariant" norm ratios.
PENDANTSS original contributions are (i) a fully coupled and solvable non-convex formulation for (1) (Section II) and (ii) a novel efficient joint disentangling algorithm (forward-backward-based [15, 16]) with proved convergence (Section III), validated by its comparative performance (Section IV).
## II Proposed problem formulation
### _BEADS peak/trend/noise separation paradigm_
Estimates of \((\widehat{\mathbf{s}},\widehat{\mathbf{t}},\widehat{\mathbf{\pi}})\) of \((\overline{\mathbf{s}},\overline{\mathbf{t}},\overline{\mathbf{\pi}})\) are obtained through the resolution of the penalized least squares problem
\[\underset{\mathbf{s}\in\mathbb{R}^{N},\,\mathbf{t}\in\mathbb{R}^{N},\,\mathbf{\pi}\in\mathbb{R}^{L}}{\text{minimize}}\ \frac{1}{2}\|\mathbf{y}-\mathbf{\pi}*\mathbf{s}-\mathbf{t}\|^{2}+R(\mathbf{s},\mathbf{t},\mathbf{\pi}), \tag{2}\]
with regularization term \(R\) incorporating prior knowledge. Disentangling trend and signal is tedious [17]. As in BEADS [8], we assume that the trend can be recovered from a peakless observation through a low-pass filter \(\mathbf{L}\):
\[\widehat{\mathbf{t}}=\mathbf{L}(\mathbf{y}-\widehat{\mathbf{\pi}}*\widehat{\mathbf{s}}). \tag{3}\]
This motivates the rewriting of the data fidelity term in (2) as:
\[(\forall\mathbf{s}\in\mathbb{R}^{N})(\forall\mathbf{\pi}\in\mathbb{R}^{L })\ \rho(\mathbf{s},\mathbf{\pi}) =\frac{1}{2}\|\mathbf{y}-\mathbf{L}\mathbf{y}-\mathbf{H}(\mathbf{\pi}*\mathbf{s})\|^{2}\] \[=\frac{1}{2}\|\mathbf{H}(\mathbf{y}-\mathbf{\pi}*\mathbf{s})\|^{2}, \tag{4}\]
where \(\mathbf{H}=\mathbf{Id}_{N}-\mathbf{L}\) is a high-pass filter, and \(\mathbf{Id}_{N}\) the identity operator of \(\mathbb{R}^{N}\). We introduce a regularization term \(\Psi\), promoting signal sparsity. We add two extra terms to constrain
estimates \(\widehat{\mathbf{s}}\) and \(\widehat{\mathbf{\pi}}\) to sets \(C_{1}\subset\mathbb{R}^{N}\) and \(C_{2}\subset\mathbb{R}^{L}\) assumed closed, non-empty and convex. The indicator function \(\iota_{C_{i}}\), \(i\in\{1,2\}\) equals zero when the value evaluated belongs to \(C_{i}\), \(+\infty\) otherwise. Optimization problem (2) becomes:
\[\underset{\mathbf{s}\in\mathbb{R}^{N},\ \mathbf{\pi}\in\mathbb{R}^{L}}{\text{minimize}} \frac{1}{2}||\mathbf{H}(\mathbf{y}-\mathbf{\pi}*\mathbf{s})||^{2}+\iota_{C_{1}}(\mathbf{s})+\iota _{C_{2}}(\mathbf{\pi})+\lambda\Psi(\mathbf{s}). \tag{5}\]
The estimated trend can be obtained from (3) with \(\widehat{\mathbf{\pi}}\) and \(\widehat{\mathbf{s}}\) obtained by (5).
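As an illustration of (3), a trend estimate can be obtained by low-pass filtering the peak-free residual; the sketch below uses a zero-phase Butterworth filter as a stand-in for \(\mathbf{L}\) (an illustrative choice, not necessarily the filter and cutoff used in Section IV).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_trend(y, s_hat, pi_hat, cutoff=0.02, order=2):
    """Trend estimate t_hat = L(y - pi_hat * s_hat), Eq. (3),
    with L a zero-phase Butterworth low-pass filter (illustrative choice)."""
    residual = y - np.convolve(s_hat, pi_hat, mode="same")
    b, a = butter(order, cutoff)   # cutoff given as a fraction of Nyquist
    return filtfilt(b, a, residual)
```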
### _SPOQ/SOOT norm/quasi-norm ratio penalties_
Tractable penalties for sparsity characterization include homogeneous \(\ell_{p}\)-norms, quasi-norms (for \(0<p<1\)), or mixed norms. We refer to [12, 13, 14, 18, 19] and references therein. Ratios of norms are also promising proxies, being scale-invariant [20]. We here promote sparse \(\widehat{\mathbf{s}}\) through the family of SPOQ norm ratio penalties, introduced in [14], as a generalization to the SOOT ratio [13]. Let \(p\in]0,2[\) and \(q\in[2,+\infty[\). Smoothed approximations to the \(\ell_{p}\) quasi-norm and \(\ell_{q}\) norm, parameterized by constants \((\alpha,\eta)\in]0,+\infty[^{2}\) are defined, for every \(\mathbf{s}=(s_{n})_{1\leq n\leq N}\in\mathbb{R}^{N}\), as:
\[\ell_{p,\alpha}(\mathbf{s})=\left(\sum_{n=1}^{N}\left((s_{n}^{2}+\alpha^{2})^{p/2} -\alpha^{p}\right)\right)^{1/p}, \tag{6}\]
and
\[\ell_{q,\eta}(\mathbf{s})=\left(\eta^{q}+\sum_{n=1}^{N}|s_{n}|^{q}\right)^{1/q}. \tag{7}\]
The non-convex SPOQ penalty is given, for \(\beta\in]0,+\infty[\), as:
\[(\forall\mathbf{s}\in\mathbb{R}^{N})\quad\Psi(\mathbf{s})=\log\left(\frac{(\ell_{p, \alpha}^{p}(\mathbf{s})+\beta^{p})^{1/p}}{\ell_{q,\eta}(\mathbf{s})}\right). \tag{8}\]
\(\Psi\) is Lipschitz differentiable on \(\mathbb{R}^{N}\)[14, Prop. 2] and admits \(\mathbf{0}_{N}\) as a local minimizer when [14, Prop. 1]:
\[q>2,\quad\text{or}\quad q=2\quad\text{and}\quad\eta^{2}\alpha^{p-2}>\beta^{p}. \tag{9}\]
Condition (9) is assumed throughout this paper.
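For concreteness, (6)-(8) and condition (9) translate directly into code; the sketch below is a minimal, unoptimized version, and the default values of \(\beta\) and \(\eta\) shown are placeholders rather than the tuned values of Section IV.

```python
import numpy as np

def spoq(s, p=0.75, q=2.0, alpha=7e-7, beta=3e-3, eta=1e-1):
    """SPOQ penalty Psi(s), Eq. (8), built from the smoothed norms (6)-(7)."""
    lp_pow_p = np.sum((s**2 + alpha**2) ** (p / 2) - alpha**p)   # l_{p,alpha}^p
    lq = (eta**q + np.sum(np.abs(s) ** q)) ** (1.0 / q)          # l_{q,eta}
    return np.log((lp_pow_p + beta**p) ** (1.0 / p) / lq)

def condition_9(p, q, alpha, beta, eta):
    """Check condition (9), ensuring 0_N is a local minimizer of Psi."""
    return q > 2 or (q == 2 and eta**2 * alpha**(p - 2) > beta**p)
```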
## III Proposed optimization algorithm
### _Problem structure_
The objective function in (5) is the sum of a differentiable function (least squares + SPOQ) and terms acting separably on \(\mathbf{s}\) or \(\mathbf{\pi}\) (i.e., indicator terms). In the differentiable part
\[(\forall\mathbf{s}\in\mathbb{R}^{N})(\forall\mathbf{\pi}\in\mathbb{R}^{L})\quad f( \mathbf{s},\mathbf{\pi})=\rho(\mathbf{s},\mathbf{\pi})+\lambda\Psi(\mathbf{s}), \tag{10}\]
with function \(\rho\) from (4) quadratic in \(\mathbf{s}\) and \(\mathbf{\pi}\). In particular, for every \(\mathbf{\pi}\in\mathbb{R}^{L}\) (resp. \(\forall\mathbf{s}\in\mathbb{R}^{N}\)), the gradient \(\nabla\rho_{1}(\cdot,\mathbf{\pi})\) (resp. \(\nabla\rho_{2}(\mathbf{s},\cdot)\)) of \(\rho\) with respect to its first (resp. second) variable is Lipschitz continuous with constant \(\Lambda_{1}(\mathbf{\pi})\) (resp. \(\Lambda_{2}(\mathbf{s})\)). As aforementioned, \(\nabla\Psi\) is Lipschitz continuous too. The second part of the objective function reads as:
\[(\forall\mathbf{s}\in\mathbb{R}^{N})(\forall\mathbf{\pi}\in\mathbb{R}^{L})\quad g( \mathbf{s},\mathbf{\pi})=\iota_{C_{1}}(\mathbf{s})+\iota_{C_{2}}(\mathbf{\pi}). \tag{11}\]
In a nutshell, Problem (5) amounts to minimizing:
\[(\forall\mathbf{s}\in\mathbb{R}^{N})(\forall\mathbf{\pi}\in\mathbb{R}^{L})\quad \Omega(\mathbf{s},\mathbf{\pi})=f(\mathbf{s},\mathbf{\pi})+g(\mathbf{s},\mathbf{\pi}). \tag{12}\]
### _Proposed Trust-Region PENDANTSS algorithm_
The structure of (12) suggests a block alternating approach where signal \(\mathbf{s}\) and kernel \(\mathbf{\pi}\) are updated sequentially. We hereby introduce Algorithm 1, that generalizes the BC-VMFB algorithm [16], also used in [13] for blind deconvolution.
```
Settings: \(K_{\max}>0\), \(\varepsilon>0\), \(\mathcal{I}>0\), \(\theta\in]0,1[\),
    \((\gamma_{s,k})_{k\in\mathbb{N}}\in[\gamma,2-\overline{\gamma}]\) and \((\gamma_{\pi,k})_{k\in\mathbb{N}}\in[\gamma,2-\overline{\gamma}]\) for some \((\gamma,\overline{\gamma})\in]0,+\infty[^{2}\),
    \((p,q)\in]0,2[\times[2,+\infty[\) satisfying (9), convex sets \((C_{1},C_{2})\subset\mathbb{R}^{N}\times\mathbb{R}^{L}\).
Initialize: \(\mathbf{s}_{0}\in C_{1}\), \(\mathbf{\pi}_{0}\in C_{2}\)
for \(k=0,1,\dots\) do
    Update of the signal
    for \(i=1,\dots,\mathcal{I}\) do
        Set TR radius \(\rho_{k,i}\) using (16) with parameter \(\theta\);
        Construct MM metric \(\mathbf{A}_{1,\rho_{k,i}}(\mathbf{s}_{k},\mathbf{\pi}_{k})\) using (15);
        Find \(\mathbf{s}_{k,i}\in C_{1}\) such that (17) holds.
        if \(\mathbf{s}_{k,i}\in\overline{\mathcal{B}}_{q,\rho_{k,i}}\) then Stop loop end if
    end for
    \(\mathbf{s}_{k+1}=\mathbf{s}_{k,i}\);
    Update of the kernel
    Find \(\mathbf{\pi}_{k+1}\in C_{2}\) such that (19) holds.
    Stopping criterion
    if \(\|\mathbf{s}_{k}-\mathbf{s}_{k+1}\|\leq\varepsilon\) or \(k\geq K_{\max}\) then Stop loop end if
end for
\((\widehat{\mathbf{s}},\widehat{\mathbf{\pi}})=(\mathbf{s}_{k+1},\mathbf{\pi}_{k+1})\) and \(\widehat{\mathbf{t}}\) given by (3);
Result: \(\widehat{\mathbf{s}},\widehat{\mathbf{\pi}},\widehat{\mathbf{t}}\)
```
**Algorithm 1** TR-BC-VMFB for solving (5)
with the constant \(\chi_{q,\rho}=(q-1)/(\eta^{q}+\rho^{q})^{2/q}\). In (14), \(\|.\|_{\mathbf{A}}\) denotes the weighted Euclidean norm related to a symmetric definite positive (SDP) matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\), i.e., \(\forall\mathbf{z}\in\mathbb{R}^{N},\ \|\mathbf{z}\|_{\mathbf{A}}=(\mathbf{z}^{\top}\mathbf{A}\mathbf{z})^{1/2}\). Since inequality (14) only holds on a limited region, we introduce a Trust-Region-based (TR) loop [22, 23] to make sure that the minimizer of the majorant is indeed in the validity domain of (14). Namely, we set \(\mathcal{I}>0\), a maximum number of trials of TR approach. For \(i\in\{1,\ldots,\mathcal{I}\}\), we define the TR radius as:
\[\rho_{k,i}=\begin{cases}\sum_{n=1}^{N}|s_{n,k}|^{q}&\text{if }i=1\,,\\ \theta\,\rho_{k,i-1}&\text{if }2\leq i\leq\mathcal{I}-1\,,\\ 0&\text{if }i=\mathcal{I}\,.\end{cases} \tag{16}\]
We compute the associated MM metric \(\mathbf{A}_{1,\rho_{k,i}}(\mathbf{s}_{k},\mathbf{\pi}_{k})\) and define \(\mathbf{s}_{k,i}\) as a minimizer of the right term in (14). The loop stops whenever \(\mathbf{s}_{k,i}\) belongs to \(\overline{\mathcal{B}}_{q,\rho_{k,i}}\), which is ensured to occur in a finite number of steps according to [14]. It remains to explain how we practically compute \(\mathbf{s}_{k,i}\). Depending on the choice of \(C_{1}\), the right term in (14) might not have a closed-form minimizer. Actually, as we will show, it is sufficient for convergence purposes to search for \(\mathbf{s}_{k,i}\in C_{1}\) satisfying the first-order optimality conditions:
\[\begin{cases}(\mathbf{s}_{k,i}-\mathbf{s}_{k})^{\top}\nabla_{1}f(\mathbf{s}_{k},\mathbf{\pi}_ {k})+\gamma_{s,k}^{-1}\|\mathbf{s}_{k,i}-\mathbf{s}_{k}\|^{2}_{\mathbf{A}_{1,\rho_{k,i}}( \mathbf{s}_{k},\mathbf{\pi}_{k})}\!\leq\!0,\\ \|\nabla_{1}f(\mathbf{s}_{k},\mathbf{\pi}_{k})\!+\!\mathbf{r}_{k,i}^{(1)}\|\leq\kappa_{1} ||\mathbf{s}_{k,i}-\mathbf{s}_{k}||_{\mathbf{A}_{1,\rho_{k,i}}(\mathbf{s}_{k},\mathbf{\pi}_{k})} \end{cases} \tag{17}\]
for some \(\mathbf{r}_{k,i}^{(1)}\in N_{C_{1}}(\mathbf{s}_{k,i})\) (i.e., the normal cone of \(C_{1}\) at \(\mathbf{s}_{k,i}\)[24]), and some \(\kappa_{1}>0\). The existence of such an \(\mathbf{s}_{k,i}\) can be shown from [25, Rem. 3.3]. In particular, a minimizer over \(C_{1}\) of the right term in (14) satisfies (17).
#### Iii-B2 Kernel update
It follows a similar approach. The main difference is that we do not use the TR loop in this case, as the function to minimize here is simpler. Let \(k\in\mathbb{N}\), and \((\mathbf{s}_{k+1},\mathbf{\pi}_{k})\in C_{1}\times C_{2}\). By the descent lemma,
\[(\forall\mathbf{\pi}\in C_{2})\quad\Omega(\mathbf{s}_{k+1},\mathbf{\pi})\leq f (\mathbf{s}_{k+1},\mathbf{\pi}_{k})\\ +(\mathbf{\pi}-\mathbf{\pi}_{k})^{\top}\nabla_{2}f(\mathbf{s}_{k+1},\mathbf{\pi}_ {k})+\frac{\Lambda_{2}(\mathbf{s}_{k+1})}{2}\|\mathbf{\pi}-\mathbf{\pi}_{k}\|^{2}. \tag{18}\]
The new iterate \(\mathbf{\pi}_{k+1}\) is then defined as a minimizer of the right term of (18). Here again, we can solve this problem in an inexact manner, that is, search for some \(\mathbf{\pi}_{k+1}\in C_{2}\) satisfying
\[\begin{cases}(\mathbf{\pi}_{k+1}-\mathbf{\pi}_{k})^{\top}\nabla_{2}f(\mathbf{s}_{k+1},\mathbf{\pi}_{k})\\ \quad\quad+\gamma_{\pi,k}^{-1}\Lambda_{2}(\mathbf{s}_{k+1})\|\mathbf{\pi}_{k+1}-\mathbf{\pi}_{k}\|^{2}\leq 0,\\ \|\nabla_{2}f(\mathbf{s}_{k+1},\mathbf{\pi}_{k})+\mathbf{r}_{k}^{(2)}\|\leq\kappa_{2}\sqrt{\Lambda_{2}(\mathbf{s}_{k+1})}\|\mathbf{\pi}_{k+1}-\mathbf{\pi}_{k}\|,\end{cases} \tag{19}\]
for some \(\mathbf{r}_{k}^{(2)}\in N_{C_{2}}(\mathbf{\pi}_{k+1})\) and \(\kappa_{2}>0\). The existence of \(\mathbf{\pi}_{k+1}\) can be shown from [25, Rem. 3.3]. In particular, a minimizer over \(C_{2}\) of the right term in (18) satisfies (19). The kernel update can be deactivated if the kernel is known (i.e., the non-blind case); Algorithm 1 then reduces to the algorithm of [14].
### _Convergence Result_
We establish the following convergence theorem for Algorithm 1. Its proof is provided in the supplementary material.
**Theorem 1**.: _Let \((\mathbf{s}_{k})_{k\in\mathbb{N}}\) and \((\mathbf{\pi}_{k})_{k\in\mathbb{N}}\) be sequences generated by Alg. 1. If \((C_{1},C_{2})\) are semi-algebraic sets, and \(\nabla f\) is Lipschitz on the domain of \(\Omega\), then the sequence \((\mathbf{s}_{k},\mathbf{\pi}_{k})_{k\in\mathbb{N}}\) converges to a critical point \((\widehat{\mathbf{s}},\widehat{\mathbf{\pi}})\) of Problem (5)._
The above result is novel, as it extends [14, Theo.1] to the block alternating case using proof ingredients from [16, 26]. The assumption on \((C_{1},C_{2})\) ensures that function \(\Omega\) satisfies Kurdyka-Lojasiewicz inequality, which is essential for the proof of descent schemes in a non-convex setting [15].
## IV Numerical results
### _Datasets_
Two datasets A and B were considered. The original sparse signal \(\overline{\mathbf{s}}\) and the observed signal \(\mathbf{y}\) are shown in Fig. 1, both of size \(N=200\). Signal \(\mathbf{y}\) is obtained from (1), where \(\overline{\mathbf{\pi}}\) is a normalized Gaussian kernel with standard deviation 0.15 and size \(L=21\). The noise \(\mathbf{n}\) is zero-mean white Gaussian with variance \(\sigma^{2}\) equal to either 0.5 % or 1.0 % of \(x_{\max}\), defined as the maximum amplitude of \(\overline{\mathbf{x}}=\overline{\mathbf{\pi}}*\overline{\mathbf{s}}\). The signal and kernel convolution is implemented with zero padding. The trend \(\overline{\mathbf{t}}\) is taken as the low-frequency signal from [8].
### _Algorithmic settings_
We set \(C_{1}=[0,100]^{N}\), and \(C_{2}\) the simplex unit set, i.e. \(C_{2}=\{\mathbf{\pi}=(\pi_{\ell})_{1\leq\ell\leq L}\in[0,+\infty[^{L}\) s.t. \(\sum_{\ell=1}^{L}\pi_{\ell}=1\}\). For such choices, the assumptions of Theorem 1 hold, and since metric (15) is diagonal, the resolution of (17) and (19) is straightforward, by [24, Prop. 24.11] and [27, Cor. 9]. Namely, for every \(k\in\mathbb{N}\), and \(i\in\{1,\ldots,\mathcal{I}\}\),
\[\begin{cases}\mathbf{s}_{k,i}\!=\!\text{Proj}_{C_{1}}\!\left(\mathbf{s}_{k}\!-\!\gamma_{s, k}\mathbf{A}_{1,\rho_{k,i}}(\mathbf{s}_{k},\mathbf{\pi}_{k})^{-1}\nabla_{1}f(\mathbf{s}_{k},\mathbf{\pi}_{k}) \right),\\ \mathbf{\pi}_{k+1}=\text{Proj}_{C_{2}}\left(\mathbf{\pi}_{k}-\gamma_{\pi,k}\Lambda_{2}( \mathbf{s}_{k+1})^{-1}\nabla_{2}f(\mathbf{s}_{k+1},\mathbf{\pi}_{k})\right).\end{cases}\]
Here, \(\text{Proj}_{C_{1}}\) is the projection onto the box \(C_{1}\), which has a simple closed-form expression (componentwise clipping), while \(\text{Proj}_{C_{2}}\) is the projection onto the unit simplex, which can be computed using the fast procedure from [28]. For simplicity, we set constant stepsizes \(\gamma_{s,k}\equiv\gamma_{\pi,k}\equiv 1.9\), thus satisfying the required range assumption. Moreover, we take \(\theta=0.5\) in the TR update, and a maximum of \(\mathcal{I}=50\) TR trials. We use the same initialization strategy for all methods as in [13], namely \(\mathbf{s}_{0}\in C_{1}\) is a constant positive-valued signal and \(\mathbf{\pi}_{0}\in C_{2}\) is a centered Gaussian filter with standard deviation 1. The stopping criterion parameters are set as \(\varepsilon=\sqrt{N}\times 10^{-6}\) and \(K_{\max}=2000\).
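For reference, both projections used in the updates above can be implemented in a few lines; the simplex projection below follows the classical sorting-based scheme, given here as a simple stand-in for the fast procedure of [28].

```python
import numpy as np

def proj_box(s, lo=0.0, hi=100.0):
    """Projection onto C1 = [lo, hi]^N (componentwise clipping)."""
    return np.clip(s, lo, hi)

def proj_simplex(v):
    """Projection onto the unit simplex C2 = {pi >= 0, sum(pi) = 1},
    via the standard sorting-based algorithm."""
    u = np.sort(v)[::-1]                       # sort in decreasing order
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)
```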
### _Numerical results_
PENDANTSS jointly performs blind deconvolution and trend removal, using the SPOQ penalty. Let us recall that the SOOT penalty from [13] is retrieved by setting \((p,q)=(1,2)\) in SPOQ. Another setting will be analyzed, namely \((p,q)=(0.75,2)\). Other choices led to similar or poorer restoration results, as also observed in [14]. In the spirit of an ablation study, we compare the PENDANTSS pipeline with the sequential approach combining the state-of-the-art background estimation method backcor [7] with the sparse blind deconvolution method [13], to estimate the signal \(\widehat{\mathbf{s}}\) and the kernel \(\widehat{\mathbf{\pi}}\). In both cases, we use either SPOQ \((p,q)=(0.75,2)\) or SPOQ \((p,q)=(1,2)\) (i.e., SOOT) for promoting sparsity in \(\widehat{\mathbf{s}}\).
We use signal-to-noise ratios to evaluate our estimations, respectively for the signal (SNR\({}_{\mathbf{s}}\)), kernel (SNR\({}_{\mathbf{\pi}}\)) and trend (SNR\({}_{\mathbf{t}}\)). For instance, \(\text{SNR}_{\mathbf{s}}=20\log_{10}(\|\overline{\mathbf{s}}\|_{2}/\|\overline{\mathbf{s}}-\widehat{\mathbf{s}}\|_{2})\). Moreover, TSNR evaluates the SNR only on the support of the original sparse signal. While the support is not known in general, it reveals how peak-derived quantities (height, width, area), important for downstream quantitative chemical analysis, would be impacted by detrending and deconvolution.
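These metrics are straightforward to compute from the ground-truth and estimated signals, e.g. (a minimal sketch; array names are illustrative):

```python
import numpy as np

def snr_db(x_true, x_est):
    """SNR in dB, e.g. SNR_s = 20 log10(||s_bar|| / ||s_bar - s_hat||)."""
    return 20 * np.log10(np.linalg.norm(x_true) / np.linalg.norm(x_true - x_est))

def tsnr_db(s_true, s_est):
    """SNR restricted to the support of the original sparse signal."""
    support = s_true != 0
    return snr_db(s_true[support], s_est[support])
```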
Hyperparameters, e.g. the regularization parameters of backcor [7] and the SPOQ/SOOT parameters \((\lambda,\beta,\eta)\), are adjusted through grid search to maximize a weighted sum of SNRs for one completely known reference realization, i.e. \(2\text{SNR}_{\mathbf{s}}+\text{SNR}_{\mathbf{\pi}}+\text{SNR}_{\mathbf{t}}\), which appeared as a representative metric in our experiments. We set \(\alpha=7\times 10^{-7}\) as recommended in [14]. In practice, \((\alpha,\beta,\eta)\) have little influence on performance, while the choice of \(\lambda\) is critical. The cutoff frequency of the low-pass filter in (3) is chosen as the best-performing point among the first ten peak points of the modulus of the signal frequency spectrum. To ensure that the kernel is centered, a spatial shift of the estimated kernel and sparse signal is applied as a post-processing step, since spatially shifted kernels and sparse signals result in the same observed signal. A rough grid search determines the number of inner loops that maximizes SNR\({}_{\mathbf{\pi}}\).
Table I summarizes the mean SNR values, with standard deviations after the "\(\pm\)" sign, calculated over two hundred noise realizations. The best and second-best values are almost always achieved by the proposed PENDANTSS approach with \((p,q)=(0.75,2)\) or \((1,2)\). The difference with the baseline methods is also significant in all cases, especially in terms of TSNR\({}_{\mathbf{s}}\) and SNR\({}_{\mathbf{t}}\). One exception is SNR\({}_{\mathbf{\pi}}\) for dataset B at the noise level of 1.0% of \(x_{\max}\), where the second-best result is achieved by the combination backcor+SPOQ. We stress that in such problems, correct estimation of the sparse signal and baseline is usually more important than kernel estimation.
Regarding parameters \((p,q)\), the performance of PENDANTSS is dependent on the datasets and the noise level. Considering various SPOQ parameters is indeed beneficial. According to the presented simulation results, PENDANTSS with \((p,q)=(0.75,2)\) is better for datasets with sparser, well-separable peaks (dataset A) whereas PENDANTSS with \((p,q)=(1,2)\) is preferable for more challenging datasets (dataset B). Graphical details on the quality of estimated peaks are provided as supplementary material. Computational cost for PENDANTSS is slightly higher than for the sequential method with backcor: in the order of 4 s. vs 1 s. for dataset A and 20 s. vs 10 s. for dataset B on a standard laptop.
## V Conclusion and perspectives
We address the difficult problem of joint sparse-signal blind deconvolution and additive trend removal. Our method handles smooth trend removal by exploiting the low-pass property of the trend, and simplifies the problem into a blind deconvolution formulation integrating the SPOQ sparse penalty and appropriate constraints. A new block alternating algorithm with trust-region acceleration is introduced, and its convergence is established. In simulations, PENDANTSS outperforms comparable methods on typical sparse analytical signals. Further work includes its validation on other sparse spike signals. The appropriate parameters for the sparsity-promoting norm ratio penalty remain to be investigated, for instance with respect to the expected signal sparsity or peak separability. The PENDANTSS Matlab code and an extensive hyperparameter analysis are available at [https://github.com/paulzhengfr/PENDANTSS](https://github.com/paulzhengfr/PENDANTSS). The authors thank Vincent Mazet, Bruno Lety, the reviewers and the associate editor.
Fig. 1: Unknown sparse signal \(\overline{\mathbf{s}}\) in (b) and (d); observation \(\mathbf{y}\) (blue) and baseline \(\widehat{\mathbf{t}}\) (black) in (a) and (c), for datasets A and B. Signal A has 10 spikes (5.0% sparsity) while signal B has 20 spikes (10.0% sparsity).
|
2310.18704
|
Statistics of inhomogeneous turbulence in large scale quasi-geostrophic
dynamics
|
A remarkable feature of two-dimensional turbulence is the transfer of energy
from small to large scales. This process can result in the self-organization of
the flow into large, coherent structures due to energy condensation at the
largest scales. We investigate the formation of this condensate in a
quasi-geostrophic flow in the limit of small Rossby deformation radius, namely
the large scale quasi-geostrophic model. In this model potential energy is
transferred up-scale while kinetic energy is transferred down-scale in a direct
cascade. We focus on a jet mean flow and carry out a thorough investigation of
the second order statistics for this flow, combining a quasi-linear analytical
approach with direct numerical simulations. We show that the quasi-linear
approach applies in regions where jets are strong and is able to capture all
second order correlators in that region, including those related to the kinetic
energy. This is a consequence of the blocking of the direct cascade by the mean
flow in jet regions, suppressing fluctuation-fluctuation interactions. The
suppression of the direct cascade is demonstrated using a local coarse-graining
approach allowing us to measure space-dependent inter-scale kinetic energy fluxes,
which we show are concentrated in between jets in our simulations. We comment
on the possibility of a similar direct cascade arrest in other two-dimensional
flows, arguing that it is a special feature of flows in which the fluid element
interactions are local in space.
|
Anton Svirsky, Corentin Herbert, Anna Frishman
|
2023-10-28T13:09:51Z
|
http://arxiv.org/abs/2310.18704v1
|
# Statistics of inhomogeneous turbulence in large scale quasi-geostrophic dynamics
###### Abstract
A remarkable feature of two-dimensional turbulence is the transfer of energy from small to large scales. This process can result in the self-organization of the flow into large, coherent structures due to energy condensation at the largest scales. We investigate the formation of this condensate in a quasi-geostrophic flow in the limit of small Rossby deformation radius, namely the large scale quasi-geostrophic model. In this model potential energy is transferred up-scale while kinetic energy is transferred down-scale in a direct cascade. We focus on a jet mean flow and carry out a thorough investigation of the second order statistics for this flow, combining a quasi-linear analytical approach with direct numerical simulations. We show that the quasi-linear approach applies in regions where jets are strong and is able to capture all second order correlators in that region, including those related to the kinetic energy. This is a consequence of the blocking of the direct cascade by the mean flow in jet regions, suppressing fluctuation-fluctuation interactions. The suppression of the direct cascade is demonstrated using a local coarse-graining approach allowing us to measure space-dependent inter-scale kinetic energy fluxes, which we show are concentrated in between jets in our simulations. We comment on the possibility of a similar direct cascade arrest in other two-dimensional flows, arguing that it is a special feature of flows in which the fluid element interactions are local in space.
## I Introduction
Multi-scale, non-linear, interactions are one of the defining properties of turbulent flows, posing a considerable challenge both for theoretical understanding and numerical modeling. In particular, they imply that the flow at large scales is coupled to smaller scale fluctuations, and that, for instance, the structure of the large scale flow depends on the transport of momentum and the dissipation of energy by such small scales. For statistically homogeneous and isotropic flows such interactions are, to leading order, well captured within phenomenological theories, though many theoretical questions remain open and there are very few results which can be obtained from first principles [1; 2; e.g.]. However, most real flows break such symmetries at large enough scales, either because of external fields such as gravity or a magnetic field, due to the effect of rotation, or the existence of boundaries. In such flows, often the key question is to characterize the large scale mean-flow. In turn, this requires the prediction of energy and momentum transfers due to turbulent fluctuations, across scales and spatially. Generally, this is a challenging task, requiring ad-hoc assumptions. However, there is growing evidence that the non-linear interactions in the presence of a strong mean flow may in fact be easier to treat than those in a homogeneous and isotropic flow (see e.g. [3] for a review). This has been particularly evident in two-dimensional (2D) and quasi-2D flows, where dimensionality imposes strong constraints upon the nature of multi-scale interactions.
Two-dimensional flows exhibit a remarkable tendency to spontaneously self-organize into a coherent mean flow when excited at small scales. The mechanism behind this self-organization is an inverse transfer of a quadratic invariant (e.g energy) from small to large scales, in a process called the inverse cascade [4; 5; 6]. The inverse cascade arises due to the existence of a second inviscid invariant of the dynamics, which is simultaneously transferred to small scales in a so-called direct cascade. In a finite system this inverse transfer results in the accumulation of energy at the largest available length scale, forming a system-size coherent mean flow termed a condensate [4; 7; 8; 9]. In this condensate regime, the direct interactions between the mean-flow and turbulence can dominate over local-in-scale interactions. Indeed, theoretical ideas and numerical methods which do not explicitly resolve the fluctuation-fluctuation interactions have been shown to be applicable in this system (and its variants) [10; 11; 12; 13; 14; 15]. Moreover, analytical results describing both the spatial structure of the mean flow [16; 17; 18] and the turbulent kinetic energy density [19], were successfully derived from first-principles in this regime.
The interest in two-dimensional flows is not limited to the theoretical understanding of turbulent interactions, as many flows in nature become effectively two-dimensional and thus exhibit a similar phenomenology. This occurs when the fluid motion is constrained in one of the directions either because the fluid is contained within a thin layer, is stratified, or is rapidly rotating [20; 21; 22; 23]. Astrophysical and large-scale geophysical flows often have one or more of these properties, with rotation playing a particularly important role in constraining the motion, called the geostrophic regime [24; 25]. A minimal model for the flow in this regime, capturing the main features of the large-scale dynamics and serving as an important theoretical tool, is the shallow water quasi-geostrophic equation (SWQG). In SWQG, there is a typical scale which determines the range of interactions between fluid elements, called the Rossby deformation radius. The special cases within which the condensate has been studied in detail so far are two-dimensional incompressible Navier-Stokes (2DNS) with the beta effect (differential rotation) [11; 12; 14; 26; 27], or 2DNS without differential rotation [16; 18; 19]. Both these cases capture dynamics at scales much smaller than the deformation radius, so that interactions span the entire domain.
Here we consider the opposite limit, where interactions are local-- the so-called large-scale quasi-geostrophic (LQG) equation [28]. This model captures the long-time dynamics at scales much larger than the deformation radius. It contains two inviscidly conserved quantities: the potential energy, transferred to large scales, and the kinetic energy, which cascades to small scales in a direct cascade. The main question we investigate is what type of condensate does this system support, and how does the locality of interactions affect the properties of turbulent fluctuations and their interaction with the mean-flow.
We begin by reviewing the derivation of the LQG equation from SWQG and discuss the conditions necessary for the emergence of a condensate in this system. Building upon our results for the mean flow of an LQG jet-type condensate [29], we fully characterize the two-point second-order statistics, combining analytical derivations and results from direct numerical simulations. Turning to the direct cascade and the kinetic energy balance, we show that in the presence of a mean flow it includes a spatial flux of fluctuating kinetic energy. Such a flux is absent in 2D Navier-Stokes, and seems to be related to the locality of interactions in LQG. Using our analytical results for the second-order statistics, we show that this flux carries most of the kinetic energy away from regions of strong mean flow, effectively arresting the direct cascade there. We confirm the in-homogeneity of the direct cascade induced by the mean flow by examining the flux of energy between scales within a smooth filtering approach [30]. In particular, we consider the local inter-scale flux of kinetic energy and potential energy for the LQG system. Measuring this flux in simulations, we demonstrate that the flux of kinetic energy to small scales is indeed locally suppressed in regions where the jets are strong, an effect so strong it is evident under short-time averaging.
This work also serves as the companion to the paper [29]. Here we provide detailed derivations and discussions of some of the results stated in [29] alongside new results and analysis not contained in [29].
## II Framework
The LQG equation can be derived as a limit of the shallow water quasi-geostrophic equation which is given by [25]
\[\partial_{t}q+\mathbf{v}\cdot\nabla q=\partial_{t}q+J(\psi,q)=0;\quad q=\left( \nabla^{2}-L_{d}^{-2}\right)\psi, \tag{1}\]
where \(q\) is the potential vorticity, \(\psi\) is the stream-function which is related to the fluid velocity via \(\mathbf{v}=\mathbf{\hat{z}}\times\mathbf{\nabla}\psi\), \(\omega=\nabla^{2}\psi=(\mathbf{\nabla}\times\mathbf{v})\cdot\mathbf{\hat{z}}\) is the vorticity, \(J(\psi,q)\) is the Jacobian operator defined as \(J(\psi,q)=\partial_{x}\psi\partial_{y}q-\partial_{y}\psi\partial_{x}q=\epsilon_{ij}\partial_{i}\psi\partial_{j}q\) with \(\epsilon_{ij}\) the 2D Levi-Civita symbol. This equation describes the dynamics of a rapidly-rotating homogeneous fluid layer, wherein the pressure gradient force due to fluctuations of the free surface is balanced by the Coriolis force (the so-called _geostrophic balance_). The stream function both determines the velocity and is proportional to the surface height perturbations of the fluid layer. The scale \(L_{d}\) is called the Rossby deformation radius and sets the range of influence of a surface perturbation on its surroundings. When \(L_{d}/L\to\infty\), where \(L\) is a characteristic scale for the domain size, surface perturbations have a long-range influence on the fluid, and their equilibration is fast compared to the rotation period, giving an incompressible 2D fluid. The opposite limit \(L_{d}/L\to 0\) corresponds to a very rapidly rotating fluid, where the effect of surface perturbations is strictly local. In this limit, the long time dynamics are given by the LQG equation [28]
\[\partial_{\tau}\psi+\mathbf{v}^{\omega}\cdot\mathbf{\nabla}\psi=\partial_{\tau}\psi+ J(\omega,\psi)=f+\alpha\nabla^{2}\psi-\nu(-\nabla^{2})^{p}\psi, \tag{2}\]
with \(\mathbf{v}^{\omega}=\mathbf{\hat{z}}\times\mathbf{\nabla}\omega\) and we have included forcing \(f\) and dissipation, with \(\alpha\) the friction coefficient (corresponding to linear drag on velocity \(\mathbf{v}\)) and \(\nu\) the (hyper) viscosity.
This advection equation is similar to 2DNS but with the roles of the vorticity and the stream-function reversed. Here the vorticity acts as the "effective stream function", and the stream-function is advected by an effective velocity \(\mathbf{v}^{\omega}=\mathbf{\hat{z}}\times\mathbf{\nabla}\omega\). The integral invariants of (2) without forcing and dissipation are the kinetic energy \(Z=\frac{1}{2}\int\left(\nabla\psi\right)^{2}\mathrm{d}^{2}x=\frac{1}{2}\int| \mathbf{v}|^{2}\mathrm{d}^{2}x\) and all moments of \(\psi\), in particular the potential energy \(E=\frac{1}{2}\int\psi^{2}\mathrm{d}^{2}x\). The existence of the two quadratic invariants results in the inverse cascade of \(E\) and a direct cascade of \(Z\)[31].
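To make the structure of (2) and of the two invariants concrete, a minimal pseudo-spectral sketch in plain NumPy is given below; it is illustrative only, omits dealiasing and time stepping, and assumes `kx`, `ky` are 2D wavenumber meshes (e.g. built from `np.fft.fftfreq`).

```python
import numpy as np

def lqg_tendency(psi_hat, kx, ky, alpha, nu, p, f_hat=0.0):
    """Fourier-space tendency of Eq. (2):
    d(psi)/dt = -J(omega, psi) + f + alpha*lap(psi) - nu*(-lap)^p psi."""
    k2 = kx**2 + ky**2
    omega_hat = -k2 * psi_hat                         # omega = lap(psi)
    # effective velocity v^omega = z_hat x grad(omega)
    vx = np.fft.ifft2(-1j * ky * omega_hat).real
    vy = np.fft.ifft2(1j * kx * omega_hat).real
    psix = np.fft.ifft2(1j * kx * psi_hat).real
    psiy = np.fft.ifft2(1j * ky * psi_hat).real
    adv_hat = np.fft.fft2(vx * psix + vy * psiy)      # v^omega . grad(psi)
    return -adv_hat + f_hat - alpha * k2 * psi_hat - nu * k2**p * psi_hat

def invariants(psi_hat, kx, ky, area):
    """Potential energy E = 0.5*int psi^2 and kinetic energy Z = 0.5*int |grad psi|^2,
    from Parseval's theorem on an area-`area` periodic domain."""
    k2 = kx**2 + ky**2
    npts = psi_hat.size
    E = 0.5 * np.sum(np.abs(psi_hat) ** 2) / npts**2 * area
    Z = 0.5 * np.sum(k2 * np.abs(psi_hat) ** 2) / npts**2 * area
    return E, Z
```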
Let us briefly comment on the conditions for the LQG limiting dynamics to be consistent, a more complete discussion can be found in the Appendix A. The SWQG equation is derived from the rotating shallow water equations in the limit of a small Rossby number, \(\mathrm{Ro}=U/(\Omega L)\) with \(U\) a typical velocity scale, and \(\Omega\) the fluid rotation rate. The derivation also requires that \(\mathrm{Ro}(L/L_{d})^{2}\sim o(1)\) so that height perturbations are small compared to the mean fluid thickness. The limit \(L_{d}/L\to 0\) which we take to obtain LQG is consistent with this assumption provided that \(L_{d}/L\sim\mathrm{Ro}^{\beta}\) with \(0<\beta<1/2\), under which condition LQG can be derived as a limit of SWQG. We remark that traditionally SWQG is derived assuming \(L_{d}/L\sim O(1)\) rather than \(L_{d}/L\sim\mathrm{Ro}^{\beta}\). However, we show in the Appendix A that LQG can be directly derived from the rotating shallow water equations in the latter limit, that requires rescaling time by \(\tau=t(L_{d}/L)^{2}\propto t\mathrm{Ro}^{2\beta}\) and expanding the height field in powers of \(\mathrm{Ro}^{n+2\beta}\) (instead of \(\mathrm{Ro}^{n}\) like the velocity). Thus, the forced LQG equation should be able to capture the large scale dynamics of SWQG with a small but finite \(L_{d}\) with a forcing scale which is larger than
\(L_{d}\)[31]. Note that there is evidence that the inviscid equation eventually develops motions on smaller scales, so the LQG equation may become inadequate [32].
We wish to explore the LQG system (2) in the condensation regime, where the potential energy condenses at the largest available scale. This requires that the rate of energy removal at the box scale is much slower than the transfer rate by the inverse cascade. The inverse cascade rate (eddy-turnover time) can be found using dimensional analysis, requiring that this rate depends only on the scale and the potential energy injection rate \(\epsilon=\langle\psi f\rangle\)[31]. We have \([\epsilon]\sim[\psi^{2}]/t\sim l^{8}/t^{3}\), so that the rate of the inverse cascade of \(E\) at scale \(l\) is \(\tau_{E}(l)\sim\epsilon^{-1/3}l^{8/3}\). Similarly, the rate of the direct cascade of \(Z\) at scale \(l\) is \(\tau_{Z}(l)\sim\eta^{-1/3}l^{2}\), where \(\eta=\langle\mathbf{\nabla}\psi\mathbf{\nabla}f\rangle\) is the kinetic energy injection rate. Assuming the forcing acts in a narrow band of scales around \(l_{f}\), the injection rates can be simply related by \(\eta\approx\epsilon/l_{f}^{2}\). Note that the eddy turnover time decreases with the scale much faster than in 2DNS, with a factor of \(l^{2}\) between the two. This limits the available resolution for simulations and thus also the separation of scales between the forcing and box scale.
The dissipation rates due to the drag and viscous terms are \(\tau_{\alpha}(l)\sim\alpha^{-1}l^{2}\) and \(\tau_{\nu}(l)\sim\nu^{-1}l^{2p}\) respectively. Assuming \(p\geq 2\) (integer), the former will serve as the large scale dissipation mechanism arresting the inverse cascade of \(E\) while the latter as the small-scale dissipation arresting the direct cascade of \(Z\). For the potential energy to condense at the box scale, \(L\), requires \(\delta\equiv\tau_{E}(L)/\tau_{\alpha}(L)=\alpha(L^{2}/\epsilon)^{1/3}\ll 1\). Note that \(\delta\) grows with the scale, i.e. that the ratio between the non-linear time-scale and the dissipative time scale grows with the length scale (like in 2DNS with linear friction, factors of \(l^{2}\) appearing in both time-scales cancelling out to give the same ratio). Additionally, in order for a significant fraction of the (potential) energy to be transferred to large scales, we require that at the forcing scale the dissipation rate \(\tau_{\nu}(l_{f})\) is low compared to the non-linear transfer rate, resulting in the requirement \(\text{Re}\equiv\tau_{\nu}(l_{f})/\tau_{E}(l_{f})=l_{f}^{2p-8/3}\epsilon^{1/3} /\nu\gg 1\). The kinetic energy cascade is arrested at the Kolmogorov scale \(l_{\nu}\) where the inverse cascade rate and the viscous dissipation rate are comparable, resulting in \(l_{\nu}\sim(\nu^{3}/\epsilon)^{1/(6p-8)}\). We can then estimate the potential energy dissipation at small scales: assuming a constant kinetic energy flux down to the Kolmogorov scale, the energy dissipation rate is given by \(\epsilon_{\nu}=l_{\nu}^{2}\eta\) where \(\eta=\epsilon/l_{f}^{2}\) is the injected kinetic energy. Thus we get \(\epsilon_{\nu}=(l_{\nu}/l_{f})^{2}\epsilon\). This implies that the ratio between the dissipated energy and the energy transferred to large scales is \(\epsilon_{\nu}/\epsilon_{\alpha}=\epsilon_{\nu}/(\epsilon-\epsilon_{\nu})=l_ {\nu}^{2}/(l_{f}^{2}-l_{\nu}^{2})\approx l_{\nu}^{2}/l_{f}^{2}\) which is indeed small in the limit of a large Re number.
The sharpness of the small-scale cutoff is determined by \(p\): higher values increase the kinetic energy removal rate at scales \(l<l_{\nu}\) (and decrease the removal rate for \(l>l_{\nu}\)), resulting in a sharper cutoff of the spectrum at \(l_{\nu}\).
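For a given parameter set, these non-dimensional numbers follow directly from their definitions; the small helper below simply restates the formulas above (the kind of quantities reported in Table 1).

```python
def condensate_numbers(alpha, nu, eps, l_f, L, p=7):
    """delta = alpha * (L^2/eps)^(1/3),
    Re = l_f^(2p - 8/3) * eps^(1/3) / nu,
    Kolmogorov-like scale l_nu = (nu^3/eps)^(1/(6p - 8))."""
    delta = alpha * (L**2 / eps) ** (1.0 / 3.0)
    Re = l_f ** (2 * p - 8.0 / 3.0) * eps ** (1.0 / 3.0) / nu
    l_nu = (nu**3 / eps) ** (1.0 / (6 * p - 8))
    return delta, Re, l_nu
```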
We perform direct numerical simulations (DNS) of the LQG equation (2) using the Dedalus framework [33]. The pseudo-spectral method is implemented using the 3/2 dealiasing rule and time stepping using a third-order, four-stage DIRK/ERK method. We focus on a jet-type LQG condensate, which simplifies the analysis in the following. In a doubly periodic domain such a condensate emerges if the symmetry between the \(x\) and \(y\) directions is broken [34; 35]. Note that such jets differ from those which emerge due to differential rotation (beta-plane turbulence [36]), which is absent here. We therefore use a doubly periodic box of dimensions \(L\equiv L_{y}=2L_{x}=2\pi\). The spatial resolution is taken to be \(64\times 128\), which is relatively low, restricted by the rapid decrease of the eddy-turnover time with decreasing scale in LQG. We use a white-in-time forcing which is localized in Fourier space at a wavenumber \(k_{f}=2\pi/l_{f}=(10,13,15)\) (forcing in an annulus of width \(2dk=2\) with a constant amplitude \(A=10^{-3}\) and a random phase). We use hyper-viscosity with \(p=7\) and \(\nu=(0.5,7.3,10)\times 10^{-19}\) and take \(\alpha=(0.5,1,2)\times 10^{-3}\). Simulation parameters are chosen such that a significant fraction of the (potential) energy is transferred to large scales \(Re\gg 1\) and such that potential energy condenses at large scales \(\delta\ll 1\). Each simulation is run until the system reaches a statistically steady state, and statistics are then gathered over many large scale turnover-times \(\tau_{E}(L)\). The full list of simulations performed is presented in Table 1 and the choice of the temporal and spatial resolutions are discussed in Appendix C.
The resulting condensate, with two alternating jets along the short side (\(x\) direction) of the domain, is shown in Fig. 1. Between the jets, there are two small vortices, similarly to what was found in 2DNS [35], possibly due to instabilities of the mean flow. In the jet region, the flow is statistically homogeneous in \(x\). Small magnitude oscillations along the \(y\) direction of the jet amplitude can also be seen in Fig. 1. In steady state, no significant drift of the profile is observed over time, so the mean profile is simply determined from the average of the snapshots without shift. After averaging, we set the axis such that the mean velocity is zero on the \(y=0\) line with \(U>0\) above it and \(U<0\) below it.
To obtain a statistical description of the steady state LQG jet condensate we decompose the flow into the mean \(\Psi=\langle\psi\rangle\) and fluctuations \(\psi^{\prime}=\psi-\Psi\) focusing on the jet region, where the flow is statistically homogeneous in \(x\), and the mean flow depends on \(y\) only. The mean flow \(\partial_{y}\Psi\equiv-U(y)\) and the mass flux \(\langle\psi_{y}^{\omega^{\prime}}\,\psi^{\prime}\rangle\) can be obtained from the mass flux balance (average of equation (2)) and the potential energy balance, neglecting kinetic energy dissipation for the fluctuations and cubic-in-fluctuations terms. The latter assumes that non-linear interactions are dominated by mean-flow-turbulence interactions at the relevant scales, and is also known as the quasi-linear approximation, see [3] for a review. The derivation and the comparison to DNS are presented in [29], and here
we only cite the resulting leading order solution
\[\partial_{y}\Psi=-U=\pm\sqrt{\frac{\epsilon}{\alpha}}, \tag{3}\] \[\left\langle\psi^{\prime}v_{y}^{\omega\prime}\right\rangle=\pm \sqrt{\epsilon\alpha}. \tag{4}\]
In agreement with (3), the simulated mean velocity \(U(y)\) is indeed constant in the region where the jets are strong, rapidly switching sign in a thin transition region between the jets, as can be seen in Fig. 1(b).
## III Second order statistics: two-point correlation functions
### Analytical results
Given an expression for the mean flow, we are now in a position to go further in the perturbation theory and consider the full second order (single-time) statistics. It is sufficient to consider the two-point correlation function \(\left\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\right\rangle\equiv\left\langle \psi^{\prime}(\mathbf{r}_{1})\psi^{\prime}(\mathbf{r}_{2})\right\rangle\) where \(\mathbf{r}_{i}=(x_{i},y_{i})\), from which other single and two-point second-order correlation functions can be subsequently derived. To obtain an expression for \(\left\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\right\rangle\), we will use that in a statistically steady state
\[0=\partial_{\tau}\left\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\right\rangle= \sum_{i\neq j}\left\langle\psi_{j}^{\prime}\partial_{\tau}\psi_{i}^{\prime} \right\rangle. \tag{5}\]
The evolution equation for the fluctuations \(\partial_{\tau}\psi_{i}^{\prime}=\partial_{\tau}\left(\psi-\Psi\right)\) is obtained by subtracting the average of equation (2) from (2), giving
\[\partial_{\tau}\psi^{\prime}=-\mathbf{v}^{\omega}\cdot\mathbf{\nabla}\psi +\partial_{y}\left\langle v_{y}^{\omega\prime}\psi^{\prime}\right\rangle\\ +f+\alpha\nabla^{2}\psi^{\prime}-\nu(-\nabla^{2})^{p}\psi^{\prime}, \tag{6}\]
where we have used that \(V_{y}^{\omega}\equiv\partial_{x}\left\langle\omega\right\rangle=0\) (due to homogeneity in \(x\)). Evaluating the derivative at point \(\mathbf{r}_{i}\), multiplying by \(\psi_{j}^{\prime}\) and averaging gives
\[\left\langle\psi_{j}^{\prime}\partial_{\tau}\psi_{i}^{\prime} \right\rangle=-\left\langle\psi_{j}^{\prime}\mathbf{v}_{i}^{\omega}\cdot\mathbf{ \nabla}\psi_{i}\right\rangle+\left\langle f_{i}\psi_{j}^{\prime}\right\rangle\\ +\alpha\left\langle\psi_{j}^{\prime}\nabla^{2}\psi_{i}^{\prime} \right\rangle-\nu\left\langle\psi_{j}^{\prime}(-\nabla^{2})^{p}\psi_{i}^{ \prime}\right\rangle, \tag{7}\]
where \(\mathbf{v}_{i}^{\omega}\equiv\mathbf{v}^{\omega}(\mathbf{r}_{i})\). Note that no summation over \(i\) is implied here. The cubic term reads
\[\left\langle\psi_{j}^{\prime}\mathbf{v}_{i}^{\omega}\cdot\mathbf{\nabla}\psi_{i}\right\rangle=\partial_{y}\Psi_{i}\left\langle\psi_{j}^{\prime}v_{y}^{\omega\prime}(\mathbf{r}_{i})\right\rangle\\ +V_{x}^{\omega}(\mathbf{r}_{i})\left\langle\psi_{j}^{\prime}\partial_{x}\psi_{i}^{\prime}\right\rangle+\left\langle\psi_{j}^{\prime}\mathbf{v}_{i}^{\omega\prime}\cdot\mathbf{\nabla}\psi_{i}^{\prime}\right\rangle, \tag{8}\]
again using that \(V_{y}^{\omega}=0\). As the derivatives act on \(\mathbf{r}_{i}\neq\mathbf{r}_{j}\) we can take them out of the average, resulting in
\[\left\langle\psi_{j}^{\prime}\partial_{\tau}\psi_{i}^{\prime} \right\rangle=\\ -\left\{\partial_{y}\Psi_{i}\nabla_{i}^{2}\partial_{x_{i}}+V_{x} ^{\omega}(\mathbf{r}_{i})\partial_{x_{i}}-\alpha\nabla_{i}^{2}+\nu(-\nabla_{i}^{2 })^{p}\right\}\left\langle\psi_{j}^{\prime}\psi_{i}^{\prime}\right\rangle\\ +\left\langle f_{i}\psi_{j}^{\prime}\right\rangle-\mathbf{\nabla}_{i} \cdot\left\langle\mathbf{v}_{i}^{\omega\prime}\psi_{j}^{\prime}\psi_{i}^{\prime} \right\rangle. \tag{9}\]
where \(\mathbf{\nabla}_{i}\) and \(\nabla_{i}^{2}\) denotes the gradient and Laplacian with respect to \(\mathbf{r}_{i}\). Finally using (5) and (9) we get
\[\sum_{i=1,2}\left\{\partial_{y}\Psi_{i}\nabla_{i}^{2}\partial_{x_ {i}}+V_{x}^{\omega}(\mathbf{r}_{i})\partial_{x_{i}}-\alpha\nabla_{i}^{2}+\nu(- \nabla_{i}^{2})^{p}\right\}\left\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\right\rangle\] \[= 2\chi_{12}-\mathbf{\nabla}_{1}\cdot\left\langle\mathbf{v}_{1}^{\omega \prime}\psi_{1}^{\prime}\psi_{2}^{\prime}\right\rangle-\mathbf{\nabla}_{2}\cdot \left\langle\mathbf{v}_{2}^{\omega\prime}\psi_{1}^{\prime}\psi_{2}^{\prime}\right\rangle, \tag{10}\]
where we have used that the force two-point correlation function is given by \(\left\langle f(\mathbf{x}_{1},t)f(\mathbf{x}_{2},t)\right\rangle=2\chi_{12}\delta(t-t^{ \prime})\).
\begin{table}
\begin{tabular}{c c c c c c c c} Run & \(k_{f}\) & \(\alpha\times 10^{3}\) & \(\nu\times 10^{19}\) & \(\epsilon\times 10^{4}\) & \(\delta\) & \(Re\times 10^{-13}\) & \(T_{L}\) \\ \hline
A & 13 & 2.0 & 7.30 & 2.44 & 0.109 & 2.26 & 872.9 \\
B & 13 & 1.0 & 7.30 & 2.42 & 0.055 & 2.25 & 1031.7 \\
C & 13 & 0.5 & 7.30 & 2.42 & 0.027 & 2.25 & 1122.8 \\
D & 13 & 1.0 & 7.30 & 1.04 & 0.072 & 1.70 & 185.0 \\
E & 13 & 1.0 & 0.50 & 2.59 & 0.053 & 33.64 & 176.0 \\
F & 10 & 2.0 & 10.00 & 1.35 & 0.133 & 26.47 & 191.6 \\
G & 10 & 1.0 & 10.00 & 1.37 & 0.066 & 26.60 & 155.1 \\
H & 10 & 0.5 & 10.00 & 1.36 & 0.033 & 26.54 & 252.7 \\
I & 15 & 1.0 & 7.30 & 2.48 & 0.054 & 0.45 & 327.9 \\
J & 15 & 2.0 & 0.50 & 2.71 & 0.105 & 6.75 & 300.2 \\
K & 15 & 1.0 & 0.50 & 2.74 & 0.052 & 6.77 & 206.6 \\
\end{tabular}
\end{table}
Table 1: Parameters of the DNS runs. All runs are performed with hyper-viscosity \(p=7\) on a \(64\times 128\) grid. The forcing wave number is \(k_{f}\), the drag coefficient \(\alpha\), viscosity \(\nu\), potential energy injection rate \(\epsilon\), the ratio of large eddy turnover time and dissipation time scales is \(\delta=\alpha\epsilon^{-1/3}L^{2/3}\), the ratio of viscous and forcing time scales is \(Re=l_{f}^{2p-8/3}\epsilon^{1/3}/\nu\) and \(T_{L}\) is the simulation time in units of large eddy turnover time \(\tau_{L}=\epsilon^{-1/3}L^{8/3}\).
Figure 1: LQG jet condensate, showing the velocity \(\mathbf{v}=\hat{z}\times\mathbf{\nabla}\psi\) snapshot (a) and mean over time (b). The color corresponds to the velocity magnitude. (Simulation-B)
Having derived the general equation for the two-point correlation function, equation (10), we will proceed using a perturbative approach. For the mean flow, we will use the leading order solution cited above. In the same manner as was done for the single point correlation function, we shall neglect the viscous dissipation term (there is no dissipative anomaly) as well as that by linear friction, since in the condensate regime we expect the dissipation of the fluctuations of \(\psi\) to be a sub-leading effect, e.g.
\[\frac{\alpha\nabla_{i}^{2}\left\langle\psi_{1}^{\prime}\psi_{2}^{\prime} \right\rangle}{\partial_{y}\Psi_{i}\nabla_{i}^{2}\partial_{x_{i}}\left\langle \psi_{1}^{\prime}\psi_{2}^{\prime}\right\rangle}\sim\frac{\alpha^{3/2}}{\sqrt {\epsilon}}l_{2}=\delta^{3/2}\frac{l_{2}}{L}\leq\delta^{3/2}\ll 1, \tag{11}\]
where \(l_{2}\leq L\) is the length scale of the two-point function and assuming the condensate regime with \(\delta\ll 1\). We will further use the quasi-linear approximation, expecting that at leading order the cubic fluctuation terms are negligible compared to the mean-flow-fluctuations term \(\partial_{y}\Psi_{i}\nabla_{1}^{2}\partial_{x_{i}}\left\langle\psi_{1}^{ \prime}\psi_{2}^{\prime}\right\rangle\). This gives
\[\left\{\partial_{y_{1}}\Psi_{1}\nabla_{1}^{2}\partial_{x_{1}}+\partial_{y_{2} }\Psi_{2}\nabla_{2}^{2}\partial_{x_{2}}\right\}\left\langle\psi_{1}^{\prime} \psi_{2}^{\prime}\right\rangle=2\chi_{12}. \tag{12}\]
Using homogeneity in the \(x\) direction (also for \(\chi_{12}\) which only depends on \(x_{1}-x_{2}\)), i.e. that \(\partial_{x_{1}}=-\partial_{x_{2}}\) when acting on the two-point function, and \(\partial_{y}\Psi=\sqrt{\epsilon/\alpha}\) at leading order, simplifies the advection operator to \(\nabla_{1}^{2}\partial_{x_{1}}+\nabla_{2}^{2}\partial_{x_{2}}=(\partial_{x_{1 }}^{2}+\partial_{y_{1}}^{2})\partial_{x_{1}}-(\partial_{x_{1}}^{2}+\partial_{y_ {2}}^{2})\partial_{x_{1}}=(\partial_{y_{1}}^{2}-\partial_{y_{2}}^{2})\partial_{ x_{1}}\). The equation then reads
\[\left(\partial_{y_{1}}+\partial_{y_{2}}\right)\left(\partial_{y_{1}}-\partial_ {y_{2}}\right)\partial_{x_{1}}\left\langle\psi_{1}^{\prime}\psi_{2}^{\prime} \right\rangle=2\sqrt{\frac{\alpha}{\epsilon}}\chi_{12}. \tag{13}\]
Changing variables to \(y_{+}=(y_{1}+y_{2})/2\) and \(y_{-}=(y_{1}-y_{2})/2=\Delta y/2\) so that \(\partial_{y_{+}}=\partial_{y_{1}}+\partial_{y_{2}}\) and \(\partial_{y_{-}}=\partial_{y_{1}}-\partial_{y_{2}}\), finally gives the equation for the two-point function in compact form
\[\partial_{y_{+}}\partial_{y_{-}}\partial_{x_{1}}\left\langle\psi_{1}^{\prime} \psi_{2}^{\prime}\right\rangle=2\sqrt{\frac{\alpha}{\epsilon}}\chi_{12}. \tag{14}\]
We now briefly outline the solution of equation (14), leaving the detailed derivation to Appendix B. The solution will be a sum of the particular and the homogeneous solutions of equation (14). We begin with the former, first noting that the forcing correlation function \(\chi_{12}\) in equation (14) should be replaced by \(\tilde{\chi_{12}}\)
\[\tilde{\chi}_{12}=\chi_{12}-\int_{-\frac{L_{y}}{2}}^{\frac{L_{y}}{2}}\frac{ds} {L_{y}}\chi_{12}(\Delta x,s)-\int_{-\frac{L_{y}}{2}}^{\frac{L_{y}}{2}}\frac{ ds}{L_{x}}\chi_{12}(s,\Delta y). \tag{15}\]
where the \(\Delta x\) and \(\Delta y\) independent parts (the respective \(k_{x}=0\) and \(k_{y}=0\) Fourier modes) are subtracted. This is necessary as these modes do not satisfy the Fredholm alternative, so the particular solution for them must be determined at next order, see Appendix B. The modified equation can now be straightforwardly integrated to obtain the particular solution. Note that for small separations \(\Delta x,\Delta y\ll l_{f}\ \tilde{\chi}_{12}\approx\chi_{12}\), while for \(\Delta x,\Delta y\gg l_{f}\ \tilde{\chi}_{12}\ll 1\) so the influence of the forcing is limited to scales \(\Delta x,\Delta y<l_{f}\), see Appendix B.
While the forcing provides the leading order contribution to the odd in \(\Delta x\) part of the correlation function, corresponding to parity+time reversal symmetry breaking, the even contribution at leading order must come from the homogeneous solutions to (14). Those are the zero modes of the advection operator \(\mathcal{L}_{1}+\mathcal{L}_{2}=\nabla_{1}^{2}\partial_{x_{1}}+\nabla_{2}^{2} \partial_{x_{2}}=\partial_{y_{+}}\partial_{y_{-}}\partial_{x_{1}}\) :
\[\left\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\right\rangle_{\rm hom}=C(\Delta y,\Delta x)+C_{1}(y_{+},\Delta x)+C_{2}(y_{+},\Delta y). \tag{16}\]
The relevant form of the solution in our case is only \(C(\Delta y,\Delta x)\) as we detail in Appendix B. Thus, the full solution reads
\[\left\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\right\rangle =C(\Delta y,\Delta x) \tag{17}\] \[+2y_{+}\sqrt{\frac{\alpha}{\epsilon}}\int_{0}^{\Delta x}dz\int_{0 }^{\Delta y/2}dz^{\prime}\tilde{\chi}_{12}\left(z,z^{\prime}\right),\]
Note that for the homogeneous part \(C(\Delta x,\Delta y)=C(-\Delta x,\Delta y)=C(-\Delta x,-\Delta y)\), where the first equality is a consequence of the invariance with respect to parity (\(x\rightarrow-x\)) + time reversal (\(t\rightarrow-t\)) (PT) which we expect the zero modes to have, and the second of the exchange symmetry \(\mathbf{r}_{1}\rightarrow\mathbf{r}_{2}\) of the two-point correlation function. In addition, we get the prediction that for \(\Delta x=0\), \(\Delta y\neq 0\) the correlation function is independent of \(y_{+}\), as confirmed in DNS Fig. 3. Our approach lacks information about the boundary conditions to be applied
Figure 2: The two-point correlation function \(\left\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\right\rangle\) as measured in DNS (Simulation-B) with (a) \(\Delta x=\Delta y=0\) (b) \(\Delta x=y_{+}=0\) and (c) \(\Delta y=y_{+}=0\). The region where the leading order solution for the mean flow applies is delimited by dashed lines. (Simulation-B)
and treats the differential operator perturbatively. It is thus unclear how to determine \(C(\Delta x,\Delta y)\), and it may require going to the next order in perturbation theory, which is beyond the scope of the present work.
As a consistency check, we can compute the mass flux \(\langle v_{y}^{\omega\prime}\psi^{\prime}\rangle\) directly from our result for the two-point function (17). In particular, we directly confirm that, being an odd correlator, it is determined by the inhomogeneous solution to the two-point function equation. We shall compute \(\left\langle v_{y}^{\omega\prime}(\mathbf{r}_{1})\psi_{2}^{\prime}\right\rangle\) and will subsequently merge the two points.
\[\sqrt{\frac{\epsilon}{\alpha}}\left\langle v_{y}^{\omega\prime}( \mathbf{r}_{1})\psi_{2}^{\prime}\right\rangle=\] \[=\nabla_{1}^{2}\partial_{x_{1}}\left[2y_{+}\int_{0}^{\Delta x}dz \int_{0}^{\Delta y/2}dz^{\prime}\tilde{\chi}_{12}\left(z,z^{\prime}\right) \right]+\{\text{odd}\}\] \[=\partial_{y_{1}}^{2}\left[2y_{+}\int_{0}^{\Delta y/2}dz^{\prime }\tilde{\chi}_{12}\left(\Delta x,z^{\prime}\right)\right]+\{\text{odd}\}\] \[=\tilde{\chi}_{12}(\Delta x,\Delta y)+\{\text{odd}\} \tag{18}\]
where \(\{\text{odd}\}\) denotes terms odd in \(\Delta y\) and \(\Delta x\) which will vanish when we take the single point limit \(\Delta x,\Delta y\to 0\). Note that the zero mode indeed produces only odd contributions in \(\Delta x\) (since \(C(\Delta x,\Delta y)\) is even under \(x_{1}\to-x_{1}\) while in (18) there is an odd derivative with respect to this variable), which do not contribute. Taking the limit \(\Delta x,\Delta y\to 0\), \(\tilde{\chi}_{12}\to\epsilon\), up to the contribution to the energy injection rate from modes with \(k_{x}=0\) and \(k_{y}=0\), assumed to be \(O(l_{f}/L)\). We thus get the expected result of \(\langle v_{y}^{\omega\prime}\psi^{\prime}\rangle=\sqrt{\alpha\epsilon}\).
### Simulation results for the two-point correlation function
We now present results from DNS for the two-point function \(\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\rangle(\Delta y,\Delta x,y_{+})\). Note that a-priori the correlation function also depends on \(x_{+}\) (or \(x_{1}\)), but taking into account statistical homogeneity in the \(x\) direction, we also average over \(x_{+}\) (in addition to time). Using the fact that both jet regions are statistically identical, we compute the two-point function for each of them and average the two to obtain better statistics. In Fig. 2(a) we present the variance \(\langle\psi^{\prime 2}\rangle\) as a function of \(y\) normalized by its value at \(y=0\) at the center of the jet. The jet region, where the leading order solution for the mean flow (3) applies, is defined by \(|\partial_{y}U|/\sqrt{\epsilon/\alpha L^{2}}<1\) and is delimited by dashed lines, and we expect that \(\langle\psi^{\prime 2}\rangle=C(0,0)\) in this region. The large peaks in the variance \(\langle\psi^{\prime 2}\rangle\) outside the jet region are related to the vortices in between the jets. While they are coherent structures with a large amplitude, which we would normally associate with a mean flow, since they freely move across the domain they contribute to the fluctuations in our averaging procedure. In Fig. 2(b) we present the normalized correlation function \(\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\rangle(\Delta y,\Delta x=0,y_{+}=0) /\langle\psi^{\prime 2}\rangle(y=0)\), showing how \(\psi\) correlations decay with \(\Delta y\) when the separation between the points is taken symmetrically around a jets center. In Fig. 2(c) we present the normalized correlation function \(\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\rangle(\Delta y=0,\Delta x,y_{+}=0) /\langle\psi^{\prime 2}\rangle(y=0)\), showing the correlations in the \(x\) direction for points at the center of the jet \(y_{1}=y_{2}=0\). In Fig. 3 we show that the shape of these correlations is to leading order independent of \(y_{+}\) in the jet region, presenting \(\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\rangle(\Delta y,\Delta x=0,y_{+})\) and \(\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\rangle(0,\Delta x,y)\). Note that according to (17) setting \(y_{+}=0\) or \(\Delta x=0\) or \(\Delta y=0\), as we do in Fig. 2, allows us to probe only the zero mode \(C(\Delta x,\Delta y)\) since the inhomogeneous contribution vanishes.
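The averaging over \(x_{+}\) exploits the periodicity in \(x\) and can be done with an FFT-based circular correlation; the sketch below outlines such an estimator (the array layout and naming are assumptions, not the actual post-processing scripts).

```python
import numpy as np

def two_point_corr(psi_prime, dy_idx):
    """Estimate <psi'(x, y) psi'(x + dx, y + dy)> averaged over x and snapshots.
    psi_prime: fluctuation field of shape (n_snapshots, ny, nx);
    dy_idx: separation in y in grid points."""
    a = psi_prime
    b = np.roll(psi_prime, -dy_idx, axis=1)           # psi'(x, y + dy)
    fa = np.fft.fft(a, axis=-1)
    fb = np.fft.fft(b, axis=-1)
    # circular cross-correlation in x, normalized to a mean over x
    corr = np.fft.ifft(np.conj(fa) * fb, axis=-1).real / a.shape[-1]
    return corr.mean(axis=0)                          # indexed by (y, dx)
```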
In the upper panel of Fig. 4 we present the structure of the correlation function as a function of \(\Delta x\) and \(\Delta y\) at a few fixed \(y_{+}\). According to (17) the zero modes \(C(\Delta x,\Delta y)\) can be observed by setting \(y_{+}=0\), as presented in the center panel in Fig. 4. Qualitatively, from Fig. 4 it appears that the zero mode is symmetric with respect to reflection of \(\Delta x\), as expected. To quantify this we decompose the two-point function into its even and odd parts with respect to \(\Delta x\) (and separately \(\Delta y\)) the decomposition given by:
\[G_{\Delta z\text{ even}} = \frac{G(\Delta z,...)+G(-\Delta z,...)}{2}, \tag{19}\] \[G_{\Delta z\text{ odd}} = \frac{G(\Delta z,...)-G(-\Delta z,...)}{2}. \tag{20}\]
To quantify the symmetry of the zero mode we compute the relative power:
\[R_{\Delta z}\left[G\right]=\sqrt{\frac{\iint d^{2}xG_{\Delta z\text{ odd}}^{2}}{\iint d^{2}xG_{\Delta z\text{ even}}^{2}}}, \tag{21}\]
at \(y_{+}=0\). For all simulations considered we get that \(R_{\Delta x}\left[\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\rangle\right]\ll 1\) (as well as \(R_{\Delta y}\left[\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\rangle\right]\ll 1\)) at \(y_{+}=0\) (and in fact for any \(y_{+}\) inside the jet region). Specifically, in the case of the simulation considered in Figs. (2, 3, 4) we get \(R_{\Delta x}\left[\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\rangle\right]=0.092\) (\(R_{\Delta y}\left[\langle\psi^{\prime}_{1}\psi^{\prime}_{2}\rangle\right]=0.099\)) at \(y_{+}=0\). Thus, our results are consistent with the presence of zero modes of the form predicted in (17), and support that these zero modes are even with respect to PT, as expected from theoretical considerations.

Figure 3: The variation of the two-point function with \(y_{+}\) with: (a) \(\Delta x=0\) and (b) \(\Delta y=0\). (Simulation-B)
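A minimal sketch of the decomposition (19)-(20) and of the diagnostic (21), acting on a correlation function sampled on a separation grid symmetric about zero (the array `G` below is a synthetic placeholder, not a measured correlation):

```python
import numpy as np

def even_odd(G, axis):
    """Split G into parts even and odd under reflection of the separation
    coordinate along `axis` (Delta x or Delta y), Eqs. (19)-(20)."""
    Gr = np.flip(G, axis=axis)
    return 0.5 * (G + Gr), 0.5 * (G - Gr)

def relative_power(G, axis):
    """Relative power of the odd part, Eq. (21)."""
    Ge, Go = even_odd(G, axis)
    return np.sqrt(np.sum(Go**2) / np.sum(Ge**2))

# usage: G[i, j] ~ <psi'_1 psi'_2>(Delta y_i, Delta x_j) at y_+ = 0
dy = np.linspace(-1, 1, 65)
dx = np.linspace(-1, 1, 65)
G = np.exp(-(dy[:, None]**2 + dx[None, :]**2))   # synthetic, perfectly even
print(relative_power(G, axis=1))                 # ~0 for an even function
```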
For \(y_{+}\neq 0\) one expects contributions from both the even and the odd correlators; however, in practice we observe that the two-point function is independent of \(y_{+}\) (as we expect for the zero mode) and looks identical to that at \(y_{+}=0\). It thus seems that the two-point function is dominated by the zero mode. In the lower panel of Fig. 4 we present the part of the correlation function which is even with respect to \(\Delta x\). The lower and upper panels indeed appear identical, meaning that the full correlation function is dominated by the even part. We note that from DNS we also get \(R_{\Delta y,\Delta x}\left[\langle\psi^{\prime}_{1}\psi^{\prime}_{2}\rangle \right]=0.102\), while it should vanish by the exchange symmetry, suggesting that the odd contribution to \(\langle\psi^{\prime}_{1}\psi^{\prime}_{2}\rangle\) is comparable to the numerical noise.
An important question is how the level of fluctuations scales with the parameters of the problem. In particular, there are at least two small parameters which can be important, \(l_{f}/L\) and \(\delta\). For the perturbation theory to be consistent, fluctuations should be suppressed compared to the mean flow, with the ratio expected to scale as a power of a small parameter. To analyze the scaling of the fluctuations we focus on single-point quantities, but instead of \(\langle\psi^{\prime 2}\rangle\) we consider \(\langle u^{\prime 2}\rangle=\langle(\partial_{y}\psi^{\prime})^{2}\rangle\) and \(\langle v^{\prime 2}\rangle=\langle(\partial_{x}\psi^{\prime})^{2}\rangle\). First, since \(U=-\partial_{y}\Psi=\mathrm{const}\), this makes the comparison to the mean flow much cleaner. In addition, it allows us to compare the scaling of correlators which are even under PT with that of the odd correlator \(\langle u^{\prime}v^{\prime}\rangle=-\langle\partial_{x}\psi^{\prime}\partial _{y}\psi^{\prime}\rangle\) (while \(\langle\psi^{\prime 2}\rangle\) is an even correlator, so there is no odd part to compare to).
For the latter we can derive an analytic expression:
\[\langle v^{\omega}_{y}\psi^{\prime}\rangle =\langle(\partial_{x}\nabla^{2}\psi^{\prime})\psi^{\prime}\rangle,\] \[=-\langle\partial_{x}^{2}\psi^{\prime}\partial_{x}\psi^{\prime} \rangle-\langle\partial_{y}^{2}\psi^{\prime}\partial_{x}\psi^{\prime}\rangle,\] \[=-\partial_{x}\frac{\langle(\partial_{x}\psi^{\prime})^{2} \rangle}{2}+\langle\partial_{y}\psi^{\prime}\partial_{x}\partial_{y}\psi^{ \prime}\rangle+\partial_{y}\langle uv\rangle,\] \[=\partial_{x}\frac{\langle(\partial_{y}\psi^{\prime})^{2} \rangle}{2}+\partial_{y}\langle uv\rangle=\partial_{y}\langle uv\rangle, \tag{22}\]
where we have used the homogeneity in \(x\) repeatedly. From the leading order solution for the mass flux we thus know that \(\partial_{y}\langle vu\rangle=\sqrt{\alpha\epsilon}\), meaning that \(\langle vu\rangle=\sqrt{\alpha\epsilon}y\) which can be written as \(\langle u^{\prime}v^{\prime}\rangle=(\epsilon L)^{2/3}(y/L)\delta^{1/2}\). This is confirmed by our DNS, shown in Fig. 7(b).
On the other hand, we expect the even correlators to be determined by the zero modes, and using equation (17) we have that \(\langle v^{\prime 2}\rangle=-\partial_{\Delta x}^{2}C|_{(0,0)}\) and \(\langle u^{\prime 2}\rangle=-\partial_{\Delta y}^{2}C|_{(0,0)}\) so that both variances are expected to be constant in the jet region. Fig. 5 confirms this expectation. Note in passing that it is not a-priori clear that we can use (17) to compute single point correlators of the derivatives of \(\psi^{\prime}\). Indeed, the presence of a direct cascade of the kinetic energy \((\nabla\psi^{\prime})^{2}\) implies that non-linear interactions become important for correlators of derivatives of \(\psi^{\prime}\) at small enough distances \(\Delta x,\Delta y\ll l_{f}\), which would invalidate the approximations leading to equation (14) and its solution (17) for such correlators. However, as we will see in the next section, in the region where the mean flow is strong the direct cascade is arrested for LQG, which may explain why the solution (17) can still be used.
While lacking a prediction from analytic considerations, we can use the DNS results to determine the scaling of the fluctuations with the parameters of the model. We find that \(\langle v^{\prime 2}\rangle\) and \(\langle u^{\prime 2}\rangle\) scale differently, probably because of the asymmetry introduced by the mean flow, and we therefore examine them separately. We find that the following scalings lead to a collapse of data with different run parameters:
\[\langle u^{\prime 2}\rangle \sim(\epsilon L)^{2/3}\delta^{-1/2},\] \[\langle v^{\prime 2}\rangle \sim(\epsilon L)^{2/3}\delta^{1/4}. \tag{23}\]
Fig. 5 demonstrates the collapse of the variance profile when normalized by this scaling for three runs where only \(\delta\) is varied. In Fig. 6 we demonstrate the collapse for runs with varying forcing and viscous scale. Since the velocity variance inside the jet is uniform (Fig. 5), here we take the mean value in the middle of the jet as representative for the run.
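As a sketch of how such a collapse can be tested in practice, one divides the measured mid-jet variances by the proposed combinations of \(\epsilon\), \(L\) and \(\delta\) and checks that the resulting ratios are run-independent. The numerical values below are placeholders, not measured DNS values.

```python
import numpy as np

# hypothetical run parameters and mid-jet variances (placeholders only)
eps   = np.array([1e-4, 1e-4, 2e-4])     # potential-energy injection rates
L     = 2 * np.pi                         # box size
delta = np.array([0.02, 0.04, 0.08])      # condensate-strength parameter
u2    = np.array([0.90, 0.64, 0.46])      # <u'^2> at the jet center
v2    = np.array([0.012, 0.015, 0.018])   # <v'^2> at the jet center

# per Eq. (23) these ratios should be O(1) and approximately the same for all runs
print(u2 / ((eps * L) ** (2 / 3) * delta ** (-0.5)))
print(v2 / ((eps * L) ** (2 / 3) * delta ** 0.25))
```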
All in all, we find
\[\langle u^{\prime 2}\rangle/U^{2}\sim\delta^{1/2}, \langle v^{\prime 2}\rangle/U^{2}\sim\delta^{5/4}, \tag{24}\] \[\langle u^{\prime}v^{\prime}\rangle/U^{2}\sim(y/L)\delta^{3/2}, \tag{25}\]
using that the mean jet velocity satisfies \(U^{2}\sim(\epsilon L)^{2/3}\delta^{-1}\). Thus, the perturbation theory is indeed consistent, with the fluctuations suppressed compared to the mean flow by powers of \(\delta\). It is also worth noting that there is not one characteristic scaling for the fluctuations, but rather a hierarchy with \(\langle u^{\prime 2}\rangle\gg\langle v^{\prime 2}\rangle\gg\langle u^{\prime}v^{ \prime}\rangle\); in particular, the odd correlator is suppressed compared to the even ones. That was also the case for the condensate state in 2DNS [16; 19]. This emphasizes that one cannot straightforwardly use a kinetic theory approach and justify quasi-linear dynamics based on the naive scaling for the fluctuations (coming from the odd correlator).

Figure 4: The two-point function (averaged over \(x_{+}\)) as a function of \((\Delta x,\,\Delta y)\) measured at different \(y_{+}\) in the jet region. (Simulation-B)
## IV The direct cascade
So far we have discussed the potential energy balance and derived the equation for the fluctuations' two-point function, based on the assumption that potential energy is transferred to large scales and that the only effective way by which it is dissipated is through the formation of the condensate. At the same time, one expects a direct cascade of kinetic energy to small scales, on which the condensate would naively have no significant influence. Below we show that this is not the case for LQG turbulence.
### Spatial kinetic energy balance
We first derive the spatial kinetic energy balance. To obtain the total kinetic energy balance we act with \((\partial_{i}\psi)\partial_{i}\) on (2). Note that from here on, summation is implied over repeated indices. For the non-linear term we have:
\[\begin{split}&\partial_{i}\psi\partial_{i}\partial_{j}(v_{j}^{ \omega}\psi)=\partial_{i}(\partial_{i}\psi\partial_{j}(v_{j}^{\omega}\psi))- \omega\partial_{j}(v_{j}^{\omega}\psi)\\ &=\partial_{i}\left[\partial_{i}\psi\partial_{j}(v_{j}^{\omega} \psi)-\omega v_{i}^{\omega}\psi\right]=\partial_{i}J_{i},\end{split} \tag{26}\]
where we have used that since \(v_{i}^{\omega}=\epsilon_{ij}\partial_{j}\omega\), from symmetry
\[\psi v_{i}^{\omega}\partial_{i}\omega=\psi\epsilon_{ij}\partial_{j}\omega \partial_{i}\omega=0. \tag{27}\]
As expected from the inviscid conservation of kinetic energy, this contribution takes the form of a divergence of a flux (of kinetic energy) which we denote by \(\mathbf{J}\). We then decompose the stream-function into its mean and fluctuations \(\psi=\Psi+\psi^{\prime}\), average, and assume homogeneity in \(x\) (implying there is only a flux in the \(y\) direction):
\[\langle J_{y}\rangle = \langle\partial_{y}\psi\partial_{j}(v_{j}^{\omega}\psi)\rangle- \langle\omega v_{y}^{\omega}\psi\rangle \tag{28}\] \[= \partial_{y}\Psi\partial_{y}\langle v_{y}^{\omega^{\prime}}\psi^ {\prime}\rangle+\partial_{y}\Psi\langle v_{y}^{\omega\prime}\partial_{y}\psi ^{\prime}\rangle+\langle v_{j}^{\omega\prime}\partial_{y}\psi^{\prime} \partial_{j}\psi^{\prime}\rangle\] \[-\partial_{y}^{2}\Psi\langle v_{y}^{\omega^{\prime}}\psi^{ \prime}\rangle-\Psi\langle\omega^{\prime}v_{y}^{\omega\prime}\rangle-\langle \psi^{\prime}\omega^{\prime}v_{y}^{\omega\prime}\rangle.\]
The term proportional to \(\Psi\) in fact vanishes, as \(\langle\omega^{\prime}v_{y}^{\omega\prime}\rangle=\langle\omega^{\prime} \partial_{x}\omega^{\prime}\rangle=\partial_{x}\langle\omega^{\prime 2}\rangle/2=0\). Thus, the spatial kinetic energy flux is given by:
\[\langle J_{y}\rangle= \partial_{y}\Psi\partial_{y}\langle v_{y}^{\omega\prime}\psi^{ \prime}\rangle+\partial_{y}\Psi\langle v_{y}^{\omega\prime}\partial_{y}\psi^{ \prime}\rangle-\partial_{y}^{2}\Psi\langle v_{y}^{\omega^{\prime}}\psi^{ \prime}\rangle\] \[+\langle v_{j}^{\omega\prime}\partial_{y}\psi^{\prime}\partial_{j }\psi^{\prime}\rangle-\langle\psi^{\prime}\omega^{\prime}v_{y}^{\omega\prime }\rangle.\]
It is straightforward to compute the remaining linear terms and the resulting steady state balance of kinetic energy can finally be written as
\[\partial_{y}\left[\langle J_{y}\rangle+I_{D}\right]=\eta-D, \tag{29}\]
where \(\eta=\langle\partial_{i}\psi^{\prime}\partial_{i}f^{\prime}\rangle\) is the kinetic energy injection rate, \(D\) is the kinetic energy dissipation rate (expected to be mainly due to hyper-viscous dissipation of the fluctuations) and \(I_{D}\) is the flux due to diffusion (e.g. for the drag \(I_{D_{{}_{\alpha}}}=\alpha\partial_{y}\langle(\partial_{i}\psi)^{2}\rangle/2\)).

Figure 5: Rescaled variance of the velocity fluctuations in the direction (a) parallel to the jet (\(u^{\prime}=-\partial_{y}\psi^{\prime}\)) and (b) perpendicular to the jet (\(v^{\prime}=\partial_{x}\psi^{\prime}\)), for different values of the parameter \(\delta\). The two regions where the leading order solution for the mean flow applies are delimited by vertical dashed lines.

Figure 6: The velocity fluctuation variance inside the jet for the simulations in Table 1. Here \(u\) is the velocity component parallel to the jet (\(x\)-component) and \(v\) is the velocity component perpendicular to the jet (\(y\)-component).
It is also useful to write the kinetic energy balance for the fluctuations. For the non-linear term the contribution can be computed by subtracting \(\partial_{i}\Psi\partial_{i}\partial_{j}\langle v_{j}^{\omega\prime}\psi^{ \prime}\rangle=\partial_{y}\Psi\partial_{y}^{2}\langle v_{y}^{\omega\prime} \psi^{\prime}\rangle\) from \(\partial_{y}\langle J_{y}\rangle\). In particular, for the terms involving the mean flow \(\partial_{y}\Psi\) we have
\[\partial_{y}\left[\partial_{y}\Psi\partial_{y}\langle v_{y}^{ \omega\prime}\psi^{\prime}\rangle+\partial_{y}\Psi\langle v_{y}^{\omega\prime }\partial_{y}\psi^{\prime}\rangle\right]-\partial_{y}\Psi\partial_{y}^{2} \langle v_{y}^{\omega\prime}\psi^{\prime}\rangle\] \[=\partial_{y}\left[\partial_{y}\Psi\langle v_{y}^{\omega\prime} \partial_{y}\psi^{\prime}\rangle\right]+\partial_{y}^{2}\Psi\partial_{y} \langle v_{y}^{\omega\prime}\psi^{\prime}\rangle. \tag{30}\]
The first term on the bottom line is (part of) a flux of fluctuating kinetic energy, while the second term is (minus) the transfer term of kinetic energy between the fluctuations and the mean flow which we denote by \(T\). We therefore get
\[\partial_{y}\left[J_{y}^{\prime}+I_{D}^{\prime}\right]=\eta-D^{\prime}+T, \tag{31}\]
where
\[J_{y}^{\prime}\equiv\langle J_{y}\rangle-\partial_{y}\Psi\partial_{y}\langle v _{y}^{\omega\prime}\psi^{\prime}\rangle \tag{32}\]
is the flux of kinetic energy of the fluctuations, \(I_{D}^{\prime}\) is the fluctuating flux due to diffusion and \(D^{\prime}\) is the dissipation rate of kinetic energy fluctuations. We expect \(T\equiv-\partial_{y}^{2}\Psi\partial_{y}\langle v_{y}^{\omega\prime}\psi^{ \prime}\rangle=\partial_{y}U\partial_{y}\langle v_{y}^{\omega\prime}\psi^{ \prime}\rangle\geq 0\) so that the kinetic energy is transferred from the mean flow to the fluctuations. Note that by an order of magnitude estimate we expect the transfer term and the difference between the total flux and the fluctuating flux \(\langle J_{y}\rangle-J_{y}^{\prime}\) to be of order \(\epsilon/L^{2}\ll\eta\).
We can now use the leading order solutions (3), (4), which imply that \(\langle\omega\rangle=0\) and \(V_{y}^{\omega}=0\) as well as that \(\langle J_{y}\rangle=J_{y}^{\prime}\) and the transfer term vanishes, since both \(\partial_{y}\Psi\) and \(\langle v_{y}^{\omega\prime}\psi^{\prime}\rangle\) are independent of \(y\) to leading order. Therefore
\[\langle J_{y}\rangle=J_{y}^{\prime}=\partial_{y}\Psi\langle v_{y}^{\omega \prime}\partial_{y}\psi^{\prime}\rangle+\langle v_{j}^{\omega\prime}\partial_{ y}\psi^{\prime}\partial_{j}\psi^{\prime}\rangle-\langle\psi^{\prime}\omega^{ \prime}v_{y}^{\omega\prime}\rangle, \tag{33}\]
and the balance for the fluctuations reads
\[\partial_{y}\left[\partial_{y}\Psi\langle v_{y}^{\omega\prime} \partial_{y}\psi^{\prime}\rangle+\langle v_{j}^{\omega\prime}\partial_{y}\psi ^{\prime}\partial_{j}\psi^{\prime}\rangle+\langle\psi^{\prime}\omega^{\prime }v_{y}^{\omega\prime}\rangle+I_{D}^{\prime}\right]\] \[=\eta-D^{\prime}. \tag{34}\]
We can now directly evaluate the spatial flux of kinetic energy mediated by the mean flow, computing \(\langle v_{y}^{\omega\prime}\partial_{y}\psi^{\prime}\rangle\) based on our previous results for the two-point function. Indeed, it is given by the limit \(\mathbf{r}_{2}\rightarrow\mathbf{r}_{1}\) of the two-point function
\[\left\langle v_{y}^{\omega\prime}(\mathbf{r}_{1})\partial_{y}\psi_{2}^{\prime} \right\rangle=\left(\partial_{y_{1}}^{2}+\partial_{x_{1}}^{2}\right)\partial_ {x_{1}}\partial_{y_{2}}\left\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\right\rangle. \tag{35}\]
Note that we only need to consider the inhomogeneous part of the solution (17), as the zero mode is even under the reflection symmetry \(x\rightarrow-x\), while an odd number of derivatives with respect to \(x_{1}\) appears above. Thus, the contribution from the zero mode is odd and vanishes in the limit \(\mathbf{r}_{2}\rightarrow\mathbf{r}_{1}\). Carrying out the calculation, we get
\[\left\langle v_{y}^{\omega\prime}(\mathbf{r}_{1})\partial_{y}\psi_{2}^{\prime}\right\rangle =\sqrt{\frac{\alpha}{\epsilon}}\nabla_{1}^{2}\partial_{x_{1}} \partial_{y_{2}}\left[(y_{1}+y_{2})\int_{0}^{\Delta x}dz\int_{0}^{\Delta y/2}dz^ {\prime}\tilde{\chi}_{12}\left(z,z^{\prime}\right)\right]+\{\text{odd}\}\] \[=\sqrt{\frac{\alpha}{\epsilon}}\nabla_{1}^{2}\partial_{y_{2}} \left[(y_{1}+y_{2})\int_{0}^{\Delta y/2}dz^{\prime}\tilde{\chi}_{12}\left( \Delta x,z^{\prime}\right)\right]+\{\text{odd}\}\] \[=\sqrt{\frac{\alpha}{\epsilon}}\nabla_{1}^{2}\left[-\frac{y_{1}+y _{2}}{2}\tilde{\chi}_{12}(\Delta x,\Delta y)+\int_{0}^{\Delta y/2}dz^{\prime} \tilde{\chi}_{12}\left(\Delta x,z^{\prime}\right)\right]+\{\text{odd}\}\] \[=-y_{+}\sqrt{\frac{\alpha}{\epsilon}}\nabla_{1}^{2}\tilde{\chi}_{12 }+\{\text{odd}\}, \tag{36}\]
where \(\Delta x=x_{1}-x_{2}\), \(\Delta y=y_{1}-y_{2}\), and we have used that the correlation function is even with respect to \(\Delta x\rightarrow-\Delta x\) and \(\Delta y\rightarrow-\Delta y\). In writing (36) we have stated explicitly only the terms which will contribute in the limit \(\mathbf{r}_{2}\rightarrow\mathbf{r}_{1}\), suppressing the odd contributions. To take the limit we must evaluate \(\nabla_{1}^{2}\tilde{\chi}_{12}\) in this limit, which can be obtained from the definition of the kinetic energy injection rate \(\eta\)
\[\eta =\langle\partial_{i}f\partial_{i}\psi\rangle=-\langle f\nabla^{2}\psi\rangle,\] \[=-\lim_{2\to 1}\nabla_{1}^{2}\langle\psi_{1}^{\prime}f_{2} \rangle=-\lim_{2\to 1}\nabla_{1}^{2}\chi_{12}. \tag{37}\]
We obtain \(\langle v_{y}^{\omega\prime}\partial_{y}\psi^{\prime}\rangle\) by taking \(\Delta x,\Delta y\to 0\) of (36) using (37), and get:
\[\langle v_{y}^{\omega\prime}\partial_{y}\psi^{\prime}\rangle=\eta\sqrt{\frac{ \alpha}{\epsilon}}y=\frac{\sqrt{\epsilon\alpha}}{l_{f}^{2}}y, \tag{38}\]
which is in good agreement with DNS, as presented in Fig. 7(a). Note that here we have assumed that there is a negligible amount of kinetic energy injected into the \(k_{x}=0\) and \(k_{y}=0\) modes (but have not assumed that the forcing is isotropic), so that \(\tilde{\eta}=-\lim_{2\to 1}\nabla_{1}^{2}\tilde{\chi}_{12}\approx-\lim_{2\to 1}\nabla_{1}^{2}\chi_{12}=\eta\). As a whole we thus get that at leading order
\[\partial_{y}J^{\prime}_{y}=\partial_{y}\Psi\partial_{y}\langle v_{y}^{\omega \prime}\partial_{y}\psi^{\prime}\rangle\approx\eta. \tag{39}\]
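A sketch of how the profile entering (38) can be measured spectrally from snapshots (grid, box size and field names are assumptions for illustration, not the actual diagnostics):

```python
import numpy as np

def vyomega_dypsi_profile(psi, Lbox=2 * np.pi):
    """Profile <v_y^w' d_y psi'>(y), with v_y^w = d_x(lap psi),
    averaged over x and time.  psi : (nt, ny, nx) snapshots, periodic box."""
    nt, ny, nx = psi.shape
    kx = np.fft.fftfreq(nx, d=Lbox / nx) * 2 * np.pi
    ky = np.fft.fftfreq(ny, d=Lbox / ny) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)                        # shape (ny, nx)
    k2 = KX**2 + KY**2
    dpsi = psi - psi.mean(axis=(0, 2), keepdims=True)   # fluctuations psi'
    psig = np.fft.fft2(dpsi, axes=(-2, -1))
    vyom = np.real(np.fft.ifft2(1j * KX * (-k2) * psig, axes=(-2, -1)))  # d_x lap psi'
    dypsi = np.real(np.fft.ifft2(1j * KY * psig, axes=(-2, -1)))         # d_y psi'
    return (vyom * dypsi).mean(axis=(0, 2))             # average over t and x

# synthetic usage; the result would be compared with eta*sqrt(alpha/eps)*(y - y_jet)
rng = np.random.default_rng(1)
psi = rng.standard_normal((4, 64, 64))
prof = vyomega_dypsi_profile(psi)
```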
We could have inferred that this flux term will give a contribution of the order of \(\eta\) based on an order-of-magnitude estimate. To see this, we first recall that \(\langle v_{y}^{\omega\prime}\psi^{\prime}\rangle=\partial_{y}\langle vu\rangle\), where \(u=-\partial_{y}\psi^{\prime},v=\partial_{x}\psi^{\prime}\). On the other hand, we would like to evaluate \(\partial_{y}\langle v_{y}^{\omega\prime}\partial_{y}\psi^{\prime}\rangle=- \partial_{y}\langle v_{y}^{\omega\prime}u\rangle\). As the fluctuations are determined at small scales, we thus expect that derivatives acting on the fluctuating fields inside the average will get a contribution from scales \(\sim l_{f}\), which leads to the estimate \(\partial_{y}\langle v_{y}^{\omega\prime}u\rangle\sim\partial_{y}\langle vu \rangle/l_{f}^{2}\sim\sqrt{\alpha\epsilon}/l_{f}^{2}\) in agreement with equation (38). Note that the sign of the flux, implying that it carries kinetic energy away from the jet region, seems to be a non-trivial result of the calculation. The direction of the flux is evidently linked to the direction of transfer of potential energy between scales: when potential energy is transferred from the fluctuations to the mean flow, that suppresses the direct cascade in that region, and kinetic energy is carried away from this region.
The result for \(J^{\prime}_{y}\), (39), suggests that all the kinetic energy which is injected locally is carried away by a spatial flux due to the presence of the mean flow. In particular, if there is no spatial flux due to non-linear fluctuation-fluctuation interactions which brings kinetic energy to this region from other regions, this implies that the dissipation of kinetic energy in the jet region is negligible, \(D^{\prime}\ll\eta\). This is indeed in agreement with our results from DNS, as can be seen from Fig. 8, where the profiles of the terms in the kinetic energy balance (31) are shown. In the jet region we indeed see that the balance is between the kinetic energy injection and the divergence of the flux \(\partial_{y}(U\langle v_{y}^{\omega\prime}u^{\prime}\rangle)\) due to the mean flow, in accordance with Eq. (39). This implies that the kinetic energy is carried away from the jet region, where the mean flow is strong, before it has time to cascade to small scales and dissipate there, so that the mean flow effectively arrests the direct cascade. The kinetic energy is then deposited in the region in between the jets, where the divergence of this flux becomes negative in most of the region, as seen in Fig. 8. This is also the region where dissipation of kinetic energy occurs. Note, however, that other terms in the flux \(J^{\prime}_{y}\) also become important in that region, redistributing kinetic energy in the opposite direction to that of \(U\langle v_{y}^{\omega\prime}u^{\prime}\rangle\), as seen in the red curve in Fig. 8. In particular, cubic terms in fluctuations (not shown separately here) have an important contribution to the flux, which is probably related to the presence of a coherent vortex in that region.
### Local scale-to-scale flux: the filtering approach
The presence of a condensate makes our problem inhomogeneous due to the effects of the large-scale mean flow. In the previous section, we have shown indications that this inhomogeneity affects the transfer of kinetic energy to small scales, so that we expect the direct cascade to proceed inhomogeneously in space. In this section, we would like to confirm this scenario by directly examining the kinetic energy flux between scales in different regions of the flow. The flux in Fourier space only gives the mean flux for the entire flow, so it cannot differentiate between different spatial regions. Instead, we employ a real-space filtering technique [30], combining local spatial information with information about transfer between scales. It relies on a coarse-graining of the fields in real space using a convolution kernel with a characteristic length scale. The convolution with the kernel effectively filters out features on scales smaller than its length scale. This approach is somewhat similar to a 2D wavelet transform which keeps the spatial dependence. It allows for the decomposition of the fluid kinetic energy (or other quadratic integrals) into band-pass contributions from a series of length scales in real space; writing the corresponding budget equation gives the transfers of turbulent energy both in space and in scale. The main feature of this approach which will be useful here is the scale-to-scale flux term, which is space dependent. It will allow us to determine the spatial distribution of the flux across scales of potential and kinetic energy. The approach was previously applied to incompressible Navier-Stokes [30; 37; 38; 39], as well as to other kinds of flows, including compressible flows [40]. Here we will perform the scale decomposition and derive the analogous balance equations for the LQG equations.

Figure 7: The terms (a) \(\langle v_{y}^{\omega\prime}\partial_{y}\psi^{\prime}\rangle\) and (b) \(\langle vu\rangle\) as measured from the DNS (solid lines), rescaled and compared with their theoretical predictions (dashed line), for different values of the parameter \(\delta\). The two regions where the leading order solution for the mean flow applies are delimited by vertical dashed lines.

Figure 8: Kinetic energy balance of the fluctuations (31). The dominant flux term in the jet region \(U\langle v_{y}^{\omega\prime}u^{\prime}\rangle=\partial_{y}\Psi\langle v_{y}^{\omega\prime}\partial_{y}\psi^{\prime}\rangle\) is plotted separately from the other flux terms. The two regions where the leading order solution for the mean flow applies are delimited by vertical dashed lines. (Simulation-B)
Following [30], we define a smooth low-pass filter as
\[\overline{\psi}_{l}(\mathbf{r})\equiv\int\mathrm{d}\mathbf{x}^{\prime}\,G_{l}(\mathbf{x} ^{\prime})\psi(\mathbf{x}^{\prime}+\mathbf{r}), \tag{40}\]
where the convolution kernel \(G_{l}(\mathbf{x})\) is taken to be smooth, non-negative, normalized, \(\int\mathrm{d}\mathbf{r}\,G_{l}(r)=1\), and spatially localized. The filter scales with \(l\) as \(G_{l}(\mathbf{r})=l^{-2}G(\mathbf{r}/l)\). Specifically, we will choose the Gaussian kernel \(G_{l}(\mathbf{r})=e^{-r^{2}/2l^{2}}/(2\pi l^{2})\) when applying the filtering to DNS. We may use the filtering operator to write equations for the filtered quantities at a given length scale. In doing so, non-linear interactions cause the emergence of terms representing energy transfer between scales. Filtering is a type of averaging, so the balance equations derived in this section for the large-scale kinetic and potential energy are identical in structure to those one derives for the mean flow (where the averaging is over time). In particular, acting with the filter on (2) results in the filtered equation of motion
\[\partial_{\tau}\overline{\psi}_{l}+\mathbf{\nabla}\cdot\left(\overline{\mathbf{v}^{\omega}}_{l}\,\overline{\psi}_{l}+\mathbf{\xi}_{l}\right)=\overline{f}_{l}+\alpha \nabla^{2}\overline{\psi}_{l}-\nu(-\nabla^{2})^{p}\overline{\psi}_{l}, \tag{41}\]

where \(\mathbf{\xi}_{l}\) is the space-dependent flux of the stream function to small scales, defined as

\[\mathbf{\xi}_{l}\equiv\overline{\left(\mathbf{v}^{\omega}\psi\right)}_{l}-\overline{\mathbf{v}^{\omega}}_{l}\ \overline{\psi}_{l}. \tag{42}\]
Note that aside from the additional, small scale, spatial flux term \(\mathbf{\xi}_{l}\), Eq. (41) is the same as the regular LQG equation (2) (with \(\psi\) replaced by \(\bar{\psi}_{l}\)).
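A minimal sketch of the coarse-graining (40)-(42) on a doubly periodic grid, using an FFT-based Gaussian kernel. Field and parameter names are illustrative assumptions; this is not the code used to produce the figures.

```python
import numpy as np

def gaussian_filter_2d(field, l, Lbox=2 * np.pi):
    """Low-pass filter (40) with the Gaussian kernel G_l, via FFT on a periodic box."""
    ny, nx = field.shape
    kx = np.fft.fftfreq(nx, d=Lbox / nx) * 2 * np.pi
    ky = np.fft.fftfreq(ny, d=Lbox / ny) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    Ghat = np.exp(-0.5 * l**2 * (KX**2 + KY**2))         # Fourier transform of G_l
    return np.real(np.fft.ifft2(Ghat * np.fft.fft2(field)))

def subfilter_flux(psi, l, Lbox=2 * np.pi):
    """Sub-filter stream-function flux xi_l of Eq. (42), with v^w_x = -d_y(lap psi),
    v^w_y = d_x(lap psi)."""
    ny, nx = psi.shape
    kx = np.fft.fftfreq(nx, d=Lbox / nx) * 2 * np.pi
    ky = np.fft.fftfreq(ny, d=Lbox / ny) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    lap = -(KX**2 + KY**2)
    psig = np.fft.fft2(psi)
    vwx = np.real(np.fft.ifft2(-1j * KY * lap * psig))   # -d_y lap psi
    vwy = np.real(np.fft.ifft2( 1j * KX * lap * psig))   #  d_x lap psi
    filt = lambda f: gaussian_filter_2d(f, l, Lbox)
    xil_x = filt(vwx * psi) - filt(vwx) * filt(psi)
    xil_y = filt(vwy * psi) - filt(vwy) * filt(psi)
    return xil_x, xil_y
```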
We are interested in the potential \(\overline{e}_{l}\equiv\frac{1}{2}\overline{\psi}_{l}^{2}\) and kinetic \(\overline{h}_{l}\equiv\frac{1}{2}\left[\partial_{i}\overline{\psi}_{l}\right]^{2}\) energy balance. We start with the potential energy balance, obtained by multiplying (41) by \(\overline{\psi}_{l}\) and writing the non-linear terms as a divergence of a flux and a transfer term between scales:
\[\frac{\partial\overline{e}_{l}}{\partial\tau}+\mathbf{\nabla}\cdot\mathbf{J}_{l}^{e}=P _{l}^{e}-\Pi_{l}-D_{l}^{e}, \tag{43}\]
where \(\mathbf{J}_{l}^{e}\) is a spatial flux term of large-scale energy, \(P_{l}^{e}\) is the production of large-scale energy, \(D_{l}^{e}\) is the dissipation of energy at large scales and \(\Pi_{l}\) is the scale-to-scale energy flux, positive if the transfer is out of the large scales to small scales. The terms are given by:
\[\Pi_{l} = -\mathbf{\nabla}\overline{\psi}_{l}\cdot\mathbf{\xi}_{l}, \tag{44}\] \[\mathbf{J}_{l}^{e} = \overline{\mathbf{v}^{\omega}}_{l}\,\overline{e}_{l}+\overline{\psi}_{l}\,\mathbf{\xi}_{l}-\alpha\mathbf{\nabla}\overline{e}_{l}+\nu\mathbf{I}_{l}^{e,p},\] (45) \[D_{l}^{e} = \alpha\left(\partial_{i}\overline{\psi}_{l}\right)^{2}+\nu\left( \partial_{i_{1}}\cdots\partial_{i_{p}}\overline{\psi}_{l}\right)^{2},\] (46) \[P_{l}^{e} = \overline{\psi}_{l}\ \overline{f}_{l}, \tag{47}\]
where \(\mathbf{I}_{l}^{e,p}\) is the spatial transport due to hyper-viscosity \(\mathbf{\nabla}\cdot\mathbf{I}_{l}^{e,p}\equiv\left[\overline{\psi_{l}}(-\nabla^{2}) ^{p}\overline{\psi_{l}}-(\partial_{i_{1}}\cdots\partial_{i_{p}}\overline{\psi _{l}})^{2}\right]\).
Similarly, to derive the balance for the kinetic energy one takes the derivative of (41) \(\partial_{i}\), and multiplies it by \(\partial_{i}\overline{\psi_{l}}\), which gives the equation for low-pass kinetic energy density balance
\[\frac{\partial}{\partial\tau}\overline{h}_{l}+\mathbf{\nabla}\cdot\mathbf{J}_{l}^{h}=P _{l}^{h}-Z_{l}-D_{l}^{h}, \tag{48}\]
where \(\mathbf{J}_{l}^{h}\) is the spatial flux of large-scale kinetic energy (compare the nonlinear contribution to (26)), \(P_{l}^{h}\) is the production of kinetic energy at large scales, \(D_{l}^{h}\) is the dissipation of kinetic energy at large scales and \(Z_{l}\) is the scale-to-scale kinetic energy flux. The different terms are given by:
\[Z_{l} = -(\partial_{j}\partial_{i}\overline{\psi}_{l})\,\partial_{i}\xi_{j}, \tag{49}\] \[\mathbf{J}_{l}^{h} = (\mathbf{\nabla}\overline{\psi}_{l})\,\overline{v_{j}^{\omega}}_{l}\,\partial_{j}\overline{\psi}_{l}-\overline{\mathbf{v}^{\omega}}_{l}\,\overline{\psi}_{l}\,\nabla^{2}\overline{\psi}_{l}+\partial_{i}\overline{\psi}_{l}\,\partial_{i}\mathbf{\xi}_{l}-\alpha\mathbf{\nabla}\overline{h}_{l}+\nu\mathbf{I}_{l}^{h,p},\] (50) \[D_{l}^{h} = \alpha\left(\partial_{i_{1}}\partial_{i_{2}}\overline{\psi}_{l} \right)^{2}+\nu\left(\partial_{i_{1}}\cdots\partial_{i_{p+1}}\overline{\psi}_{l}\right)^{2},\] (51) \[P_{l}^{h} = \partial_{i}\overline{\psi}_{l}\,\partial_{i}\overline{f}_{l}, \tag{52}\]
with \(\mathbf{\nabla}\cdot\mathbf{I}_{l}^{h,p}\equiv\left[\left(\partial_{i}\overline{\psi}_{l}\right)(-\nabla^{2})^{p}\left(\partial_{i}\overline{\psi}_{l}\right)-(\partial_{i_{1}}\cdots\partial_{i_{p+1}}\overline{\psi}_{l})^{2}\right]\). The transfer terms \(\Pi_{l}\) and \(Z_{l}\) can either be positive, transferring energy from scale \(l\) to smaller scales (acting as a sink), or negative, transferring energy from small scales to \(l\) (acting as a source). The direct cascade of kinetic energy corresponds to a positive flux (large to small scales), i.e. one expects \(Z_{l}>0\) on average, while an inverse transfer of potential energy implies \(\Pi_{l}<0\) on average. In addition, we expect that \(Z_{l}\approx 0\) for large enough scales \(l\), as kinetic energy is transferred from the forcing scale to smaller scales, and similarly that \(\Pi_{l}\approx 0\) for small enough \(l\). The indications above that the direct cascade does not occur in regions where the jets are strong lead to the expectation that \(Z_{l}\approx 0\) in those regions and that the transfer of kinetic energy to small scales is concentrated in between the jets.
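Reusing `gaussian_filter_2d` and `subfilter_flux` from the sketch above, the pointwise scale-to-scale fluxes (44) and (49) can be evaluated as follows; names are again illustrative and gradients are taken spectrally.

```python
import numpy as np

def spectral_grad(field, Lbox=2 * np.pi):
    """Return (d_x field, d_y field) on a doubly periodic grid."""
    ny, nx = field.shape
    kx = np.fft.fftfreq(nx, d=Lbox / nx) * 2 * np.pi
    ky = np.fft.fftfreq(ny, d=Lbox / ny) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    fh = np.fft.fft2(field)
    return (np.real(np.fft.ifft2(1j * KX * fh)),
            np.real(np.fft.ifft2(1j * KY * fh)))

def scale_to_scale_fluxes(psi, l, Lbox=2 * np.pi):
    """Pointwise Pi_l (Eq. 44) and Z_l (Eq. 49) for one snapshot."""
    psil = gaussian_filter_2d(psi, l, Lbox)
    xix, xiy = subfilter_flux(psi, l, Lbox)
    dxp, dyp = spectral_grad(psil, Lbox)
    Pi = -(dxp * xix + dyp * xiy)                        # Eq. (44)
    # Z_l = -(d_j d_i psil) d_i xi_j, Eq. (49)
    dxxp, dyxp = spectral_grad(dxp, Lbox)                # d_x d_x, d_y d_x
    dxyp, dyyp = spectral_grad(dyp, Lbox)                # d_x d_y, d_y d_y
    dx_xix, dy_xix = spectral_grad(xix, Lbox)
    dx_xiy, dy_xiy = spectral_grad(xiy, Lbox)
    Z = -(dxxp * dx_xix + dyxp * dy_xix + dxyp * dx_xiy + dyyp * dy_xiy)
    return Pi, Z
```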
We show a short time average over \(15T_{L}\) of \(\Pi_{l}\) and \(Z_{l}\) in Fig. 9, which demonstrates their spatial distribution. In previous studies of the inter-scale flux in turbulent flows, the flux was observed to be statistically isotropic (as expected) and had regions of both positive and negative contributions on the level of a single snapshot [22]. A definite sign thus emerged only upon averaging. This is roughly what we observe for the potential energy inter-scale flux \(\Pi_{l}\) averaged over short times, Fig. 9(a)(b). In the panel below, we also show the flux when averaged in the \(x\) (homogeneous) direction. Then, though the result is still highly fluctuating, a negative flux, on average independent of \(y\) in the jet region, emerges for the larger coarse-graining scale \(l\). At the small scales, as expected the flux is homogeneous and fluctuating around zero. Note that at the large scales there is an imprint of the jet structure on the flux, with stronger fluctuations in the inter-jet region.
For the kinetic energy flux \(Z_{l}\) the distinction between jet regions and inter-jet regions is evident already at the level of a short-time average, Fig. 9(c)(d). In particular, the flux is visibly suppressed in the jet regions, and for small \(l\) it is mostly positive between them (rather than having spatially distributed patches of positive and negative contributions of almost comparable magnitude). At large scales \(l\) larger fluctuations can be seen in between the jets, but a definite sign is harder to distinguish. These observations are further quantified in the panel below, where upon averaging in the \(x\) direction the difference between the two regions is seen even more clearly for the smaller scale \(l\). For the larger \(l\), the flux fluctuates around zero in the jet region, while in between jets a very small negative flux emerges (related to the kinetic energy of the mean flow in that region, which has large gradients there). Thus, we see that most of the direct cascade is indeed concentrated in the regions between jets, where the bias towards a positive transfer is strongly amplified.
Figure 9: Potential energy flux between scales \(\Pi_{l}\) normalized by the total injection rate \(\epsilon\) for scales \(l/L=0.11\) (a) and \(l/L=0.89\) (b). Kinetic energy flux between scales \(Z_{l}\) normalized by the total injection rate \(\eta\) for scales \(l/L=0.11\) (c) and \(l/L=0.89\) (d). Below each plot the average over \(x\) is shown. The fields shown are obtained upon taking a short-time average over \(15T_{L}\) in steady-state. The two regions where the leading order solution for the mean flow applies are delimited by vertical dashed lines. (Simulation-K)

Finally, to more systematically quantify the effects observed in Fig. 9, we consider the spatially averaged inter-scale fluxes with varying coarse-graining scale \(l\). We average both in space and in time, starting once the simulations reach the steady state and up to \(100T_{L}\). To examine the difference in \(Z_{l}\) between spatial regions inside and outside the jets, we split the spatial average of \(Z_{l}(x,y)\) into the jet region \(A_{\text{jet}}\) and the region outside the jets \(A_{\text{inter-jet}}\). We choose \(A_{\text{jet}}\) as the region where the leading order solution for the mean flow applies, as also used in previous figures, and \(A_{\text{inter-jet}}=A-A_{\text{jet}}\) with \(A\) being the entire domain. We can also define the total average flux (averaged over the whole domain), given by \(\left\langle Z_{l}\right\rangle=\left(A_{\text{jet}}/A\right)\left\langle Z_{l}\right\rangle|_{A_{\text{jet}}}+\left(A_{\text{inter-jet}}/A\right)\left\langle Z_{l}\right\rangle|_{A_{\text{inter-jet}}}\). For a truly homogeneous turbulent flow, the partition would not affect the measurement (as long as both parts are large enough so that the statistics are comparable, or the averaging time is long enough). The results are presented in Fig. 10. As per our expectation, the potential energy inter-scale flux \(\left\langle\Pi_{l}\right\rangle\) is negative everywhere, corresponding to an inverse transfer, while \(\left\langle Z_{l}\right\rangle\) is everywhere positive (up to a very slight negative flux for \(l/L>0.3\) in the inter-jet region), as expected for a direct cascade. Moreover, the inter-scale flux from small scales \(l\lesssim 0.1\) is significantly suppressed in the jet region, implying that so is the direct cascade. This means that the spatial flux \(\mathbf{J}_{l}^{h}\) dominates over the inter-scale flux \(Z_{l}\) in the jet region at scales smaller than the forcing scale. Furthermore, at the smallest scales we observe that the total inter-scale flux is completely dominated by the inter-jet region (which occupies a smaller area fraction), in agreement with our observation that the overwhelming majority of the dissipation occurs there, Fig. 8. The presence of the mean flow also affects the potential energy inter-scale flux \(\langle\Pi_{l}\rangle\) at large scales \(l/L>0.3\), though less dramatically. We observe that the inter-scale flux is reduced in between jets at large enough scales. This is probably a consequence of the inverse transfer being mostly mediated by the mean flow, which takes the form of a vortex in that region. The size of the vortex, being of the order of \(0.1L\), may thus explain the observed decrease.
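A sketch of the jet / inter-jet decomposition of the average used here, assuming a hypothetical boolean mask over \(y\) that marks where the leading-order mean-flow solution applies:

```python
import numpy as np

def region_averages(Z, jet_mask):
    """Split the spatial average of a flux field Z(y, x) into jet and inter-jet parts.

    jet_mask : boolean array over y, True inside the jet region.
    """
    jet = Z[jet_mask, :].mean()                   # <Z_l> restricted to A_jet
    inter = Z[~jet_mask, :].mean()                # <Z_l> restricted to A_inter-jet
    frac = jet_mask.mean()                        # area fraction A_jet / A
    total = frac * jet + (1 - frac) * inter       # area-weighted total average
    return jet, inter, total
```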
### Influence of the inverse cascade on the direct cascade in other 2D flows
We have seen that the condensate has a dramatic effect on the direct cascade in LQG, an effect that appears to be absent in 2DNS [35]. That raises the question of what determines for which types of 2D flows (with an inverse cascade) the latter effect could occur. In particular, recall that LQG and 2DNS are part of a wider class of active scalar equations, where a scalar \(q\) is advected by a velocity with stream-function \(\phi\), the relation between the two given by \(q_{\mathbf{k}}=|\mathbf{k}|^{m}\phi_{\mathbf{k}}\) [41]. Here \(m\) controls the range of the dynamics: for 2DNS \((m,q,\phi)=(2,\omega,\psi)\), while for LQG \((m,q,\phi)=(-2,\psi,\omega)\), so the velocity is given by derivatives of the scalar, making the dynamics local. All these flows have two positive definite conserved quantities, which we shall call \(E=\frac{1}{2}\int q\phi d^{2}x\) and \(Z=\frac{1}{2}\int q^{2}d^{2}x\), where \(Z\) cascades to small scales while \(E\) is transferred to large scales.
We have seen that for LQG the arrest of the direct cascade occurs in the regions where the mean flow is strong due to a spatial flux of \(Z\) away from those regions. A natural question is whether this mechanism could occur for other active scalar flows. To answer this question we consider the balance of \(Z\) for the turbulent fluctuations. First note that there are two types of terms involving the mean flow which enter this balance and can interfere with a homogeneous direct cascade: a transfer term between the mean flow and the fluctuations, and a spatial flux term. We expect \(Z\) to be transferred to small scales, and therefore an exchange term would tend to remove \(Z\) from the mean flow and transfer it to the fluctuations--enhancing the \(Z\) injection into the fluctuations in the regions of a strong mean flow. Thus, it is only a spatial flux term which could arrest the direct cascade as in LQG.
We now show that a spatial flux of \(Z\) fluctuations due to the mean flow is absent in models where interactions are non-local. In particular, we demonstrate this both for SWQG with \(L_{d}>l_{f}\) (for \(L_{d}\ll l_{f}\) the deformation radius influences the direct cascade and we expect a transition to the LQG regime for scales \(L_{d}<l<l_{f}\)), and for an active scalar with \(m>0\), assuming the flow is statistically homogeneous in the direction of the mean flow (i.e. that there is no trivial spatial flux due to the inhomogeneity of the turbulence). This is a consequence of the \(Z=q^{2}\) balance for the fluctuations in the steady state:
\[\partial_{i}\left\langle u_{i}^{\prime}\frac{q^{\prime 2}}{2}\right\rangle+ \partial_{i}Q\langle u_{i}q^{\prime}\rangle+U_{i}\partial_{i}\left\langle\frac {q^{\prime 2}}{2}\right\rangle=\eta-D, \tag{53}\]
where the third term (which is a spatial flux of \(Z^{\prime}=q^{\prime 2}\) due to advection by the mean flow) vanishes for a flow statistically homogeneous in the direction of \(U\). Thus, the feedback between the condensate and the direct cascade which we have demonstrated in LQG does not exist for an active scalar with \(m>0\), which has long-range interactions, but might exist in models with \(m<0\), where small-scale interactions are amplified.
Figure 10: Average energy fluxes across length scale \(l\) inside (blue) and outside (orange) the jet region. The total flux is denoted by a dashed line (green). (a) Potential energy flux \(\langle\Pi_{l}\rangle\). (b) Kinetic energy flux \(\langle Z_{l}\rangle\). (Simulation-K)

Finally, let us also discuss whether the transfer of \(Z\) from the mean flow to the fluctuations, \(\partial_{i}Q\langle u_{i}q^{\prime}\rangle\), could significantly enhance the direct cascade in regions of strong mean flow (or large \(Q\) gradients). That requires this term to be of order \(\eta\) (the \(Z\) injection rate), which we now show is not the case in 2DNS and SWQG. To estimate it, let us assume a jet geometry for simplicity, giving \(\partial_{y}Q\langle u_{y}q^{\prime}\rangle\equiv\partial_{y}Q\langle vq^{\prime}\rangle\), where we denote \(u=-\partial_{y}\phi^{\prime},v=\partial_{x}\phi^{\prime}\) as we had above. For SWQG (and 2DNS) we have \(\langle vq^{\prime}\rangle=-\partial_{y}\langle uv\rangle\):
\[\langle vq^{\prime}\rangle=\langle\partial_{x}\psi^{\prime}(\nabla^{2}-L_{d}^{-2 })\psi^{\prime}\rangle=-\langle\psi^{\prime}\partial_{x}\nabla^{2}\psi^{\prime }\rangle=-\partial_{y}\langle uv\rangle, \tag{54}\]
where we have already demonstrated the last equality (the Taylor identity) in equation (22) above. Thus, an order of magnitude estimate, provided that \(L_{d}>l_{f}\), gives \(\partial_{y}Q\langle vq^{\prime}\rangle\sim U^{\prime\prime}\partial_{y}\langle uv\rangle\sim\epsilon/L^{2}\ll\epsilon/l_{f}^{2}=\eta\), where \(\epsilon\) is the injection rate of \(E\), meaning that the transfer term is small. This is consistent with the observations in 2DNS [35], where the cubic-in-fluctuations spatial flux of \(Z\) was more significant (though still small) compared to the transfer term.
## V Discussion
In this work we have characterized the second order statistics of a jet condensate forming in the large-scale quasi-geostrophic equation, where potential energy experiences an inverse transfer while kinetic energy cascades to small scales. We have demonstrated that in the regions where the jets are strong the quasi-linear approximation is sufficient to obtain the second-order, two-point correlation functions of all the fluctuating fields (\(\psi\) and its derivatives). This is the case since the direct cascade is effectively arrested in those regions, so that non-linear fluctuation-fluctuation interactions are unimportant even for the kinetic energy (and thus can be neglected when determining e.g. correlators of \(\nabla\psi\)). Using a local coarse-graining approach we have shown that the direct cascade is indeed mostly limited to the inter-jet regions. In the regions where the jets are strong, there is instead a spatial flux of kinetic energy, mediated by the mean flow, which prevents the direct cascade from developing, and which carries the kinetic energy to the inter-jet regions. At the same time, in between the jets we find regions where the quasi-linear approximation for the potential energy necessarily cannot work, since the mean-flow-fluctuations interactions (proportional to \(U=-\partial_{y}\Psi\)) in those regions are small, and there are no other quasi-linear terms which can facilitate a transfer between mean flow and fluctuations. This is a consequence of interactions being local in LQG, so that there are no non-local (e.g. pressure) terms related to the mean flow which can redistribute the energy. Thus we find that in LQG the domain can be decomposed into two distinct regions: one where the dynamics is quasi-linear both for potential energy and for kinetic energy, and another where fluctuation-fluctuation interactions overwhelm mean-flow-turbulence interactions for both, which is also where the direct cascade is concentrated. We argue that both phenomena are related to the locality of interactions in LQG, and do not occur for flows with long-range interactions, i.e. active scalars with \(m>0\), as well as in models with short-range interactions reaching beyond the forcing scale, namely SWQG with \(L_{d}>l_{f}\). It remains to be seen if active scalars with \(m<0\) or SWQG with \(L_{d}<l_{f}\), both having dominant interactions below the forcing scale, can exhibit an arrest of the direct cascade as we have found for the limiting LQG case (\(m=-2\), \(L_{d}=0\)). More generally, understanding the similarities and differences in the condensate state between these two classes of flows away from the LQG limit is an interesting direction for future work.
For the regions where the quasi-linear approximation applies, we have found that fluctuations are suppressed compared to the mean flow with powers of \(\delta\), the parameter which quantifies the strength of the condensate. Furthermore, we find that different correlation functions scale differently with \(\delta\) and that correlators which are odd with respect to parity+time reversal symmetry are significantly suppressed compared to even correlators. Such a hierarchy was previously observed in 2DNS [19], and points to the fact that constructing a closed perturbative quasi-linear theory for the condensate may be a subtle issue, as it cannot simply rely on a uniform scaling for the fluctuations. Related to this issue, in this work we have determined that even correlators arise from zero modes of an advection operator. We found that these zero modes are homogeneous in the jet region, depending only on \(\Delta x\) and \(\Delta y\). How exactly those modes are to be determined, including their scaling with \(\delta\), however, remains unclear and is left for future work.
## Appendix A LQG from SWQG and consistency of limits
We first briefly recall the physical origin of the shallow water quasi-geostrophic equation, from which the large-scale quasi-geostrophic (LQG) system is derived. It describes a rotating shallow fluid layer, where the horizontal scale of the fluid motion, \(L\), is assumed much larger than the layer's mean depth \(H\), and which is under the influence of gravity \(g\). Assuming a constant rotation rate \(\Omega\hat{\mathbf{z}}\) and a characteristic velocity \(U\), the ratio between inertia and the Coriolis force is given by the Rossby number \(\text{Ro}=U/\Omega L\). A perturbative expansion in \(\text{Ro}\ll 1\), while assuming \(\text{Ro}(L/L_{d})^{2}\sim o(1)\), then gives the SWQG equation [25]
\[\partial_{t}q+\mathbf{v}\cdot\nabla q=\partial_{t}q+J(\psi,q)=0;\quad q=\left( \nabla^{2}-L_{d}^{-2}\right)\psi, \tag{55}\]
where \(q\) is the potential vorticity, \(\psi\) is the stream function with \(\mathbf{v}=\mathbf{\hat{z}}\times\mathbf{\nabla}\psi\), \(\omega=\nabla^{2}\psi=\left(\mathbf{\nabla}\times\mathbf{v}\right)\cdot\mathbf{\hat{z}}\) is the vorticity, and \(J(\psi,q)\) is the Jacobian operator defined as \(J(\psi,q)=\partial_{x}\psi\partial_{y}q-\partial_{y}\psi\partial_{x}q=\epsilon_{ij}\partial_{i}\psi\partial_{j}q\), with \(\epsilon_{ij}\) the 2D Levi-Civita symbol. The length scale \(L_{d}=\sqrt{gH}/2\Omega\) is the Rossby deformation radius. Here, hydrostatic balance relates the variation in the layer's depth \(\delta h\) to the pressure, \(g\rho\mathbf{\nabla}\delta h=\mathbf{\nabla}p\), while geostrophic balance relates the stream-function to the pressure, so that in total \(\psi=(g/\Omega)\delta h\) and there is a single equation for \(\psi\). There are two quadratic invariants in the SWQG system: energy (potential + kinetic) \(\int\mathrm{d}^{2}x\,q\psi\) and square potential
vorticity \(\int\mathrm{d}^{2}x\,q^{2}\). As a consequence, it permits both a direct and an inverse cascade.
Including forcing \(f\), friction \(\alpha\) (linear drag on velocity) and (hyper) viscosity \(\nu\), and using the box scale \(L\) to non-dimensionalize lengths we can write the SWQG equation as
\[\partial_{t}\left(\nabla^{2}-\left(\frac{L}{L_{d}}\right)^{2}\frac {1}{L^{2}}\right)\psi+\epsilon_{ij}\partial_{i}\psi\partial_{j}\nabla^{2}\psi\\ =f-\alpha\nabla^{2}\psi+\nu(-\nabla^{2})^{p}\nabla^{2}\psi. \tag{10}\]
Taking the limit \(\left(L_{d}/L\right)^{2}\to 0\) formally gives \(\partial_{t}\psi=0\), which is a purely decaying system. Following [28], to capture the emerging slow dynamics we will work in rescaled time \(\tau=t(L_{d}/L)^{2}\) with the limit \(L_{d}/L\to 0\), giving
\[-\partial_{\tau}\psi+L^{2}\epsilon_{ij}\partial_{i}\psi\partial_{ j}\nabla^{2}\psi\\ =fL^{2}-\alpha L^{2}\nabla^{2}\psi+L^{2}\nu(-\nabla^{2})^{p+1}\psi. \tag{11}\]
Next, we define a new stream-function variable \(\tilde{\psi}=L^{2}\psi\) and a corresponding forcing \(\tilde{f}=-fL^{2}\), drag \(\tilde{\alpha}=\alpha L^{2}\) and viscosity \(\tilde{\nu}=\nu L^{2}\). With the chosen scaling the relation between the stream-function and the height perturbation in the shallow water system becomes \(\tilde{\psi}=(gL^{2}/\Omega)\delta h\). We thus arrive at the LQG equation (2).
Let us also demonstrate that the LQG equation can be consistently derived directly from the rotating shallow water equations in the geostrophic limit \(\mathrm{Ro}\to 0\).
For a single-layer fluid, and including the Coriolis term, the inviscid shallow water equations (SW) are
\[\partial_{t}\mathbf{u}+(\mathbf{u}\mathbf{\nabla})\mathbf{u}+\mathbf{f}_{c}\times\mathbf{ u} =-g\mathbf{\nabla}\eta, \tag{12}\] \[\partial_{t}h+\mathbf{\nabla}(\mathbf{u}h) =0, \tag{13}\]
where \(\mathbf{u}=(u,v)\) is the horizontal velocity, \(h\) is the height of the upper free surface (where the bottom surface is assumed flat), \(\mathbf{f}_{c}=\Omega\hat{\mathbf{z}}\) and \(g\) is gravity. We apply the geostrophic scaling [25] - assuming that \(\mathbf{u}=(u,v)\sim U\), \((x,y)\sim L\) and an advective time scale \(T\sim L/U\). We decompose the free layer height as \(h=\overline{h}+\delta h\) with the mean height \(\overline{h}=H=\mathrm{const.}\) and the variation \(\delta h\sim\mathrm{Ro}H(L/L_{d})^{2}\). The assumptions so far are the same as those made to obtain SWQG. Here, we also rescale the time by \(\tau=t(L_{d}/L)^{2}\). We will consider two limits, \(\mathrm{Ro}\to 0\) and \((L_{d}/L)\to 0\), and we must specify the relation between them. For what follows we assume that when both limits are taken \(\mathrm{Ro}\) tends to zero faster than \((L_{d}/L)\). Thus we may assume that \((L_{d}/L)\sim\mathrm{Ro}^{b}\) with \(0<b<1/2\). Note that with this scaling the height perturbations are still small compared to the mean height, as \(\delta h/H\sim\mathrm{Ro}^{1-2b}\ll 1\). With this scaling, we obtain the non-dimensional SW momentum equation:
\[\mathrm{Ro}^{1+2b}\partial_{\tau}\mathbf{u}^{\prime}+\mathrm{Ro}(\mathbf{u}^{\prime} \mathbf{\nabla})\mathbf{u}^{\prime}+\mathbf{f}_{c}^{\prime}\times\mathbf{u}^{\prime}=-\mathbf{ \nabla}\eta^{\prime}, \tag{14}\]
and the non-dimensional SW height variation equation:
\[\mathrm{Ro}\partial_{\tau}\delta h^{\prime}+\mathrm{Ro}^{1-2b}( \mathbf{u}^{\prime}\mathbf{\nabla}^{\prime})\delta h^{\prime}\\ +(\mathbf{\nabla}\cdot\mathbf{u}^{\prime})\left(1+\mathrm{Ro}^{1-2b} \delta h^{\prime}\right)=0. \tag{15}\]
Having expressed both small parameters as functions of \(\mathrm{Ro}\), we expand the velocity \(u^{\prime},v^{\prime}\) in \(\epsilon_{i}=\epsilon_{i}(\mathrm{Ro})\) such that \(1=\epsilon_{0}\gg\epsilon_{1}\gg...\), and similarly we expand the height variation \(\delta h^{\prime}\) in \(\mu_{i}=\mu_{i}(\mathrm{Ro})\) such that \(1=\mu_{0}\gg\mu_{1}\gg...\).
\[u^{\prime}=\sum_{i=0}^{\infty}\epsilon_{i}u_{i}^{\prime},\quad v^{\prime}=\sum _{i=0}^{\infty}\epsilon_{i}v_{i}^{\prime},\quad\delta h^{\prime}=\sum_{i=0}^{ \infty}\mu_{i}\delta h_{i}^{\prime}. \tag{16}\]
We leave the asymptotic series arbitrary for now. Substituting the series (16) into the re-scaled time momentum equation (14) we get:
\[\mathbf{f}_{c}^{\prime}\times\mathbf{u}_{0}^{\prime}+O(\epsilon_{1};\mu_{1};\mathrm{ Ro})=-\mathbf{\nabla}\delta h_{0}^{\prime}, \tag{17}\]
where \(\mathbf{f}_{c}^{\prime}\equiv f_{0}^{\prime}\hat{\mathbf{z}}=1\,\hat{\mathbf{z}}\). The dominant balance (for any \(\epsilon_{1},\mu_{1}\ll 1\)) is between the pressure and the Coriolis force, thus
\[f_{0}^{\prime}u_{0}^{\prime}=-\partial_{y}\delta h_{0}^{\prime};\quad f_{0}^{ \prime}v_{0}^{\prime}=\partial_{x}\delta h_{0}^{\prime}\quad\Rightarrow\mathbf{ \nabla}\mathbf{u}_{0}^{\prime}=0. \tag{18}\]
The re-scaled mass conservation (15) gives at leading order the same result. This allows for the definition of the stream function \(\psi_{0}^{\prime}\equiv\delta h_{0}^{\prime}/f_{0}^{\prime}\).
Moving on to the next order in perturbation theory to get the dynamics, we consider the next order of the momentum equation (14).
\[\mathrm{Ro}^{1+2b}\partial_{\tau}\mathbf{u}_{0}^{\prime}+\mathrm{Ro}( \mathbf{u}_{0}^{\prime}\mathbf{\nabla})\mathbf{u}_{0}^{\prime}+\epsilon_{1}\mathbf{f}_{0}^{ \prime}\times\mathbf{u}_{1}^{\prime}\\ =-\mu_{1}\mathbf{\nabla}\delta h_{1}^{\prime}+O(\mathrm{Ro},\epsilon_{2},\mu_{2}). \tag{19}\]
Taking its curl gives the vorticity \(\omega=\nabla\times\mathbf{u}\) equation
\[\mathrm{Ro}(\mathbf{u}_{0}^{\prime}\mathbf{\nabla})\omega_{0}^{\prime}=-f_{0}^{\prime}\epsilon_{1}(\mathbf{\nabla}\cdot\mathbf{u}_{1}^{\prime})+O(\mathrm{Ro},\epsilon_{2},\mu_{2}), \tag{20}\]
where the time derivative term has been neglected as it is of higher-order in \(\mathrm{Ro}\) than the advection term. Note that the only non-trivial option, in this case, is for \(\epsilon_{1}=\mathrm{Ro}\) and in general we may assume that \(\epsilon_{n}=\mathrm{Ro}^{n}\). To proceed consider the next order of (15). First, we note that (18) gives \((\mathbf{u}_{0}^{\prime}\mathbf{\nabla})\delta h_{0}^{\prime}=0\) and that \(\mathrm{Ro}^{1-2b}\ll 1\), thus we get
\[\mathrm{Ro}\partial_{\tau}\delta h_{0}^{\prime}+\mathrm{Ro}^{1-2b} \mu_{1}(\mathbf{u}_{0}^{\prime}\mathbf{\nabla})\delta h_{1}^{\prime}\\ =-\mathrm{Ro}(\mathbf{\nabla}\cdot\mathbf{u}_{1}^{\prime})+O(\mu_{1} \mathrm{Ro};\mathrm{Ro}^{2-2b}). \tag{21}\]
Using the \((\mathbf{\nabla}\cdot\mathbf{u}_{1}^{\prime})\) term to relate (20) and (21) we obtain
\[\partial_{\tau}\left(\frac{\delta h_{0}^{\prime}}{f_{0}^{\prime}}\right)-(\mathbf{u }_{0}^{\prime}\mathbf{\nabla})\omega_{0}^{\prime}=\frac{\mu_{1}}{\mathrm{Ro}^{2b}} \frac{1}{f_{0}^{\prime}}(\mathbf{u}_{0}^{\prime}\mathbf{\nabla})\delta h_{1}^{\prime}. \tag{22}\]
We wish to obtain a solution for which the leading order velocity does not vanish, thus \(\partial_{\tau}\delta h_{0}^{\prime}\) must be determined at this order and \(\mu_{1}\leq\mathrm{Ro}^{2b}\). If we assume \(\mu_{1}=\mathrm{Ro}^{2b}\gg\mathrm{Ro}\gg\mathrm{Ro}^{1+2b}\), we obtain from (19) that \(\mathbf{\nabla}\delta h_{1}^{\prime}=0\) and thus \((\mathbf{u}_{0}^{\prime}\mathbf{\nabla})\delta h_{1}^{\prime}=0\). Therefore, in any case the RHS term can be neglected and equation (22) reduces to
\[\partial_{\tau}(\delta h_{0}^{\prime}/f_{0}^{\prime})-(\mathbf{u}_{0}^{\prime}\mathbf{ \nabla})\omega_{0}^{\prime}=0, \tag{23}\]
and with the definitions \(\psi^{\prime}_{0}\equiv\delta h^{\prime}_{0}/f^{\prime}_{0}\) and \(J(\omega,\psi)=\partial_{x}\omega\partial_{y}\psi-\partial_{y}\omega\partial_{x}\psi\) we obtain the dimensionless inviscid LQG equation
\[\partial_{\tau^{\prime}}\psi^{\prime}+J(\omega^{\prime},\psi^{\prime})=0. \tag{106}\]
Restoring dimensions using \(\psi\sim UL\), we have \(\partial_{\tau}\psi+L^{2}J(\omega,\psi)=0\). We may absorb the additional factor \(L^{2}\) by redefining the stream function as \(\tilde{\psi}=L^{2}\psi\), and thus \(\tilde{\omega}=\nabla^{2}\tilde{\psi}=L^{2}\omega\). Dropping the tilde notation, we arrive at the LQG equation for the re-scaled stream-function
\[\partial_{\tau}\psi+J(\omega,\psi)=0. \tag{107}\]
## Appendix B Solution for the two-point function
Here we describe the solution to equation (14) for the leading order of the two-point function. The solution to (14) is the sum of the solution to the homogeneous equation and a particular solution to the inhomogeneous equation. We begin by describing the latter, assuming the forcing is homogeneous in \(y\), so that \(\chi_{12}\) depends only on \(y_{-}\equiv\Delta y/2\) and \(\Delta x\). In principle, in the variables \(y_{+},y_{-},x_{1}\) the equation can be straightforwardly integrated to obtain the inhomogeneous solution. However, there is a subtle point that has to do with the consistency of the perturbation theory for some of the fluctuation modes.
Consider the Fourier transform of equation (14) with respect to \(\Delta x\) (equivalently \(x_{1}\)), denoting the corresponding wavenumber by \(k_{x}\), and with respect to \(\Delta y\), denoting the wavenumber by \(k_{y}\). In \(\Delta x\), this is possible since the mean flow solution is homogeneous in \(x\) and is applicable throughout the \(x\) direction, which is periodic. In \(\Delta y\), we assume that correlations decay with \(\Delta y\) within the region of applicability of the mean flow solution (note that this is not necessarily the case in \(y_{+}\), and that \(y_{+}\) is not a periodic coordinate, since the leading order mean flow has a finite range of applicability in \(y\)). We see that for \(k_{x}=0\) or \(k_{y}=0\) the left hand side of the equation vanishes while the right hand side does not. Thus, modes of the forcing with \(k_{x}=0\) or \(k_{y}=0\) need to be treated separately, and equation (14) is not the leading order equation for them. This is easily understood for \(k_{x}=0\): perturbations with \(k_{x}=0\) are not advected by the mean flow, so cubic terms or dissipative terms must be important in balancing the injection of the forcing into such modes. That \(k_{x}=0\) and \(k_{y}=0\) modes cannot be treated in a quasi-linear approximation is a completely general statement for mean-flow-turbulence interactions, holding e.g. also for 2DNS [19]. Therefore, in equation (14) we should subtract these modes from the forcing correlation function:
\[\tilde{\chi}_{12}=\chi_{12}-\frac{1}{L_{y}}\int_{-\frac{L_{y}}{2}}^{\frac{L_{y}}{2}}\chi_{12}(\Delta x,s)ds-\frac{1}{L_{x}}\int_{-\frac{L_{x}}{2}}^{\frac{L_{x}}{2}}\chi_{12}(s,\Delta y)ds. \tag{108}\]
Then, the inhomogeneous solution reads
\[\langle\psi^{\prime}_{1}\psi^{\prime}_{2}\rangle_{\text{ inh}}=(y_{1}+y_{2})\sqrt{\frac{\alpha}{\epsilon}}\int_{0}^{\Delta x}dz\int_{0}^{ \Delta y/2}dz^{\prime}\tilde{\chi}_{12}\left(z,z^{\prime}\right). \tag{109}\]
In (109) we choose the initial point of the integration to be at coincident points \(\Delta y=0\), and \(\Delta x=0\) which makes the inhomogeneous part symmetric with respect to the replacement \(r_{1}\to r_{2}\) (i.e. even under reflection \(\Delta x\rightarrow-\Delta x\), \(\Delta y\rightarrow-\Delta y\)). On the other hand, it is odd with respect to \(\Delta x\rightarrow-\Delta x\) (and \(\Delta y\rightarrow-\Delta y\)) separately. This is what we expect from the fact that the forcing combined with the mean flow break the parity+time reversal symmetry \(x\rightarrow-x,t\rightarrow-t\) of the system, see the discussion in [29].
Let us consider the influence of the subtraction of the modes with \(k_{x}=0\) and \(k_{y}=0\) from the forcing. Assuming a typical forcing length scale \(l_{f}\), it will be convenient to denote \(\chi_{12}(\Delta x,\Delta y)=\epsilon\Phi(\frac{\Delta x}{l_{f}},\frac{\Delta y}{l_{f}})\), such that \(\Phi(0,0)=1\). This gives
\[\langle\psi^{\prime}_{1}\psi^{\prime}_{2}\rangle_{\text{ inh}}=2y_{+}l_{f}^{2}\sqrt{\alpha\epsilon}\int_{0}^{\frac{\Delta x}{l_{f}}}dz \int_{0}^{\frac{\Delta y}{2l_{f}}}dz^{\prime}\tilde{\Phi}\left(z,z^{\prime} \right), \tag{110}\]
where
\[\begin{split}\tilde{\Phi}(z,z^{\prime})&=\Phi(z,z^{\prime})-\frac{l_{f}}{L_{y}}\int_{-\frac{L_{y}}{2l_{f}}}^{\frac{L_{y}}{2l_{f}}}\Phi(z,s)\,ds-\frac{l_{f}}{L_{x}}\int_{-\frac{L_{x}}{2l_{f}}}^{\frac{L_{x}}{2l_{f}}}\Phi(s,z^{\prime})\,ds\\ &=\Phi(z,z^{\prime})-\hat{\Phi}(z,k_{y}=0)-\hat{\Phi}(k_{x}=0,z^{\prime}) \end{split} \tag{111}\]
with \(\hat{\Phi}\) being the Fourier transform of \(\Phi\) with respect to \(\Delta x\) or \(\Delta y\). Here, \(\tilde{\Phi}(z,z^{\prime})\) has no modes with \(k_{x}=0\) or \(k_{y}=0\): \(\hat{\tilde{\Phi}}\left(z,k_{y}=0\right)=\hat{\tilde{\Phi}}\left(k_{x}=0,z^{\prime}\right)=0\). For the forcing we have been using in DNS, a direct calculation gives \(\hat{\Phi}\left(z,k_{y}=0\right)=\frac{1}{\pi}\cos 2\pi z\) and \(\hat{\Phi}\left(k_{x}=0,z^{\prime}\right)=\frac{1}{\pi}\cos 2\pi z^{\prime}\).
Now, for \(\Delta x,\Delta y\leq l_{f}\) the replacement of \(\Phi\) by \(\tilde{\Phi}\) does not change the result at leading order: the difference between (110) and the expression when \(\tilde{\Phi}(z,z^{\prime})\) is replaced by \(\Phi(z,z^{\prime})\) is of order \(O(l_{f}/L)\) (after integration). Similarly, we expect that the contribution to two-point correlation functions from the \(k_{x}=0\) and \(k_{y}=0\) modes of the forcing, whose determination requires a fully non-linear treatment not carried out here, will be small, of order \(O(l_{f}/L)\), compared with the leading order.
On the other hand, for e.g. \(\Delta y\approx L/2\) (similarly for \(\Delta x\approx L/4\)) we notice that
\[\begin{split}\int_{0}^{\frac{\Delta y}{2l_{f}}}dz^{\prime}\tilde{\Phi}\left(z,z^{\prime}\right)&\approx\int_{0}^{\frac{L_{y}}{2l_{f}}}dz^{\prime}\tilde{\Phi}\left(z,z^{\prime}\right)=\frac{1}{2}\int_{-\frac{L_{y}}{2l_{f}}}^{\frac{L_{y}}{2l_{f}}}dz^{\prime}\tilde{\Phi}\left(z,z^{\prime}\right)\\ &=\frac{L_{y}}{2l_{f}}\hat{\tilde{\Phi}}\left(z,k_{y}=0\right),\end{split} \tag{112}\]
where we have used that \(\tilde{\Phi}\) is a decaying function of \(z^{\prime}\), assuming that \(\int_{\frac{\Delta y}{2l_{f}}}^{\frac{L_{y}}{2l_{f}}}dz^{\prime}\tilde{\Phi}\left(z,z^{\prime}\right)\to 0\) as \(L_{y}/l_{f}\rightarrow\infty\), that
we are working in the regime \(L/l_{f}\gg 1\), and that \(\tilde{\Phi}\left(z,z^{\prime}\right)\) is even in \(z^{\prime}\): \(\tilde{\Phi}(z,z^{\prime})=\tilde{\Phi}(z,-z^{\prime})\) (corresponding to the assumed statistical reflection symmetry \(y\rightarrow-y\) of the forcing). We then have
\[\int_{0}^{\frac{\Delta x}{l_{f}}}dz\int_{0}^{\frac{L_{y}}{2l_{f}}}dz^{\prime}\tilde{\Phi}\left(z,z^{\prime}\right)\approx\frac{L_{y}}{2l_{f}}\int_{0}^{\frac{\Delta x}{l_{f}}}dz\,\hat{\tilde{\Phi}}\left(z,k_{y}=0\right)=0. \tag{113}\]
Thus, we see that the forcing influences two-point correlation functions only for \(\Delta x,\Delta y\leq l_{f}\), where we can use \(\tilde{\Phi}(z,z^{\prime})\approx\Phi(z,z^{\prime})\), while for \(\Delta x,\Delta y\approx L\) the contribution from the inhomogeneous solution is negligible.
While the forcing provides the leading order contribution to the odd in \(\Delta x\) part of the correlation function, corresponding to parity+time reversal symmetry breaking, the even contribution at leading order must come from the homogeneous solutions to (14). Those are the zero modes of the advection operator \(\mathcal{L}_{1}+\mathcal{L}_{2}=\nabla_{1}^{2}\partial_{x_{1}}+\nabla_{2}^{2}\partial_{x_{2}}=\partial_{y_{+}}\partial_{y_{-}}\partial_{x_{1}}\):
\[\left\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\right\rangle_{\text{hom}}=C(\Delta y,\Delta x)+C_{1}(y_{+},\Delta x)+C_{2}(y_{+},\Delta y). \tag{114}\]
To determine which zero modes contribute to the correlation function we need to take into account the boundary conditions. First, we assume that the fluctuations decorrelate as \(\Delta y\to L\), as confirmed in DNS Fig. 2(b), implying that \(C_{1}(y_{+},\Delta x)=0\). Indeed, the odd and even parts of the correlation function should decay to zero separately in this limit. Also, since \(C(\Delta x,\Delta y)\) is independent of \(y_{+}\) while \(C_{2}\) is independent of \(\Delta x\), \(C_{1}(y_{+},\Delta x)\) should separately decay to zero as \(\Delta y\to L\), implying it must be identically zero. This gives
\[\begin{split}\left\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\right\rangle&=C(\Delta y,\Delta x)+C_{2}(y_{+},\Delta y)\\ &\quad+2y_{+}\sqrt{\frac{\alpha}{\epsilon}}\int_{0}^{\Delta x}dz\int_{0}^{\Delta y/2}dz^{\prime}\tilde{\chi}_{12}\left(z,z^{\prime}\right).\end{split} \tag{115}\]
Next, \(C_{2}(y_{+},\Delta y)\) is in fact a zero mode of the individual advection operators \(\mathcal{L}_{i}\), reflecting the fact that \(k_{x}=0\) modes of \(\psi^{\prime}\) are not advected by a mean flow pointing in the \(\hat{x}\) direction, irrespective of the shape of the mean flow (as discussed above). So, such contributions to the correlation function are not constrained by the quasi-linear approximation. While there is no a-priori reason to set them to zero, we may thus expect that modes with \(k_{x}=0\) do not contribute significantly. Indeed, we see empirically in our DNS that when setting \(\Delta y=0\) the even part of the two-point correlation function (with varying \(\Delta x\)) is independent of \(y_{+}\), see Fig. 2(a) and Fig. 3(b). Thus, it does not contribute, at least to leading order. Finally, the full solution reads
\[\begin{split}\left\langle\psi_{1}^{\prime}\psi_{2}^{\prime}\right\rangle&=C(\Delta y,\Delta x)\\ &\quad+2y_{+}\sqrt{\frac{\alpha}{\epsilon}}\int_{0}^{\Delta x}dz\int_{0}^{\Delta y/2}dz^{\prime}\tilde{\chi}_{12}\left(z,z^{\prime}\right).\end{split} \tag{116}\]
## Appendix C Spatial and temporal resolution of DNS
For all DNS, a constant time step \(dt\), different for each DNS, was used (Table 2), with the forcing amplitude normalized by \(\sqrt{dt}\) so that energy injection is independent of it. The grid spacing \(dx=dy\approx 0.05\) is the same for all simulations considered at the default resolution of 64x128. To verify the adequacy of the choice of \(dt\) and \(dx\), they are compared with the smallest physical time scale \(\tau_{E}(l_{\nu})\) and length scale \(l_{\nu}\), respectively, as presented in Table 2. While the temporal resolution is at least 4 orders of magnitude smaller than \(\tau_{E}(l_{\nu})\), the grid spacing is relatively close to the Kolmogorov scale \(l_{\nu}\). The large difference between the spatial and temporal resolutions required is due to the hyper-viscosity used in the evolution equation (2). It allows us to use a relatively large grid spacing (or low resolution), as for \(p=7\) the energy cutoff is extremely sharp, leaving only a small fraction of the total energy at length scales \(l_{\nu}<l\leq dx\). To verify that the simulations are spatially fully developed and that this resolution is not too coarse, two high-resolution simulations (Sim-B(*2) and Sim-B(*4)) were performed with the same parameters as Simulation-B but with \(dx=dy\approx 0.025\) and \(dx=dy\approx 0.0125\), corresponding to 128x256 and 256x512 resolutions respectively. The resulting condensate is exactly the same as for the lower resolution, as can be appreciated from the snapshot comparison in Fig. 11 and from the averaged terms in Fig. 12.
Finally, to demonstrate that the choice of hyper-viscosity in eq. (2) does not affect the mean flow condensate, we performed an additional low hyper-viscosity run with the same parameters as Sim-B(*2) but with \(p=5\) and \(\nu=6.3\times 10^{-13}\), denoted Sim-B(p5). The value of \(\nu\) was chosen so that the Kolmogorov scale for both Sim-B(*2) and Sim-B(p5) is \(l_{\nu}\approx 0.2\).
Figure 11: Comparison of the LQG jet condensate at two spatial resolutions, showing the velocity \(\mathbf{v}=\hat{z}\times\mathbf{\nabla}\psi\) snapshot (a) Simulation-B with \(dx=dy=0.0491\) and (b) Simulation-B(*2) with \(dx=dy=0.0245\).
The higher resolution (compared to Sim-B) is used in this comparison because the energy cutoff is not as sharp in the \(p=5\) case, requiring a larger separation of scales to ensure convergence. The comparison demonstrates that the choice of hyper-viscosity does not affect the mean flow, as presented in Fig. 13. Note that due to the longer time required to integrate the equations at the high resolution, only limited statistics were obtained, amounting to \(\sim 30T_{L}\). To make the comparison quantitative, the averaged terms presented in Figs. 12, 13 are averaged over \(30T_{L}\) for all simulations.
|
2308.02516
|
Exact solution of maximally flat antireflection coatings for coherent
and incoherent light
|
This paper presents two approaches to the precise design of maximally flat
antireflection coatings reducing the reflectance of the substrate to near zero
in a certain region around the central frequency. The first ideal case concerns
coherent light interference, where it is required that for a chosen central
frequency the maximum number of reflectance derivatives of the reflectance with
respect to the frequency is zero. This approach makes it possible to determine
the refractive indices of a system of homogeneous quarter-wave thin films by
solving set of nonlinear equations that can be found in explicit form for a
maximum of four layers; for higher numbers of layers, the result must be sought
numerically. The second case is the incoherent superposition of light waves,
which refers to the determination of the refractive indices of the layers
independently of their thicknesses. For these two physically ideal cases, the
obtained refractive indices of the layers and the dependence of their
reflectivity on the light frequency are compared. A simple new method for
approximating the refractive indices of a maximally flat system of
antireflective layers is also proposed.
|
Jaromír Křepelka
|
2023-07-29T14:43:00Z
|
http://arxiv.org/abs/2308.02516v2
|
# Exact solution of maximally flat antireflection coatings for coherent and incoherent light
###### Abstract
This paper presents two approaches to the precise design of maximally flat antireflection coatings reducing the reflectance of the substrate to near zero in a certain region around the central frequency. The first ideal case concerns coherent light interference, where it is required that for a chosen central frequency the maximum number of derivatives of the reflectance with respect to the frequency is zero. This approach makes it possible to determine the refractive indices of a system of homogeneous quarter-wave thin films by solving a set of nonlinear equations that can be found in explicit form for a maximum of four layers; for higher numbers of layers, the result must be sought numerically. The second case is the incoherent superposition of light waves, which refers to the determination of the refractive indices of the layers independently of their thicknesses. For these two physically ideal cases, the obtained refractive indices of the layers and the dependence of their reflectivity on the light frequency are compared. A simple new method for approximating the refractive indices of a maximally flat system of antireflective layers is also proposed.
thin layers, thick layers, maximally flat antireflection coating, exact solution, coherent light, incoherent light
## 1 Introduction
Antireflection coatings are widely used in optical applications to reduce the reflectance of optical surfaces, e.g. for lenses applied as components of various devices, photovoltaic cells, optical sensors and more. Their purpose is to increase the light transmittance of optical systems, with the side effect of improving image contrast by removing stray light caused by multiple internal reflections. This is especially important for the perfect function of photographic cameras, video cameras, binoculars, telescopes, laser devices, etc. To ensure such function, systems of layers are essential, the mathematically motivated improvement of which began with Strong's [1] analysis of the reflectance of a substrate coated with a single layer of arbitrary thickness based on Fresnel's relations, which led to the frequently mentioned condition that the magnitude of the refractive index of the layer should be equal to the geometric average of the refractive indices of the substrate and the superstrate materials. This was followed by further theoretical advances in the analysis of the reflection properties of more complex systems, in particular thin interference layers, based on the principle of coherent summation of partial reflections of light waves from each material boundary. Among many researchers, we should mention Antonin Vasicek [2], a Czechoslovak pioneer in the theory, preparation and measurement of optical systems with thin or thick layers. Mathematical simplification of the seemingly somewhat confusing and complex calculations of macroscopically measurable parameters of thin film systems, based on the transformation of tangential components of electric and magnetic field intensities of plane monofrequency electromagnetic waves in isotropic homogeneous media, was then achieved by a large number of authors starting with F. Abeles [3]. Let us remember, for example, the Czechoslovak thin film expert Zdenek Knittl [4]. The matrix approach to the calculation of parameters of thin film systems for plane waves, representing the transformation of tangential components of plane electromagnetic waves, was generalized by Dwight W. Berreman for anisotropic media [5].
The theoretical research possibilities have been extended, among other things, to finding layer systems that would reduce the reflectance of the substrate as much as possible over a very wide range of the light spectrum. For instance, in [6] an exact solution for the refractive indices of quarter-wave systems of isotropic homogeneous dielectric layers is given for such an impedance matching, which, assuming perfect interference of all internally reflected waves (that is, for coherent light), achieves maximally flat reflectance versus frequency (wavelength) in the sense that the maximum possible number of derivatives of reflectance versus frequency (or wavelength) for a chosen central frequency (central wavelength) is equal to zero. Such a mathematically rigorous problem leads to finding the roots of a system of nonlinear equations, which has an explicit solution for a maximum of 4 layers; for a larger number of layers it is necessary to find a solution using numerical methods. In this paper, this approach valid for ideal thin layers is compared with the approach applicable to ideally thick layers, where none of the internally reflected waves interferes with the others. Such an aim mathematically requires finding the minimum of a function of several variables. It should be noted that the obtained refractive indices of the layers in both cases require a gradual change from the refractive index of the superstrate towards the refractive index of the substrate (or vice versa), which are usually refractive indices that are not readily available technologically. However, although theoretical research on the design of antireflection layers seems to have been completed, with the development of nanotechnology, new possibilities for the realization of antireflection surfaces are beginning to emerge (see e.g. [13]).
## 2 Theory
### Reflectance and transmittance of thin and thick layers
Let us consider the propagation of a plane monofrequency electromagnetic wave with angular frequency \(\omega\) (the angular wave number in vacuum is \(\omega/c\), \(c\) is the speed of light in vacuum) in the geometry of parallel plane layers in a stratified isotropic medium [7]. There are \(k\) homogeneous layers deposited on the substrate (rear half-space); the front half-space is the superstrate. Let
the electromagnetic wave incident on the system from the superstrate be denoted as R (right) and from the opposite direction as L (left). Plane waves carry the same R or L designation even inside thin layers; the resulting superposed electromagnetic field is then a linear combination of both counter propagating waves. Let us number the layers in the R direction of the incident wave, starting with 0 (for the superstrate), 1 (the layer last deposited on the substrate), 2,..., \(k\) (the layer deposited on the substrate as the first), \(k+1\) (the substrate). Each medium is characterized by its generally complex refractive index \(n_{j}\), \(j=0,1,\ldots,k+1\), the imaginary part of which represents absorption due to the conductivity of the materials; for ideal dielectric media the refractive indices are real quantities.
For simplicity, assume that the superstrate is a dielectric (e.g. air) in which the direction of propagation of the incident R wave is determined by the angle of incidence \(\theta_{0}\) measured from the normal of the plane material interface surfaces. Since the law of refraction \(n_{j}\sin(\theta_{j})=n_{0}\sin(\theta_{0})\) holds, i.e. the product of the refractive index and the sine of the propagation angle is invariant for all involved media (we do not discuss the physical meaning of the generally complex propagation angles \(\theta_{j}\)), the normal component of the propagation vector in the \(j\)-th medium is
\[\pm\frac{\omega}{c}\sqrt{n_{j}^{2}-(n_{0}\sin\theta_{0})^{2}}=\pm\frac{\omega} {c}n_{j}\cos(\theta_{j}).\]
Here, the plus sign applies to the L wave and the minus sign to the R wave in order to preserve the sense of the direction of wave propagation, assuming that the time dependence of the plane wave is a harmonic function of time \(t\) as \(\exp(\mathrm{i}\omega t)\) and the imaginary parts of the refractive indices of the absorbing materials are chosen to be nonpositive. Maxwell's equations provide a solution for such a configuration, which can be briefly described in the terms below. Denoting by \(Z_{0}\) the vacuum impedance, the admittance (ratio of tangential components of magnetic and electric field intensities) of the \(j\)-th medium is
\[Y_{j}=\frac{1}{Z_{0}}\times\begin{cases}\sqrt{n_{j}^{2}-(n_{0}\sin\theta_{0})^{2}}=n_{j}\cos(\theta_{j}),\\[4pt] \dfrac{n_{j}^{2}}{\sqrt{n_{j}^{2}-(n_{0}\sin\theta_{0})^{2}}}=\dfrac{n_{j}}{\cos(\theta_{j})},\end{cases} \tag{1}\]
where the upper row defines the admittance of the \(j\)-th medium for the s (TE) wave and the lower row for the p (TM) wave. The correct calculation of square roots in the complex number domain should be taken into account when focusing on absorbing materials or the case of total internal reflection in dielectric materials.
Vectors of the tangential components of the electric \(\mathbf{E}_{\mathrm{t},j}\) and magnetic \(\mathbf{H}_{\mathrm{t},j}\) field intensities, composed of the counter propagating R (\(\mathbf{E}_{\mathrm{t},R,j}\), \(\mathbf{H}_{\mathrm{t},R,j}\)) and L (\(\mathbf{E}_{\mathrm{t},\mathrm{L},j}\), \(\mathbf{H}_{\mathrm{t},\mathrm{L},j}\)) waves in the \(j\)-th medium, are obtained from the tangential components of the electric field intensity alone, decomposed into the R and L (\(\mathbf{E}_{\mathrm{t},R,j}\), \(\mathbf{E}_{\mathrm{t},\mathrm{L},j}\)) waves, using the transformation matrix
\[\begin{pmatrix}\mathbf{E}_{\mathrm{t},j}\\ \mathbf{H}_{\mathrm{t},j}\end{pmatrix}=\begin{pmatrix}\pm 1&1\\ Y_{j}&\mp Y_{j}\end{pmatrix}\begin{pmatrix}\mathbf{E}_{\mathrm{t},R,j}\\ \mathbf{E}_{\mathrm{t},\mathrm{L},j}\end{pmatrix}, \tag{2}\]
where the upper sign applies to the s-polarized waves and the lower sign to the p-polarized waves.
We conclude that the signs following directly from Maxwell's equations (and often omitted in calculations) are necessary for the correct determination of the phases of the amplitude reflectances of s and p waves, as known from Fresnel's formulas for a single boundary between two materials; otherwise the p wave argument of the amplitude reflectance would differ by \(\pi\) from the correct value, which may affect, for example, the evaluation of ellipsometric measurements or modal analysis of planar waveguides. It should be noted that when calculating (power) reflectances as quadratic quantities, the change of sign for the p wave does not apply.
The tangential components of the electric field intensity, but decomposed into R and L waves, are obtained from the tangential components of the composed electric and magnetic field intensities using the inverse of (2)
\[\begin{pmatrix}\mathbf{E}_{\mathrm{t},R,j}\\ \mathbf{E}_{\mathrm{t},\mathrm{L},j}\end{pmatrix}=\frac{1}{2}\begin{pmatrix}\pm 1&\frac{1}{Y_{j}}\\ 1&\mp\frac{1}{Y_{j}}\end{pmatrix}\begin{pmatrix}\mathbf{E}_{\mathrm{t},j}\\ \mathbf{H}_{\mathrm{t},j}\end{pmatrix}. \tag{3}\]
If we introduce, for the phase change of the plane wave as it propagates between the boundaries of the \(j\)-th layer of thickness \(h_{j}\), the term \(\varphi_{j}=(\omega/c)h_{j}\sqrt{n_{j}^{2}-(n_{0}\sin\theta_{0})^{2}}\), then for the transformation of the tangential components of the electric and magnetic (composed) field intensities between the boundaries of the \(j\)-th layer to the tangential components of the electric field intensity of the counter propagating waves, we obtain for each of the perpendicular polarizations the interference matrix \(\mathbf{M}_{j}\)
\[\begin{split}&\mathbf{M}_{j}=\begin{pmatrix}\cos(\varphi_{j})& \pm\frac{\mathrm{i}}{Y_{j}}\sin(\varphi_{j})\\ \pm\mathrm{i}Y_{j}\sin(\varphi_{j})&\cos(\varphi_{j})\end{pmatrix},\\ &\begin{pmatrix}\mathbf{E}_{\mathrm{t},j}\\ \mathbf{H}_{\mathrm{t},j}\end{pmatrix}=\mathbf{M}_{j}\begin{pmatrix}\mathbf{E}_{ \mathrm{t},j+1}\\ \mathbf{H}_{\mathrm{t},j+1}\end{pmatrix}.\end{split} \tag{4}\]
Note that the (algebraic) spectral decomposition of the interference matrix \(\mathbf{M}_{j}\) contains its eigenvectors identical to the columns of the matrix defined in equation (2)
\[\begin{split}&\mathbf{M}_{j}=\begin{pmatrix}\pm 1&1\\ Y_{j}&\mp Y_{j}\end{pmatrix}\begin{pmatrix}\exp(\mathrm{i}\varphi_{j})&0\\ 0&\exp(-\mathrm{i}\varphi_{j})\end{pmatrix}\\ &\times\frac{1}{2}\begin{pmatrix}\pm 1&\frac{1}{Y_{j}}\\ 1&\mp\frac{1}{Y_{j}}\end{pmatrix}.\end{split} \tag{5}\]
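As a quick sanity check of the decomposition (5), the short Python snippet below verifies numerically that the interference matrix of Eq. (4) (upper signs) equals \(P\,\mathrm{diag}(e^{\mathrm{i}\varphi},e^{-\mathrm{i}\varphi})\,P^{-1}\) with the eigenvector matrix of Eq. (2); the values of \(Y\) and \(\varphi\) are arbitrary test numbers and the variable names are illustrative only.

```python
# Numerical check of Eq. (5): M_j = P diag(exp(i*phi), exp(-i*phi)) P^{-1},
# using the upper (s-wave) signs and arbitrary test values for Y and phi.
import numpy as np

Y, phi = 1.7, 0.9
M = np.array([[np.cos(phi), 1j * np.sin(phi) / Y],
              [1j * Y * np.sin(phi), np.cos(phi)]])      # interference matrix, Eq. (4)
P = np.array([[1.0, 1.0], [Y, -Y]])                      # eigenvectors, columns of Eq. (2)
D = np.diag([np.exp(1j * phi), np.exp(-1j * phi)])
print(np.allclose(M, P @ D @ np.linalg.inv(P)))          # True
```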
In some cases, it is advantageous to use the decomposition of interference matrices in their eigenvectors and eigenvalues for an alternative calculation of the amplitude parameters of thin layers, in which explicit expressions for Fresnel reflectances (transmittances) at the boundaries of adjacent materials occur. From this decomposition it is clear that the matrix approach to the calculation of the thin film parameters is equivalent to an infinite sum of partially reflected and transmitted waves with phases shifted by \(\varphi_{j}=\omega\tau_{j}=2\pi\nu\tau_{j}\) during each of their passage through the layers. Such an approach, with use of the normalized spectral power density of the source radiation and the Wiener-Khinchin theorem, also allows us to show (e.g., [8]) how the smooth transition from coherent to incoherent light works in stratified media.
Then the resulting matrix transmitting the tangential components of the electric field intensity decomposed into counter propagating waves between the outer boundaries of the system of \(k\) layers is obtained by multiplying the interference matrices
in the correct order
\[\mathbf{S}=\frac{1}{2}\begin{pmatrix}\pm 1&\dfrac{1}{Y_{0}}\\ 1&\mp\dfrac{1}{Y_{0}}\end{pmatrix}\mathbf{M}_{1}\mathbf{M}_{2}\dots\mathbf{M}_{k }\begin{pmatrix}\pm 1&1\\ Y_{k+1}&\mp Y_{k+1}\end{pmatrix}, \tag{6}\]
\[\begin{pmatrix}\mathbf{E}_{\mathrm{tR},0}\\ \mathbf{E}_{\mathrm{tL},0}\end{pmatrix}=\mathbf{S}\begin{pmatrix}\mathbf{E}_{ \mathrm{tR},k+1}\\ \mathbf{E}_{\mathrm{tL},k+1}\end{pmatrix}. \tag{7}\]
The relation (6) for the transfer matrix \(\mathbf{S}\) holds if the tangential components of the electric and magnetic field intensities are continuous at the boundaries of the layers, which is not generally fulfilled when surface currents or surface charges occur at the boundaries between the layers.
Using the standard definition of amplitude reflectances \(r\) and transmittances \(t\) of waves incidenting the layer system from the superstrate (subscript R) and from the substrate (subscript L) directions, we can calculate these macroscopic complex quantities from the elements of the transfer matrix \(\mathbf{S}\) as follows
\[r_{\mathrm{R}}=\frac{S_{21}}{S_{11}},\;t_{\mathrm{R}}=\frac{1}{S_{11}},\;r_{ \mathrm{L}}=-\frac{S_{12}}{S_{11}},\;t_{\mathrm{L}}=\frac{Y_{k+1}}{Y_{0}} \frac{1}{S_{11}}, \tag{8}\]
from where we immediately have
\[\mathbf{S}=\frac{1}{t_{\mathrm{R}}}\begin{pmatrix}1&-r_{\mathrm{L}}\\ r_{\mathrm{R}}&d\end{pmatrix},\;d=t_{\mathrm{R}}t_{\mathrm{L}}-r_{\mathrm{R}} r_{\mathrm{L}}. \tag{9}\]
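The following minimal Python sketch assembles the transfer matrix of Eq. (6) and extracts the amplitude coefficients of Eq. (8) at normal incidence on non-absorbing media. The admittances are taken simply as \(Y_{j}=n_{j}\) (the common factor \(1/Z_{0}\) cancels in (8) and (10) under a uniform rescaling of all \(Y_{j}\)), the upper signs of (2)-(6) are used, and the function name and the single-layer example are illustrative, not taken from the paper.

```python
# Transfer-matrix sketch for a stack of thin layers (Eqs. 4, 6, 8), normal incidence.
import numpy as np

def stack_coefficients(n0, n_layers, ns, phis):
    """Amplitude coefficients r_R, t_R, r_L, t_L of a thin-film stack.

    n_layers: refractive indices n_1..n_k; phis: phase thicknesses phi_j.
    Admittances are taken as Y_j = n_j (the 1/Z_0 factor cancels)."""
    S = 0.5 * np.array([[1.0, 1.0 / n0],
                        [1.0, -1.0 / n0]], dtype=complex)
    for n, phi in zip(n_layers, phis):
        M = np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                      [1j * n * np.sin(phi), np.cos(phi)]], dtype=complex)  # Eq. (4)
        S = S @ M
    S = S @ np.array([[1.0, 1.0], [ns, -ns]], dtype=complex)                # Eq. (6)
    r_R, t_R = S[1, 0] / S[0, 0], 1.0 / S[0, 0]                             # Eq. (8)
    r_L, t_L = -S[0, 1] / S[0, 0], ns / n0 / S[0, 0]
    return r_R, t_R, r_L, t_L

# Single quarter-wave layer with n_1 = sqrt(n0*ns): reflectance vanishes at phi = pi/2.
n0, ns = 1.0, 2.0
r_R, t_R, _, _ = stack_coefficients(n0, [np.sqrt(n0 * ns)], ns, [np.pi / 2])
print(abs(r_R)**2, ns / n0 * abs(t_R)**2)   # ~0 and ~1, cf. Eq. (10)
```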
From the relation between inverse interference matrices and interference matrices as functions of \(\varphi_{j}\) or \(n_{j}\) for the cases of dielectric or absorbing media, and from the physical requirement imposed on the transfer matrix \(\mathbf{S}\) defined by equation (6) for the same layer system in mirror symmetry, we can derive the correlation between the elements of the transmission matrix and hence the relation for the amplitude reflectances and transmittances from left and right (see [9]). Using this so-called theorem of reversibility, we obtain for non-absorbing systems \(S_{22}=S_{11}^{*}\), \(S_{21}=S_{12}^{*}\), i.e. that the amplitude reflectances and transmittances of the system of thin dielectric layers (4 complex numbers) are completely determined by only three real numbers, using also the condition \(\det(\mathbf{S})=Y_{k+1}/Y_{0}\). For instance, we can choose for this purpose the absolute value of the amplitude reflectance from the left and its argument and the argument of the amplitude transmittance from the left, and express with their help all macroscopic parameters. This approach based on the properties of the transfer matrix with respect to mirror symmetry was generalized for systems of anisotropic thin layers in [10].
Due to the dependence of the divergence of the energy flux of a plane electromagnetic wave only on the normal (perpendicular) Cartesian coordinate, the power (energy) transfer in a system of thin layers in a planar arrangement is determined only by the normal component of the Poynting vector, calculated using the tangential vectors of the electric and magnetic field intensities separately for each of the counter propagating waves. Therefore, we will calculate the (power) reflectances and transmittances of the thin layer system following the relations
\[\begin{split}&\rho_{\mathrm{R}}=\left|r_{\mathrm{R}}\right|^{2}, \;\tau_{\mathrm{R}}=\frac{\mathrm{Re}(Y_{k+1})}{\mathrm{Re}(Y_{0})}\left|t_{ \mathrm{R}}\right|^{2},\\ &\rho_{\mathrm{L}}=\left|r_{\mathrm{L}}\right|^{2},\tau_{\mathrm{ L}}=\frac{\mathrm{Re}(Y_{0})}{\mathrm{Re}(Y_{k+1})}\left|t_{\mathrm{L}} \right|^{2},\end{split} \tag{10}\]
where \(\mathrm{Re}\) stands for the real part of a complex number. From this definition we obtain the transfer matrix for the normal components of Poynting vector
\[\mathbf{N}=\frac{1}{\tau_{\mathrm{R}}}\begin{pmatrix}1&-\rho_{\mathrm{L}}\\ \rho_{\mathrm{R}}&\delta\end{pmatrix},\;\delta=\tau_{\mathrm{R}}\tau_{ \mathrm{L}}-\rho_{\mathrm{R}}\rho_{\mathrm{L}}. \tag{11}\]
It means that for the normal components of Poynting vector at both sides of the thin film system we have
\[\begin{pmatrix}P_{\mathrm{nR},0}\\ P_{\mathrm{nL},0}\end{pmatrix}=\mathbf{N}\begin{pmatrix}P_{\mathrm{nR},k+1}\\ P_{\mathrm{nL},k+1}\end{pmatrix}. \tag{12}\]
If we are interested in the transformation of normal components of the Poynting vector of the waves propagating in one direction or the other, and thus in determining the reflectance and transmittance of a thick layer, we must first determine the attenuation of the field propagating through a layer of thickness \(h\) with complex refractive index \(n\), if the propagation angle \(\theta_{0}\) is measured in a medium with refractive index \(n_{0}\). This attenuation of the field is determined by the attenuation factor \(0<U\leq 1\)
\[U=\exp\left(-2h\frac{\omega}{c}\left|\mathrm{Im}\sqrt{n^{2}-(n_{0}\sin\theta_{0})^{2}}\right|\right), \tag{13}\]
where the absolute value in the argument of the exp function is given to emphasize the energy passivity of the medium. With its help we can determine the transfer matrix of the normal components of the Poynting vector inside arbitrary thick layer
\[\mathbf{N}=\begin{pmatrix}1/U&0\\ 0&U\end{pmatrix}. \tag{14}\]
The relations for the transfer matrices of the normal components of the Poynting vector allow them to be combined for any number of systems alternating thick layers with systems of thin layers (some of which may be empty) by simply multiplying them in the correct order. This is of course true if the tangential components of the electric and magnetic field intensities, and hence the normal components of the Poynting vector, are continuous at the boundaries. The resulting product of these matrices then needs only to be compared with the matrix elements in equation (11) to obtain the resulting (power) reflectances and transmittances. For example, for two thin film systems deposited on both sides of a thick substrate, the resulting reflectances and transmittances are determined from the relation
\[\begin{split}&\frac{1}{\tau_{\mathrm{R}}}\begin{pmatrix}1&- \rho_{\mathrm{L}}\\ \rho_{\mathrm{R}}&\delta\end{pmatrix}=\frac{1}{\tau_{\mathrm{R1}}}\begin{pmatrix} 1&-\rho_{\mathrm{L1}}\\ \rho_{\mathrm{R1}}&\delta_{1}\end{pmatrix}\begin{pmatrix}1/U_{2}&0\\ 0&U_{2}\end{pmatrix}\\ &\times\frac{1}{\tau_{\mathrm{R3}}}\begin{pmatrix}1&-\rho_{\mathrm{L3}}\\ \rho_{\mathrm{R3}}&\delta_{3}\end{pmatrix}\end{split} \tag{15}\]
where \(\delta\) has the same meaning for the whole system as in (11) and the quantities denoted here by the subscript 1 refer to a system of thin layers deposited from the left on a thick substrate and calculated from equation (8) assuming an infinitely extensive substrate from the right. The attenuation factor \(U_{2}\) (equal to one for a dielectric medium) refers to the thick layer material, and the quantities denoted by the subscript 3 refer to the system of thin layers deposited on the substrate from the right, while the thick layer material serves as a half-infinite medium on the left. Any of the subsystems may be empty.
In this way, for example, the influence of the reflection from the rear boundary of the substrate on the measured parameters of the layers can be taken into account, which is otherwise considered as a half-space from which light is not reflected back to the system.
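As a rough numerical illustration of how the Poynting-vector matrices of Eqs. (11)-(15) combine, the sketch below evaluates an uncoated thick dielectric slab in air, i.e. both thin-film subsystems reduce to single Fresnel interfaces. The helper names and the glass example are mine; the printed value can be checked against the standard incoherent-slab result \(2\rho_{1}/(1+\rho_{1})\).

```python
# Incoherent combination of interfaces via Poynting-vector transfer matrices, Eq. (15).
import numpy as np

def interface_N(rho):
    """Transfer matrix of one boundary, Eq. (23)."""
    return np.array([[1.0, -rho], [rho, 1.0 - 2.0 * rho]]) / (1.0 - rho)

def slab_reflectance(n0, n_slab, U=1.0):
    rho1 = ((n0 - n_slab) / (n0 + n_slab))**2      # front boundary, Eq. (22)
    rho3 = ((n_slab - n0) / (n_slab + n0))**2      # rear boundary
    N = interface_N(rho1) @ np.diag([1.0 / U, U]) @ interface_N(rho3)  # Eq. (15)
    return N[1, 0] / N[0, 0]                        # rho_R, cf. Eq. (25)

print(slab_reflectance(1.0, 1.5))   # ~0.0769 = 2*rho1/(1+rho1) for a glass slab in air
```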
### Maximally flat antireflection system of layers for coherent light
The assumption of perfectly coherent light is equivalent to the propagation of a monofrequency field in ideal thin dielectric layers, whose interference effect is highest when the optical thicknesses of the layers are equal to a quarter of the chosen (central) wavelength or odd multiples thereof. Therefore, let us look for the refractive indices of the system of quarter-wave layers that best minimize the reflectance of the substrate around the central wavelength at perpendicular incidence.
Let us define maximally flat antireflection coatings as those that exhibit zero value of the highest possible number of reflectance derivatives with respect to the angular frequency \(\omega\) at the central frequency \(\omega_{c}\). Derivatives can alternatively be taken with respect to the common phase change as the wave passes through each layer, i.e., the common phase layer thickness \(\varphi=\varphi_{j}=(\pi/2)\omega/\omega_{c}\), \(j=1,\ldots,k\). At the same time, the zero of the first derivative follows from the requirement of zero reflectance at the central wavelength, as it is a local minimum of reflectance.
From the Taylor expansion of the interference matrices around \(\varphi=\pi/2\) and the above requirement that the derivatives must be zero, we obtain a system of nonlinear equations of the form [6]
\[\begin{array}{l}N_{k_{j}}^{(a)}=n_{0}n_{k+1}N_{k_{j}}^{(b)},\quad 1\leq j \leq k,\quad j\;\mathrm{odd},\\ N_{k_{j}}^{(a)}=\frac{n_{0}}{n_{k+1}}N_{k_{j}}^{(b)},\quad 2\leq j \leq k,\quad j\;\mathrm{even},\end{array} \tag{16}\]
where the Pohlack's coefficients are expressed by the formulas [11]
\[\begin{array}{l}N_{k_{j}}^{(a)}=\sum_{M_{k_{j}}}\frac{n_{i_{1}}n_{i_{3}} \ldots n_{i_{j}}}{n_{i_{2}}n_{i_{4}}\ldots n_{i_{j-1}}},\quad 1\leq j\leq k, \quad j\;\mathrm{odd},\\ N_{k_{j}}^{(a)}=\sum_{M_{k_{j}}}\frac{n_{i_{1}}n_{i_{3}}\ldots n_{i_{j-1}}}{n _{i_{2}}n_{i_{4}}\ldots n_{i_{j}}}\quad 2\leq j\leq k,\quad j\;\mathrm{even},\\ N_{k_{j}}^{(b)}=\sum_{M_{k_{j}}}\frac{n_{i_{2}}n_{i_{4}}\ldots n_{i_{j-1}}}{n _{i_{1}}n_{i_{3}}\ldots n_{i_{j}}},\quad 1\leq j\leq k,\quad j\;\mathrm{odd},\\ N_{k_{j}}^{(b)}=\sum_{M_{k_{j}}}\frac{n_{i_{2}}n_{i_{4}}\ldots n_{i_{j}}}{n _{i_{1}}n_{i_{3}}\ldots n_{i_{j-1}}}\quad 2\leq j\leq k,\quad j\;\mathrm{even}. \end{array} \tag{17}\]
The summation set \(M_{k_{j}}=\{i_{1},i_{2},\ldots i_{j}\}\) is determined by the inequalities
\[\begin{array}{l}1\leq i_{1}\leq k-(j-1),\;i_{1}+1\leq i_{2}\leq k-(j-2), \ldots,\\ i_{j-1}+1\leq i_{j}\leq k-(j-j)\end{array} \tag{18}\]
or equivalently
\[\begin{array}{l}j\leq i_{j}\leq k,\;j-1\leq i_{j-1}\leq i_{j}-1,\ldots,\\ j-(j-1)\leq i_{1}\leq i_{2}-1.\end{array} \tag{19}\]
If both conditions (16) are satisfied, then the reflectance of the maximally flat antireflection system defined in this way depends on the common phase layer thickness \(\varphi\) as follows
\[\rho(\varphi)=\frac{\rho_{0}\cos^{2k}\varphi}{1-\rho_{0}+\rho_{0}\cos^{2k} \varphi}, \tag{20}\]
where \(\rho_{0}=\left[(n_{0}-n_{k+1})/(n_{0}+n_{k+1})\right]^{2}\) is the reflectance of the bare substrate without layers that a maximally flat antireflection system periodically achieves at frequencies equal to even multiples of the central frequency.
From the equivalence of summation sets (18) and (19) and the transformation of the indices \(i_{j+1-m}\to i_{m},\;m=1,2,\ldots,j\) we obtain partial result that the first of the equations (16) is solved if following symmetry condition between the refractive indices of the layers is satisfied
\[n_{j}n_{k+1-j}=n_{0}n_{k+1},\quad j=1,2,\ldots,k. \tag{21}\]
The second set of equations (16) together with the conditions (21) represent a complete system of nonlinear equations for the unknown refractive indices. In [6] it is shown that for one to four layers the solution of (16) can be found in explicit form; for a higher number of layers it is necessary to find the solution of the system of equations of several variables numerically. The time required for the numerical calculation increases dramatically with the number of layers, and the accuracy of the result is affected by the finite number of digits implemented in the computer software.
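A possible numerical route for the case of more than four layers is sketched below: the Pohlack sums of Eq. (17) are built with `itertools.combinations` and the system (16) is handed to SciPy's `fsolve`, starting from the incoherent-light profile of Eq. (27). The helper names are mine, and convergence from this particular starting guess is not guaranteed for large numbers of layers; the run time grows quickly with \(k\), consistent with the remark above.

```python
# Numerical solution of the maximally flat (coherent) design, Eqs. (16)-(19).
from itertools import combinations
import numpy as np
from scipy.optimize import fsolve

def pohlack(n_layers, j, kind):
    """Pohlack coefficient N_{k_j}^{(a)} or N_{k_j}^{(b)} of Eq. (17)."""
    k = len(n_layers)
    total = 0.0
    for subset in combinations(range(k), j):            # i_1 < i_2 < ... < i_j
        num, den = 1.0, 1.0
        for pos, idx in enumerate(subset, start=1):     # odd/even position within the subset
            odd = (pos % 2 == 1)
            if (kind == 'a') == odd:                    # 'a': odd positions in numerator
                num *= n_layers[idx]
            else:                                       # 'b': even positions in numerator
                den *= n_layers[idx]
        total += num / den
    return total

def design_coherent(k, n0, ns):
    def residual(n_layers):
        res = []
        for j in range(1, k + 1):
            rhs = n0 * ns if j % 2 == 1 else n0 / ns    # right-hand sides of Eq. (16)
            res.append(pohlack(n_layers, j, 'a') - rhs * pohlack(n_layers, j, 'b'))
        return res
    guess = [n0**(1 - j / (k + 1)) * ns**(j / (k + 1)) for j in range(1, k + 1)]  # Eq. (27)
    return fsolve(residual, guess)

print(design_coherent(4, 1.0, 2.0))
# should converge to approximately [1.0444, 1.2421, 1.6102, 1.9149], the four-layer
# values quoted in the Discussion for n0 = 1, ns = 2
```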
### Maximally flat antireflection system of layers for incoherent light
The reflectance of a system of \(k\) thick dielectric layers with refractive indices \(n_{j}\), \(j=1,\ldots,k\) located between media with refractive indices \(n_{0}\) (for example air) and \(n_{k+1}=n_{s}\) (for example the substrate) is determined analogously according to relation (15) by multiplying the transfer matrices of the normal components of the Poynting vector at each boundary, where the matrices with attenuation coefficients are unit matrices. In more detail, for zero angle of incidence, the reflectance of each interface from the left is given by the Fresnel relation
\[\rho_{\mathrm{R},j}=\rho_{j}=\left(\frac{n_{j-1}-n_{j}}{n_{j-1}+n_{j}}\right) ^{2},\quad j=1,2,\ldots,k+1. \tag{22}\]
The reflectance from the right is the same as from the left \(\rho_{\mathrm{L},j}=\rho_{\mathrm{R},j}\), transmittances are \(\tau_{\mathrm{R},j}=\tau_{\mathrm{L},j}=1-\rho_{j}\) and the parameter \(\delta_{j}=\tau_{\mathrm{R},j}\tau_{\mathrm{L},j}-\rho_{\mathrm{R},j}\rho_{ \mathrm{L},j}=1-2\rho_{j}\). Therefore, the transfer matrix of the normal components of the Poynting vector for each interface \(j=1,2,\ldots,k+1\) is
\[\mathbf{N}_{j}=\frac{1}{\tau_{\mathrm{R},j}}\begin{pmatrix}1&-\rho_{\mathrm{L },j}\\ \rho_{\mathrm{R},j}&\delta_{j}\end{pmatrix}=\frac{1}{1-\rho_{j}}\begin{pmatrix} 1&-\rho_{j}\\ \rho_{j}&1-2\rho_{j}\end{pmatrix} \tag{23}\]
and thus the resulting transfer matrix \(\mathbf{N}\) of the normal components of the Poynting vector with the resulting reflectances and transmittances is given by the product of the sub-matrices in the correct order
\[\mathbf{N}=\mathbf{N}_{1}\mathbf{N}_{2}\ldots\mathbf{N}_{k+1}=\frac{1}{\tau_ {\mathrm{R}}}\begin{pmatrix}1&-\rho_{\mathrm{L}}\\ \rho_{\mathrm{R}}&\tau_{\mathrm{R}}\tau_{\mathrm{L}}-\rho_{\mathrm{R}}\rho_{ \mathrm{L}}\end{pmatrix}. \tag{24}\]
Hence, for the resulting reflectance from the left (in this case the same as from the right), we get the expression from the elements of the matrix \(\mathbf{N}\)
\[\rho_{\mathrm{R}}(n_{1},\ldots,n_{k})=-\frac{N_{12}}{N_{11}}=\frac{N_{21}}{N_ {11}}, \tag{25}\]
which need not be explicitly stated.
For our purpose, we understand \(\rho_{\mathrm{R}}\) as a function of the refractive indices of the layers with given refractive indices of the surrounding media for which we are looking for the minimum. Some numerical minimization methods can be used to find the minimum reflectance of layers with incoherent light, but there is
also an analytical solution. It is sufficient to set the derivatives of the function \(\rho_{\rm R}(n_{1},\ldots,n_{k})\) with respect to all variables equal to zero if \(n_{0}\) and \(n_{s}\) are fixed. From this requirement we obtain simple recurrent relations
\[n_{j}^{2}=n_{j-1}n_{j+1},\quad j=1,2,\ldots,k \tag{26}\]
with obvious boundary conditions for \(n_{0}\) and \(n_{k+1}=n_{s}\).
The solution of the equations (26) can be found in the form
\[n_{j}=n_{0}^{1-\frac{j}{k+1}}n_{s}^{\frac{j}{k+1}}, \tag{27}\]
which again indicates the validity of the relation (21) given for the coherent situation, which can be considered as a consequence of the mirror symmetry of the planar system of layers. Note that we can easily generalize the result (27) for oblique incidence and s or p polarization when we replace the refractive indices of the media with their admittances (1).
We can also find explicit expressions for the (minimum) size of the reflectance of a layer system and incoherent light
\[\begin{split}&\rho_{\rm min}=\frac{(k+1)(q^{1/(k+1)}-1)^{2}}{(k+1)q^ {2/(k+1)}-2(k-1)q^{1/(k+1)}+k+1},\\ & q=\frac{n_{0}}{n_{s}}\;{\rm or}\;\;q=\frac{n_{s}}{n_{0}}\end{split} \tag{28}\]
with the same result and expected limit equal to zero for the number of layers \(k\) going to infinity.
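A short numerical check of the incoherent-light design is sketched below: the layer indices are generated from Eq. (27), the reflectance is evaluated directly through the matrices of Eqs. (22)-(25), and the result is compared with the closed form (28); for \(k=0\) the latter reduces to the bare-substrate value. Function names are mine.

```python
# Incoherent design: geometric index profile (27), reflectance via (22)-(25), minimum (28).
import numpy as np

def incoherent_design(k, n0, ns):
    return [n0**(1 - j / (k + 1)) * ns**(j / (k + 1)) for j in range(1, k + 1)]   # Eq. (27)

def incoherent_reflectance(n0, n_layers, ns):
    indices = [n0, *n_layers, ns]
    N = np.eye(2)
    for na, nb in zip(indices[:-1], indices[1:]):
        rho = ((na - nb) / (na + nb))**2                                          # Eq. (22)
        N = N @ (np.array([[1, -rho], [rho, 1 - 2 * rho]]) / (1 - rho))           # Eqs. (23)-(24)
    return N[1, 0] / N[0, 0]                                                      # Eq. (25)

def rho_min(k, n0, ns):
    q = n0 / ns
    a = q**(1 / (k + 1))
    return (k + 1) * (a - 1)**2 / ((k + 1) * a**2 - 2 * (k - 1) * a + k + 1)      # Eq. (28)

k, n0, ns = 3, 1.0, 2.0
layers = incoherent_design(k, n0, ns)
print(incoherent_reflectance(n0, layers, ns), rho_min(k, n0, ns))   # the two values agree
# k = 0 reproduces the bare-substrate reflectance ((n0-ns)/(n0+ns))**2:
print(rho_min(0, n0, ns), ((n0 - ns) / (n0 + ns))**2)
```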
## 3 Results of numerical experiments
Figures 1-6 present the results of numerical calculations of the exactly designed maximally flat antireflection systems for three selected numbers of layers, as described in the theoretical part of the paper. As expected from the requirement of impedance matching for coherent and incoherent electromagnetic fields, the refractive index profiles converge to a stepwise smooth transition of their values from the superstrate to the substrate or vice versa as the number of layers increases. However, each of the two cases discussed in detail provides this transition with a different curve, but always such that the symmetry equation (21) is satisfied. It can also be seen that the band of low reflectance broadens as the number of layers increases, and in the case of the design for incoherent light the number of local alternating minima and maxima of reflectance increases (values below \(10^{-15}\) are not shown in the plots). An attempt to further reduce the reflectivity of the systems thus designed over a wider wavelength region by varying their thicknesses does not lead to the goal, since the solutions already obtained represent local extrema in themselves.
For illustration, fig. 7 shows the reflectance versus relative frequency \(\omega/\omega_{c}\) of a system of 1000 quarter-wave layers designed for maximally flat antireflection in the coherent and incoherent cases, the refractive indices of the superstrate and substrate are as in the previous figures. The low reflectance of the order of \(10^{-6}\) for incoherent light can be determined from the easily calculated refractive indices (see equation 27), but this is not the case for coherent light due to the large demands on computer time.
## 4 Discussion
The equation (27) for the refractive indices of the layers in the case of incoherent light satisfies the condition (21), originally derived for the coherent case. The profiles of the refractive indices of the antireflection layers in figs. 1, 3, 5 suggest that it might be possible to match the desired refractive indices with a suitable curve that would resemble a Gaussian error function. Indeed, let us assume the refractive indices of the layers to be of the form
\[\begin{split}& n_{j}=n_{0}^{1-\alpha_{j}^{(k)}}n_{s}^{\alpha_{j}^{(k)}},\;j=0,\ldots,k+1,\\ &\alpha_{0}^{(k)}=0,\;\alpha_{k+1}^{(k)}=1.\end{split} \tag{29}\]
For example, from the known refractive indices \(n_{j}\) we get the requirement for the size of \(\alpha_{j}^{(k)}\)
\[\alpha_{j}^{(k)}=\frac{\log(n_{j}/n_{0})}{\log(n_{k+1}/n_{0})},\;j=0,\ldots,k +1. \tag{30}\]
For the incoherent case we obviously have \(\alpha_{j}^{(k)}=j/(k+1)\) from (27).
The auxiliary coefficients \(\alpha_{j}^{(k)}\) entering the expression for the refractive indices of the layers using the Gaussian error function can be assumed in the approximate form
\[\alpha_{j}^{(k)}=\frac{1}{2}\left[1+\mathrm{erf}\left(b_{k}(j-\frac{k+1}{2}) \right)\right],\;j=1,\ldots,k. \tag{31}\]
Since the function \(\mathrm{erf}(x)\) is odd, such a dependence satisfies the condition (21), now written as \(\alpha_{j}^{(k)}+\alpha_{k+1-j}^{(k)}=1\). However, the coefficients \(b_{k}\) have to be found numerically so that the fitted refractive indices are as close as possible to the solutions of equations (16), which can be done by the least squares method. The refractive indices of the layers obtained by such
Figure 4: Dependence of the decimal logarithm of reflectance \(\log(\rho)\) on the relative frequency \(\omega/\omega_{c}\) of the system of 20 quarter-wave layers for the refractive index profiles shown in fig. 3, the red line refers to the refractive indices of the layer system designed for coherent light, the blue line is for the refractive indices of layers designed for incoherent light.
Figure 5: Refractive index profiles of 30 dielectric layers realizing maximally flat antireflection for coherent light (red line) compared to layers designed for incoherent light (blue line), substrate refractive index \(n_{s}=2\), superstrate refractive index \(n_{0}=1\).
Figure 6: Dependence of the decimal logarithm of reflectance \(\log(\rho)\) on the relative frequency \(\omega/\omega_{c}\) of the system of 30 quarter-wave layers for the refractive index profiles shown in fig. 5, the red line refers to the refractive indices of the layer system designed for coherent light, the blue line is for the refractive indices of layers designed for incoherent light.
Figure 7: Dependence of the decimal logarithm of reflectance \(\log(\rho)\) on the relative frequency \(\omega/\omega_{c}\) of the system of 1000 quarter-wave layers designed for maximally flat antireflection, the red line refers to the refractive indices of the layer system designed for coherent light, the blue line refers to the layer system designed for incoherent light.
a matching procedure differ from the exact values only at the fourth decimal place, as shown in fig. 8, which would hardly be observable in a graphical representation of the refractive index profile.
However, the coefficients \(b_{k}\) calculated in this way depend on the number of layers, as shown in fig. 9. One can also expect their dependence on the refractive indices of the surrounding media, but ideally this dependence should be as weak as possible, which would allow us to easily make antireflection designs close to the theoretical best.
We can also ask how small differences in approximate refractive indices of the order of \(10^{-4}\) from the theoretically accurate values affect the quality of the resulting antireflection. Even if the deviations from the ideal state are small, they are still observable, as shown in fig. 10.
It can be supposed that the stepwise smooth profiles of the refractive indices of the layers according to the relation (31) will not represent poor antireflection designs, as demonstrated by fig. 11 for 100 layers, compared to the ideal refractive indices designed for coherent light, which cannot be easily computed explicitly for 100 layers due to computational complexity.
However, the relation (31) can be used to design an antireflection with a different refractive index of the substrate with a suitable choice of the parameter \(b_{k}\). For example, fig. 12 shows the result of the calculation for 30 layers and \(n_{0}=1\), \(n_{s}=4\) with the choice of \(b_{30}=0.258\). Thus, the expression (31) allows the designer to play with different antireflection constructions that are more or less close to the exact solution.
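For instance, a few lines of Python reproduce the approximate design of Eqs. (29) and (31) for the case just quoted (30 layers, \(n_{0}=1\), \(n_{s}=4\), \(b_{30}=0.258\)) and confirm that the symmetry condition (21) holds to machine precision; the function name is illustrative.

```python
# Approximate erf-based design, Eqs. (29) and (31), for n0 = 1, ns = 4, k = 30, b = 0.258.
from math import erf

def erf_design(k, n0, ns, b):
    alphas = [0.5 * (1.0 + erf(b * (j - (k + 1) / 2))) for j in range(1, k + 1)]  # Eq. (31)
    return [n0**(1.0 - a) * ns**a for a in alphas]                                # Eq. (29)

n0, ns, k, b = 1.0, 4.0, 30, 0.258
layers = erf_design(k, n0, ns, b)
# symmetry condition (21): n_j * n_{k+1-j} = n0 * ns (= 4 here)
print(max(abs(nj * nk - n0 * ns) for nj, nk in zip(layers, reversed(layers))))   # ~1e-15
```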
Perhaps it is interesting to calculate in the coherent case the quantities \(P_{j}^{(k)}=2^{k}\alpha_{j}^{(k)}\) and compare them with the Schallenberg's approximation [12]. Not surprisingly, if we round the numbers \(P_{j}^{(k)}\) computed numerically according to the exact algorithm to integers, we get the solutions that follow from Schallenberg's procedure. For example, for 4 layers, according to [12] we have \(P^{(4)}=1\), 5, 11, 15, while the exact algorithm gives the numbers \(P^{(4)}=1.003732550742073\), 5.003732550742090, 10.996267449257910, 14.996267449257919. However, the
Fig. 8: The difference \(\Delta n\) in the refractive indices of the 30 layers obtained by approximation using the Gaussian error function (31) and the exact refractive indices calculated by solving the equations (16) depending on the layer order. The coefficient \(b_{30}=0.257959458606224\) entering (31) was obtained numerically, the refractive indices of the surrounding environments are \(n_{0}=1\), \(n_{s}=2\).
Fig. 10: Dependence of the decimal logarithm of reflectance \(\log(\rho)\) on the relative frequency \(\omega/\omega_{c}\) of a system of 30 quarter-wave layers designed for maximally flat antireflection in coherent light. The red line refers to the refractive indices of the layers designed according to the exact algorithm, the blue line refers to the refractive indices of the layers designed according to the approximate expression (31), refractive indices \(n_{0}=1\), \(n_{s}=2\).
Fig. 9: The size of the optimized coefficients \(b_{k}\) entering the approximation expression (31) depending on the number of layers, the refractive indices of the environment surrounding the layers are \(n_{0}=1\), \(n_{s}=2\).
Fig. 11: Dependence of the decimal logarithm of the reflectance \(\log(\rho)\) on the relative frequency \(\omega/\omega_{c}\) of a system of 100 quarter-wave layers designed according to the relation (31) with parameter \(b_{100}=0.1\) (blue line). For comparison, the red line refers to the refractive indices of the layers designed according to the exact algorithm, \(n_{0}=1\), \(n_{s}=2\).
refractive indices of the layers show small differences. For \(n_{0}=1\), \(n_{s}=2\) we get according to [12] 1.044273782427414, 1.241857812073484, 1.610490331949254, 1.91520656139714, while the exact algorithm yields values of 1.044442655609480, 1.242058637263131, 1.610229936009292, 1.91489689692627. The relatively small deviations between the exact and Schallenberg's procedures are nevertheless substantial enough to conclude, on the basis of a comparison of the reflectances around the central frequency, that in the coherent case maximally flat antireflection is provided only by the solutions of the equations (16), with the resulting reflectance according to (20). However, Schallenberg's solution is undoubtedly slightly better than the approximation using the error function (31).
## 5 Conclusions
In this paper two exact solutions of maximally flat antireflection layer systems are proposed for two ideal cases of coherent (for thin layers) and incoherent (for thick layers) light. The first case is based on the requirement that the maximum number of derivatives of the reflectance of the system of quarter-wave thin layers with respect to the frequency is equal to zero for the central frequency, while the second case is based on the requirement of the minimum reflectance of the system of thick layers. The first case requires finding the zero point of a system of functions of several variables, which can be done explicitly for systems of at most four layers, while the second case requires finding the minimum of a function of several variables for which an explicit solution has been found. The graphs then present the obtained profiles of the refractive indices and the corresponding reflectance curves as a function of the normalized light frequency. There is no speculation about the realization of such profiles, although it cannot be ruled out that the current development of nanotechnology will ultimately be able to achieve flat antireflection to the maximum extent. The possibility of using approximate expressions for layer refractive indices to easily construct quite good antireflection systems is also discussed.
_The author (orcid.org/0000-0003-0684-0775) thanks project OP PIK No. CZ.01.1.02/0.0/0.0/21_374/0027282 "Relief nano/micro structures for optical components in the automotive industry" SPP 843102011 and LM2023032 "Pierre Auger Observatory" of the Ministry of Education, Youth and Sports of the Czech Republic for their support._
|